Dataset schema (one row per paper):

| Field | Type | Detail |
| --- | --- | --- |
| title | string | lengths 15–163 |
| paper_decision | string | 4 classes |
| review_1 | string | lengths 853–32.6k |
| rebuttals_1 | string | lengths 0–15.1k |
| review_2 | string | lengths 1.03k–35.6k |
| rebuttals_2 | string | lengths 0–15.1k |
| review_3 | string | lengths 807–27.4k |
| rebuttals_3 | string | lengths 0–15k |
| review_4 | string | lengths 780–22.2k |
| rebuttals_4 | string | lengths 0–15.1k |
| review_5 | string | 171 classes |
| rebuttals_5 | string | 166 classes |
| review_6 | string | 25 classes |
| rebuttals_6 | string | 24 classes |
| review_7 | string | 4 classes |
| rebuttals_7 | string | 4 classes |
SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models
Accept (poster)
Summary: This paper presents SafeAuto, an MLLM-based autonomous driving system. SafeAuto has three major innovations. First, it uses a new Position-Dependent Cross-Entropy (PDCE) loss, which supervises the predicted number tokens based on their numerical difference from the ground-truth number. Second, it has a knowledge-enhanced post-safety verification module using Markov Logic Networks (MLNs). It can explicitly encode domain knowledge and structured traffic rules into the decision-making process of the MLLM, which can be used to verify and correct the predicted high-level actions. Third, it uses a novel training method for constructing a unified embedding that effectively integrates all modalities. The authors evaluated the SafeAuto method on the BDD-X and DriveLM datasets and showed that SafeAuto has better driving performance than the state-of-the-art baselines.

## Update after rebuttal

No updates.

Claims And Evidence:
* The proposed PDCE loss and MLN-based reasoning module are well designed.
* The overall SafeAuto method shows strong performance against state-of-the-art baselines on popular benchmarks.

Methods And Evaluation Criteria:
* The datasets used for evaluation make sense.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
* The experimental setup is valid. However, I think all the results are from open-loop simulation. I would like to see some results in closed-loop simulation as well.

Supplementary Material:
* I did not read the supplementary material.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed:
* I don't have anything to add.
Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions:
* The motivation for the PDCE loss is that using the MSE loss on regression outputs will "disrupt the MLLM’s autoregressive token generation, transforming it into a pure transformer encoder (Tan et al., 2024) used only for regression tasks and losing its language generation capabilities necessary for high-level question-answering". I hope the authors can compare their method against this MSE-loss baseline and show how the high-level question-answering capabilities are impacted.
* Figure 2: Having two peaks is not necessarily worse than having just one peak. It could be that there are two different valid maneuvers in this scenario. It would be useful to have more analysis on why the traditional CE loss leads to the observed behavior.
* It is not clear how SafeAuto leverages past experiences to inform current decision-making and how Multimodal RAG helps.

Questions For Authors: See "Other Comments or Suggestions".

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We extend our sincere gratitude to the reviewer for their meticulous and constructive feedback. Their insightful observations and valuable recommendations have greatly contributed to improving the rigor and clarity of our work!

> **Q1: The motivation for the PDCE loss is that using the MSE loss on regression outputs will "disrupt the MLLM’s autoregressive token generation, transforming it into a pure transformer encoder (Tan et al., 2024) used only for regression tasks and losing its language generation capabilities necessary for high-level question-answering". I hope the authors can compare their method against this MSE-loss baseline and show how the high-level question-answering capabilities are impacted.**

Thank you for the valuable suggestion! In fact, once we apply an MSE loss at the MLLM’s hidden layer to directly regress the low-level control signals, the model stops generating tokens—only numeric outputs are produced via an MLP head, rather than text tokens, and thus we cannot perform further next-token generation. Consequently, it loses its autoregressive language generation capability, making further high-level QA impossible. As a result, the TimeLLM baseline can only handle low-level predictions but cannot generate further textual answers or explanations.
Moreover, even when focusing exclusively on low-level tasks, our proposed SafeAuto still outperforms TimeLLM by a wide margin on the BDD-X dataset, as illustrated in the tables below:

Speed:

| Method | RMSE | A0.1 | A0.5 | A1.0 | A5.0 | A10.0 |
| ------------ | -------- | --------- | --------- | --------- | --------- | --------- |
| TimeLLM | 1.17 | 21.34 | 53.13 | 74.14 | 99.67 | 99.86 |
| **SafeAuto** | **0.65** | **55.49** | **88.84** | **95.34** | **99.81** | **99.91** |

Course:

| Method | RMSE | A0.1 | A0.5 | A1.0 | A5.0 | A10.0 |
| ------------ | -------- | --------- | --------- | --------- | --------- | --------- |
| TimeLLM | 4.10 | 65.70 | 83.47 | 89.59 | 97.60 | 98.59 |
| **SafeAuto** | **3.85** | **76.26** | **89.68** | **94.11** | **98.30** | **99.25** |

> **Q2: Figure 2. Having two peaks is not necessarily worse than having just one peak. It could be that there are two different valid maneuvers in this scenario. It will be useful to have more analysis on why the traditional CE loss leads to the observed behavior.**

Thank you for the insightful question! In that case, the historical speed values are [6.23, 6.56, 5.62, 6.27, 7.09, 6.67, 10.24], with the ground truth being 12.46. This means that the peak around 13 is actually the more valid one in this scenario. We agree that in some cases, two peaks might indicate different valid maneuvers. However, since we are predicting for the very next frame—a short time interval—the two peaks should not be as widely separated as shown in Fig. 2. In such cases, we would expect the distribution to resemble a Gaussian with one dominant peak. The issue with CE loss lies in how it computes the joint probability. The calculation follows p('1') × p('2'|'1') × p('.'|'12') × p('4'|'12.') × p('6'|'12.4'), treating every digit as equally important during training. Thus, when the training data is not large enough, the model might learn to assign higher values to the last three digits instead of the first digit to make the overall probability high.
As a result, even if the latter probabilities are high, a lower value for p('1') can cause the model to generate a different starting digit (for instance, '3' instead of '1'), which then leads to the subsequent digits being generated incorrectly. We also provide two case studies in Figure 6 on Page 16 to illustrate these failure cases with CE loss more clearly. We will include additional analysis in our final version to further explain this behavior!

> **Q3: It is not clear how SafeAuto leverages past experiences to inform current decision-making and how Multimodal RAG helps.**

Sorry for the confusion. To clarify, once we retrieve similar past experiences, we incorporate the corresponding historical video along with both high-level and low-level predictions into the current context—that is, we append them before the current video and question in the prompt. This process is similar to few-shot learning because it provides the MLLM with additional context from similar driving scenarios, thereby informing its current predictions. We will provide a more detailed explanation in our final version. If you have any further questions or suggestions, please feel free to let us know. Your feedback is greatly appreciated and will certainly help us improve our work!

---

Rebuttal Comment 1.1:

Comment: Thank you for your clarifications.
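For reference, the RMSE and threshold-accuracy columns (A0.1 through A10.0) quoted in the rebuttal tables can be computed with a few lines of code. This is a minimal sketch under the assumption that Aτ denotes the fraction of predictions whose absolute error is within τ (reported as a percentage); the paper defines the exact metrics.

```python
import math

def rmse(preds, gts):
    """Root mean square error between predicted and ground-truth signals."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(preds, gts)) / len(preds))

def acc_within(preds, gts, tau):
    """Fraction of predictions whose absolute error is at most tau,
    e.g. tau=0.5 for the A0.5 column (assumed metric definition)."""
    return sum(abs(p - g) <= tau for p, g in zip(preds, gts)) / len(preds)
```

Multiplying `acc_within` by 100 would give the percentage values shown in the tables.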
Summary: This paper proposes SafeAuto, a novel framework for autonomous driving using multimodal foundation models. It addresses the challenges of integrating high-level reasoning and low-level control. The main algorithmic ideas include three key components. First, the Position-Dependent Cross-Entropy (PDCE) loss function improves low-level numerical prediction accuracy while maintaining the autoregressive nature of the Multimodal Large Language Model (MLLM). Second, Knowledge-Enhanced Post-Safety Verification uses Markov Logic Networks (MLNs) to integrate traffic rules into the MLLM's decision-making process, verifying and correcting high-level actions. Third, Multimodal Retrieval-Augmented Generation (RAG) learns from similar driving experiences by integrating video data, control signals, and environmental predicates. The paper's main findings show that SafeAuto outperforms existing baselines. On the BDD-X and DriveLM datasets, it reduces the Root Mean Square Error (RMSE) for speed and course predictions and improves high-level action prediction performance. On BDD-X, it reduces the RMSE for speed and course predictions by 5.8% and 14.1% respectively, and boosts high-level action performance by 28.0% under the CIDEr metric.

## Update after rebuttal

Thanks to the authors for providing answers that address all my concerns. I have also read the comments from the other reviewers. In overall consideration, I would like to raise my assessment score to be positive.

Claims And Evidence: The paper claims that SafeAuto, with its novel components like the PDCE loss, MLN-based safety verification, and Multimodal RAG, outperforms existing baselines. This is supported by experiments on the BDD-X and DriveLM datasets. For example, it shows significant improvements in low-level control accuracy, such as reducing the RMSE for speed and course predictions, and in high-level action prediction performance, such as boosting CIDEr scores.
Ablation studies are provided to help in understanding the contribution of each component.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well suited for autonomous driving. The PDCE loss, MLN-based safety verification, and Multimodal RAG directly address the challenges of integrating high-level reasoning and low-level control. Benchmark datasets like BDD-X and DriveLM, along with relevant metrics such as RMSE and CIDEr, comprehensively assess system performance in both high-level action prediction and low-level control accuracy, making them appropriate for this application.

Theoretical Claims: The paper focuses more on empirical validation through experiments than on theoretical proofs.

Experimental Designs Or Analyses: For the PDCE loss experiments, using the BDD-X dataset to compare its performance against the original CE loss is valid. Measuring RMSE for speed and course predictions directly assesses the improvement in numerical prediction accuracy. The MLN-based safety verification experiments are well designed in terms of using real and simulated data for training. The Multimodal RAG experiments use appropriate datasets and metrics.

Supplementary Material: Appendix A details the SafeAuto-reasoning component, including traffic rule mapping, YOLOv8 fine-tuning, and predicate extraction. Appendix B provides the pseudocode for the PDCE loss, which is key for understanding its implementation. Appendix C presents ablation study experiments, helping to assess the impact of different modules and hyperparameters.

Relation To Broader Scientific Literature: Existing methods using MLLMs struggle with low-level control and safety. SafeAuto's PDCE loss improves low-level prediction accuracy compared to the traditional cross-entropy loss. Its use of MLNs for safety verification addresses the lack of explicit safety checks in previous approaches. Multimodal RAG enhances decision-making by leveraging past experiences.
Essential References Not Discussed: There seem to be no missing essential related works.

Other Strengths And Weaknesses:

Strengths: The paper introduces concepts like the PDCE loss and the use of MLNs and Multimodal RAG. Experimental results show that by integrating these designs, SafeAuto outperforms existing baselines across multiple datasets.

Weaknesses:
- Despite the integration forming the SafeAuto pipeline, the novelty of each individual component seems marginal, which is my major concern.
- The writing is lengthy, and some parts, such as the abstract, seem over-length.
- The arrangement of space between text and figures needs to be checked.

Other Comments Or Suggestions: See questions for authors.

Questions For Authors: Despite the integration forming the SafeAuto pipeline, the novelty of each component is not clear. (1) The PDCE is a weighted sum of KL divergences; what is new in this loss design? (2) What is the difference between this work and the Markov Logic Networks (MLNs) of Richardson & Domingos, 2006? (3) What are the special challenges addressed in the multimodal RAG method design?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback, and for recognizing the value of our work! The insightful suggestions and detailed comments provided have substantially contributed to enhancing the quality of our work!

> **Q1: The PDCE is a weighted sum of KL divergences; what is new in this loss?**

Thanks for the insightful question! Our motivation for proposing the PDCE loss for LLM digit prediction is twofold. (1) When fine-tuning an LLM for digit prediction, one natural idea is to use a combination of CE loss for the word part and MSE loss for the digit part (which we have also tried internally). However, this approach introduces several challenges: it is difficult to balance the magnitudes of the CE and MSE losses; the self-attention mechanism injects noise into the final output features used for forecasting digits; and applying MSE loss on hidden features impairs the LLM's autoregressive capability (i.e., it can no longer do next-token generation). (2) Additionally, using CE loss directly on digit prediction, as existing work does, treats each digit as equally important, which introduces bias, as highlighted in Figure 2. To address these challenges, we introduce the PDCE loss, which employs a carefully weighted sum of KL divergences. Note that, as shown in Fig. 3, both the weights and the target distribution in the KL divergence are not chosen arbitrarily but are determined formally by a predefined soft target distribution, such as a Gaussian, which aims to approximate the behavior of the MSE loss on digits in string form. In this way, the PDCE loss resolves the above challenges: it eliminates the need to balance disparate losses when fine-tuning the LLM, preserves the autoregressive property of the LLM, and emulates the behavior of MSE loss when training on float numbers in string form, as demonstrated in Figure 2.
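The idea described above can be sketched in a few lines. This is only an illustrative position-weighted soft-target loss, not the paper's exact formulation: the discretized-Gaussian soft target, the geometric position weights `base ** i`, and the function names are all our assumptions.

```python
import math

def soft_digit_target(true_digit: int, sigma: float = 1.0):
    """Illustrative soft target over digits 0-9: a discretized Gaussian
    centred on the ground-truth digit, so near-miss digits keep some mass."""
    weights = [math.exp(-((d - true_digit) ** 2) / (2 * sigma ** 2)) for d in range(10)]
    z = sum(weights)
    return [w / z for w in weights]

def pdce_loss(pred_probs, true_digits, base: float = 0.5):
    """Position-weighted sum of KL(target || prediction) over digit positions.
    More significant digits get larger weights (base ** i, assumed scheme),
    unlike plain CE, which treats every digit position as equally important."""
    loss = 0.0
    for i, (probs, d) in enumerate(zip(pred_probs, true_digits)):
        target = soft_digit_target(d)
        pos_w = base ** i  # most significant digit weighted highest
        loss += pos_w * sum(t * math.log(t / max(p, 1e-12))
                            for t, p in zip(target, probs) if t > 0)
    return loss
```

A model whose per-digit distributions match the soft targets incurs zero loss, while a uniform (uninformative) prediction is penalized, with the penalty dominated by the leading digit.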
We also compared against TimeLLM, which directly applies MSE loss on the hidden features extracted from the LLM for regression, as suggested by Reviewer KjQv, and our method still performs much better. Due to the word limit, we kindly encourage you to review our rebuttal to Q1 for Reviewer KjQv.

> **Q2: What is the difference between this work and the Markov Logic Networks (MLNs) of Richardson & Domingos, 2006?**

Thank you for your insightful question! The key distinction in our work lies in how we integrate MLNs into a data-driven vision-language model framework for autonomous driving. Specifically:

1. **Integration with Data-Driven Paradigms:** Our approach is the first to combine MLNs with a multimodal LLM in the context of autonomous driving. This integration allows us to inject explicit safety knowledge—such as traffic rules—into the prediction process. Directly training a model on data alone does not guarantee adherence to these safety constraints.
2. **Different Source of Predicates:** In our work, the grounding of predicates comes from multiple external modules. We incorporate outputs from our self-trained YOLOv8 detector and the original predictions from the MLLM to form the predicates.
3. **Joint Simulated and Real-Data Training:** Another novel aspect of our work is the joint training strategy for the MLN weights, using both simulated and real-world data instead of relying purely on real-world data, which may be hard to collect in practice.

> **Q3: What are the special challenges addressed in the multimodal RAG?**

Thank you for the insightful question! In contrast to relying solely on the video modality (as in RAGDriver), our Multimodal RAG must integrate three key modalities—(i) video or image, (ii) control signals, and (iii) environmental predicates—into a single unified embedding for ranking and retrieval. One challenge here is the lack of a clear objective for aligning these diverse modalities in autonomous driving retrieval tasks.
To address this gap, we exploit textual scenario descriptions, since they naturally encapsulate information about all modalities. During training, our approach guides the unified embedding to replicate the relative ranking derived from these textual embeddings. Concretely, we use a contrastive-learning-based procedure in which each batch’s embedding distribution is pushed to match the ranking distribution from the text embeddings for that scenario. This approach is both efficient (as it avoids iterating over external databases mid-training) and effective at aligning multiple modalities. By contrast, methods such as RAGDriver use a triplet-loss formulation, which only focuses on top-k similarities, ignores fine-grained ranking signals, and requires pre-fetching positive/negative examples before training. The improved performance results also highlight the effectiveness of our proposed multimodal RAG; as demonstrated in Table 10 on page 15, incorporating environmental predicates significantly boosts retrieval accuracy by mitigating some of the noise present in raw video embeddings.
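One common way to realize this kind of "match the ranking distribution of a teacher embedding" objective is a listwise KL distillation over within-batch similarities. The sketch below is our assumption about one plausible implementation, not the paper's exact loss; the temperature `tau`, cosine similarity, and batch-wise KL are illustrative choices.

```python
import numpy as np

def softmax(x, tau=0.1):
    """Temperature-scaled softmax along the last axis."""
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / tau)
    return z / z.sum(axis=-1, keepdims=True)

def ranking_distill_loss(unified, text, tau=0.1):
    """Push the within-batch similarity distribution of the learned unified
    embeddings toward the one induced by frozen text embeddings
    (illustrative listwise objective)."""
    u = unified / np.linalg.norm(unified, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    su, st = u @ u.T, t @ t.T          # pairwise cosine similarities
    np.fill_diagonal(su, -np.inf)      # exclude self-similarity from the ranking
    np.fill_diagonal(st, -np.inf)
    q, p = softmax(su, tau), softmax(st, tau)  # predicted vs. target ranking dists
    kl = p * (np.log(np.clip(p, 1e-12, None)) - np.log(np.clip(q, 1e-12, None)))
    return float(np.mean(kl.sum(axis=1)))
```

The loss is zero when the unified embeddings induce exactly the teacher's ranking distribution, and it penalizes any deviation across the whole batch ranking rather than only the top-k pairs a triplet loss would see.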
Summary: This paper proposes SafeAuto, a novel framework that enhances MLLM-based autonomous driving systems by incorporating both unstructured and structured knowledge. The model performs both high-level and low-level action prediction.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes. However, the proposed method uses a weighted target token probability to predict the float value of speed/course, which may not be a very accurate prediction approach for these fine-grained signals.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: The experiments are convincing and show the effectiveness of each component. However, the experiments are only conducted on the BDD-X/DriveLM datasets and can only be compared with a narrow set of prior works.

Supplementary Material: N/A. No supplementary material was uploaded.

Relation To Broader Scientific Literature: This paper provides a knowledge-enhanced large model for autonomous driving to predict high-level and low-level actions.

Essential References Not Discussed: No other essential references are missing.

Other Strengths And Weaknesses:

Strengths:
- This paper describes the details of the method very clearly. Although it takes some effort to follow, the proposed method is very systematic.

Weaknesses:
- The experiments are only conducted on the BDD-X/DriveLM datasets and may only be compared with a narrow set of prior works.
- The proposed method uses a weighted target token probability to predict the float value of speed/course, which may not be a very accurate prediction approach for these fine-grained signals.
- I have some doubts about whether the proposed method is very effective for low-level actions. The BDD-X dataset is not a universally recognized best dataset for evaluating low-level actions. It may need to be compared with some end-to-end autonomous driving works.

Other Comments Or Suggestions: The writing is not very friendly for readers in the autonomous driving field.
I recommend adding relevant background information about the methods.

Questions For Authors: Please refer to the weaknesses mentioned above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We are deeply grateful to the reviewer for their thorough and insightful feedback. Their expertise and dedicated time have significantly contributed to improving the quality of this work!

> **Q1: The experiments are only conducted on the BDD-X/DriveLM datasets and may only be compared with some narrow works.**

Thank you for the valuable suggestion. We fully recognize the need to test our framework on a broader range of datasets to better demonstrate its generalizability and effectiveness. Unfortunately, the availability of comprehensive multimodal autonomous driving datasets is currently limited. For example, nuScenes, while extensive, lacks high-level action or reasoning annotations (DriveLM is built on top of nuScenes precisely to provide such high-level annotations). Similarly, the Waymo dataset does not provide essential modality data like images or videos. Given these constraints, BDD-X and DriveLM represent the most relevant and widely used datasets for evaluating multimodal autonomous vehicle systems involving LLMs. We hope that the future development of diverse multimodal datasets will allow for more comprehensive evaluations of frameworks like ours.

> **Q2: The proposed method uses Weighted Target Token Probability to predict the float number of speed/course, somehow not a very accurate predicting approach for these fine-grained signals.**

Thank you for your insightful question! As detailed in our paper, our PDCE loss yields notably higher accuracy on these fine-grained signals. For example, on the DriveLM dataset—where the goal is to predict the next six waypoints—our method achieves an ADE of 0.84, substantially lower than the 1.51 reported for DriveLM-agent and even approaching the performance of the full regression model UniAD (full), with an ADE of 0.80. Additionally, on the BDD-X dataset, and following Reviewer KjQv’s suggestion, our approach still outperforms TimeLLM, which employs an MSE loss on LLM-extracted hidden features.
The performance comparison is summarized below:

Speed:

| Method | RMSE | A0.1 | A0.5 | A1.0 | A5.0 | A10.0 |
| ------------ | -------- | --------- | --------- | --------- | --------- | --------- |
| TimeLLM | 1.17 | 21.34 | 53.13 | 74.14 | 99.67 | 99.86 |
| **SafeAuto** | **0.65** | **55.49** | **88.84** | **95.34** | **99.81** | **99.91** |

Course:

| Method | RMSE | A0.1 | A0.5 | A1.0 | A5.0 | A10.0 |
| ------------ | -------- | --------- | --------- | --------- | --------- | --------- |
| TimeLLM | 4.10 | 65.70 | 83.47 | 89.59 | 97.60 | 98.59 |
| **SafeAuto** | **3.85** | **76.26** | **89.68** | **94.11** | **98.30** | **99.25** |

So the approach is indeed accurate, and the weights are not arbitrarily chosen but determined by a predefined soft target distribution. Moreover, the ablation study on the hyperparameter $\sigma$, shown in Fig. 4 on page 15, further validates the robustness and effectiveness of this weighting strategy.

> **Q3: I have some doubts about whether the proposed method is very effective for low-level actions. The BDD-X dataset is not a universally recognized best dataset for evaluating low-level actions. Maybe it needs to be compared with some end-to-end autonomous driving works.**

Thank you for your insightful question! In our experiments, we also used the DriveLM dataset, which is derived from the standard nuScenes dataset and specifically designed for end-to-end prediction. The goal is to forecast 3 seconds into the future (i.e., 6 future waypoints), thereby providing a standard framework for low-level action evaluation. As demonstrated in Table 2, our method still achieves substantially improved low-level action prediction, with an ADE of 0.84 compared to the 1.51 ADE reported for the original DriveLM. This performance is also on par with the full regression method UniAD (full), which records an ADE of 0.80. Currently, there are few multimodal autonomous driving datasets that include high-level annotations.
Nevertheless, we would be excited to evaluate our method on any new datasets offering comprehensive multimodal annotations for both high-level and low-level actions as they become available!

> **Q4: The writing is not very friendly for those readers in the autonomous driving field. I recommend adding relevant background information about the methods.**

Thank you for your valuable suggestion! We acknowledge that the current version may not be fully accessible to readers in the autonomous driving field. We will add more background details, such as for RAG, in the related works section to better support these readers.
Summary: SafeAuto proposes a unified framework to enhance autonomous driving systems by leveraging multimodal foundation models. It integrates three core components:

- Position-Dependent Cross-Entropy (PDCE) Loss: An adaptation of the standard cross-entropy loss that incorporates digit-level proximity and place-level weighting, making it behave more like MSE for low-level numerical predictions.
- Knowledge-Enhanced Post-Safety Verification: Utilizes a Markov Logic Network (MLN) to explicitly encode traffic rules and safety constraints, serving as a verification layer that can override unsafe high-level action predictions from the multimodal language model.
- Multimodal Retrieval-Augmented Generation (RAG): Combines video data, control signals, and environmental predicates into a unified embedding space to retrieve similar driving experiences and inform both high-level and low-level decision making.

The proposed methods are evaluated on the BDD-X and DriveLM datasets, showing improvements in both low-level control signal accuracy (e.g., reductions in RMSE for speed and course) and high-level behavior prediction (except justification).

Claims And Evidence: The paper claims that by integrating a modified loss function, explicit safety verification via an MLN, and a multimodal retrieval mechanism, SafeAuto significantly improves both low-level control precision and high-level decision-making in autonomous driving. The work is generally well motivated, and the PDCE loss is specifically targeted at improving the numeric prediction ability for autonomous driving tasks. The evaluations also reflect the effectiveness of this design: the prediction accuracy increases (especially for low-level motion/control signal prediction). The MLN is intended to improve safety and the RAG to enhance reasoning. The ablation study also shows how the different modules contribute to the final improvement.
However, it would be great if the authors could provide more detailed experimental results on safety-critical scenarios (beyond rule-following). Besides, the justification scores on the BDD-X data are behind all baselines, which is worth explaining and investigating further. How the RAG datasets are built and how feasible the RAG is are still unclear, especially given the significant improvement brought by the RAG module.

Methods And Evaluation Criteria: The authors provide extensive results on two major multimodal driving datasets, BDD-X and DriveLM, and they also present results on both high-level and low-level prediction/decision-making, which support most of their claimed contributions. Generally, one concern with (M)LLM-based methods is how efficient the inference process is. Although the paper mentions the inference is computationally efficient, it would be great if the authors could provide more details on that.

Theoretical Claims: The theoretical contribution mainly lies in the design of the PDCE loss, which makes sense to me. I believe it helps mitigate the LLM's intrinsic issue of poor numeric reliability. The theoretical framework for using MLNs to encode traffic rules is solid, but its practical performance depends on accurate predicate extraction and the proper setting of rule weights. The paper could delve deeper into how uncertainties in rule extraction might affect inference in the MLN.

Experimental Designs Or Analyses: As mentioned above, the paper provides extensive results and ablation studies on both high-level and low-level motion prediction/decision-making, which demonstrate the effectiveness of the proposed methods. However, there are still some concerns regarding the evaluation: 1) The justification score falls behind all baselines, and I did not find a convincing explanation or ablation for that; explainability is one of the key motivations for using (M)LLMs for such safety-critical tasks.
If we only care about motion prediction/action decision-making, LLM-based methods may not be necessary or efficient. 2) Looking into the detailed ablation study, we find that for most metrics the RAG brings the most improvement, especially for the high-level predictions. It is unclear how the RAG dataset is built and how similar its contents are to the actual test data. This is critical for understanding the feasibility and generalizability of the proposed methods.

Supplementary Material: No supplementary material is provided.

Relation To Broader Scientific Literature: The paper builds on the recent advances of (M)LLMs and a series of pioneering works on applying LLMs to autonomous driving tasks. It focuses on improving the loss design and enhancing the safety of such systems.

Essential References Not Discussed: The paper includes the recent advances in MLLMs for autonomous driving and safety guarantees for such systems. However, I also encourage the authors to include works on enhancing safety for general LLM-based autonomous systems, because the system-level designs of MLLM-based and general LLM-based AV systems are similar. For instance, 'DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models' (ICLR 24) utilizes a similar memory module to enhance performance and reasoning ability. 'Empowering Autonomous Driving with Large Language Models: A Safety Perspective' (ICLRW 2024) discusses several methods with verification modules and in-context learning to enhance safety.

Other Strengths And Weaknesses:

Strengths:
- The PDCE loss is a creative solution to the problem of numerical prediction with autoregressive models, retaining language generation abilities while improving regression performance.
- The explicit incorporation of traffic rules using MLNs directly addresses the safety-critical requirements of autonomous driving.
- The framework is evaluated on two datasets with both qualitative and quantitative results, and ablation studies clarify the contributions of each component.

Weaknesses:
- Hyperparameter Sensitivity: The performance of the PDCE loss and the multimodal RAG component depends on carefully chosen parameters (e.g., σ, weighting factors). A deeper exploration of this sensitivity would be beneficial.
- Reliance on External Modules: The approach relies on pretrained object detectors (YOLOv8) and text encoders, making it vulnerable to errors from these components.
- Limited Domain Evaluation: Although evaluated on two datasets, additional tests in diverse driving scenarios or simulated environments could further demonstrate the generalizability and robustness of SafeAuto. The LLM's strength for AD tasks is its general common sense and generalizability.

Other Comments Or Suggestions: Please kindly refer to the previous comments.

Questions For Authors: Please kindly refer to the previous comments/questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We are deeply grateful to the reviewer for their insightful and thorough feedback, and we appreciate the recognition of our work's contribution! The suggestions and comments have significantly helped to improve its quality.

> **Q1: Could the authors provide more details on the inference efficiency of the proposed (M)LLM-based method?**

Thanks for raising this insightful question! We employ KV caching to accelerate the MLLM’s inference during multi-turn conversations, enabling faster and more efficient responses. Using a single NVIDIA A6000 GPU in a standard academic setting, we recorded an average inference time of approximately 2.33 seconds per case with the full pipeline (i.e., including RAG, covering both high-level and low-level predictions with Safe-Reasoning) on the BDD-X dataset, and 3.50 seconds per case on the DriveLM dataset. In our practical industry deployments, the inference speed can be improved by 2–3 times. We will incorporate more detailed results on inference efficiency into our final version.

> **Q2: Why are the justification scores lower compared to baselines?**

Thanks for your insightful observation. While our justification score appears lower relative to some baselines, it is substantially improved compared to the vanilla base model without any of our components (see Table 11 on Page 15). As for the other baselines: ADAPT is not an LLM-based method and predicts justifications independently of the low-level actions. Compared to DriveGPT, our justification scores are comparable, but we significantly outperform it in high-level action prediction accuracy. Regarding RAGDriver, it uses both train and test data for training the RAG, which we believe introduces evaluation bias; our experiments adhere strictly to using only the training data.
Notably, if we adopt RAGDriver's setting, our justification scores actually improve by a further 50%, but we do not consider this a correct evaluation setting.

> **Q3: Could the authors provide more details on how the RAG dataset was constructed and clarify its similarity to the test data, given its significant impact on performance?**

Thanks for your thoughtful question! Our RAG dataset is built exclusively from training data without incorporating any information from the test set. Upon manual inspection, we observed that they are not highly similar, suggesting the model generalizes effectively rather than relying on direct memorization. The superior performance of SafeAuto over RAGDriver also highlights the efficiency and generalizability of our RAG design.

> **Q4: Essential References Not Discussed**

Thanks for the helpful references! We have now cited these works and expanded the related discussion in our new version accordingly.

> **Q5: Could the authors provide insights into the sensitivity of performance to hyperparameters in the PDCE loss and multimodal RAG component?**

Thank you for highlighting this important point, and we apologize for the confusion. We indeed conducted sensitivity analyses for key hyperparameters, but the corresponding results are deferred to Appendix C. Specifically, Figure 4 on Page 15 shows our ablation study on the weighting factor $\sigma$. The results indicate that our PDCE loss consistently outperforms the CE loss across a wide range of $\sigma$ values, demonstrating robustness to hyperparameter selection.

> **Q6: Could the authors discuss potential vulnerabilities arising from reliance on external modules such as pretrained detectors (YOLOv8) and text encoders?**

Thanks for your insightful observation! We acknowledge that external modules like YOLOv8 and pretrained text encoders can introduce errors, a common limitation shared by related works (e.g., AgentDriver).
However, since we incorporate post-hoc safety verification through our SafeAuto-Reasoning component, it can correct certain errors originating from these modules and thus enhance driving stability. The substantial overall improvements obtained by leveraging these external modules also indicate that they provide more benefits than drawbacks.

> **Q7: Limited Domain Evaluation.**

Thank you for the valuable suggestion! We fully agree that evaluating our framework in more diverse driving scenarios would further demonstrate its generalizability and effectiveness. Currently, however, publicly available multimodal autonomous driving datasets are limited: BDD-X and DriveLM are the main public multimodal benchmarks providing both high-level and low-level annotations. As for other datasets, nuScenes lacks high-level action or reasoning annotations, while Waymo does not provide detailed image/video data. Although richer private datasets exist within industry, these typically cannot be publicly shared due to policy constraints. We sincerely hope that the future availability of diverse multimodal datasets will enable broader evaluation of methods like ours.
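To make the PDCE discussion above more concrete for readers, here is a minimal numerical sketch of a position-dependent soft-target cross-entropy over digit tokens. The Gaussian weighting, the function names, and the 10-token digit vocabulary are our own illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def pdce_soft_targets(gt_digit, sigma=1.0, n_digits=10):
    """Soft target over digit tokens, weighted by numerical
    distance to the ground-truth digit (illustrative form)."""
    d = np.arange(n_digits)
    w = np.exp(-((d - gt_digit) ** 2) / (2 * sigma ** 2))
    return w / w.sum()

def pdce_loss(logits, gt_digit, sigma=1.0):
    """Cross-entropy against the distance-weighted soft targets,
    so numerically close predictions are penalized less."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    q = pdce_soft_targets(gt_digit, sigma)
    return -float(np.sum(q * np.log(p + 1e-12)))

# Confidently predicting "7" when the ground truth is 6 costs less
# than confidently predicting "1", unlike plain one-hot cross-entropy.
logits_near = np.full(10, -4.0); logits_near[7] = 4.0
logits_far = np.full(10, -4.0); logits_far[1] = 4.0
assert pdce_loss(logits_near, gt_digit=6) < pdce_loss(logits_far, gt_digit=6)
```

Under standard CE both confident predictions would incur the same loss; in this sketch the $\sigma$ discussed in Q5 controls how quickly the penalty grows with numerical distance.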
Flow Matching for Few-Trial Neural Adaptation with Stable Latent Dynamics
Accept (poster)
Summary: Neural representational drift is a well-known problem in brain-computer interfaces (BCIs) that makes it difficult to re-use a decoder over multiple days without an additional recalibration step. Importantly, the amount of data available each day for recalibration is often small. Prior work has explored ways of aligning neural activity across days to aid this process, but these methods often require more data than would be available in a real-world deployment. This paper argues that this is partially due to assumptions made on the latent variables and choice of divergences, which may break down in the low-data regime, leading to poor transfer between days and training instabilities. The authors propose Flow-Based Distribution Alignment (FDA), which leverages flow matching for few-trial neural adaptation. The choice of a flow matching-based method is motivated by the fact that it places minimal assumptions on the distributions of neural representations while enabling direct likelihood maximization. This second property also enables source-free alignment, allowing calibration without behavioral labels. They prove that their flow-based transformation is stable in terms of Lyapunov exponents, which they claim aids with transfer. They show that FDA improves performance over appropriate baselines on motor cortex datasets. Claims And Evidence: Overall, the paper did a good job supporting its claims with a thorough and extensive evaluation. However, it is still unclear to me why the stability of the flow is made out to be so critical for transfer. The baselines clearly do have larger Lyapunov exponents, but that doesn't prove that this is why they transfer poorly. I think this claim deserves a more thorough ablation. Either that, or it should not be so heavily focused on in the paper. Methods And Evaluation Criteria: The proposed methods and evaluation criteria (both benchmarks and metrics) make sense for this problem setting.
Theoretical Claims: I skimmed the proof of dynamical stability in appendix A2 and nothing popped out to me as incorrect. Experimental Designs Or Analyses: I looked over the experimental design of the main results and ablations and found them to be sound. Supplementary Material: I skimmed the proof of dynamical stability in appendix A2, the computation of Lyapunov exponents in appendix B5, and some of the supplementary results in appendix C. Relation To Broader Scientific Literature: This paper provides clear arguments as to why their method should work better in the data-limited regime and empirically shows it through extensive evaluations with state-of-the-art methods. The authors do a good job discussing related work and situating their paper in it. While the innovations in this paper are more specific to the application, there are definitely aspects which are relevant to the broader scientific community. Pre-training on large datasets and transferring with selectively re-training a subset of the model weights on new data is a common paradigm across machine learning. This paper makes important claims about how the choices of distribution of latent variables can have a substantial impact on this transferability. At a lower level, their choice of linear interpolation for the flow path allowed them to use a one-step Euler method, simplifying their source-free alignment method. This flavor of technique may have utility in more settings than just neural representation alignment. They also prove that their flow is stable under certain assumptions and regularizations, which, to my knowledge, is novel to this work. Essential References Not Discussed: There are not essential references missing from this paper as far as I'm aware of the literature. Other Strengths And Weaknesses: The paper is well written and does a good job supporting most of its claims.
It is a novel approach to neural representation alignment for BCI decoders, appears to have better few-shot transfer, and can do source-free alignment. Some of the technical innovations may also have broader application in other domains. Other Comments Or Suggestions: I don't think that presenting the results in a table, as in Table 1, is the most effective way of communicating them in this case. While this is not uncommon in ML papers, I find this table too busy to parse. I'd suggest that something like a box or violin plot may help, perhaps with just the most important results, with the rest moved to the supplementary material. This is not critical, but just a minor comment. Questions For Authors: Is there an easy way to ablate the stability property of FDA and show that when those assumptions are violated, it transfers worse in some way? Code Of Conduct: Affirmed. Overall Recommendation: 4
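The reviewer's observation that a linear-interpolation flow path permits one-step Euler integration can be made concrete with a small sketch. This is a generic conditional flow-matching construction under standard assumptions (noise-to-data linear path, velocity-regression target), not the authors' actual training code:

```python
import numpy as np

def fm_training_pair(z1, rng):
    """One conditional flow-matching training example on a linear path:
    z_t = (1 - t) * z0 + t * z1, with regression target v = z1 - z0."""
    z0 = rng.standard_normal(z1.shape)  # noise endpoint
    t = rng.uniform()
    zt = (1 - t) * z0 + t * z1
    return zt, t, z1 - z0               # model input, time, target velocity

def euler_one_step(v_field, z0):
    """On a straight path the true velocity is constant in t, so a
    single Euler step of size 1 integrates the ODE exactly."""
    return z0 + v_field(z0, 0.0)

# Toy check: a field matching the path's constant velocity maps the
# noise endpoint straight onto the data endpoint in one step.
rng = np.random.default_rng(0)
z1 = np.array([2.0, -1.0])
zt, t, u = fm_training_pair(z1, rng)
z0 = zt - t * u                         # invert the interpolation
assert np.allclose(euler_one_step(lambda z, s: u, z0), z1)
```

In practice the velocity field would be a trained network conditioned on neural features; the one-step exactness of the straight path is the simplification the reviewer highlights for source-free alignment.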
Rebuttal 1: Rebuttal: We sincerely thank you for your careful review and recognition of our work. Below, we provide a point-by-point reply to your concerns. Since figures cannot be embedded here, all figures prefixed with 'R' below are available at the external link [https://drive.google.com/file/d/129vv370SF4RLanLj92-lzh_vMmkeDCve/view?pli=1].

### Claims And Evidence:

- About why the stability of the flow is made out to be so critical for transfer and a more thorough ablation

Thanks for pointing out this unclear aspect. The stability of flow models enhances zero-shot performance, making it effective for few-trial adaptation. Specifically, this stability regulates feature deviations, preserving essential semantic information across domains. As illustrated in Fig.R4, it ensures that latent factors with similar labels flow toward consistent semantic representations, even under input drifts in target signals. Empirical validation through zero-shot transfer performance (original Table S8 on Page 18) further demonstrates the benefits of this stability. Additionally, we conducted a more thorough ablation study to further validate this effect. Since stability is maintained by activation functions and scale coefficients, we ablated these two components (FDA-al and FDA-sc, respectively) to violate the assumptions of stability. As shown in Fig.R5, the distribution of maximum Lyapunov exponents (MLE) for FDA-al and FDA-sc indicates that both variants frequently exhibit positive MLEs, signifying instability. Furthermore, the corresponding results for zero-shot and few-trial performance, evaluated using MMD-based alignment on the CO-M and RT-M datasets, are summarized in the table below. Consistent with their reduced stability, FDA-al and FDA-sc demonstrated substantially degraded transfer performance, comparable to the baselines. This result suggests that instability is a key factor contributing to their poor transfer ability.
#### Comparison of average $R^2$ scores (%) across sessions for FDA-al, FDA-sc, and FDA-MMD on the CO-M and RT-M datasets ($r$ = 0, 0.02).

| Data | $r$ | FDA-al | FDA-sc | FDA-MMD |
|:---------:|:-----------:|:-------------------:|:----------------------:|:--------------------:|
| CO-M | 0 | -9.34 ± 9.57 | -18.20 ± 17.53 | **16.23 ± 9.43** |
| | 0.02 | 14.51 ± 16.37 | 13.35 ± 19.01 | **45.59 ± 5.15** |
| RT-M | 0 | 1.23 ± 4.76 | 1.78 ± 3.89 | **38.15 ± 8.21** |
| | 0.02 | 16.99 ± 11.75 | 20.46 ± 11.77 | **42.08 ± 6.31** |

### Other Comments Or Suggestions:

- I don't think that presenting the results in a table, as in Table 1, is the most effective way of communicating them in this case. While this is not uncommon in ML papers, I find this table to be too busy to parse. I'd suggest something more like a box or violin plot may help. Maybe with just the most important results, and then the rest could be moved to the supplementary material.

Thanks a lot for the valuable suggestion. We will improve the presentation of our results by using clearer violin plots and relocating some results to the supplementary material in our revision.

### Questions For Authors:

- About the easy way to ablate the stability property of FDA and show that it transfers worse in some way when those assumptions are violated

Yes, we found that FDA exhibited significantly worse zero-shot and few-trial transfer performance when these assumptions were violated. Specifically, since stability is governed by activation functions and scale coefficients, we ablated these two components (FDA-al and FDA-sc, respectively) to affect the stability property of FDA. More detailed results are provided in the response above. We sincerely hope that these responses may address your concerns. We believe that our novel FDA framework will be of significant interest to the ICML community, given its potential impact on few-trial neural alignment and real-world BCI reliability.
Could you please consider raising the scores? We look forward to your further feedback. Thank you in advance. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for performing the requested ablations! Although it is not surprising, I felt it was an important aspect to validate empirically.
Summary: This paper proposes Flow-based Distributional Alignment (FDA), a few-shot alignment or adaptation method for neural signals across days, using flow matching. While neural activity adaptation methods in general struggle to maintain stable performance across multiple days, the authors claim that FDA would perform better due to theoretical guarantees on the stability of representations derived from FDA. That is, due to the highly negative maximum Lyapunov exponent of latent states through learning, the authors argue that FDA-finetuning results in latent representations that converge to a stable fixed point or manifold/attractor. The authors carry out comprehensive experiments on a motor cortical dataset containing data from 2 monkeys, and 2 tasks (centre-out and random target reaches) to demonstrate the performance of their model in comparison to baselines. They also carry out ablation studies to gauge the importance of each part of their method.

### Update after rebuttal

My main outstanding concerns are summarised in my response to the authors' rebuttal. Finally, the NDT-2 0-shot results here don't seem consistent with Fig. 5 from the original NDT-2 paper, showing good 0-shot and few-shot performance (even for real-time control). In the absence of more details on the authors' reproduction of NDT-2 and 0-shot setup, I'm unable to evaluate this result completely. So overall, I will still retain my original score.

Claims And Evidence: The main claim of the paper is that the proposed method, FDA, outperforms baseline methods in few-shot adaptation to new days. Based on the experiments in the paper, I mostly think that there is sufficient evidence for this claim. The claim about novelty is valid – to my knowledge, this is a novel application of flow matching to neural activity adaptation. The sample and compute efficiency claims also seem to hold in light of experimental results.
Methods And Evaluation Criteria: Yes, the proposed methods, evaluation datasets, and metrics (decoding $R^2$) make sense for the application at hand. Quantifying stability of dynamics through the maximum Lyapunov exponent is also a standard practice. Theoretical Claims: The theoretical result and the proof seem to make sense, I briefly went through the full proof in the appendix but did not read it in full detail. Experimental Designs Or Analyses: Yes, I believe the experiments carried out and the analyses are sound. The authors have correctly validated their claims of superior few-shot performance on cross-day neural decoding. The claim about stability of latent dynamics (through training) is also validated empirically through the Lyapunov exponent analysis. Several ablations have been carried out, validating the necessity and utility of various model components. Supplementary Material: Yes, I paid particular attention to sections A.1, B and C, and read through the proof in section A.2. Relation To Broader Scientific Literature: This paper makes a contribution towards improving neural activity alignment methods, which is one class of methods meant to improve the performance of BCI decoders across days or even subjects (although the latter is not explored here and acknowledged as a limitation). Existing methods of this class often require a large number of trials to adapt neural activity on subsequent days to the source domain (usually day 0), however the proposed method is able to achieve good results with very few trials. It is also, to my knowledge, one of the first works applying flow matching to this setting. It is worth noting that this class of adaptation-based methods often end up yielding poorer performance than large-scale deep learning methods that have been proposed recently, however the asset of the proposed method is its ability to generalise in the setting with extremely few and unlabelled trials for finetuning. 
Essential References Not Discussed: I think relevant and related work is adequately discussed in the paper. Other Strengths And Weaknesses: Apart from points made previously, here are additional strengths and weaknesses:

**Strengths:**
* The ability to adapt few-shot using even just 5 trials is a great advantage.
* The method does not require target trials to be labelled, so learning the adaptation is unsupervised.

**Weaknesses:**
* The paper is very dense and so is a lot to take in, but it also lacks clarity in certain places (see Questions section).
* Some cited works such as Azabou et al. (2023; POYO), Ye et al. (2024; NDT-2), etc. were not compared against. I think it should be possible to compare against models like NDT-2 where the source code and weights are available (https://github.com/joel99/context_general_bci).
* Related to the above, there is a claim in Section 2 that Ye et al. (2023; NDT-2) and Zhang et al. (2024; MtM) are capable of supervised alignment. I do not quite understand this point: NDT-2 and MtM are trained in a self-supervised manner, and only the readout training is supervised. Furthermore, alignment isn't really necessary in either case because they are large-scale models trained on multiple sessions/tasks/datasets to begin with. Could the authors clarify what they mean and perhaps modify the text here? It is true that Azabou et al. (POYO; 2023) is amenable to supervised alignment.
* Other BCI tasks such as speech and handwriting decoding were not considered.
* As acknowledged by the authors, cross-task and cross-subject adaptation was not studied.

Other Comments Or Suggestions: One suggestion to the authors would be to attempt to make the writing clearer and accessible to a BCI audience by retaining the most important details and experiments in the main text and relegating some of the others to the appendix.
Given the number of ablations, it gets confusing to remember and map the abbreviated variants of FDA to particular components being ablated, especially when looking at figures. Some of the table captions, such as that of Table S9, could be made more descriptive. I had to go back to the main text to understand what I was looking at. Also, some figures lack certain axes labels, such as Figure S4, which makes it hard to interpret them. Questions For Authors: Apart from my comments above, here are some questions:

* In some of the plots, such as Figure 4(b) and Figure S6, why does the $R^2$ drop as the number of samples for finetuning increases?
* How do the authors decide the window size $w$ for the context windows?
* In some cases, performance on the random target task is better than performance on centre-out. Do the authors have any intuition for why that might be the case? In general, decoding in RT tasks is more difficult than CO, and RT is a harder task for subjects to do as well.
* The authors randomise the selection of few-shot finetuning trials, but could they comment on what level of diversity in the trials is desirable? For example, if all the finetuning trials sampled were associated with just one of the centre-out targets, I would expect the performance to be low – although in the real-world setting, one could argue that the experimenter can collect calibration data with some diversity.
* I recall reading that the latent state was read out using a linear decoder somewhere in the paper, but could the authors clarify if my understanding is correct? This could be made clearer.
* What are the details of the hardware used for the computational efficiency benchmarking? The time in seconds seems very low if that's the training time. Is it the total time taken or the time taken for 1 step/epoch?
* Can the authors comment on the computational efficiency in terms of inference time? This is important given that the causal formulation allows for real-time inference.
Code Of Conduct: Affirmed. Overall Recommendation: 3
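For readers outside BCI decoding, the $R^2$ metric referenced throughout these reviews can be computed as below. Averaging the coefficient of determination over output dimensions is one common convention for 2-D cursor/hand-velocity decoding; the paper's exact implementation may differ:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination per output dimension, then averaged
    (a common convention for 2-D velocity decoding; illustrative only)."""
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

# 4 time steps x 2 velocity dimensions.
y = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 5.0], [3.0, 7.0]])
mean_pred = np.tile(y.mean(axis=0), (len(y), 1))
assert r2_score(y, y) == 1.0            # perfect decoder
assert abs(r2_score(y, mean_pred)) < 1e-12  # mean predictor scores 0
```

Note that $R^2$ can be arbitrarily negative, which is why zero-shot transfer scores such as those reported for NDT-2 below can fall well under 0.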
Rebuttal 1: Rebuttal: We sincerely thank you for your careful review and recognition of our work. Below, we provide a point-by-point reply to your concerns.

- Additional comparison with NDT-2

We conducted an additional comparison against NDT-2 by pre-training it on a single session with supervised readout training and evaluating its zero-shot performance without alignment. The average $R^2$ scores on the CO-C and CO-M datasets are presented in the table below. While NDT-2 achieved high decoding performance in the pre-trained session, its zero-shot performance degraded significantly. This decline may be attributed to its reliance on larger datasets to achieve robust zero-shot transfer.

#### Average $R^2$ scores (%) of NDT-2

| Data | intra-session | inter-session |
|:------:|:-----------------:|:-----------------:|
| CO-C | 83.85 ± 1.49 | -28.59 ± 0.49 |
| CO-M | 82.09 ± 1.06 | -35.69 ± 2.06 |

- About the supervised alignment by NDT-2 and MtM

Thank you for the valuable comment. We agree that alignment of large-scale models isn't necessary in the cases we have presented in this study. What we meant is that the finetuning of MtM and NDT-2 can begin with self-supervised techniques, such as masked reconstruction. Then, for different downstream tasks, specific decoders are trained using only a few target labels, while all other weights remain fixed. This supervised alignment enables rapid transfer to tasks that were not encountered during pre-training.

- About other BCI tasks and cross-task/subject adaptation

Thanks for the comment. We agree that this presents interesting directions for further study.

### Questions:

- About the drop of $R^2$ with more finetuning samples

The curve in Fig.4(b) and Fig.S6 is based on a single random run, and the observed decrease may be due to the small trial gap (~2 trials between adjacent points). Moreover, the overall trend shows an increase in $R^2$ as the number of finetuning samples grows, as demonstrated in Fig.3(c) on Page 6.
- About the choice of window size $w$

Thanks. We conducted a grid search to determine the appropriate context window size $w$. As shown in Table S11 on Page 21, balancing performance and computational efficiency, we selected 5 as the default size for the CO-M and RT-M datasets.

- About better $R^2$ on RT than CO tasks

The better performance on RT tasks may be attributed to differences in signal quality between subjects. For the same subject (Monkey M), performance on CO tasks is better than on RT tasks, as shown in the original Fig.3(b) on Page 6.

- About the desirable level of diversity in trials

We conducted additional analyses on selected sessions of CO-M. The diversity was categorized into three levels: small (1–2 distinct targets), medium (3–4 targets), and large (all distinct targets). The average $R^2$ for FDA-MMD is presented in the table below. Our results indicate that a medium level of diversity achieves desirable performance, while too little diversity negatively impacts performance.

#### Average $R^2$ scores (%) for FDA-MMD on CO-M

| Trial Diversity | Day14 | Day15 | Day28 | Day32 |
|:------:|:------------------:|:------------------:|:------------------:|:------------------:|
| Small | 58.93 ± 1.10 | 45.86 ± 1.81 | 50.90 ± 1.70 | 43.70 ± 2.60 |
| Medium | 57.01 ± 1.80 | 57.26 ± 1.20 | 55.46 ± 1.00 | 49.19 ± 1.49 |
| Large | 65.60 ± 2.84 | 59.49 ± 0.46 | 59.74 ± 1.33 | 53.23 ± 1.09 |

- About the linear decoder

The linear decoder was mentioned in Line 162 (right) on Page 3, and this will be made clearer in the revision.

- About details of hardware and reported computational efficiency

The hardware used was an NVIDIA GeForce RTX 3080 Ti (12GB). The training time reported in Table S9 on Page 20 summarizes the time per epoch for both pre-training and fine-tuning. A more detailed comparison of total time (in seconds) is presented below.
| | ERDiff | Cycle-GAN | NoMAD | FDA-MLA | FDA-MMD |
|:-------------------:|:-------:|:---------:|:------:|:-------:|:-------:|
| Pre-training | 2449.68 | - | 135.34 | 98.95 | 98.95 |
| Fine-tuning | 1.21 | 11.90 | 76.83 | 2.32 | 2.38 |

- About the efficiency of inference

We conducted further analysis of FDA's inference time on an NVIDIA GeForce GTX 1080 Ti (11GB). As presented in the table below, average inference time for a single window is approximately 4 ms, making it suitable for real-time applications.

#### Inference time of FDA (ms)

| Data | FDA-MLA | FDA-MMD |
|:------:|:-------:|:-------:|
| Avg | 3.90 | 3.97 |

We hope that these responses may address your concerns. We believe that our novel FDA framework will be of significant interest to the ICML community, given its potential impact on few-trial neural alignment and real-world BCI reliability. Could you please consider raising the scores? We look forward to your further feedback. Thank you in advance.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response.

> Additional comparison with NDT-2

This is not really a fair comparison. The comparison should be done with a pre-trained NDT-2 model that is only fine-tuned few-shot to the target session using some data. The rationale behind this is that powerful pre-trained models are available publicly and can be fine-tuned for specific use cases with very little labelled data. The experiment done here misses this key ingredient and is against the reasoning behind large-scale decoding approaches. Considering POYO for example, a pre-trained model may be fine-tuned to new sessions few-shot using very little labelled data, and by only re-training embeddings for neuron/unit identity (unit re-identification pipeline in POYO, see some results [here](https://poyo-brain.github.io)). This still leads to good performance even on unseen animals.
> About the drop of $R^2$ with more finetuning samples

Shouldn't the authors be including error bars on these plots, and showing the (mean +/- std/SEM) improvement across several runs/seeds or even across days instead of for a particular day? I think this would be important for all figures (other ones seem to have error bars). If the same trend is still captured, then it is not simply due to randomness but a failure case of the method, where it doesn't work as effectively for a larger number of finetuning samples – which would be important to understand.

> About better $R^2$ on RT than CO tasks

Yes, I understand that and have looked at Figure 3, however my point is mainly about Figure 4 in this case, where there is about 0.1 higher $R^2$ for the RT task. I am curious if the authors could explain this in terms of the number of neurons in the recording or other metrics. I'm also curious why the plots are for CO-M Day 31 in (a) and Day 29 in (b), while RT remains the same (Day 52).

I acknowledge the additional experiments on finetuning trial diversity and benchmarking – these would be good additions to the paper. I still maintain my positive opinion overall and lean towards acceptance, but I will retain my score.

---

Reply to Comment 1.1.1: Comment: We are grateful for your further feedback and provide our responses as follows.

- Additional comparison with NDT-2

Thank you for the suggestion. For a fairer comparison, we utilized 47 sessions recorded from the motor cortex of two monkeys, available via the external link (https://zenodo.org/records/3854034), as well as datasets provided by the Neural Latents Benchmark (https://neurallatents.github.io/) for pre-training. Subsequently, supervised fine-tuning was conducted using 80% of the trials from a source session of the CO-M/RT-M datasets, followed by zero-shot evaluation on the remaining target sessions. The average $R^2$ scores are presented in the table below.
While NDT-2 achieved higher performance in intra-session decoding on the source session, its performance degraded substantially during zero-shot evaluation. This suggests that NDT-2 may be more effective when fine-tuned with data from the target sessions.

#### Average $R^2$ scores (%) of NDT-2

| Data | intra-session | inter-session |
|:------|:-------------------:|:------------------:|
| CO-C | 89.82 ± 1.18 | -34.70 ± 0.32 |
| CO-M | 93.75 ± 1.51 | -52.31 ± 0.16 |

- About the drop of $R^2$ with more finetuning samples

We further analyzed the average $R^2$ scores of FDA-MMD across multiple random seeds for each target session under varying training ratios $r$ on the CO-M and RT-M datasets. The detailed results are provided in the tables below. A general trend of increasing $R^2$ with larger fine-tuning sample sizes was observed across most sessions. The drop in $R^2$ on Day 52 (RT-M) is attributed to FDA-MMD achieving similar performance regardless of the number of fine-tuning samples, making it an exceptional case. Additional results, showing the (mean +/- std/SEM) improvement, will be annotated in our original figures.
#### Average $R^2$ scores (%) on CO-M dataset under varying $r$

| $r$ | Day 8 | Day 14 | Day 15 | Day 22 | Day 24 | Day 25 | Day 28 | Day 29 | Day 31 | Day 32 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0.02 | 45.23 ± 4.44 | 55.90 ± 3.17 | 49.55 ± 3.41 | 27.35 ± 7.34 | 51.28 ± 2.53 | 36.79 ± 4.12 | 54.87 ± 4.40 | 41.26 ± 5.70 | 57.10 ± 3.24 | 44.66 ± 4.41 |
| 0.03 | 44.47 ± 3.31 | 59.18 ± 4.24 | 52.90 ± 4.62 | 40.58 ± 4.98 | 55.94 ± 1.78 | 41.10 ± 2.95 | 57.93 ± 4.03 | 39.56 ± 6.33 | 59.15 ± 1.77 | 48.08 ± 2.90 |
| 0.04 | 46.68 ± 2.44 | 60.35 ± 5.54 | 53.18 ± 5.23 | 42.89 ± 4.10 | 59.48 ± 2.23 | 45.84 ± 2.40 | 59.97 ± 2.62 | 42.66 ± 5.25 | 61.03 ± 1.72 | 49.80 ± 3.28 |
| 0.06 | 49.96 ± 3.43 | 60.48 ± 5.33 | 52.53 ± 5.18 | 43.19 ± 3.64 | 59.19 ± 2.65 | 49.29 ± 3.97 | 61.25 ± 2.38 | 45.06 ± 3.81 | 63.31 ± 2.97 | 51.27 ± 2.66 |

#### Average $R^2$ scores (%) on RT-M dataset under varying $r$

| $r$ | Day 1 | Day 38 | Day 39 | Day 40 | Day 52 | Day 53 | Day 67 | Day 69 | Day 77 | Day 79 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0.02 | 74.32 ± 2.25 | 55.39 ± 2.80 | 40.44 ± 7.31 | 39.85 ± 3.27 | 44.99 ± 4.96 | 50.03 ± 4.44 | 50.29 ± 5.07 | 39.19 ± 4.07 | 16.67 ± 9.32 | 38.99 ± 5.70 |
| 0.03 | 73.86 ± 3.20 | 58.12 ± 2.75 | 41.61 ± 5.91 | 41.88 ± 3.17 | 44.87 ± 5.25 | 52.17 ± 0.71 | 51.08 ± 6.30 | 43.40 ± 3.84 | 20.51 ± 8.44 | 41.95 ± 5.26 |
| 0.04 | 74.57 ± 2.45 | 58.79 ± 2.71 | 41.39 ± 6.34 | 42.20 ± 4.26 | 45.09 ± 5.13 | 53.39 ± 1.56 | 51.27 ± 6.91 | 45.08 ± 3.98 | 22.68 ± 7.64 | 41.69 ± 5.51 |
| 0.06 | 74.98 ± 1.93 | 58.97 ± 1.54 | 43.82 ± 6.86 | 43.50 ± 4.54 | 45.03 ± 5.34 | 53.76 ± 1.35 | 52.65 ± 5.39 | 44.26 ± 4.33 | 29.05 ± 7.01 | 48.94 ± 5.36 |

- About better $R^2$ on RT than CO tasks

We further analyzed the mutual information (MI) between spiking recordings from individual channels and calculated the maximum MI values across channels to assess the similarity between the source and target sessions.
We found that Days 29 and 31 from CO-M achieved an average maximum MI of about 6e-4, which is significantly lower than the 2e-3 observed on Day 52 (RT-M). Additionally, we observed that the number of valid channels differs between the CO-M sessions (95 for the source and 96 for the target), whereas RT-M sessions remain consistent. This suggests less neuronal overlap with the source session, resulting in worse decoding performance. Moreover, relevant figures for different days (Day 67 and Day 69) of the RT-M dataset are presented in Figure S6(a) on Page 20. Thanks once again for your valuable feedback. We look forward to any additional questions you may have.
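The mutual-information check described above can be reproduced in spirit with a simple plug-in (histogram) estimator. The binning choice and the pairing of source/target channel counts below are illustrative assumptions, not the authors' exact analysis:

```python
import numpy as np

def plugin_mi(x, y, bins=8):
    """Plug-in (histogram) estimate of mutual information I(X; Y) in nats
    between two paired 1-D sequences, e.g. binned spike counts from a
    source-session channel and a target-session channel."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Identical channels share maximal information; shuffling one sequence
# destroys the pairing and drives the estimate toward zero.
rng = np.random.default_rng(1)
x = rng.poisson(5.0, size=5000).astype(float)
assert plugin_mi(x, x) > plugin_mi(x, rng.permutation(x)) >= 0.0
```

Plug-in estimates are biased upward for finite samples, so a shuffled baseline is a useful sanity check when comparing small MI values like those reported above.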
Summary: The work proposes to utilize Flow-based distribution alignment (FDA) to learn flexible neural representations with stable latent dynamics, and performs source-free alignment with likelihood maximization. The authors additionally performed theoretical analysis on the stability of latent dynamics, and conducted experiments on multiple motor cortex datasets. Claims And Evidence: The claim that BCIs trained on one day usually obtain degraded performance on other days due to the nonstationary property of neural signals makes sense. Actually, the signal pattern could be quite different even across different sessions within the same day. Methods And Evaluation Criteria: The method of using flow-based distribution alignment to tackle the variability in signal decoding looks pretty novel to me. However, stronger motivation is needed for the proposed approach. In addition to the general description that FDA fits the problem well, readers would want to know what exact benefits the mechanism brings to tackling the problem, and what functionality makes the method specifically fit the problem. Theoretical Claims: The theoretical analysis on the stability of latent dynamics is provided in the manuscript; I didn't check the details of its correctness. Experimental Designs Or Analyses: The authors performed a pretty detailed comparison with numerous existing baselines, on top of three real-world datasets. Additionally, an ablation study covering different alignment strategies and the contribution of each component is analyzed. Overall, the experiments look pretty solid. I would appreciate it if the authors could provide a more intuitive visualization of the flow matching process. Supplementary Material: N/A Relation To Broader Scientific Literature: The method could be useful in numerous downstream applications utilizing BCI systems for rehabilitation and other healthcare purposes.
Essential References Not Discussed: The work tackles the variability issue of brain signal decoding. While the work provided a pretty nice review on the method side, including neural representation alignment and normalizing flows, I found related works that specifically tackle the variability of brain signal decoding currently missing, including [1][2] etc. [1] Distributionally Robust Cross Subject EEG Decoding, ECAI 2023 [2] UNCER: A framework for uncertainty estimation and reduction in neural decoding of EEG signals, Neurocomputing 2023 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: More annotations are needed in Fig. 2 for readers to properly understand the mechanism of the approach. For example, what do X^S and X^T respectively stand for? Similarly for C^S and C^T. And what is the relationship between C^S and v_{\theta}? More explanation is needed on why the target distribution p1, representing the desired neural representation, can be defined by a random variable z^S(1). Questions For Authors: Please see above comments Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your careful review and recognition of our work. Below, we provide a point-by-point reply to your concerns. Due to the space limit, all figures prefixed with 'R' below are available at the external link [https://drive.google.com/file/d/129vv370SF4RLanLj92-lzh_vMmkeDCve/view?pli=1]. ### Methods And Evaluation Criteria - About the motivation Our FDA successfully addresses the issues of significant degradation in zero-shot performance and instability in few-trial alignment, achieving stable latent feature extraction and reliable alignment with few trials. On one hand, flow-based learning regulates feature deviations through stable latent dynamics, preserving essential semantic information from the source domain. This mechanism enforces bounded variations in latent features under perturbations of the target inputs. Empirical validation of zero-shot transfer performance (original Table S8 on Page 18) highlights the advantage of this stability, further enhancing efficient few-shot adaptation. On the other hand, by leveraging the unique properties of flows, FDA guarantees stable few-trial fine-tuning. The use of explicit log-probability objectives and flexible latent-space modeling enables stable parameter gradients, effectively preventing the catastrophic overfitting typically seen in few-trial adaptation. ### Experimental Designs Or Analyses - About more intuitive visualization of the flow matching process Thank you for the suggestion. We have provided a more intuitive visualization of the flow matching process in Fig.R4. As shown in this figure, our flow matching process learns latent variables from neural signals in a coarse-to-fine manner, differing from conventional one-step extraction. The flow transforms noisy variables $\mathbf{z}(0)$ into neural representations $\mathbf{z}(1)$, guided by conditional features derived from neural signals.
Simultaneously, the corresponding prior distribution $p_0$ is transformed into the target distribution $p_1$. The stable latent dynamics of the learning process ensure that latent factors with similar labels flow toward consistent neural representations, even when guided by shifted neural signals. ### Essential References Not Discussed - About the missing related works that specifically tackle the variability of brain signal decoding Thanks a lot for the suggestion. We will include more related works that specifically tackle the variability of brain signal decoding in the revision. ### Other Comments Or Suggestions: - About more annotations in Fig. 2, e.g., what X^S and X^T respectively stand for, similarly for C^S and C^T, and what the relationship is between C^S and v_{\theta} Thank you for the valuable comment. We have added more annotations, and the revised version is provided as Fig.R2. Here, $\mathbf{x}^S$ and $\mathbf{x}^T$ represent input tokens derived from short-term windows of the source and target domains, with each token corresponding to spikes from a single channel. Additionally, $\mathbf{c}^S$ and $\mathbf{c}^T$ denote the output conditional features of the transformer-based $f_{\alpha}$, which are utilized to guide the flow of $\mathbf{z}$. As illustrated in Fig.R4, we parameterize the guided velocity field of $\mathbf{z}$ as $v_{\theta}$, which determines the velocity and direction of $\mathbf{z}$. $\mathbf{c}^S$ further serves as the input to the neural network responsible for predicting $v_{\theta}$. - About more explanation on why the target distribution p1, representing the desired neural representation, can be defined by a random variable z^S(1) Thank you for raising this point. To achieve optimal decoding performance, it is preferable to define the target distribution by a random variable that can then be transformed into the ground-truth labels through the decoder.
When weights of the linear decoder are fixed, the desired $\mathbf{z}^S(1)$ can be obtained via a linear transformation using the inverse of the decoder's weight matrix. The neural representation thus defined can subsequently be transformed into the desired labels through the decoder. Related explanation will be included in the revision. We sincerely hope that these responses may address your concerns. We believe that our novel FDA framework will be of significant interest to the ICML community, given its potential impact on few-trial neural alignment and real-world BCI reliability. Could you please consider raising the scores? We look forward to your further feedback. Thank you in advance.
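To make the flow-matching construction and the decoder-inverse target discussed in this rebuttal concrete, here is a minimal numpy sketch; all sizes and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# With a fixed linear decoder y = W z, the desired representation z(1)
# can be obtained by inverting the decoder weights (pseudo-inverse here).
d_z, d_y = 3, 2                       # illustrative latent/label sizes
W = rng.normal(size=(d_y, d_z))       # fixed decoder weight matrix
y = rng.normal(size=d_y)              # ground-truth label
z1 = np.linalg.pinv(W) @ y            # desired neural representation z(1)
assert np.allclose(W @ z1, y)         # decoding z(1) recovers the label

# One flow-matching training pair: interpolate a noisy variable z(0)
# toward z(1) along a straight path; the regression target for the
# velocity network v_theta (conditioned on features c) is z(1) - z(0).
z0 = rng.normal(size=d_z)             # noisy starting variable z(0)
tau = rng.uniform()                   # learning step tau in [0, 1]
z_tau = (1 - tau) * z0 + tau * z1     # point on the straight path
v_target = z1 - z0                    # constant velocity along the path
```

At inference, integrating the learned velocity field from tau = 0 to 1 transports noise to the desired representation, which the fixed decoder then maps to labels.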
Summary: The paper introduces a new approach for learning and aligning neural representations across multiple sessions to link neural activity with behavioral actions. Particularly, the authors present a neural decoder that aligns recordings from different sessions (e.g., over multiple days) based on flow matching in latent space. This method is particularly useful when only a few trials are available for alignment, a common issue in brain-computer interfaces. The paper also provides a theoretical proof for the stability of the approach using Lyapunov exponents. The authors demonstrate the effectiveness of their method on various monkey neural recording datasets and benchmark it against existing approaches. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, please also see the 2nd point in strengths and weaknesses. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes, please also see the 2nd point in strengths and weaknesses. Supplementary Material: I skimmed through, focused on the data description and pre-processing. Relation To Broader Scientific Literature: The paper addresses an important challenge in modeling neural dynamics across sessions with limited trial data available in each session. The authors explain how their method addresses this issue and compare its performance to several benchmark approaches using rich monkey datasets. Essential References Not Discussed: No, I think that overall the related work section was written well Other Strengths And Weaknesses: ## Strengths: The paper is overall well-written and addresses a very important problem. The authors provide the assumptions, mathematical developments, and theory to support their claims and effectively motivate the need for their model. I also appreciate the supplementary material, which further supports the paper with helpful additional explanations. ## Weaknesses: My main two concerns are: 1) Model interpretability, and 2) Lack of evaluation on synthetic data. 
Particularly: 1) The model is very complex, as it includes the tuning of multiple networks, which are themselves not very interpretable. In other words, the fundamental components of the model may be very hard to understand or explain in relation to neural dynamics. Given that the paper focuses on BCIs that could ultimately be used in clinical settings, it’s hard to believe clinicians would trust a model that is not easily interpretable. Hence, I wonder if there was any effort from the authors to interpret the components themselves (e.g., f_alpha) to provide some understanding of what features they emphasize, etc. 2) The authors demonstrate their method only on real-world monkey data, but they do not show recovery of ground-truth latent variables in synthetic data, which I believe is crucial to show that the model can truly recover the real underlying components. Specifically, I am referring to a test beyond just evaluating R² of reconstruction—creating simulated data with real known z, generating “observations” from this z, and then showing that the model can recover Z. Without this, while the model provides good reconstruction as demonstrated in the results, we cannot be sure it can recover the real underlying latent dynamics. 3) How realistic is the assumption made on lines 196-197 regarding the z evolving linearly and monotonically between 0 and 1? I assume this choice is meant to regularize the model’s development of z, but some intuition for its implications (perhaps in the discussion) would be helpful. I also have smaller concerns listed under “Other Comments or Suggestions". Other Comments Or Suggestions: 1) I miss a comma before “resulting” in line 11 (right). 2) In line 111 (left), you mention that the labels are in $ \mathbb{R}^d $; it would be helpful to clarify from the beginning that labels can be multi-dimensional (and also explain what $ d $ refers to). 
3) The notation for $ z^{S}(\tau) $ is a bit confusing since for $ x $, the input in brackets was the channel number, and the subscript referred to time. Why not be consistent by using the subscript for time here as well? 4) In line 181, you mention the scale coefficient, but I cannot see where it is used (at least in the main text). Are you sure you defined that? 5) In line 191 and Equations 5 and 7, is it all $ \ell_2 $ distance? Why not to clarify it is $\ell_2$ in the subscript to distinguish it from other norms (especially since you use other norms later)? 6) I believe lines 236-242 (left) would be better placed in the related work section. 7) It seems that the authors frame the motivation for the paper around BCIs; however, the model can also work on non-BCI neural recordings (e.g., neuropixels), as demonstrated on some of the datasets. It is unclear to me why the authors chose to focus mainly on BCIs when the model doesn’t seem to integrate unique BCI features (e.g., feedback). Questions For Authors: 1. How do you define a session? Do different sessions necessarily come from the same subject, or can they involve different subjects with the model learning a unified representation for them? Is it always the same subset of neurons observed across sessions? I think a better clarification of how sessions can vary (e.g., subject identity, observed neurons) would be helpful. Based on lines 132-133, it sounds like your approach should adapt to changes in channels, but I cannot see how that is expressed in the math/model. 2. In real-world scenarios, if the recording device is re-inserted across sessions, a different subset of neurons may be captured, which is a common challenge in neuroscience. Can your model be extended or generalized to cases where a potentially non-overlapping subset of neurons is observed across sessions (e.g., as done in [1])? 
(I don’t expect the authors to change the model in this paper's scope, but it might be worth discussing in the discussion section) 3. It is not clear to me from line 109 if there is overlap between windows, or what you mean by “one step later” (line 110). Do you mean 1 sample after, based on the sampling rate? And does the overlap between windows depend on the sampling rate, or how else do you define the “step”? 4. Why is $ \tau $ restricted to the 0-1 range? Is it just due to the normalization of each trial to this range, which makes it easier to handle varying durations? I would suggest adding a short explanation of that in line 148. 5. How is $ \eta $ defined? How sensitive is the model to different choices of $ \eta $? 6. For the model to work, do the neurons need to be recorded simultaneously within each session? 7. Would the model work for brain areas (e.g., hippocampus) or animals (e.g., bats) with very sparse firing patterns, whose firing rates may not be well approximated by normal statistics? Would the model work for Poisson statistics data, which is more realistic for certain neural firing patterns? 8. Where is Monkey C in figure 3? [1] Mudrik, N., Ly, R., Ruebel, O., & Charles, A. S. CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations. In The Thirteenth International Conference on Learning Representations. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and recognition of our work. Below we provide a detailed response to your concerns, with reference figures in the supplementary material [https://drive.google.com/file/d/129vv370SF4RLanLj92-lzh_vMmkeDCve/view?pli=1], indicated by Fig.R. ### Weaknesses - Model interpretability The latent features effectively capture the evolved state within neural dynamics. We employ a transformer-based network $f_\alpha$ to extract latent features from signal windows. Given the challenges in interpreting the meaning of these features for realistic scenarios, our current focus is on the effectiveness of FDA for few-trial adaptation. We demonstrate the successful extraction of the Lorenz attractor from synthetic spiking data using our method below. We will explore the relationship between the latent features and underlying neural dynamics across various cases in the future. - Synthetic data We conducted experiments to evaluate the recovery of ground-truth latent variables in synthetic data. Following the method in (Kapoor, Jaivardhan, et al. Latent diffusion for neural spiking data. NeurIPS 2024: 118119), we used the Lorenz attractor as the latent dynamics. We simulated firing rates as an affine transformation of the 3D latent variables into a 96-dimensional space, then sampled spike trains from a Poisson distribution. As shown in the table, our FDA successfully recovered the latent dynamics from synthetic spiking data. The visualizations of our decoded 3D trajectories in Fig.R1 confirmed that FDA effectively captured the neural dynamics.

#### Average $R^2$ (%) on the recovered latent variables

| Mean Firing Rates | 0.05 | 0.1 | 0.3 |
|:------:|:---------------:|:---------------:|:---------------:|
| $R^2$ | 95.43 ± 0.87 | 95.68 ± 1.07 | 95.24 ± 1.03 |

- About z (0-1) The evolution of z only represents the iterative learning process.
The temporal evolution of neural dynamics is characterized by the shift of short-term windows (as explained in Lines 106-108 on Page 2). ### Comments: - We will clarify the multi-dimensional labels in the revision. - Here $\tau$ represents the iterative learning steps instead of the temporal evolution of neural signals. To avoid confusion, we therefore used a distinct notation $\tau$. - The scale coefficient represents the shift parameter of z generated by the MLP for predicting the velocity field. - Yes, it is all $l_2$ distance and will be standardized. - We find that our flow model can be well-suited to both few-trial adaptation in BCIs and non-BCI recordings. The flow process can include feedback as conditional features to guide the flow of subsequent windows. ### Questions: - Session definition The sessions used in experiments were from the same subject. The recorded neurons across sessions largely originate from the same subset. Moreover, as shown in Fig.R2, the transformer-based $f_{\alpha}$ is adaptable to varying numbers of tokens, corresponding to different channel counts. - Non-overlapping subset Given the widespread application of transformers for universal representation from neurons across regions, we believe our model has the potential to be extended to cases with non-overlapping neuron subsets across sessions. We agree that this is an interesting direction for future study. - One step later Yes, "one step later" refers to the next sample based on the sampling rate. The overlap between windows depends on the sampling rate, ensuring consistent time steps between the decoded and ground-truth variables. - $\tau$ (0-1) As explained above, $\tau$ represents the iterative learning step of z to obtain latent features, differing from the temporal evolution of neural signals. - Choice of $\eta$ $\eta$ is pre-initialized using Xavier initialization.
We tested various choices via different random seeds, and observed no significant impact on the final results (as shown in Table 1 on Page 7). - Simultaneous neurons The current model directly utilized temporal structures from single-channel recordings as input tokens for conditional features. Therefore, it is preferable for neurons to be recorded simultaneously within each session. - Poisson statistics As shown in the table above, the model demonstrated adaptability to Poisson synthetic data with various firing rates. Furthermore, as illustrated by the coefficient of variation (CV) distribution in Fig.R3, some real data we used exhibits characteristics of Poisson distributions. - Monkey C The figure for Monkey C is shown in Fig.S1(a). We sincerely hope that these responses may address your concerns. We believe that our novel FDA framework will be of significant interest to the ICML community, given its potential impact on few-trial neural alignment and real-world BCI reliability. Could you please consider raising the scores? We look forward to your further feedback. Thank you in advance. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I believe that the additional synthetic data experiment is helpful.
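The synthetic-data check described in this rebuttal (Lorenz latents, an affine map into 96 channels, Poisson spike sampling) could be sketched as follows; the exponential link and all constants here are illustrative assumptions rather than the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz system (toy latent dynamics)."""
    z = np.empty((n_steps, 3))
    x, y, w = 1.0, 1.0, 1.0
    for t in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - w) - y
        dw = x * y - beta * w
        x, y, w = x + dt * dx, y + dt * dy, w + dt * dw
        z[t] = (x, y, w)
    return z

n_steps, n_channels = 1000, 96
latents = lorenz(n_steps)                          # 3-D ground-truth latents
latents = (latents - latents.mean(0)) / latents.std(0)

# Map latents to per-channel rates; an exponential link keeps rates
# non-negative (our assumption -- the rebuttal only states an affine map).
A = 0.3 * rng.normal(size=(3, n_channels))
rates = np.exp(latents @ A - 1.0)
rates *= 0.05 / rates.mean()                       # match a mean rate of 0.05
spikes = rng.poisson(rates)                        # (1000, 96) spike counts
```

A model would then be evaluated by decoding the 3-D latents back from `spikes` and reporting R² against `latents`, as in the rebuttal's table.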
Improved Convex Decomposition with Ensembling and Boolean Primitives
Reject
Summary: This paper proposes a novel method for representing scenes using convex primitives enhanced with a Boolean (set-difference) operation. In contrast to prior work that uses a fixed number of primitives, the authors introduce an ensembling strategy to select an adaptive number of positive and negative primitives per scene. The approach combines a neural prediction stage (using an encoder–decoder architecture based on ResNet-18) with a descent-based polishing procedure to refine the primitive parameters. Experiments on NYUv2 and a large LAION image collection show significant improvements over previous state-of-the-art methods in depth estimation, normal prediction, and segmentation accuracy. Claims And Evidence: Claims: + The paper claims that incorporating negative primitives via set-differencing enhances the representational capacity of convex decompositions. + It further claims that an adaptive ensembling strategy, which selects the optimal number of primitives, leads to substantial improvements over fixed-primitive approaches. Evidence: + Extensive quantitative evaluations on NYUv2 demonstrate improvements in depth error (AbsRel) and normal prediction metrics compared to baseline methods. + Qualitative comparisons (e.g., visualizations of segmentation and face labels) support the quantitative results. Methods And Evaluation Criteria: Methods: + The method uses a two-stage process: an initial prediction via a ResNet-18-based network, followed by a gradient descent “polishing” procedure to minimize a loss computed from depth, normals, and segmentation. + The paper introduces Boolean primitives (negative primitives) to “carve out” complex geometries, which is novel in the context of convex decomposition. Evaluation: + Standard metrics for depth (AbsRel, AUC at various thresholds), normals (mean/median angle errors), and an oracle segmentation metric are used for evaluation. 
+ Comparisons are made with state-of-the-art methods on both indoor (NYUv2) and in-the-wild (LAION) datasets. Theoretical Claims: I think it's correct. Experimental Designs Or Analyses: Design: + The experimental design is thorough, comparing different configurations (varying total primitives and numbers of negatives) to validate the method’s robustness. + Ablation studies on the effect of the polishing procedure versus a pure descent baseline are well presented. Analysis: + Error metrics are reported comprehensively, and the analyses convincingly demonstrate the benefits of the ensembling strategy and the use of negative primitives. Supplementary Material: + The supplementary material (referenced in the main text) appears to include additional ablation studies, visualizations, and detailed breakdowns of runtime/memory usage. + These additional details help reinforce the main experimental findings and provide useful context for the implementation details. + But no demo submitted. Relation To Broader Scientific Literature: The paper is well situated within the literature on primitive-based scene representation, convex decomposition, and constructive solid geometry. Essential References Not Discussed: The authors not discussed the monocular depth/normal estimation model, like Marigold [1], GeoWizard [2], Depth-anything [3] and so on. [1] Ke B, Obukhov A, Huang S, et al. Repurposing diffusion-based image generators for monocular depth estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 9492-9502. [2] Fu X, Yin W, Hu M, et al. Geowizard: Unleashing the diffusion priors for 3d geometry estimation from a single image[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 241-258. [3] Yang L, Kang B, Huang Z, et al. Depth anything: Unleashing the power of large-scale unlabeled data[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 10371-10381. 
Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for reading our paper and offering positive feedback. ## 8. Cost of Ensembling Please see __Tables 1 and 3__ for detailed timing breakdowns. Individual models we trained require between 0.84 and 2.06 seconds. This is over an order of magnitude faster than prior work (40 seconds), while simultaneously achieving better error metrics. For $K^{total}=12$, our method is more parameter-efficient. We agree that real-time primitives are the future - this work takes a significant step in that direction. While there is a fixed cost in generating primitives and rendering them, applying post-training finetuning dominates the compute time and can be varied in length based on desired latency requirements - see __Fig. 6, 13, and 14__ for tradeoffs. ## 9. Optimal Ratio of Negative Primitives in CSG Modeling In Constructive Solid Geometry (CSG), we represent 3D objects using boolean operations on primitive shapes. For a fixed budget of $K^{total}$ primitives, we aim to determine the optimal ratio of negative primitives ($K^-$) to positive primitives ($K^+$) that maximizes representational efficiency. A CSG model can be described as: $$\text{Object} = (P_1 \cup P_2 \cup ... \cup P_{K^+}) - (N_1 \cup N_2 \cup ... \cup N_{K^-})$$ Where $P_i$ are positive primitives and $N_j$ are negative primitives, with $K^+ + K^- = K^{total}$. __Definitions:__ - **Primitive Interaction**: Overlapping volumes creating representational complexity - **PP Interaction**: Between two positive primitives - **PN Interaction**: Between a positive and negative primitive __Assumptions:__ 1. Only positive volumes and their modifications by negative primitives are visible in the final result 2. The representational power comes primarily from PP and PN interactions 3. Primitives are distributed to maximize meaningful interactions 4. Optimal representation maximizes visible features per primitive used 5. Assume connected geometry __Mathematical Model:__ 1.
**PP Interactions**: $\binom{K^+}{2} = \frac{K^+(K^+-1)}{2}$ 2. **PN Interactions**: $K^+ \cdot K^-$ __Balancing PP and PN Interactions__ For optimal efficiency, PP and PN interactions should be balanced: $$\frac{K^+(K^+-1)}{2} \approx K^+ \cdot K^-$$ Substituting $K^+ = K^{total} - K^-$ and simplifying: $$\frac{(K^{total} - K^-)(K^{total} - K^- - 1)}{2} \approx (K^{total} - K^-) \cdot K^-$$ $$\frac{K^{total} - K^- - 1}{2} \approx K^-$$ For large $K^{total}$: $$\frac{K^{total} - K^-}{2} \approx K^-$$ Solving: $$K^{total} \approx 3K^-$$ $$K^- \approx \frac{K^{total}}{3}$$ Thus, $K^+ \approx \frac{2K^{total}}{3}$ __Verification__ With this ratio, both interaction types equal approximately $\frac{2(K^{total})^2}{9}$, confirming our balance criterion. ### Why This Balance Is Likely Reliable 1. **Diminishing Returns**: As $K^-$ increases beyond the optimal ratio, each additional negative primitive becomes less effective because: - Negative primitives can only remove existing positive volume - Available positive volume decreases with fewer positive primitives 2. **Complementary Information**: PP and PN interactions contribute equally valuable but different information: - PP interactions define the overall positive volume - PN interactions create necessary concavities and details 3. **Maximum Information Content**: The ratio $K^- = K^{total}/3$ provides: - Sufficient positive primitives to establish base structure - Optimal negative primitives to efficiently carve features - Maximum meaningful visible interactions per primitive ### Empirical Evidence and Practical Verification Our quantitative evaluation supports an optimal $K^- ≈ K^{total}/3$ in __Tables 4-7__. Observe how depth and segmentation metrics tend to be highest when $K^{total}/K^-$ are near __36__/_12_, __24__/_8_, and __12__/_4_. ## 10. 
Depth Estimation Model NYUv2 has supervised depth and camera calibration parameters; for our method to work on real-world scenes, we need to extract a point cloud from in-the-wild images. To do so, we select a recent SOTA depth estimation model <https://github.com/DepthAnything/Depth-Anything-V2/tree/main/metric_depth>. We then make reasonable camera calibration assumptions to obtain a point cloud (see Sec. 4.3). We anticipate as better depth estimation models become available, our model will naturally generate better primitives. ## 11. Missing References Primitive fitting is a relatively niche area in the CV community, as compared with hot topics like NeRFs and Diffusion Models. With that said, we cited four papers from 2024. More importantly, we have covered the key references in this area and we perform comparative evaluation on all of the recent works in this topic. We are happy to include any others you feel we have missed. --- Rebuttal Comment 1.1: Comment: I have reviewed all the rebuttal comments, and my queries have been satisfactorily addressed. I have no further questions.
Summary: This paper aims to decompose a scene into different primitives. Based on the work "Convex Decomposition of Indoor Scenes" [1], this paper introduces two strategies to improve the baseline: (1) introducing negative primitives for the decomposition; (2) ensembling multiple networks' results and choosing the best. Experiments show that the proposed strategies bring improvements. [1] Vavilala, Vaibhav, and David Forsyth. "Convex decomposition of indoor scenes." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. ## update after rebuttal From the rebuttal, the authors make a clear statement that introducing the negative primitives is helpful on average. They update the results in Table 1, showing the depth and normals get better with boolean primitives and ensembling. I acknowledge the contribution of introducing the negative primitives for the first time in primitive fitting. So I raise the score. Claims And Evidence: From my perspective, the improvement from negative primitives is limited. The increase in primitive numbers could bring a more significant improvement, as shown in rows 1 to 4 of Table 1. The effectiveness of the proposed strategies is my main concern. Methods And Evaluation Criteria: The method and evaluation criteria are reasonable, mainly following "Learning robust 3d shape fitting to single rgb images" and "Cuboids Revisited: Learning Robust 3D Shape Fitting to Single RGB Images". Theoretical Claims: The authors do not provide any theoretical proofs for the proposed strategy. Experimental Designs Or Analyses: 1. I found that the proposed strategy does not consistently improve the results. As shown in Table 1, introducing the negative primitives (line 4, 36/8) results in a decrease in normal accuracy compared to positive primitives only (36/0). The same problem can be found when the ensembling is introduced (Lines 5-8). The depth improvement from the negative primitives is limited.
(0.049->0.049; 0.057->0.055 in abs relative error). Although the authors provide many samples to show primitive predictions, few of them show how negative primitives help to represent more complex and accurate geometry. 2. The state of the art "Robust Shape Fitting for 3D Scene Abstraction" [2] is not involved in the comparison, although a conference version [3] shows up in Table 4 of the supplementary. [2] Kluger, Florian, et al. "Robust Shape Fitting for 3D Scene Abstraction." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). [3] Kluger, Florian, et al. "Cuboids revisited: Learning robust 3d shape fitting to single rgb images." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. Supplementary Material: I read the whole supplementary material. It involves the implementation details, more evaluation results, and the application of the primitive decomposition. Relation To Broader Scientific Literature: The negative primitive comes from traditional CSG decomposition [4]. The ensembling is common in machine learning. [4] Shapiro, Vadim, and Donald L. Vossler. "Construction and optimization of CSG representations." Computer-Aided Design 23.11 (1991): 4-20. Essential References Not Discussed: The references are essential. Other Strengths And Weaknesses: Strength 1. I think the introduction of negative primitives is a reasonable direction for decomposition. However, I believe some improvements are needed to help the negative primitives influence the framework more. Or some toy experiments in object-level decomposition could help to analyze the method. 2. The authors show some interesting applications of decomposition. Weakness 2. It would be better to improve the writing of this paper, especially the introduction. It would be better to first explain the current method using only positive primitives, then introduce the motivation for negative primitives. Besides, CSG is not explained in the abstract.
Other Comments Or Suggestions: It would be better to explain the method in detail, e.g., the pipeline of the method. Figure 2 should be enlarged. Questions For Authors: What are the advantages of using primitive decomposition for image editing? Could the method directly benefit any robotics application? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for taking the time to look at our work. ## 7. Value of Negative Primitives You make a great point - all methods produce good results on average. We don’t claim that quality keeps improving as we increase negative primitives beyond a point. Instead, our aim is to show it’s possible to fit CSG in the first place, and doing so improves the quality of primitive fits on average (e.g. __Fig. 7__). Notice that most of the time, a solution with boolean primitives is chosen, indicating they are genuinely useful in shape abstraction. __Fig. 7__: we should have been more clear about our central claim: we do not claim that quality keeps improving as we replace positive primitives with boolean primitives. Theory, intuition, and experimentation instead support some optimal intermediate value with a mixture of boolean and positive primitives near $K^-=K^{total}/3$. Therefore, Fig. 7 actually makes our point: different scenes require different numbers of primitives, but on average having some boolean primitives is helpful. You pointed out that the normals got slightly worse with more boolean primitives. We investigated and found that we did not align the methodologies to compute normals. For GT Normals, we computed finite differences of the point cloud. For predicted normals from the primitives, we calculated the gradient of the SDF at the intersection point, consistent with [1]. This is now fixed, with finite difference being used for normal computation everywhere. __Updated Table 1__ above shows the corrected metrics; we will update all tables in the paper accordingly. Observe how depth and normals get better with boolean primitives and ensembling. While __Figs. 4, 5, and 10__ demonstrate some boolean primitive examples, here are a few more results emphasizing boolean primitives. The headers of each column indicate $K^{total}/K^-$.
The normals make it clear that the boolean primitives are carving away geometry and significantly enrich the shapes we can encode. <https://drive.google.com/file/d/18hkwD4UdkCe97U8yFoZEjZaXLVvw-Atk/view>. You made an important observation about our results. Boolean primitives are a mechanism for increasing the types of shapes we can encode, but we can just increase the total number of primitives in lieu of making some of the primitives negative. Thus, intuition would suggest that at higher primitive counts, boolean primitives don’t offer as much of an advantage as at smaller primitive counts, because having lots of primitives available is already quite expressive. This is precisely what we observe on NYUv2, where at $K^{total} = 36$, models with and without boolean primitives perform comparably well, as you noticed. At smaller primitive counts ($K^{total} \in \{12, 24\}$), having some of the primitives be negative helps on average. In fact, in __Table 7__ in our paper, boolean primitives are helpful on average at all primitive counts. For $K^{total}=12$, picking 4 booleans improved AbsRel 0.0719 -> 0.0659. For 24 primitives, having 4 or 8 booleans reduced AbsRel 0.059 -> 0.0525. On LAION (__Table 5__ in our paper), there is more data to supervise the larger primitive count, and we do see an improvement for 36 total primitives if we let 16 of them be boolean (0.0771 -> 0.0719). Of critical note, our method works on "in-the-wild" natural images, which can be incredibly diverse. It’s hard to know in advance the best number of primitives for a given test image. That’s why ensembling is so valuable: we don’t need to use just one model that gives the best AbsRel on average; we can run a few models with different mixtures of $K^{total}$ and $K^-$ and choose the best one. Given all the performance improvements we made to primitive detection, ensembling is quite feasible, depending on the use-case.
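As an aside for reproducibility, the finite-difference normal computation we standardized on can be sketched as follows. This is a minimal illustrative version (the function name and the back-projected point-map input are assumptions, not our exact code):

```python
import numpy as np

def normals_from_points(points):
    """Per-pixel surface normals via finite differences of a point map.

    points: (H, W, 3) array of back-projected 3D points.
    Returns (H, W, 3) unit normals (zeros where undefined, e.g. at borders).
    """
    # Central differences along the image axes approximate surface tangents.
    du = np.zeros_like(points)
    dv = np.zeros_like(points)
    du[:, 1:-1] = points[:, 2:] - points[:, :-2]
    dv[1:-1, :] = points[2:, :] - points[:-2, :]
    # The normal is the cross product of the two tangent directions.
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return np.where(norm > 1e-8, n / np.maximum(norm, 1e-8), 0.0)
```

Using one routine of this kind for both GT and predicted geometry removes the methodological mismatch described above.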
Thanks for identifying this point; we will make this clear in the final version. Additionally, there is a bias-variance tradeoff in our work. Adding primitives reduces bias (you can encode things more accurately) but increases variance (harder to get all the primitives right). But adding a negative primitive significantly reduces bias and also significantly increases variance -- as above, one negative can be worth several positives. This means that the best setting likely has a mixture of positive and negative primitives, with positives favored. Adding negative primitives yields better results, but adding too many quickly creates variance problems. [1] Vavilala, V. and Forsyth, D., 2023. Convex decomposition of indoor scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 9176-9186). ## Additional Notes Thanks for the suggestions to polish the writing - we will do so for the final version. Note that we evaluate against "Cuboids Revisited" in __Table 4__; "Robust Shape Fitting..." is the journal version that has identical error metrics. Also, please see __1. Global Comment__ & __3. Analysis of Downstream Tasks__ above. --- Rebuttal Comment 1.1: Comment: From the rebuttal, the authors make a clear statement that introducing the negative primitives is helpful on average. I acknowledge this contribution and novelty. However, the limited improvement and limited theoretical justification lead to the borderline recommendation. --- Reply to Comment 1.1.1: Comment: Thanks for reading our response. We hope you had a chance to see all of our points (1-11) in our rebuttal. In our manuscript, we established that our underlying methodology achieves approx. 50% reduction in relative error as compared with prior work. Additionally, our work is the first to show that we can fit CSG to natural images, while improving depth error metrics by about 11% when ensembling CSG instead of ensembling positive primitives alone.
We provided theoretical analysis in __4. Theoretical Analysis__, which establishes that many shapes can be encoded with fewer total primitives using CSG than with positive primitives alone. In __9. Optimal Ratio of Negative Primitives in CSG__, we derive a theoretical result that approx. 1/3 of the total primitives should be boolean. While the optimal ratio will vary based on the geometry to encode, for NYUv2 and LAION, our experimental results match up closely with theory. __Updated Table 1__ shows that in our _Ensemble pos + neg R->S_, an average of 1/3 of primitives were boolean. Further, the best individual network used 12 negative primitives out of 36 total. We feel we have broken new ground in 3D primitive fitting by improving quality, increasing speed by an order of magnitude, introducing CSG representations, ensembling to find the optimal $K^{total}$, and bridging the domain to natural in-the-wild images. We have included extensive quantitative and qualitative evaluation, with theory to support our claims. If there is additional analysis or clarification that would help, we are happy to provide it.
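The parameter-efficiency argument above (one positive plus one negative primitive exactly encode a cube with a hole, as in __Fig. 3__) can be illustrated with signed distance functions. The sketch below is purely illustrative (`sdf_box` and `csg_subtract` are toy names, not our implementation), but the subtraction rule $\max(d_+, -d_-)$ is the standard CSG difference:

```python
import numpy as np

def sdf_box(p, half):
    """Signed distance from points p (..., 3) to an origin-centered box."""
    q = np.abs(p) - np.asarray(half, dtype=float)
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

def csg_subtract(d_pos, d_neg):
    """CSG difference of two SDFs: keep the positive shape, carve out the negative."""
    return np.maximum(d_pos, -d_neg)

# Unit cube with a square hole punched through it: exactly 2 primitives.
p = np.array([[0.0, 0.0, 0.0],    # center of the hole -> carved away
              [0.9, 0.0, 0.0]])   # remaining wall     -> still solid
d = csg_subtract(sdf_box(p, [1.0, 1.0, 1.0]), sdf_box(p, [0.5, 0.5, 2.0]))
```

Representing the same carved shape with unions of positive convex primitives alone requires several boxes to tile the walls around the hole, which is the parameter-inefficiency the rebuttal refers to.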
Summary: This paper addresses the problem of parsing complex 3D scenes into geometric primitives, focusing on improving accuracy by incorporating boolean operations (set differencing via negative primitives) and ensembling to dynamically select the number of primitives per scene. The authors propose a hybrid approach combining learned regression for initial primitive prediction and gradient-based refinement to optimize geometry. Experiments on NYUv2 and LAION datasets demonstrate significant improvements in depth, normals, and segmentation metrics over state-of-the-art (SOTA) methods. Key contributions include enabling constructive solid geometry (CSG) representations for real-world scenes, leveraging test-time ensembling to adapt primitive counts, and validating the utility of negative primitives. The innovation lies in extending convex decomposition to handle boolean operations, which enriches representational capacity, and in the systematic exploration of ensembling strategies. The work has academic value in advancing primitive-based scene abstraction, a foundational problem in 3D vision, with applications in robotics and scene editing. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: Yes. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: I am unsure whether this paper is suitable for ICML, a machine learning conference. Essential References Not Discussed: No. Other Strengths And Weaknesses: #### **Strengths** 1. **Novelty of Boolean Primitives**: The integration of negative primitives to enable CSG-like operations is a meaningful advancement. This addresses a critical limitation of prior work, which could only model unions of convex shapes. 2. **Ensembling Strategy**: Dynamically selecting the number of primitives per scene via ensembling is a clever solution to the challenge of variable scene complexity.
The two strategies (S→R and R→S) are well-motivated and empirically validated. 3. **Rigorous Evaluation**: Extensive experiments on NYUv2 and LAION datasets, including depth, normals, and segmentation metrics, provide strong evidence of superiority over SOTA. The inclusion of LAION—a challenging in-the-wild dataset—demonstrates generalizability. 4. **Efficiency**: The method achieves faster inference than prior work (e.g., 29.9 sec vs. 40 sec for SOTA) despite ensembling, thanks to optimizations like batching and mixed precision. 5. **Practical Insights**: The analysis of negative primitives’ impact (e.g., Fig. 7) and the comparison of random vs. network-initialized optimization (Fig. 6) offer valuable takeaways for the community. #### **Weaknesses** 1. **Limited Theoretical Justification for Negative Primitives**: While empirical results show benefits, the paper does not rigorously analyze why negative primitives improve accuracy more efficiently than simply increasing the number of positive primitives. A theoretical discussion on the representational efficiency of CSG operations is missing. 2. **LAION Evaluation Limitations**: Depth and normals for LAION are inferred via pretrained models rather than ground truth, introducing potential error propagation. The paper does not quantify how this affects results. 3. **Ambiguity in Face Labeling**: The claim that face labels grow as $f \times (K^{total} - K^-) \times (1 + K^-)$ (Sec. 3.1) is not intuitive. A visual example or mathematical derivation would clarify this. 4. **Computational Cost of Ensembling**: While faster than SOTA, the total inference time (29.9 sec for R→S ensembling) remains high for real-time applications. The paper does not discuss trade-offs between accuracy and latency. 5. **Incomplete Application Discussion**: The impact statement mentions potential uses in robotics and editing but lacks concrete examples or metrics (e.g., editability scores, robotic planning success rates).
Other Comments Or Suggestions: **Presentation Issues:** • Ablation studies are buried in appendix; key results (e.g., the effect of polishing steps) should be in the main text. • Table 1’s formatting (e.g., merged cells) complicates readability. • Terms like "smoothed polytopes" (Sec. 3.1) and "blending term" are inadequately defined. Questions For Authors: Please see the above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for reviewing our paper. ## 4. Theoretical Analysis Multiple reviewers expressed interest in theoretical justification as to why boolean primitives are advantageous in fitting complex real-world scenes. We provided qualitative evidence in __Fig. 3__, in which we model a cube with a hole punched in it. Intuitively, one positive and one negative primitive are sufficient to model it perfectly (2 total primitives). Without CSG, approx. 5 primitives may be required, which is less parameter-efficient. Based on that, we can sketch a theoretical argument as to why having a vocabulary of mixed positive and negative primitives is expected to yield more accurate representations than the same number of positive-only primitives. ### Kolmogorov Complexity Perspective For many objects, the description length (Kolmogorov complexity) using mixed primitives is significantly shorter than using positive-only primitives. For a shape $S$, let $K_+(S)$ denote the minimum description length using only positive primitives, and let $K_\pm(S)$ denote the minimum description length using mixed primitives. Theoretical result: for many shapes with concavities, $K_\pm(S) \ll K_+(S)$. Example: a simple cube with a hole requires just 2 primitives with mixed CSG (1 positive cube, 1 negative cube) but would require numerous small positive primitives to approximate the concavity with positive-only CSG. ## 5. Error Propagation You raise an important point: in the absence of GT depth/normals for in-the-wild images like LAION, we use a pretrained network to estimate depth, from which we use a standard heuristic (finite differences) to obtain GT normals. Single image depth predictors are very strong and reasonable choices when GT is not available. Our convex decomposition procedure uses this inferred depth when generating primitives (RGBD input), and we evaluate the quality of these primitives based on the provided depth.
In effect, the claim that we make in this work is that _our model gives the user what was asked for_. When we report low AbsRel, it means our model adheres to the input depth. As better depth estimation models become available, our procedure will naturally get better, although a small amount of finetuning may be required if switching depth estimation models due to differences in the statistics of each depth predictor’s output. ## 6. Ambiguity in Face Labeling When computing segmentation accuracy with boolean primitives, we compute the triple ($f_i,K^+_j,K^-_k$) at each ray intersection point, where $i$ is the face index, $j$ is the index of the positive primitive we hit, and $k$ is the index of the (potentially) negative primitive we hit. Each unique triple can get its own face label. Thus, given a fixed primitive budget $K^{total}$, replacing a pure positive primitive representation with a mixture of positives and negatives can yield more unique faces. For example, $K^+/K^-$ = $12/0$ maxes out at $12f$ unique faces; $K^+/K^-$ = $6/6$ maxes out at $42f$ faces. Note that $f\times K^+ \times (1+K^-)$ is the theoretical maximum of unique labels, as practical scenes do not involve every primitive touching every other primitive. ## Additional Notes Please see __1. Global Comment__ & __3. Analysis of Downstream Tasks__ above and __8. Cost of Ensembling__ & __9. Optimal Ratio of Negative Primitives__ below.
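The counting argument in section 6 can be checked with a short sketch (illustrative only; `max_face_labels` is not a function from our codebase):

```python
def max_face_labels(f, k_pos, k_neg):
    """Theoretical maximum of unique (face, positive, negative) label triples.

    Each ray hit pairs a face of a positive primitive with either "no
    negative" or one of the k_neg negatives, giving f * k_pos * (1 + k_neg)
    possible labels.
    """
    return f * k_pos * (1 + k_neg)

# Per face bucket (f = 1): K+/K- = 12/0 maxes out at 12 labels, 6/6 at 42,
# matching the numbers quoted in the rebuttal.
assert max_face_labels(1, 12, 0) == 12
assert max_face_labels(1, 6, 6) == 42
```

As noted above, this is only an upper bound; practical scenes do not involve every primitive touching every other primitive.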
Summary: This paper addresses the task of fitting a scene with simple primitives. To address the challenges of local minima, poor representation of complex structures, and heavy reliance on good initialization, the authors propose a novel negative primitive design. Experiments on NYUv2 and LAION show the advantages of this method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. There is no theoretical analysis in this paper. Experimental Designs Or Analyses: Yes, I reviewed all the experiments, and I believe they do not sufficiently demonstrate the effectiveness of the proposed methods. For instance, Figure 6 only validates that network-based initialization accelerates convergence; however, across various $K^{total}/K^-$ settings, the network-based results appear largely similar. This suggests that with a good initialization, the benefit of incorporating negative primitives becomes less pronounced. Moreover, Figure 7 also does not clearly highlight the advantages of negative primitives—it appears that configurations with $K^- = 0$ outperform those with $K^-$ values of 16, 20, 24, 28, or 32, and the performance only seems to approach optimal levels around $K^- = 12$. Supplementary Material: Yes Relation To Broader Scientific Literature: Yes. The paper discussed some related work. Essential References Not Discussed: I am concerned that most of the articles cited in this work were published before 2024. There are very few recent studies, which suggests that the author's research is not thorough enough and that the topic's current significance is not adequately explained. Other Strengths And Weaknesses: Strengths: +: The proposed negative primitive is novel, simple, and easy to follow. +: The downstream task on image synthesis is interesting and shows a potential of enhancing the controllability of many scene-level tasks.
Weakness: -: The experiments do not fully demonstrate the effectiveness of the proposed method, as mentioned in "Experimental Designs Or Analyses". -: The authors discuss some empirical findings on the loss design in Sec. 3.2, but provide no detailed ablations in the experimental sections. Other Comments Or Suggestions: No. Questions For Authors: I would suggest the authors refine this paper to show (1) more comparison results on downstream tasks; (2) solid evidence for the negative primitive design. Ethical Review Flag: Flag this paper for an ethics review. Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the feedback on our paper. ## 1. Global Comment We'd like to refresh the reviewers with a summary of our contributions: 1. We depart from the limited NYUv2 dataset used in existing primitive-fitting papers and show how to make primitive-fitting work on real-world natural images via a portion of the LAION dataset. We can compute primitives for almost any natural image. 2. We are the first (to our knowledge) to fit CSG to natural images and demonstrate that a mixture of positive and negative primitives is advantageous on average. 3. As discussed in the method section, we analyze every aspect of data generation and training, including hyperparameter tuning, such that every model we train (even without boolean primitives and ensembling) outperforms existing work on established benchmarks -- often while using fewer primitives and less compute. 4. We are the first to use ensembling to find the optimal number of primitives for a given test image, which simultaneously improves geometric accuracy. 5. By analyzing and improving our post-training finetuning process, we are the first to show that it's possible to fit 3D primitives to data without a neural network to predict a start point. 6. The authors commit to open-source the training and inference code. ## 2. Missing Ablations We ablated and analyzed key components of our method, including sweeps of $K^{total}$ and $K^-$ (**Tables 1, 4, 5**), two forms of ensembling ($S\rightarrow R$ and $R\rightarrow S$) in **Tables 2, 4, and 5**; the number of faces per primitive (we do both the traditional 6-faced cuboid and show that extending to higher-faced polytopes e.g. 12-faced helps in **Table 2**). We also analyze the time and memory of individual networks and different forms of ensembling, including breakdowns of each stage of our pipeline, in **Tables 1 & 3**. 
Further, we analyze both the benchmark NYUv2 dataset and, for the first time in 3D primitive generation, natural LAION images in-the-wild **(Fig. 5)**. Another key ablation is network start vs. optimizing primitives directly **(Fig. 6)**. We aggressively analyzed and improved the optimization process of 3D primitives from data, such that, for the first time to our knowledge, we can get good 3D primitives from RGB images without a neural network providing a start point. However, having a network start is advantageous in terms of quality and speed. We are happy to add more ablations that the reviewers feel would be helpful. ## 3. Analysis of Downstream Tasks We present qualitative examples of using primitive abstractions to edit images (that is part of a separate, concurrent work) in **Figures 8, 15, and 16**. It's helpful to grab and move objects in a scene. Primitives are a great candidate to simplify user-interaction especially in cluttered real-world environments [1]. For robotics, we are aware of 3D primitives used for fast collision checking, sampling-based planning, physics simulations, robotic manipulation, procedural scene generation and shape approximation (and likely much more). For both image generation and robotics, we want primitives that are accurate, fast, and can be generated from in-the-wild data. Our paper specifically improves accuracy, speed, and real-world generalization. To evaluate, we use established depth, normal, and segmentation error metrics, which are reasonable for downstream use-cases we can envision right now. Given the scope of this project focused on obtaining better primitives, investigating these downstream use-cases is future work. [1] Bhat, S.F., Mitra, N. and Wonka, P., 2024, July. Loosecontrol: Lifting controlnet for generalized depth conditioning. In ACM SIGGRAPH 2024 Conference Papers (pp. 1-11). ## Additional Notes Please see __4. Theoretical Analysis__, __7. Value of Negative Primitives__, and __11. 
Missing References__ below. ## Updated Table 1 | Method | $K^{total}$ | $K^-$ | AbsRel↓ | Normals Mean↓ | Normals Median↓ | SegAcc↑ | Time (s) | Mem(GB) | |--------|-------------|-------|---------|---------------|-----------------|---------|----------|---------| | 12 | 12 | 0 | 0.075 | 36.29 | 28.74 | 0.624 | 0.84 | 3.53 | | 24 | 24 | 0 | 0.058 | 33.58 | 25.69 | 0.692 | 1.46 | 5.57 | | 36 | 36 | 0 | 0.048 | 32.18 | 24.27 | 0.730 | 2.06 | 7.61 | | best | 36 | 12 | 0.048 | 32.04 | 24.34 | **0.765** | 2.06 | 7.61 | | $\mathsf{pos}$ $S\rightarrow R$ | 27.60 | 0.0 | 0.056 | 33.37 | 25.51 | 0.699 | 2.08 | 7.61 | | $\mathsf{pos+neg}$ $S\rightarrow R$ | 26.07 | 12.31 | 0.056 | 33.73 | 25.88 | 0.717 | 2.13 | 7.61 | | $\mathsf{pos}$ $R\rightarrow S$ | 35.17 | 0.0 | 0.048 | 32.18 | 24.29 | 0.729 | 6.21 | 7.61 | | $\mathsf{pos+neg}$ $R\rightarrow S$ | 35.08 | 11.76 | **0.043** | **31.83** | **24.12** | 0.760 | 29.9 | 7.61 | | Vavilala et al. | 13.9 | 0 | 0.098 | 37.4 | 32.4 | 0.618 | 40.0 | 6.77 |
TextCenGen: Attention-Guided Text-Centric Background Adaptation for Text-to-Image Generation
Accept (poster)
Summary: This paper mainly targets text-to-image generation. The authors focus on an interesting problem: after generating images, one would potentially want to insert specific visual texts into the images, and it would be better if the area to be inserted has clean background rather than being occupied with other objects. The authors show that current T2I models can hardly fulfill this goal without specific design. To this end, they propose an attention-guided text-centric background adaptation to modify the attention maps according to several criteria. The authors conduct multiple experiments to show the effectiveness. Claims And Evidence: Yes, the authors have pointed out an interesting and important problem in T2I generation. The experiments can validate this. Methods And Evaluation Criteria: Yes, the relationship between attention map and generated objects has been studied in previous works. Besides, the experiments designed by the authors are reasonable enough to show the effectiveness of the proposed method. Theoretical Claims: No theoretical claims are presented in this paper. Experimental Designs Or Analyses: Yes. The main experiment includes comparison with several T2I models. Both qualitative and quantitative results illustrate the superiority of the proposed method. The ablation study can show the role of each term in the proposed guidance. Supplementary Material: The authors have provided detailed setting and more experiment results in the supp, which can help readers better understanding the paper. Relation To Broader Scientific Literature: The proposed method can better enhance the application of T2I models in design of wall papers, posters, etc. Besides, the method can be potentially extended to T2V/T-2-3D models. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The whole paper is well written and easy to understand. Weaknesses: 1. I wonder if it would be better to place the task importance in Fig.6 in Sec.1? 2. 
It has been shown that actually models such as SD1.5 and SDXL do not enjoy strong correlation between the attention map and the target objects. I wonder if this could affect the proposed method. 3. While the proposed method is solid, it is to some extent complicated. Some simpler methods, for example, generating staggered layouts and using layout-grounded T2I to generate images, or directly restricting the TV in the target region, may be a better choice. 4. It would be better to present results with more advanced diffusion models such as SD3 and FLUX. Other Comments Or Suggestions: Please refer to the weaknesses. Questions For Authors: Please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable review. We address each point below. ### [W1]Task Importance Placement in Introduction We'll add to the introduction: "As shown in Fig. 6, creating text-friendly images is essential for graphic design applications (validated by our 114-participant user study)." ### [W2]Correlation Between Attention Maps and Target Objects Correlation is an important assumption for our method, which relies on attention manipulation. Strengthening the correlation between semantic content and attention maps is a critical research direction in this field [1,2]. **Any training-free methods that improve this correlation could be readily integrated as upstream components to our approach.** This represents a promising direction for future research that would complement rather than replace our contribution. Notably, our results still demonstrate that attention manipulation remains an effective mechanism for our task even with current models, with little loss of semantic fidelity (as measured by CLIP score). ### [W3]Alternative Approaches Comparison To clarify differences between our task and existing approaches, we present the following comparison: | Task | Training-Free | Annotation-Free | Layout Specification | Required Anchors | |------|---------------|-----------------|----------------------|------------------| | Layout-to-Image [3] | ❌ | ❌ | Object | >5 | | Text-Friendly Image Gen (Ours) | ✓ | ✓ | Space Region | 1-2 | | Visual Text Generation | ✓ | ✓ | Text | 1-2 | Unlike layout-to-image tasks requiring training and intensive annotation, our method only needs space region specification, which is crucial for dynamic applications like mobile wallpapers (Figure 1) or e-commerce posters. Layout-based methods primarily focus on object arrangement rather than creating text-friendly backgrounds.
Their objective is fundamentally different: - Layout-grounded T2I: Positions objects according to a layout - Our approach: Creates backgrounds that harmonize with planned text regions Direct TV restriction in target regions would yield effects **similar to attention reduction**, which we have already validated in our ablation studies. As shown in Table 1, simply applying spatial constraints without force-directed guidance (w/o FDG) leads to higher CLIPS Loss (2.2 vs 0.32), indicating reduced semantic fidelity to the original prompt. This demonstrates that naive approaches like direct TV restriction fail to balance text-friendliness with instruction following. | Method | CLIPS Loss ↓ | TV Loss ↓ | Saliency IOU ↓ | VTCM ↑ | |--------|-------------|-----------|----------------|--------| | w/o FDG | 2.2 | 12.44 | 28.56 | 3.03 | | Ours | **0.32** | **8.81** | **22.86** | **4.4** | Our force-directed approach enables precise object placement control while maintaining semantic coherence, as evidenced by lower CLIPS Loss and superior text-compatibility metrics. **Both alternatives would ultimately need similar attention control mechanisms to achieve our goal of text-friendly background generation that respects user-specified regions**. ### [W4]Testing with More Advanced Models We have deployed our method on SD1.5, SD2.0, and SDXL to demonstrate the broad applicability of our method across different model architectures. Our approach can be adapted to MMDiT-based models like FLUX through methods similar to those demonstrated in recent work [4], similar to this [code](https://github1s.com/krafton-ai/Rare-to-Frequent/blob/main/R2Fplus_Diffusion_sd3.py#L986). The key difference in MMDiT models is the attention structure, which can be described as:
```
MMDiT Attention = | Text-Text (TT)  | Text-Image (TI)  |
                  | Image-Text (IT) | Image-Image (II) |
```
In UNet models, we modify cross-attention (equivalent to the TI block).
For MMDiT models, one possible solution is to apply our force-directed approach to the Image-Text (IT) block (such as transposing the IT block) and maintain consistency in the Text-Image (TI) block. This adaptation is technically straightforward based on existing implementations like Rare-to-Frequent. While implementation details differ, the fundamental principle of manipulating attention to create text-friendly backgrounds remains valid across architectures. We appreciate your thoughtful suggestions and will address them in our revised manuscript. [1] Yang et al. "Dynamic prompt learning: Addressing cross-attention leakage for text-based image editing," Neurips 2023 [2] Liu B, Wang C, Cao T, et al. "Towards understanding cross and self-attention in stable diffusion for text-guided image editing," CVPR 2024 [3] Zheng et al., "LayoutDiffusion: Controllable diffusion model for layout-to-image generation," CVPR 2023 [4] Park D, Kim S, Moon T, et al. "Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance," ICLR 2025 Spotlight
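For readers curious how a region-restricted smoothness score of the kind our TV Loss metric captures could be computed, here is a hedged sketch (an illustrative reimplementation on a grayscale image, not the exact metric used in the paper):

```python
import numpy as np

def region_tv(image, mask):
    """Total variation of an image restricted to a masked region.

    image: (H, W) grayscale array; mask: (H, W) boolean text region.
    Lower values indicate a smoother, more text-friendly background.
    """
    dx = np.abs(np.diff(image, axis=1))
    dy = np.abs(np.diff(image, axis=0))
    # Count a gradient only when both pixels of the pair lie in the region.
    mx = mask[:, 1:] & mask[:, :-1]
    my = mask[1:, :] & mask[:-1, :]
    return float(dx[mx].sum() + dy[my].sum())
```

A score like this only measures smoothness inside the region; as discussed above, minimizing it directly (without force-directed guidance) does not preserve semantic fidelity to the prompt.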
Summary: This paper introduces TextCenGen, a training-free method for generating text-friendly images. While traditional text-to-image (T2I) models can create high-quality images, they typically don't account for the need to reserve space for text placement. TextCenGen addresses this challenge through several innovations: 1) A dynamic background adaptation method that creates smooth backgrounds for predefined text regions 2) A force-directed graph algorithm that guides the relocation of objects in attention maps to avoid overlap with text regions 3) A spatial excluding cross-attention constraint that ensures background smoothness in text regions The authors constructed a benchmark with 27,000 images, demonstrating that TextCenGen reduces saliency overlap in text regions by 23% compared to existing methods while maintaining 98% of the original semantic fidelity. The method requires no additional training and can be plugged into various T2I models. Claims And Evidence: The paper's claims are well-supported by experimental evidence: 1. The authors propose new evaluation metrics (Saliency IOU, TV Loss, CLIP Score Loss, and VTCM) that comprehensively assess the quality and text-friendliness of generated images. 2. Quantitative comparisons with existing methods (Dall-E 3, AnyText, Desigen, and Stable Diffusion 1.5) show that TextCenGen outperforms these methods across all datasets. 3. A user study with 114 participants validates the effectiveness and user perception of the method. 4. Detailed ablation studies demonstrate the importance of force-directed cross-attention guidance and spatial excluding cross-attention constraint. 5. Rich visual results intuitively showcase the method's effectiveness. Methods And Evaluation Criteria: The proposed methods are well-suited for addressing the text-friendly image generation problem: 1. 
Force-Directed Cross-Attention Guidance: Cleverly applies physical mechanics concepts to attention map editing, enabling smooth relocation of conflicting objects. 2. Spatial Excluding Cross-Attention Constraint: Ensures background smoothness in text regions, suitable for text overlay. 3. The training-free approach makes the method more accessible and applicable to various pre-trained models. The evaluation criteria are comprehensive and reasonable: 1. Testing on three datasets (P2P Template, DiffusionDB, and Synthesized Prompts) covers diverse scenarios. 2. Evaluation metrics balance considerations of image quality, semantic fidelity, and text region adaptability. 3. MLLM-judged ELO ranking provides a more comprehensive assessment of design appeal. Theoretical Claims: The theoretical contributions focus on adapting force-directed graph algorithms, borrowing concepts from physics to apply to attention map editing: 1. Repulsive Force: Ensures each element is placed separately 2. Margin Force: Prevents vertices from being expelled from visual boundaries 3. Warping Force: Maintains object visibility within the canvas through affine transformations These theoretical claims are clearly understood, logically sound, and well-formulated in the paper. Experimental Designs Or Analyses: The experimental design is comprehensive and reasonable: 1. Benchmark comparison: Thorough comparison with existing methods across multiple datasets 2. Ablation studies: Evaluation of the contribution of each component 3. User study: Validation of the method's practicality and user perception The experimental analysis is also thorough: 1. Combines quantitative metrics and qualitative results 2. Compares performance across different Stable Diffusion versions 3. Analyzes the influence of the force balance constant 4. Provides generalization analysis across different types of prompts and region shapes Supplementary Material: The paper includes comprehensive appendix materials detailing: 1. 
Task introduction and comparison with existing tasks 2. Experimental settings, including region random sampling method 3. Text box shape orientation analysis 4. Detailed explanation of evaluation metrics 5. Detailed introduction of comparison methods 6. MLLM-as-Judge ELO ranking methodology 7. Analysis of force balance constant influence 8. More experimental results and ablation studies 9. Analysis of method limitations Relation To Broader Scientific Literature: This paper relates to several research directions: 1. Text layout generation: Extends traditional layout design methods, shifting from static image layout to dynamic generation of text-suitable backgrounds. 2. Text-to-image generation: Leverages recent advances in diffusion models but adds consideration for text placement. 3. Attention-guided image editing: Adopts training-free methods to manipulate attention maps, avoiding expensive retraining. The paper skillfully combines these areas to propose a novel text-friendly image generation method. Essential References Not Discussed: The paper already includes the major literature in related fields Other Strengths And Weaknesses: Strengths: 1. Addresses an important and practical problem: generating images with space reserved for text 2. The training-free method design is clever and easy to integrate into existing models 3. Comprehensive evaluation including multiple datasets, metrics, and user studies 4. The proposed force-directed cross-attention guidance concept is innovative, providing a new perspective for attention map editing Weaknesses: 1. Limited handling of non-convex shapes, which may lead to object size reduction or fragmentation 2. May produce unexpected changes or unspecified objects in certain cases 3. Evaluation primarily focuses on SD 1.5, requiring more validation for broader applicability to other T2I models Other Comments Or Suggestions: 1. 
Consider providing a simplified algorithm pseudocode to make the entire process easier for readers to understand 2. Further testing in practical application scenarios (such as poster design, mobile interface design) would strengthen the work 3. Provide more analysis on computational complexity and runtime Questions For Authors: 1. How robust is the method when handling complex scenes (such as multiple objects overlapping)? Are there coping strategies? 2. What improvements are being considered for handling non-convex shape objects? 3. How could the method be extended to support simultaneous optimization of multiple text regions? 4. Is adaptive setting of force balance parameters feasible? How could they be automatically adjusted based on scene complexity? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable review. We address your concerns with additional experiments and analysis below.

### [Q1,Q3] Multi-Object and Complex Scene Handling

Our approach shows strong performance in complex scenes with multiple objects. We additionally evaluated on the Desigen dataset, which includes multiple text boxes, representing a challenging multi-region scenario. In multi-object cases, our method creates complementary forces guiding content away from the specified regions. This approach extends to complex layouts, as shown in our Desigen benchmark results:

| Dataset | Metrics | Desigen training-free | Desigen-Trained | Ours (SD 1.5) |
|---------|---------|----------------------|-----------------|---------------|
| Desigen | Saliency IOU ↓ | 40.8 | 41.7 | **32.0** |
| | TV Loss ↓ | 19.4 | 18.1 | **9.7** |
| | VTCM ↑ | 2.4 | 2.3 | **3.7** |

[W1,Q2] The reviewer may have misunderstood our approach. Our warping force and other forces specifically address object size reduction and fragmentation caused by attention reduction, and **this fragmentation occurs equally in both convex and non-convex shapes**. We mentioned non-convex shapes in the limitations only to consider more fine-grained region requirements from users, such as spiral-shaped regions. For non-convex shapes, our implementation approximates them using their convex hull. While a limitation, user studies show that most text and icon placement scenarios involve convex regions, making this practical for common use cases.

[Q4] We're exploring image reward metrics for optimal parameters across different scene complexities, including using MLLMs as judges to automatically adjust parameters.

### [W3] Applicability to Other T2I Models

Our method is extensible to other text-to-image models. We have successfully applied it to multiple diffusion models (see line 330, Table 2 in the paper).
The implementation is straightforward. After defining the attention processor class, integration requires only a few lines:

```python
attn_procs = {}
for name in pipe.unet.attn_processors.keys():
    if "attn2" in name:  # cross-attention layers only
        attn_procs[name] = my_attn_processor(guidance_func, region, len(tokens), place_in_unet=name)
    else:
        attn_procs[name] = pipe.unet.attn_processors[name]
pipe.unet.set_attn_processor(attn_procs)
```

When combined with the finetuned diffusion model, we see improvements:

| Dataset | Metrics | Desigen-Trained + Attention Reduction | Desigen-Trained + Ours | Improvement |
|---------|---------|---------------------------------------|------------------------|-------------|
| Desigen benchmark | Saliency IOU ↓ | 41.66 | 38.48 | 7.60% |
| | TV Loss ↓ | 18.06 | 12.19 | 32.50% |
| | VTCM ↑ | 2.30 | 2.85 | 23.90% |
| | CLIP Score ↑ | 28.99 | 26.41 | -8.90% |

### [Com 3] Computational Complexity and Runtime Analysis

We will add a runtime analysis to the appendix. Our method maintains computational complexity comparable to standard diffusion models, with minimal overhead from the force computations. Our implementation is efficient, requiring less than 15GB VRAM (compatible with consumer GPUs like the RTX 3090) and averaging approximately 30 seconds per image generation, as it requires only a single inference pass rather than multiple candidates.

### [Com 1] Algorithm Pseudocode

We will add simplified pseudocode to the method section in the revised paper to clarify the implementation process. Here is the algorithm that summarizes our approach:

```
Algorithm: Text-Friendly Background Adaptation
Input:  Text prompt P, target region R for text placement
Output: Text-friendly image I with clear space in region R
1. Initialize diffusion model with prompt P
2. For each timestep t:
   a. Extract cross-attention maps A_k^l for tokens k, layers l
   b. For each token k, layer l:
      i.  Detect conflicts D(k, R, A_k^l)
      ii.
          For conflicting tokens:
          - Compute attention centroid
          - Calculate repulsive force F_rep and margin force F_m
          - Compute total displacement Δpos = F_rep + F_m
          - Apply warping through affine transformation
      iii. Apply constraint: A_k,new^l = A_k^l ⊙ (1-R)
   c. Apply modified maps for guidance
3. Return text-friendly image I
```

### Application Scenarios

We have already supplemented our evaluation with the Desigen dataset (see Table 2), which represents poster and advertisement design scenarios. Mobile interface design represents an interesting future direction that we plan to explore. We appreciate your consideration and helpful suggestions.
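The excluding constraint and attention centroid from the pseudocode above can be sketched minimally in NumPy. This is an illustration only: array shapes and function names are our assumptions, not the authors' actual implementation, which operates on per-head, per-layer attention tensors inside the diffusion U-Net.

```python
import numpy as np

def exclude_region(attn_map, region_mask):
    """Spatial excluding constraint from step 2.b.iii: A_new = A * (1 - R)."""
    return attn_map * (1.0 - region_mask)

def attention_centroid(attn_map):
    """Attention-weighted centroid (y, x) of a map, used when computing the
    repulsive/margin displacement for a conflicting token."""
    h, w = attn_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = attn_map.sum()
    return float((ys * attn_map).sum() / total), float((xs * attn_map).sum() / total)

# Toy example: a uniform 4x4 attention map, text region reserved in the top-left 2x2.
attn = np.full((4, 4), 0.25)
region = np.zeros((4, 4))
region[:2, :2] = 1.0

print(attention_centroid(attn))  # (1.5, 1.5) for a uniform map
masked = exclude_region(attn, region)
print(masked[:2, :2].sum())      # no attention mass left inside the text region
```

Zeroing the map inside R is exactly the element-wise product A ⊙ (1-R); the centroid is what the repulsive and margin forces displace.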
Summary: This paper aims to solve the problem of generating a background image conditioned on the intended size and position of overlaid text. The proposed approach, TextCenGen, is to generate a regular image and a background image at the same time, and to use the intended text region and the attention map from the regular image to guide the generation of the background image such that it leaves space for text. 2700 text prompts were used as background captions to generate images. The sources of text prompts include ChatGPT-generated prompts, Prompt2Prompt, and DiffusionDB. Baselines are other off-the-shelf models in their training-free versions. Qualitatively, TextCenGen leaves a cleaner space in the background image compared to selected baselines; quantitatively, TextCenGen scores better numbers for lower saliency overlap and higher semantic fidelity. ## update after rebuttal The provided additional results comparing with baselines in their trained versions and a retrieval-based method are highly appreciated. This answers my question, and it shows that the proposed method indeed provides additional improvements over existing work. As such, I update my recommendation to 3, weak accept. Claims And Evidence: One major claim is that TextCenGen outperforms existing methods measured by automatic metrics (saliency IoU, CLIP score, Total Variation Loss, etc.) and user studies. The supporting evidence was 27000 images generated from 2700 selected text prompts, comparing with generic off-the-shelf models (Dalle-3, SD1.5, AnyText) or the training-free version of another background generation work, Desigen. However, given that background generation for texts has its target use cases, like posters and ads, it would be more representative to evaluate on those datasets [1] [2]. Also, even if the proposed method is training-free, it would be informative to compare with existing methods as is; in particular, to compare with Desigen's trained version. [1] Weng, Haohan, et al.
"Desigen: A pipeline for controllable design template generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Zhou, Min, et al. "Composition-aware graphic layout GAN for visual-textual presentation designs." IJCAI, 2022 Methods And Evaluation Criteria: Instead of evaluating on generic text prompts, I think it would be more relevant to evaluate on captions from images in layout design datasets [1] [2]. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The dataset to run experiments on could be more relevant to designs. The baselines to compare with could be more informative if they were not limited to their training-free versions. Supplementary Material: Section B Experiment Setting. Relation To Broader Scientific Literature: If the work conditioned not just on the text position and size to generate a background, but further on the text font style, color, and its semantic meaning or context, the impact would be broader. Essential References Not Discussed: I appreciate the authors including discussions of retrieval-based layout planning methods [3], not just generation-based ones. It would be interesting to compare with retrieval-based methods. [3] Jin, C., Xu, H., Song, R., and Lu, Z. Text2poster: Laying out stylized texts on retrieved images. In ICASSP, pp. 4823–4827. IEEE, 2022. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable review.

### Evaluation on Poster/Ads Datasets

We selected the Desigen dataset as a layout design dataset for evaluation. We successfully downloaded 53,577 usable images, with 52,806 used for training the Desigen version. The remaining 771 images from the validation set and their corresponding static text masks were used to construct our new Desigen benchmark. Our core contribution is text-friendly background generation for arbitrary user-specified regions without training constraints. **This approach extends beyond posters and ads to applications like mobile wallpapers, where users can specify any position for text rather than being limited by the distributions in datasets [1,2].**

### Comparison with Trained Versions of Baselines

Our method focuses on a training-free approach due to the diverse range of user prompts and space region positions; training often compromises generalization across subtasks. The table below shows that Desigen-Trained + Attention Reduction improves over its training-free version on certain metrics, while our method outperforms both:

| Dataset | Metrics | Desigen-Training-free + Attention Reduction | Desigen-Trained + Attention Reduction | Ours-Replace (SD 1.5) |
|-|-|-|-|-|
| P2P Template | Saliency IOU ↓ | 30.6 | 35.0 | **22.9** |
| | TV Loss ↓ | 13.7 | 14.8 | **8.8** |
| | VTCM ↑ | 2.9 | 3.0 | **4.4** |
| DiffusionDB | Saliency IOU ↓ | 30.3 | 31.1 | **23.6** |
| | TV Loss ↓ | 12.0 | 14.6 | **8.4** |
| | VTCM ↑ | 3.2 | 2.8 | **4.4** |
| Synthesized Prompts | Saliency IOU ↓ | 31.6 | 32.6 | **27.7** |
| | TV Loss ↓ | 15.6 | 14.5 | **11.4** |
| | VTCM ↑ | 2.7 | 2.9 | **3.5** |
| Desigen benchmark | Saliency IOU ↓ | 40.8 | 41.7 | **32.0** |
| | TV Loss ↓ | 19.4 | 18.1 | **9.7** |
| | VTCM ↑ | 2.4 | 2.3 | **3.7** |
When combining our method with the trained Desigen model, we observe substantial improvements:

| Dataset | Metrics | Desigen-Trained + Attention Reduction | Desigen-Trained + Ours |
|-|-|-|-|
| Desigen benchmark | Saliency IOU ↓ | 41.66 | **38.48** |
| | TV Loss ↓ | 18.06 | **12.19** |
| | VTCM ↑ | 2.3 | **2.85** |

### Comparison with Retrieval-Based Methods

We evaluated against Text2Poster [3] across all datasets. Our approach differs from retrieval methods: we support arbitrary user regions and create novel content beyond image databases. Since Text2Poster doesn't support user-specified space regions, we tested on preset positions (top, bottom, left, right, center) and selected the best result based on VTCM for a fair comparison. Our related work section (line 083) explicitly mentions Text2Poster, which inspired us to consider background adaptation in the T2I era. We appreciate the reviewer highlighting this discussion direction, as it has provided us with valuable insights.

| Dataset | Metrics | Text2Poster Best@5 | Text2Poster Avg@5 | Ours-Replace (SD 1.5) |
|-|-|-|-|-|
| P2P Template | Saliency IOU ↓ | 31.25 | 36.22 | **22.86** |
| | TV Loss ↓ | 11.49 | 12.45 | **8.81** |
| | VTCM ↑ | 2.48 | 2.28 | **4.4** |
| | CLIP Score ↑ | 20.8 | 20.8 | **27.96** |
| DiffusionDB | Saliency IOU ↓ | 24.07 | 34.38 | **23.59** |
| | TV Loss ↓ | **7.99** | 10.65 | 8.41 |
| | VTCM ↑ | 2.93 | 2.22 | **4.39** |
| | CLIP Score ↑ | 17.57 | 17.57 | **27.2** |
| Synthesized Prompts | Saliency IOU ↓ | 31.25 | 36.91 | **27.7** |
| | TV Loss ↓ | 11.49 | 13.28 | **11.37** |
| | VTCM ↑ | 2.48 | 2.16 | **3.49** |
| | CLIP Score ↑ | 20.8 | 20.9 | **28.1** |

Our method outperforms retrieval approaches in most text-friendliness metrics while preserving prompt semantics. The emergence of autoregressive models like GPT-4o creates potential for combining Text2Poster's retrieval methodology with attention-guided RAG to achieve both text-friendliness and semantic fidelity.
### Broader Impact Our work extends beyond text placement to other UI elements including icons (Appendix Figure 1). While approaches like TextDiffuser-2 handle text with specific styles through dedicated training, our method achieves user instruction following and arbitrary space region support through attention control. This approach avoids reliance on "sign" or similar elements in prompts (line 088-089), opening a pathway to more flexible controllable generation that preserves both user intent and element compatibility. We appreciate your reconsideration of our paper. [1] Zheng G, Zhou X, Li X, et al. LayoutDiffusion: Controllable diffusion model for layout-to-image generation. CVPR 2023 [2] Zhou, Min, et al. "Composition-aware graphic layout GAN for visual-textual presentation designs." IJCAI, 2022 [3] Jin, C., et al. Text2Poster: Laying out stylized texts on retrieved images. ICASSP 2022 [PS: Pioneering work]
GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing
Accept (poster)
Summary: The paper introduces GeoPixel, a remote sensing multimodal LLM for pixel-level understanding and reasoning in high-resolution aerial images. The authors present GeoPixelD, a new dataset with detailed, spatially-aware annotations for grounded conversations in remote sensing. The authors develop an adaptive image divider module for high-resolution image understanding within the MLLM. Experimental results demonstrate GeoPixel's superior performance in generating grounded conversations and segmenting referred objects in remote sensing data, highlighting its advancements in understanding and interpreting visual information. Claims And Evidence: 1. GeoPixel is designed to handle and reason about high-resolution aerial images, which is confirmed by performance comparisons with models like PixelLM and RSMIN. 2. GeoPixel is designed to provide pixel-grounded outputs, and its capability is demonstrated on the RS-GCG dataset. 3. Although the impact statement mentions that GeoPixel can be used for urban planning and disaster response, the paper does not showcase any such applications. Instead, it focuses solely on demonstrating the performance of GeoPixel on benchmark datasets. Methods And Evaluation Criteria: 1. There is no comparison of GeoPixel shown against other remote sensing-based MLLMs such as GeoChat, SkyEyeGPT, or TeoChat in tasks like remote sensing image captioning or object detection. Although they are mentioned in the related works and Table 1, it might be interesting to see a comparison with these models. Theoretical Claims: NA Experimental Designs Or Analyses: The experiments shown in the paper are adequate to evaluate the pixel-level understanding and reasoning capability of GeoPixel. Supplementary Material: NA Relation To Broader Scientific Literature: GeoPixel is one of the early works in remote sensing that developed an MLLM for grounded conversations and pixel-level reasoning.
While such models exist for other domains, GeoPixel outperforms these generalist models, particularly for high-resolution aerial images. As mentioned in the impact statement, GeoPixel can be utilized in remote sensing applications like urban planning and disaster response (although not shown in the paper). Essential References Not Discussed: TeoChat: Irvin, J. A., Liu, E. R., Chen, J. C., Dormoy, I., Kim, J., Khanna, S., ... & Ermon, S. (2024). Teochat: A large vision-language assistant for temporal earth observation data. ICLR. Other Strengths And Weaknesses: 1. The paper introduces GeoPixelD, a dataset for grounded conversation in remote sensing. The paper also introduces a benchmark dataset to evaluate RS-LLM models on the task of referring segmentation. Other Comments Or Suggestions: NA Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer fxwn, Thank you for your comprehensive review of our submission. We appreciate your insights and the opportunity to clarify and expand upon aspects of our work.

**Performance Benchmarking:** We appreciate your acknowledgment of GeoPixel's superior performance in handling high-resolution aerial images compared to models like PixelLM and RSMIN. Regarding your suggestion to compare GeoPixel with other multimodal models in remote sensing (e.g., GeoChat, SkyEyeGPT, TeoChat), we would like to clarify that GeoPixel is specifically designed as a segmentation-centric model. In contrast, current RS multimodal language models do not support segmentation outputs. To enable a fair comparison, we formulated referring expression detection as a post-processing task over GeoPixel's segmentation predictions. Specifically, the segmentation masks generated by GeoPixel are used to derive horizontal bounding boxes (HBBs) and oriented bounding boxes (OBBs), providing a consistent evaluation framework across models. This approach was applied on VRSBench, allowing us to benchmark GeoPixel against a range of existing models under comparable detection metrics. GeoPixel demonstrated substantial gains over existing methods in referring expression detection/localization, measured by Accuracy@0.5 and Accuracy@0.7 across unique, non-unique, and overall categories:

- **MiniGPT-v2** scored 40.7/18.9 (unique), 32.4/15.2 (non-unique), and 35.8/16.8 (overall).
- **LLaVA-1.5** achieved 51.1/16.4 (unique), 34.8/11.5 (non-unique), and 41.6/13.6 (overall).
- **Mini-Gemini** showed lower performance with 41.1/9.6 (unique), 22.3/4.9 (non-unique), and 30.1/6.8 (overall).
- **GeoChat** reached 57.4/22.6 (unique), 44.5/18.0 (non-unique), and 49.8/19.9 (overall).
- **GeoPix** performed similarly with 57.0/22.7 (unique), 44.8/18.2 (non-unique), and 49.8/20.0 (overall).
**GeoPixel** achieved the best results: **70.37/41.54** (unique), **65.80/40.32** (non-unique), and **67.70/40.83** (overall). We further compared **GeoChat** and **GeoPixel** on VRSBench using oriented bounding boxes for referring expression detection. **GeoChat** scored 32.3/12.6 (unique), 18.5/5.7 (non-unique), and 24.3/8.6 (overall). **GeoPixel** significantly outperformed, achieving **54.48/24.87** (unique), **60.51/30.97** (non-unique), and **58.00/28.42** (overall). **Relation to Broader Scientific Literature:** Thank you for acknowledging GeoPixel's contribution as an early work in remote sensing MLLMs. It sets a foundation for subsequent research in this rapidly evolving domain. Also, we appreciate your reference to the concurrent work TeoChat. We will discuss it in the revised manuscript to contextualize GeoPixel's contributions within the broader scientific dialogue. **Further Contributions:** Your recognition of the GeoPixelD dataset as a valuable resource for the community is encouraging. We believe that GeoPixelD will help propel forward the capabilities of RS-MLLM models. We are preparing a revision that incorporates these insights and will provide additional data and qualitative results to address the gaps you've highlighted. Thank you once again for your thorough evaluation and constructive criticism. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. I have gone through the concerns raised by other reviewers. I believe the paper fits well into the application-driven ML track and hence I would like to maintain my original rating.
Summary: This work introduces a multi-modal Grounded Conversation Generation (GCG) dataset that includes grounded descriptions for high-resolution remote sensing images, along with a benchmark featuring human-verified annotations. By leveraging recent advances in large vision-language models, the authors propose a model aimed at achieving fine-grained visual understanding in remote sensing imagery. Claims And Evidence: Method novelty. The method framework has no significant difference from existing MLLMs, such as LISA and VisionLLM-v2. The idea of dividing the input images into local and global regions is widely explored in large vision-language models, such as Llava-Next. Methods And Evaluation Criteria: The proposed method and evaluation make sense for the target problem. Theoretical Claims: The work does not include any theoretical proofs. Experimental Designs Or Analyses: The experimental design is mostly good, except for minor unfairness in Table 3. Supplementary Material: I checked all the supplementary materials. Relation To Broader Scientific Literature: This work lies in the scope of MLLMs in remote sensing. There are already many attempts in this direction, with only a few focusing on pixel-level tasks. This work mainly differs in the newly collected grounded detailed dataset. Essential References Not Discussed: Some Remote Sensing (RS) LMMs are missing in the related work section, such as SkyEyeGPT, VRSBench, Popeye, MMM-RS, and RS-MoE. Other Strengths And Weaknesses: 1. Method Novelty The framework does not significantly differ from existing MLLMs, such as LISA and VisionLLM-v2. The strategy of partitioning input images into local and global regions is a well-explored concept in large vision-language models (e.g., Llava-Next). 2. Dataset Collection Grid-Based Annotation: In the individual pipeline, images are divided into 3×3 grids, with annotations generated for each grid.
This approach raises concerns when objects are positioned at grid boundaries or span across multiple grids, potentially leading to incomplete or inaccurate annotations. Annotation Accuracy: The reliance on automatically generated annotations by vision-language models may lead to inaccuracies, even when using state-of-the-art proprietary models. Other Comments Or Suggestions: No further suggestions. Questions For Authors: Pretraining Discrepancy: In Table 3, the proposed GeoPixel model is pretrained on the iSAID-based GeoPixelD dataset, whereas the comparison methods are trained solely on the target dataset, which creates an imbalance in the experimental setup. Evaluation Metric: For the detailed captioning task, the use of CIDEr as the evaluation metric may not be entirely appropriate. This limitation should be discussed in the manuscript. Ethical Review Flag: Flag this paper for an ethics review. Ethical Review Concerns: Annotation Accuracy: The reliance on automatically generated annotations by vision-language models may lead to inaccuracies, even when using state-of-the-art proprietary models. Code Of Conduct: Affirmed. Overall Recommendation: 3 Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)']
Rebuttal 1: Rebuttal: Dear Reviewer Vj3H, Thank you for your thorough review and insightful comments regarding our submission. We appreciate the opportunity to address the issues raised and clarify aspects of our research methodology and dataset.

**Method Novelty and Framework Comparison:** While our method framework resembles existing MLLMs like LISA and VisionLLM-v2, our approach is specifically tailored to the unique challenges of remote sensing (RS) imagery. The novelty lies in integrating high-resolution image comprehension with pixel-level grounding capabilities. This cohesion is critical for RS, as it enhances the model's capacity to decipher complex visual information at a granular level, thereby providing a more refined understanding of, and interaction with, diverse geospatial objects. Moreover, our data annotation pipeline is uniquely designed for RS imagery, leveraging spatial priors and region-specific markers to extract and represent regional information effectively.

**Dataset Collection and Annotation Methodology:** Thank you for raising concerns about annotation accuracy with 3×3 grids. To address this, we would like to clarify the robustness of our dataset annotation methodology: In instance annotation, precise object assignment is ensured through a pixel overlap ratio, rather than just bounding boxes or object centers, ensuring accurate placement of objects. Furthermore, in scenarios where ambiguity arises, each individual annotation is already supplemented with a distinct set of markers. These markers aid in distinguishing between instances, particularly when locational ambiguity exists. Further reinforced through the integration of category priors, this proves to be a highly effective strategy for instance-specific annotations. By comparison, for cluster annotations, where markers are absent, we have adopted an enhanced localization mechanism with a multi-grid hierarchical system.
This adjustment significantly mitigates ambiguities by refining the spatial grid at which annotations are applied, ensuring comprehensive and precise coverage across intersecting grids.  **Pretraining Discrepancy and Evaluation Metrics:** Regarding the pretraining discrepancy mentioned, I would like to clarify that all the models in Table 3 are trained on GeoPixelD data under the same training conditions, parameters, and epochs. Evaluation Metric: We acknowledge that CIDEr may not fully capture the nuances of detailed image captioning. To address this, we conducted additional evaluations using the CLAIR score [1], an LLM-based metric (GPT-4o in our case) that better aligns with human judgments. The results are GLaMM: 43.11, LISA+: 68.96, PixelLM+: 73.93, GLaMM-ft: 71.74, and GeoPixel **77.50**. Notably, GeoPixel outperforms all other models in this evaluation and across ROUGE-1, ROUGE-2, and ROUGE-L scores.  [1] Chan, D., Petryk, S., Gonzalez, J. E., Darrell, T., & Canny, J. (2023). CLAIR: Evaluating image captions with large language models. arXiv preprint arXiv:2310.12971.  **Example of CLAIR score and reasoning:** "score": 0.25, "explanation": "The candidate and reference sets both describe an aerial view involving docks and a body of water with a dark appearance. However, there are notable discrepancies in the details: the candidate mentions four piers with specific structures on them, while the reference describes only two piers. The description of the boats also differs, with the candidate mentioning a solitary boat moored at a dock, while the reference describes two boats in different positions."  **Ethical Review and Annotation Accuracy:** In addressing ethical review concerns regarding the accuracy and use of automatically generated annotations, it is important to emphasize the rigorous validation process implemented in this research. 
We employed a robust validation protocol where each GCG description in the test set was meticulously verified by expert annotators. These specialists, trained in recognizing and correcting discrepancies in automated text, manually corrected any data discrepancies found. Moreover, training set annotations were also rigorously filtered to eliminate aerial perspective inconsistencies, artifacts such as marker identifiers, fore/background references, depth cues, and inconsistent descriptors. This validation process underscores our commitment to ethical research practices.

**Related Work:** We thank you for pointing out several relevant works. We will update our manuscript to include discussions on RS-specific LMMs like SkyEyeGPT, VRSBench, Popeye, MMM-RS, and RS-MoE, providing a more comprehensive overview of the landscape and situating our contributions within it. We hope that these clarifications and planned improvements address your concerns and demonstrate our commitment to advancing the field of RS through responsible and innovative research practices. Thank you for your constructive feedback, invaluable in refining our work. --- Rebuttal Comment 1.1: Comment: The author's response has addressed part of my concerns. First, regarding data quality, this statement should be added to the data collection part: "We employed a robust validation protocol where each GCG description in the test set was meticulously verified by expert annotators", along with details of the validation protocol. Second, even though the designed method is technically sound, it is mainly built from existing techniques, and the novelty is still questionable. Overall, I'll improve my score to weak accept, given the dataset contribution.
Summary: This paper introduces GeoPixel and GeoPixelD. GeoPixel is a combination of models that receives high-res RGB satellite imagery and outputs text (e.g., a description in natural language) and a dense segmentation map. The combination consists of a frozen vision encoder that "tokenizes" the image and feeds these tokens to a pretrained LLM. The LLM outputs text, which is fed to a pixel decoder along with visual tokens, and the pixel decoder outputs the segmentation map. GeoPixel performs well on both image captioning / describing (in natural language) and image segmentation. GeoPixelD is a dataset that consists of matching high-res RGB satellite imagery and natural language descriptions, constructed via a semi-automated pipeline. Claims And Evidence: Yes. The paper claims to outperform the SOTA on text-guided image segmentation and image description tasks. I believe the results on page 7 support this. Methods And Evaluation Criteria: There are many RS benchmarks that this paper does not use to evaluate GeoPixel. I believe this is because GeoPixel is designed for two specific tasks (RS image descriptions in natural language and text-guided image segmentation) for which few datasets exist. Can the authors please clarify why they did not evaluate GeoPixel on other RS image-to-text datasets, like those used in GeoChat, EarthGPT, etc.? Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Overall, the experiments look sound. According to the end of section 5.2, all LMM models are finetuned on GeoPixelD. However, in Table 2, GLaMM has two entries: "GLaMM" and "GLaMM-FT". LISA and PixelLM do not have the "FT" suffix; were these _not_ finetuned on GeoPixelD? Supplementary Material: I reviewed the full appendix. Relation To Broader Scientific Literature: GeoPixel demonstrates a method to leverage pretrained models as components in a system that can be finetuned successfully on another domain (remote sensing).
This research question is relevant to many ML applications. That being said, the GeoPixel method itself is very similar to GLaMM (which was not applied to RS). Essential References Not Discussed: This paper does not discuss RS-specific foundation models, such as SatMAE, etc. Other Strengths And Weaknesses: As discussed, GeoPixel is quite similar to GLaMM. I see the differences between GeoPixel and GLaMM as: 1. GeoPixel does not use an explicit region encoder (see top left of Figure 2 in GLaMM). Instead, GeoPixel tiles each high-res image and encodes them independently and globally (via downsampling). 2. GeoPixel uses PLoRA, which applies LoRAs to the vision tokens inside of the LLM. To me, #1 makes sense in RS since we may not have a strong region encoder to extract salient regions. Tiling a high-res image seems like a sensible choice. #2 also seems reasonable to me, as the LLM likely needs to be adjusted a bit. However, I do not see ablations on these choices that empirically justify them. Overall, I expect more experiments from an applied submission with limited technical novelty. Other Comments Or Suggestions: None. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer zikq,  Thank you for your detailed review and insights on our submission, GeoPixel. We appreciate the opportunity to address your concerns and clarify aspects of our research.  **Evaluation on Other RS Image-to-Text Datasets:** You raised an important question regarding why GeoPixel was not evaluated on additional RS image-to-text datasets, such as those used in GeoChat, EarthGPT, etc. The primary reason for this was the specific focus of GeoPixel on high-resolution satellite imagery coupled with text-guided image segmentation. While valuable, the datasets used in GeoChat and EarthGPT often encompass varied tasks that do not align directly with the dense segmentation focus of GeoPixel. As the stated models do not support referring expression segmentation, we formulate referring expression detection as a post-processing task applied to the outputs of GeoPixel. Specifically, segmentation masks predicted by GeoPixel are utilized to derive horizontal bounding boxes (HBBs) and oriented bounding boxes (OBBs), enabling a consistent basis for comparative evaluation on VRSBench. GeoPixel outperforms GeoChat in referring expression detection with both HBBs and OBBs, reporting Accuracy\@0.5 / Accuracy\@0.7 across unique, non-unique, and overall cases. The gain in performance is summarized as follows: HBB-based detection: Unique **+12.97 / +18.94**, Non-Unique **+21.30 / +22.32**, Overall **+17.90 / +20.93**; OBB-based detection: Unique **+22.18 / +12.27**, Non-Unique **+42.01 / +25.27**, Overall **+33.70 / +19.82**.  Both models were evaluated after finetuning on VRSBench; the gain in performance clearly shows the importance of pixel-level alignment in multimodal models.  **Models Finetuning Clarification:** The models LISA+ and PixelLM+ were indeed modified and finetuned on the GeoPixelD dataset. The designation "FT" was specifically used for GLaMM to distinguish between its original and finetuned versions, as its architecture was not modified. 
We will ensure that future iterations of the manuscript clearly articulate these distinctions to avoid confusion regarding the experimental setups and model adaptation status.  **Methodological Choices and Justifications:** Explicit region encoders, typically employed for tasks like region-specific captioning, are not used in our approach. Instead, we utilize a Set of Marks (SOM) alongside spatial priors, as detailed in our data annotation pipeline, to target and delineate specific regions precisely. This method ensures accurate regional descriptions, functions independently of any particular model framework, and offers an effective alternative. Table 4 presents the ablation study on the impact of tiling high-resolution images, where P=1 denotes the use of a single image patch, effectively implying no tiling.   **Relation to Broader Scientific Literature and Technical Novelty:** We appreciate the observation regarding the conceptual similarity between GeoPixel and models like GLaMM. However, remote sensing (RS) imagery demands high spatial resolution to capture fine structural details of ground objects. GLaMM's input resolution (336×336) is inadequate for representing RS data's rich spatial context and intricate features. GeoPixel addresses this by combining high-resolution visual understanding with pixel-level grounding, making it well-suited for the complexities of RS imagery. Additionally, our semi-automated data annotation pipeline, specifically tailored for RS imagery, extracts regional information through a set of marks and spatial priors, which also constitutes a key contribution.  In the revised manuscript, we will discuss RS-specific foundation models (e.g., SatMAE) to contextualize our work further within the broader literature.  We hope that these clarifications address your concerns adequately. Thank you for your constructive feedback, which is invaluable in improving our work. 
--- Rebuttal Comment 1.1: Comment: After reading the authors' rebuttal and other reviews, I keep my score at weak reject. There are two main reasons: 1. Limited technical novelty. There are two main innovations beyond GLaMM: dividing images into tiles and using PLoRAs [1]. I'm not an expert in vision-language models (VLM), but I believe many of them can process multiple images / tiles in a single sample. Secondly, PLoRA has been used in (and was originally designed for) VLMs. Thus, the technical contributions are minor. 2. Limited experiments, specifically ablations. The submission has fewer experiments than I expect; however, as the authors note, there are very few public datasets for this specific task. Their method could be adapted to perform similar tasks, and the authors provided 1 such example in their rebuttal. However, the lack of public datasets does not explain the paper's lack of ablations. Ablations are crucial to help understand any proposed method. In response to my request for ablations, the authors pointed to Table 4 and did not address PLoRA. Table 4 varies the number of tiles at test time {1, 4, 9} but does not retrain the model under a different condition. For example, the model could be trained with 4 patches and also tested with {1, 4, 9} to test resolution extrapolation. Finally, since the two main technical contributions are not adequately ablated, it remains unclear how these two elements (and more!) contribute to GeoPixel's success. Thus, I cannot recommend this paper be accepted. [1] https://arxiv.org/abs/2401.16420 --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s comments and would like to clarify several key points regarding our work's novelty and experimental depth. 
**On Technical Novelty:** While some existing VLMs are capable of handling multiple tiles, GeoPixel introduces a tiling strategy in conjunction with pixel-level grounding, uniquely enhancing spatial understanding in high-resolution remote sensing (RS) imagery. This design captures fine-grained geospatial features, enabling reasoning across broader spatial regions, an aspect often overlooked in prior work. It is noteworthy that GeoPixel is not a minor extension of existing approaches *but rather a purpose-built solution for geographic reasoning* with a demonstrated impact on downstream tasks. RS often involves positional references (e.g., "the red car at top left"), requiring models to resolve spatial relationships among similar objects. GeoPixel demonstrates robust grounding in such cases both quantitatively and qualitatively. As demonstrated in Table 3 and Figure 9, tiling does not impair the model’s ability to localize objects based on positional language; in fact, it improves generalization. Furthermore, a key contribution of our work is the introduction of a *semi-automated annotation pipeline* specifically designed for remote sensing imagery. This pipeline leverages spatial priors and regional markers to efficiently generate high-quality pixel-level annotations aligned with natural language. As a result, we present the first pixel-grounded remote sensing dataset that supports multi-object segmentations interleaved with language, addressing a critical gap. **On Experiment and Ablation Study:** GeoPixel, being a segmentation specialist model, is evaluated against SOTA models across four complex tasks: RS-GCG, RES, Referring Expression Horizontal Detection, and Referring Expression Oriented Detection. In RS-GCG, it outperforms expert models (LISA, PixelLM, and GLaMM) under fair modification and training conditions, evaluated across diverse metrics. 
Notably, in RES, GeoPixel surpasses specialized models, ranking first among RS-focused LMMs and achieving a significant margin (*+15.83* Acc\@0.5 test) over concurrent work like GeoGround on the RRSIS-D benchmark. GeoPixel outperforms recent LMMs (GeoChat, GeoPix) in referring object detection (grounded localization through HBB) by leveraging superior pixel-level alignment. Moreover, in RS, oriented object detection is crucial, as aerial views often contain densely packed objects in arbitrary directions. Mainstream detectors use five- or eight-parameter formats, while current LMMs output quantized versions of these. We reformulate the task as a post-processing step on segmentation masks, achieving significant gains (*+33.7* Acc\@0.5) over the state of the art. Regarding ablation studies on resolution and tiling, we clarify the interpretation of Table 4 and our training methodology. Following [1], the 'training patch number' specifies the maximum patch count permitted per sample during training. Critically, our training protocol employs patching at multiple scales: patches are extracted not only at the nominal maximum size but also from lower-resolution variants of the input data. This ensures the model is exposed to a spectrum of resolutions up to that defined by the maximum patch configuration. Consequently, we contend that the ablation study performed on inference-time patch size comprehensively evaluates the impact of effective input resolution and sufficiently demonstrates the utility of tiling within our framework. LMMs require effective modality alignment between visual and language tokens. In our model, this alignment is achieved through a vision projection layer and additionally through pLoRA [1]. To study the effect of this alignment for RS data, we conducted an ablation on the training of the vision projection layer (a 2-layer MLP), with results presented in Table 6. These results highlight the role of feature alignment. 
It is crucial to note that the pLoRA, pre-trained on extensive data in [1], was maintained without modification, thus serving as a consistent auxiliary alignment factor. [1] https://arxiv.org/abs/2401.16420
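The mask-to-box post-processing discussed in this thread (deriving horizontal boxes from predicted segmentation masks for detection-style evaluation) is simple to sketch. This is our own illustrative helper, not the authors' code; oriented boxes would additionally need a minimum-area-rectangle fit (e.g., `cv2.minAreaRect`), omitted here.

```python
import numpy as np

def mask_to_hbb(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Horizontal bounding box (x_min, y_min, x_max, y_max) enclosing
    all nonzero pixels of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("empty mask")
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Boxes obtained this way can then be scored against ground-truth boxes with the usual Accuracy\@IoU-threshold metrics quoted in the rebuttal.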
Summary: The paper presents GeoPixel, a novel large multimodal model (LMM) that advances high-resolution remote sensing (RS) image analysis by integrating pixel-level grounding with textual understanding, addressing limitations in existing RS-LMMs. Its architecture features an adaptive image divider for processing 4K-resolution RS images through global-local patch fusion, combined with vision encoders (CLIP ViT-L and SAM-2), the InternLM2 language model, and a pixel decoder for mask generation. GeoPixel demonstrates excellent performance on RS-GCG tasks and RRSIS segmentation tasks; additionally, a key contribution is the semi-automatically constructed dataset GeoPixelD, which serves as an effective resource for pixel-level understanding of RS imagery. Claims And Evidence: Claim: GeoPixel supports high-resolution RS imagery. Evidence: Dynamic partitioning (up to 4K) and experiments showing performance gains with higher patch counts (Table 4). Claim: Pixel-level grounding improves RS understanding. Evidence: Comparisons with bounding-box-based RS-LMMs (Table 1) and superior metrics in multi-target segmentation (Table 2). Claim: GeoPixelD enables fine-grained comprehension. Evidence: Dataset construction details (Section 4) and ablation on annotation complexity (Table 5). Methods And Evaluation Criteria: Methods: Adaptive partitioning, dual vision encoders, and a pixel decoder address RS-specific challenges, forming a pipeline that achieves both image understanding and pixel-level segmentation. Evaluation: Metrics (CIDEr, mIoU) and benchmarks (RS-GCG, RRSIS) align with task objectives. Baselines (LISA+, PixelLM+) are adapted fairly but require clarity regarding their modifications. Theoretical Claims: The paper does not make any theoretical claims that require rigorous mathematical proof. Its core contributions focus on method design and empirical verification. 
Experimental Designs Or Analyses: Yes; baselines (LISA, PixelLM) are adapted for RS via fine-tuning on GeoPixelD, ensuring fair comparison. The RRSIS benchmark (Table 3) uses established metrics (P@0.5, mIoU), but cross-dataset generalization (e.g., on non-iSAID data) is untested. Supplementary Material: Yes; Figures 6–9 illustrate annotation pipelines, model comparisons, and qualitative results. Failure cases (Figure 5) highlight limitations. Relation To Broader Scientific Literature: GeoPixel advances RS-LMMs by introducing pixel grounding, addressing a gap in prior works (e.g., RSGPT, GeoChat) limited to bounding boxes. It aligns with natural-image pixel-grounding models (LISA, PixelLM) but is applied to remote sensing scenarios. Essential References Not Discussed: There are several related works on MLLMs with segmentation capability [1][2][3]. [1] RSUniVLM: A Unified Vision Language Model for Remote Sensing via Granularity-Oriented Mixture of Experts. [2] GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding. [3] GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing. Discussing the differences in tasks and methods would help readers better understand this article. Other Strengths And Weaknesses: Strengths: 1. First RS-LMM to achieve pixel-level grounding, addressing a critical gap in existing literature. 2. GeoPixelD’s semi-automated annotation pipeline ensures high-quality, granular data for RS-specific tasks. 3. The paper is well-presented and well-written. 4. It is good that the authors considered the limitations and challenges. Weaknesses: 1. It is unclear what GeoPixel does specifically for remote sensing images and how effective those choices are. 2. There is a lack of discussion of some related work (please see Essential References Not Discussed). 3. No discussion of training/inference efficiency, especially for the additional SAM2 decoder. Other Comments Or Suggestions: Please see Weaknesses. 
The experimental results of the article are good, the workload is sufficient, and the quality of the dataset is also high. My focus may be on the effectiveness and significance of the task itself. Questions For Authors: 1. The paper mentions adaptive partitioning, but multi-scale feature fusion methods are widely used in MLLMs, such as S2-Wrapper [1]. Is there any special design for remote sensing? Or is this design more helpful for remote sensing images? [1] Shi B, Wu Z, Mao M, et al. When do we not need larger vision models? 2. Do we need GCG tasks for remote sensing images? The current mainstream supervised segmentation tasks often target limited categories in remote sensing images. In comparison, what are the advantages of GeoPixel, which is also oriented to a single task? 3. Experiments rely heavily on GeoPixelD/iSAID; cross-dataset robustness (e.g., DIOR, xView) is untested. Can GeoPixel be extended to more general remote sensing scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer Amru,  Thank you for your detailed review and constructive comments regarding our submission on GeoPixel. We appreciate your taking the time to analyze our work and the insights you provided. Below, we address your comments and concerns:  **Clarity on Adapted Baselines:** In our experiments, LISA+ extends LISA, which could not handle multiple instances: it is enhanced to include multi-target segmentation masks in its output pipeline, and phrase tokens (<p> and </p>) are added for the GCG task.  PixelLM+ builds on PixelLM, where phrase tokens are added and the <SEG> token is replaced with multiple codebook tokens. These changes ensure a fair comparison and are detailed in Section 5.2 (Baselines) of our manuscript.  **Related Work:** GeoGround, RSUniVLM, and GeoPix (which was published in January 2025) are works concurrent to ours and share similarities. Comparative analysis can enrich the understanding of GeoPixel's unique contributions. GeoGround and RSUniVLM support pixel-level grounding by converting masks into text sequences, adding a computational burden to the LLM that scales with the number of distinguishable objects. GeoPixel resolves this limitation through end-to-end training with a dedicated mask decoder. Moreover, GeoPixel not only outperforms GeoGround on RRSIS-D data by **+11.11** (Acc\@0.5 val), **+15.83** (Acc\@0.5 test), **+6.89** (mIoU val), and **+6.8** (mIoU test) but also outperforms specialist models (Table 4), showcasing strong referring expression segmentation capability.  **Training and Inference Efficiency:** All models, including GeoPixel and the baseline models (LISA+, PixelLM+, and GLaMM-FT), were trained for the same number of epochs with similar computational resources, ensuring performance gains stem from model design, not training or hardware differences. 
Furthermore, to provide a comparison of inference efficiency, we additionally report average runtime per sample on the RS-GCG task: LISA+: 25.28s, PixelLM+: 91.44s, GLaMM-FT: 35.08s, and GeoPixel (P=1): 46.48s. **Adaptive Partitioning and Multi-Scale Feature Fusion (MSFF):** Adaptive partitioning is not merely an alternative but a complementary strategy to MSFF. RS imagery exhibits significant variations in spatial coverage (e.g., 800×800 pixels can cover diverse areas such as 1 km² or 10 km²). In this context, image resolution is crucial to determine the granularity and clarity of details within each square meter. High resolution (HR) enhances the visibility of finer features, essential for accurate identification and detailed analysis. MSFF can also be valuable by allowing information integration from various scales. Balancing both strategies presents a promising direction for future exploration, potentially improving RS accuracy and efficiency. **Necessity and Advantages of GCG in RS:** GCG is immensely important in RS, as traditional supervised segmentation tasks, although precise, are limited to fixed categories and offer minimal interactivity. GCG adds an intuitive, interactive layer, enabling users to explore and query data more flexibly, broadening the scope beyond predefined labels. GeoPixel demonstrates specialized, high-precision segmentation capability, but by integrating GCG, it can expand its usability, empowering more dynamic and user-driven data exploration.  **Cross-Dataset Robustness:** We evaluated GeoPixel on the VRSBench referring expression detection task, reporting Acc\@0.5 / Acc\@0.7 across 3 categories: unique (U), non-unique (NU), and overall (O).  **MiniGPT-v2** scored 40.7/18.9 (U), 32.4/15.2 (NU), 35.8/16.8 (O).  **LLaVA-1.5** achieved 51.1/16.4 (U), 34.8/11.5 (NU), 41.6/13.6 (O).  **Mini-Gemini** showed lower performance with 41.1/9.6 (U), 22.3/4.9 (NU), 30.1/6.8 (O).  **GeoChat** reached 57.4/22.6 (U), 44.5/18.0 (NU), 49.8/19.9 (O).  
**GeoPix** performed similarly with 57.0/22.7 (U), 44.8/18.2 (NU), 49.8/20.0 (O).  **GeoPixel** achieved the best results: **70.37/41.54** (U), **65.80/40.32** (NU), **67.70/40.83** (O).  We further compared **GeoChat** and **GeoPixel** on VRSBench using oriented bounding boxes.  **GeoChat** scored 32.3/12.6 (U), 18.5/5.7 (NU), 24.3/8.6 (O).  **GeoPixel** significantly outperformed, achieving **54.48/24.87** (U), **60.51/30.97** (NU), **58.00/28.42** (O).   To avoid data leakage, GeoPixel (trained on GeoPixelD) is finetuned on VRSBench without using any data from RRSIS-D (a DIOR-based dataset). GeoPixelD and VRSBench rely on DOTA's training set for training and its validation set for testing. These steps ensure no leakage from either DOTA or DIOR. We plan to extend our evaluations using additional datasets like xView to test GeoPixel's applicability to broader RS scenarios. The DIOR-based results indicate promising adaptability, which we'll detail along with qualitative results in the revised submission.  We hope our responses adequately address your concerns. Thank you for your recommendations, which have undoubtedly helped improve our work.
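For reference, the Acc\@0.5 / Acc\@0.7 figures quoted throughout this exchange count a prediction as correct when its IoU with the ground truth meets the threshold, and mIoU averages IoU over samples. A minimal sketch with our own helper names, assuming boolean NumPy masks (the same logic applies to boxes):

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def acc_at(preds, gts, thresh=0.5):
    """Fraction of predictions whose IoU with ground truth is >= thresh
    (i.e., Acc@0.5 for thresh=0.5, Acc@0.7 for thresh=0.7)."""
    return sum(mask_iou(p, g) >= thresh for p, g in zip(preds, gts)) / len(preds)
```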
Maximum Coverage in Turnstile Streams with Applications to Fingerprinting Measures
Accept (poster)
Summary: This paper considers the maximum coverage problem, where there is a universe of $n$ elements and we are presented with $d$ subsets of these $n$ elements. The goal is to retain and output a set of $k$ subsets (where $k$ is a parameter given in advance) whose union covers as many items as possible. The exact version of this problem is NP-hard, but it is known that a greedy algorithm, which runs in polynomial time, provides a $1-1/e$ approximation, and this is known to be tight assuming $P \neq NP$. This paper considers the setting where the subsets are presented in a (single-pass) streaming fashion: each time we see one subset, and the objective is to retain a set of $k$ subsets with approximation factor as close as possible to $1-1/e$ with as little space (memory) as possible. The main result is a single-pass algorithm in a turnstile setting, where an element can be added to or removed from a single subset in each step. The space complexity for a $(1-1/e-\epsilon)$ approximation is roughly $d / \epsilon^3$, and the update time is only polylogarithmic in $n$. The authors also provide applications to fingerprinting, where the goal is to find sets of users satisfying certain "minimally intersecting" constraints. Claims And Evidence: This is a theoretical paper and the claims are proved. Can you provide a reference showing that the $1-1/e$ approximation factor is tight? (This was mentioned without proof in the introduction.) Methods And Evaluation Criteria: Yes - solid experimental benchmarking (see relevant bullet below). Theoretical Claims: The theoretical claims are strong and seem reasonable. I did not verify their correctness. Experimental Designs Or Analyses: This is a theoretical paper, and the focus is on proving theorems, not on experimentation. That said, the authors do carry out a number of relevant experiments, which I think is more than enough for such a theoretically-oriented paper. 
I found the soundness of the experiments adequate, especially given the relatively limited baseline for this problem. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: There is not a lot of research on this problem in the streaming domain, but the authors do identify the most relevant work and benchmark against it. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: This is a solid paper providing strong results for the maximum coverage problem, with interesting applications to fingerprinting. It is probably worthy of acceptance. There are no major weaknesses. Perhaps one minor weakness is that this paper is written in a relatively dry manner, and seems mostly intended for experts. A more detailed subsection on proof techniques (as part of the intro) is also missing and should be added. Other Comments Or Suggestions: See weaknesses discussion above. Questions For Authors: None Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
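For readers less familiar with the problem, a minimal sketch of the classical greedy algorithm the summary refers to, written offline with all sets held in memory (the paper's point is achieving nearly the same guarantee in small space under streaming updates):

```python
def greedy_max_coverage(sets: list[set], k: int) -> list[int]:
    """Pick k set indices, each time taking the set that covers the most
    still-uncovered items; achieves a (1 - 1/e) approximation."""
    covered: set = set()
    chosen = []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda j: len(sets[j] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen
```

For example, on `[{1, 2, 3}, {3, 4}, {4, 5, 6, 7}]` with `k = 2` the greedy picks the third set (4 new items) and then the first (3 new items), covering all seven elements.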
Rebuttal 1: Rebuttal: We thank the reviewer for their review and encouraging comments. We address them below. Here is a reference showing that a 1-1/e approximation (unless P = NP) in polynomial time is tight [1]. We will be sure to include this citation in the next version of the paper. We will also be sure to include a more detailed subsection on our proof techniques in the introduction. In particular, we will give an algorithm and proof sketch and more details on why we can accommodate deletions. [1] Uriel Feige. 1998. A threshold of ln n for approximating set cover. J. ACM 45, 4 (July 1998), 634–652. https://doi.org/10.1145/285055.285059 Please let us know if there is anything else that would be helpful to address or clarify.
Summary: The paper considers the maximum coverage problem in the turnstile model. The offline problem considers $d$ subsets from a universe $[n]$, and the goal is to output $k < d$ subsets such that the union of the sets contains the largest possible number of items from $[n]$. This can also be expressed in matrix notation where $A\in\mathbb{R}^{n\times d}$: each column is a subset, each row is an item from the universe. $A_{ij}\neq 0$ means that item $i$ is present in subset $j$. In the turnstile model, the matrix is gradually constructed by updates of the form $(i, j, \pm c)$, modifying $A_{ij}$ by adding or subtracting $c$. The challenge is to continuously solve the maximum coverage problem on $A$ as it gets updated, in low space. Without the restriction on space, the problem is simple, as $A$ can be stored uncompressed and running a greedy algorithm achieves a tight $1-1/e$ relative approximation (assuming $P\neq NP$), and so we could simply run a greedy algorithm at each step. With a restriction on space, the problem is harder. Previous work achieved $(1-1/e -\epsilon)$-relative approximation for only set-arrival in space $\tilde{O}(d/\epsilon^2)$ (McGregor & Vu, 2018). Bateni et al. (2017), achieved the same error guarantee for item-arrival in space $\tilde{O}(d/\epsilon^3)$. Neither work supported deletions. This paper gives a streaming algorithm that also supports item-deletion, for the same error guarantee, in $\tilde{O}(d/\epsilon^3)$, matching the guarantees of Bateni et al. (2017) for this harder setting (Theorem 1.1). The algorithm proposed in this paper uses Bateni et al. (2017) as the starting point. There the idea is that a matrix $A_*$ on a smaller universe can be constructed by carefully subsampling $A$, and that greedily running $k$-cover on $A_*$ gives the right error guarantee. In this paper, they give an algorithm (Algorithm 1) for constructing $A_*$ assuming a fixed $A$ is given up front. 
They proceed to show how (multiple) $A_*$ can be built from $A$ using a linear sketch (Algorithm 5), internally based on CountSketch and an $L_0$ sketch. Since CountSketch and the $L_0$ sketch are linear sketches, and linear sketches support both additions and deletions, Algorithm 5 does as well. When outputting an answer, they run $k$-cover on each of the $\log n$ different $A_*$ (corresponding to different subsampling rates of the rows of $A$), and choose the best output based on the $L_0$ sketch. The paper also considers additional problems in turnstile streams. The authors reduce targeted fingerprinting to maximum coverage, thereby improving over an existing algorithm in Gulyas et al. (2016) (Corollary 1.2), reducing space (from $O(nd)$ to $\tilde{O}(d/\epsilon^3)$) and time (from $O(knd)$ over all updates, to $\tilde{O}(1)$ per update). For general fingerprinting, the authors design a new linear sketch for computing the complement of the frequency moment of a dataset, $n^p - F_p$ (Theorem 1.4), which they use to achieve a $(1-1/e-\epsilon)$-relative approximation for general fingerprinting in space $O(dk^3/\epsilon^2)$. The paper includes experiments on the fingerprinting results run on public datasets, comparing against Gulyas et al. (2016). It is demonstrated that the new algorithms are more time and space efficient (Figures 1, 2 and 4), but at a cost in utility (Figures 2, 3 and 5). The authors also demonstrate that their general fingerprinting algorithm can be used for dimensionality reduction, with the use case of $k$-means. ## update after rebuttal The authors provided good answers to my questions in their rebuttal. I stand by my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I skimmed the appendix, but did not check any proof carefully. Experimental Designs Or Analyses: The experiments performed in the paper make sense to me. Supplementary Material: I skimmed the attached code. 
It appears well-documented and is supplied as a Jupyter Notebook. Relation To Broader Scientific Literature: To the best of my knowledge, the results are properly contextualized. Essential References Not Discussed: I am not familiar with any missing work. Other Strengths And Weaknesses: Strengths: 1. It is overall a well-written paper. 2. The contribution of expanding streaming maximum coverage to also work for deletions strikes me as meaningful, and I personally find the techniques interesting. 3. The empirical results make a good case for the improved time and space complexity (at a cost in utility). Weaknesses: 1. It is hard to infer what was known before on fingerprinting from reading the paper. For targeted fingerprinting, Corollary 1.2 and its preceding discussion seem to indicate that this paper improves the space and time complexity, but it is not clear to me if it achieves the same error guarantee. For general fingerprinting it is also hard to infer how Theorem 1.5 compares to prior work (Gulyas et al., 2016). 2. Related to the above, it seems that the baseline always achieves better error in the experiments, but that this should be the case is not clear from the text. 3. Minor quibble: the figures could be better formatted/scaled. Other Comments Or Suggestions: N/A. Questions For Authors: I ask the following questions to better understand the contributions made in this paper. I do not expect to drastically change my score based on these responses in isolation. Questions: 1. You seem to incur a factor of $O(\log n)$ in the space from storing $O(\log n)$ different versions of $A_*$ for the result in Theorem 1.1. Do Bateni et al. (2017) and McGregor & Vu (2018) also incur a logarithmic dependence on $n$? 2. What error guarantee do Gulyas et al. (2016) achieve for targeted and general fingerprinting? Do they get a $(1-1/e)$-approximation for both? 3. Is the space/time complexity of Gulyas et al. (2016) the same across general and targeted fingerprinting? 
Code Of Conduct: Affirmed. Overall Recommendation: 3
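The point in the summary that linearity buys deletion support can be illustrated with a toy CountSketch-style sketch. Since the sketch is a linear map S(x) = Mx, a turnstile update (i, ±c) is just adding ±c·M[:, i] to the state, so a deletion exactly cancels the matching insertion. This is an illustrative stand-in, not the paper's construction; a real CountSketch stores hash functions rather than an explicit matrix.

```python
import numpy as np

class ToyLinearSketch:
    """Explicit-matrix CountSketch analogue: each coordinate hashes to one
    random signed bucket; the state is maintained under turnstile updates."""
    def __init__(self, n: int, width: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        buckets = rng.integers(0, width, size=n)
        signs = rng.choice([-1.0, 1.0], size=n)
        self.M = np.zeros((width, n))
        self.M[buckets, np.arange(n)] = signs
        self.state = np.zeros(width)

    def update(self, i: int, c: float) -> None:
        # Turnstile update (i, +/- c): linearity makes deletions free.
        self.state += c * self.M[:, i]
```

Inserting an item and later deleting it leaves the sketch state exactly as if the item had never appeared, which is why composing linear sketches (CountSketch, the $L_0$ sketch) yields an algorithm that handles deletions.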
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and comments. We address the comments and questions below. We will be sure to scale and format the figures more appropriately in the next version of the paper. Concerning the $O(\log n)$ factors in Theorem 1.1: yes, both Bateni et al. (2017) and McGregor & Vu (2018) incur $O(poly \log n)$ factors. Concerning the questions on prior work (Gulyas et al., 2016): (Gulyas et al., 2016) shows that max coverage can be reduced to fingerprinting, proving that fingerprinting is NP-hard. In addition, it is known that the best approximation factor for maximum coverage in polynomial time is $1-1/e$. This is the approximation that (Gulyas et al., 2016) achieves, although it is not proven formally. For both targeted and general fingerprinting their algorithm takes O(nd) space and O(knd) time. In our paper we achieve a $1-1/e-\epsilon$ approximation for input $\epsilon \in (0,1)$, making our approximation near-optimal. We focus on optimizing the space and time bounds for maximum coverage, giving as a corollary a $\tilde{O}(d/\epsilon^3)$ space and $\tilde{O}(1)$ update time algorithm for targeted fingerprinting. This removes the dependence on $n$ and gives a total time nearly linear in the length of the stream. For general fingerprinting, we achieve $\tilde{O}(dk^3/\epsilon^2)$ space. We note that in these big data settings we usually assume that $n$ is much larger than $d$ and therefore aim to remove the space dependence on $n$. We will be sure to make this more clear in the next version of the paper. For the experiments, the baseline always does achieve a better error. We will make sure this is clearer in the text. We note that it is expected that the baseline achieves better error - the baseline is a slightly optimized version of the classical greedy maximum coverage algorithm which is known to achieve optimal error for algorithms using polynomial time. 
However, as noted in our experiments our algorithms do not suffer much in accuracy in comparison and greatly increase the time efficiency. Please let us know if there is anything else that would be helpful to address or clarify.
Summary: The paper introduces a linear sketch for the maximum coverage problem that supports both insertions and deletions of item-feature pairs under the turnstile streaming model. This sketch improves on previous work that considers insertion-only streams, or which support only the insertion or deletion of entire subsets rather than individual items from subsets. The main application of this sketch considered in the paper is the fingerprinting problem from the privacy literature, where the task is to select a subset of features that best distinguishes either a single user (i.e., the "targeted" case), or all pairs of users (the "general" case). As part of the machinery to apply the sketch to general fingerprinting, the paper describes an additional sketch for the complements of frequency moments. The empirical evaluation compares the runtime and accuracy of the proposed sketching approach to the fingerprinting method described in Gulyas et al. (2016). Additionally, an application to accelerate k-means via dimensionality reduction is also explored. Claims And Evidence: The claimed construction of a maximum coverage sketch supporting insertions and deletions is supported by the theoretical results. The claim that the sketching approach improves on the fingerprinting method from Gulyas et al. (2016) is supported by the run time and accuracy measurements in the evaluation. Methods And Evaluation Criteria: The fingerprinting evaluation compares the proposed sketch against the method from Gulyas et al. (2016) on the UCI Adult and Census datasets. Two limitations of this evaluation are that: (1) it does not compare the proposed sketch against other sublinear space algorithms for maximum coverage like that of Bateni et al. (2017), which should also be applicable to the fingerprinting problem, and that (2) the evaluation does not touch on the proposed sketch's support for item deletions, which is the main differentiator of this approach from previous work. 
With respect to (1), one question that should ideally be addressed is: what is the penalty that is incurred vs. prior work in terms of the sketch's memory/accuracy tradeoff due to the added support for deletions? Theoretical Claims: I checked the proofs of the main theorems at a high level and did not identify any correctness issues. I did not verify the proofs in detail. Experimental Designs Or Analyses: - Supplementary Material: I reviewed Sections C, D, and E of the supplementary material at a high level. Relation To Broader Scientific Literature: The paper's contributions are related to the literature on sketching algorithms for performing analyses of data streams using sublinear space. For the maximum coverage problem, the proposed sketch improves on the results of Bateni et al. (2017) by supporting both insertions and deletions while maintaining the same memory cost up to polylog factors. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths: - The theoretical contributions of the paper are strong. The sketch constructions presented in the paper and the accompanying proofs of correctness are nontrivial and will be of interest to the broader streaming algorithms community. Weaknesses: - The empirical evaluation of the proposed sketch is limited, as detailed above. - The figures in the evaluation section should be made more legible. The font size in the plots is small, and there is plenty of whitespace that can be filled with larger figures. Other Comments Or Suggestions: Typos: - L213-214 should read "This process continues until $A_*$ contains ... So, $A_*$ is ...". - Algorithm 1, L2 should read "$\varepsilon = \epsilon / 8$" Questions For Authors: 1. Can you clarify how the sketch from Bateni et al. (2017) fails to support deletions? 2. The max coverage sketch is constructed using several Count Sketch and L0 sketches. What practical recommendations do you have for selecting the parameters for these component sketches? 3. 
The evaluation section of the paper focuses on small values of $k$. Can you comment on how the algorithm scales to larger values of k, e.g. for dimensionality reduction from 1000s to 100s of features? Code Of Conduct: Affirmed. Overall Recommendation: 4
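For concreteness, the (1 − 1/e)-approximate greedy algorithm that these small-space sketches emulate can be stated in a few lines. The following is a minimal offline sketch (the function name and toy instance are illustrative, not from the paper); it recomputes marginal coverage exactly rather than estimating it from a linear sketch:

```python
def greedy_max_coverage(sets, k):
    """Classic greedy (1 - 1/e)-approximation for max k-coverage.

    sets: dict mapping a set id to the collection of items it covers.
    Repeatedly picks the set with the largest marginal coverage.
    """
    covered, chosen = set(), []
    for _ in range(min(k, len(sets))):
        best = max((s for s in sets if s not in chosen),
                   key=lambda s: len(sets[s] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance: "C" has the largest set, then "A" adds the most new items.
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, len(covered))  # -> ['C', 'A'] 7
```

The streaming question above is then: how much memory/accuracy does it cost to approximate these marginal-coverage queries under insertions and deletions, rather than computing them exactly.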
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and encouraging review. We address their comments and questions below.

Regarding the experimental evaluation: We chose Gulyas et al. (2016) as the baseline primarily because it represents the standard offline approach for the fingerprinting application, allowing us to demonstrate the speedup achieved by our sketch. We did not include item deletions in our evaluation because the baseline (the prior work of Gulyas et al. (2016)) is not a streaming algorithm. Therefore, it does not support updates of any kind, including deletions, and requires direct access to the entire input (which in this case is the input $n \times d$ matrix $A$). Regarding the penalty of adding support for deletions as compared to prior streaming work: our algorithm achieves the same asymptotic space complexity of $\tilde{O}(d/\epsilon^3)$ and near-optimal approximation factor $(1-1/e-\epsilon)$ as the insertion-only algorithm of Bateni et al. (2017). While constants might differ in practice, theoretically we match the bounds while offering broader functionality.

We will be sure to improve the formatting and legibility of the plots in the next version of the paper. We thank the reviewer for also pointing out some typos - we will be sure to correct them.

Regarding the question on the sketch from Bateni et al. (2017): The sketch from Bateni et al. (2017) fails to accommodate deletions. Up to $\tilde{O}(d/\epsilon^3)$ edges, the sketch requires $d/\epsilon$ nonzero elements per item/row of the input matrix $A$. In particular, in their Algorithm 2, where they implement this sketch in the streaming setting, they keep the first $d/\epsilon$ nonzeros for some row $r$ and discard all the nonzeros for that row $r$ that come in the stream after this. However, in a stream that has deletions, some subset (or all) of these first $d/\epsilon$ nonzeros could be deleted, leaving the sketch with no nonzeros for this row.
It is not clear how to get around this issue with the algorithm given in the paper of Bateni et al. (2017). Regarding practical recommendations for selecting parameters for CountSketch and L0 sketches: While our theoretical analysis establishes the necessary parameters to achieve the desired accuracy and failure probability, practical implementations often require less space and time to obtain good approximations. Our experiments support this observation. We set the number of buckets and other sketch-related constants in our implementation to fixed values, which we either varied for experimental analysis or fine-tuned. These values were lower than the theoretical requirements. Regarding the question about how the algorithm scales: In our experiments we focus on small values of $k$ due to compute limitations. We note that theoretically the only algorithm whose space complexity depends linearly on $k$ is the one for general fingerprinting. In particular, the space of the core max coverage sketch is independent of $k$. In addition, our algorithms are designed for big data settings and we expect our algorithms to scale well with huge values of $n$, $d$, and $k$. Furthermore, as $n,d,k$ grow, we expect the accuracy guarantees to hold while the runtime advantage over the $O(knd)$ baseline becomes more pronounced. Evidence of this can be seen in our speed-up for the larger dataset versus the smaller dataset. Please let us know if there is anything else that would be helpful to address or clarify.
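The failure mode described above can be reproduced with a toy turnstile stream. The following is an illustrative simulation (not the actual construction of Bateni et al. (2017)) of a row sketch that retains only the first c nonzeros seen per row and applies deletions only to retained entries:

```python
def first_c_nonzeros(stream, c):
    """Toy insertion-only-style row sketch: keep at most the first c
    nonzero columns seen per row, discard later insertions, and apply
    deletions only to entries that were retained."""
    kept = {}  # row -> {col: count}
    for row, col, delta in stream:
        r = kept.setdefault(row, {})
        if col in r:
            r[col] += delta
            if r[col] == 0:
                del r[col]
        elif delta > 0 and len(r) < c:
            r[col] = delta
        # insertions beyond the first c retained nonzeros are dropped,
        # as are deletions of entries that were never retained

    return kept

stream = [(0, 0, +1), (0, 1, +1),   # the first c = 2 nonzeros are kept
          (0, 2, +1), (0, 3, +1),   # dropped: the row is at capacity
          (0, 0, -1), (0, 1, -1)]   # deletions wipe the kept entries
# The true row still has nonzeros in columns 2 and 3, but the sketch
# has lost all information about it:
print(first_c_nonzeros(stream, c=2)[0])  # -> {}
```

This makes concrete why a turnstile-capable sketch cannot simply commit to the first few nonzeros per row.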
Summary: This paper studies the maximum coverage problem in the data stream model and gives the first turnstile algorithm (i.e., allowing both insertions and deletions) for this problem, with space complexity almost matching previous insertion-only streaming algorithms. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked the correctness of key proofs. The proofs seem correct to me. Experimental Designs Or Analyses: The experimental studies are not the strength of this work, but they make a nice complement to the theoretical results. Supplementary Material: I just did a quick scan of the supplementary material. Relation To Broader Scientific Literature: The key contribution of the paper is presenting the first turnstile streaming algorithms for the max coverage problem and some related problems. Linear sketching techniques are well-studied, but their applications to the problems studied in this paper are missing in the literature. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The main theoretical results are interesting to the streaming literature. In addition to minimizing the space complexity, update time is optimized as well. 2. Similar techniques are also applicable to the fingerprinting problem. 3. Complementary experimental studies show that the proposed algorithms are competitive. Weaknesses: 1. I feel that the techniques used are not too deep. 2. The problems studied do not have much "learning flavor" (although I have seen similar papers published in top machine learning venues). I think the authors could adjust the writing to make it more machine learning oriented. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their review and encouraging comments. We will be sure to adjust the writing, in particular the introduction, to make it more machine learning oriented in the next version of the paper. For example, we will expand on the applications to sensor placement, influence maximization, and engagement maximization which are popular problems in machine learning and have been referenced in other machine learning papers [1, 2]. [1] Zhou, H., Huang, L., and Wang, B. “Improved Approximation Algorithms for k-Submodular Maximization via Multilinear Extension.” ICLR 2025. [2] Tajdini, A., Jain, L., and Jamieson, K. “Nearly Minimax Optimal Submodular Maximization with Bandit Feedback.” NeurIPS 2024. Please let us know if there is anything else that would be helpful to address or clarify.
The Best of Both Worlds: Bridging Quality and Diversity in Data Selection with Bipartite Graph
Accept (poster)
Summary: This paper proposes a novel method for selecting SFT data for LLM fine-tuning. The proposed GraphFilter method pairs each instruction in the SFT dataset with a corresponding set of n-grams. It then assigns a priority rank to each example using a priority function that takes into account both the quality and diversity of the sample (this assignment is re-done after each sampling step). Examples with the highest priority are sampled first in the proposed algorithm. After selecting an example, the proposed algorithm removes all the edges that go into the n-grams connected to the selected example, ensuring that each n-gram is only encountered in a single sampled example. Claims And Evidence: Most of the claims are supported by experimental results. Methods And Evaluation Criteria: Yes, the selected methods and benchmarks make sense for the data selection/ranking problem tackled in this paper. Theoretical Claims: Quality Metric: - Eq. 2 seems to contradict the sentence "A higher SUPERFILTER value indicates that the response is more relevant and informative given the instruction, thus reflecting higher quality", i.e. lower PPL(y|x) indicates that the response (y) is more relevant and informative given the instruction (x), but since PPL(y|x) is in the numerator of the equation, lower PPL(y|x) would also result in a lower QUALITY(u) (SUPERFILTER) value, not higher as stated by the authors. The rest of the theoretical statements seem to be correct. Experimental Designs Or Analyses: I could not find major issues. Supplementary Material: I did not extensively review the supplementary material. Relation To Broader Scientific Literature: While the number of recent works proposing and investigating data curation methods is vast, the related-work discussion could do a better job explaining how this work relates to the broader set of works in this area.
For example, n-gram overlap has been used in prior data selection methods by Xie et al., and the interplay between diversity and quality has been examined in several recent works, including Goyal et al., Zhang et al., and Chang et al. Xie, Sang Michael, et al. "Data selection for language models via importance resampling." Advances in Neural Information Processing Systems 36 (2023): 34201-34227. Goyal, Sachin, et al. "Scaling Laws for Data Filtering--Data Curation cannot be Compute Agnostic." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Chang, Ernie, et al. "Scaling Parameter-Constrained Language Models with Quality Data." arXiv preprint arXiv:2410.03083 (2024). Zhang, Chi, et al. "Harnessing Diversity for Important Data Selection in Pretraining Large Language Models." arXiv preprint arXiv:2409.16986 (2024). Essential References Not Discussed: While I don't think there are missing works that are essential to understand the context, I think the relation to other existing works in this domain could be discussed in greater length and detail (see "Relation To Broader Scientific Literature"). Other Strengths And Weaknesses: Strengths: - novelty: to the best of my knowledge, the presented formulation of the data selection problem is novel; - seemingly strong empirical performance; - the fact that the proposed method falls within the lower range of runtime among the evaluated approaches is a strength; Weaknesses: - missing justification for the decision to use a budget of k=10k examples for the main results presented; - see section "Theoretical Claims" for a potential methodological issue; Other Comments Or Suggestions: If I understand it correctly, after selecting an example, the proposed algorithm removes all the edges that go into the n-grams connected to the selected example, ensuring that each n-gram is only encountered in a single sampled example. Wouldn't this approach reduce data diversity rather than enhance it?
More specifically, if different n-grams are only encountered in a single context, the model's ability to recall knowledge after training can be diminished, as demonstrated by Allen-Zhu et al. (2023). While it may not be an issue for SFT (especially if the method is only applied to the instructions and not the responses), I wonder if this could be an issue if the method were applied to the selection of data for pre-training. Allen-Zhu, Zeyuan, and Yuanzhi Li. "Physics of language models: Part 3.1, knowledge storage and extraction." arXiv preprint arXiv:2309.14316 (2023). Questions For Authors: - Did the authors consider adding baselines that use influence functions for data selection? While this has not been used for SFT afaik, the advantage of these methods is that they make data selection directly dependent on the intrinsics (i.e. parameters & architecture) of the model for which the data is selected. Code Of Conduct: Affirmed. Overall Recommendation: 4
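The selection loop described in the summary can be sketched as follows; this is a hypothetical stdlib re-implementation (function names and toy data are illustrative), using raw uncovered n-gram counts in place of the paper's TF-IDF-weighted diversity term:

```python
def word_ngrams(text, max_n=3):
    """All word n-grams of sizes 1..max_n, as a set of token tuples."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(toks) - n + 1)}

def graph_select(examples, quality, k, max_n=3):
    """Greedy selection on the sentence/n-gram bipartite graph:
    priority(u) = quality(u) * (# of u's n-grams not yet covered).
    Covered n-gram nodes are removed after each pick, so repeated
    phrasing stops contributing to priority."""
    grams = {i: word_ngrams(x, max_n) for i, x in enumerate(examples)}
    covered, picked = set(), []
    for _ in range(min(k, len(examples))):
        best = max((i for i in grams if i not in picked),
                   key=lambda i: quality[i] * len(grams[i] - covered))
        picked.append(best)
        covered |= grams[best]
    return picked

examples = ["write a poem about the sea",
            "write a poem about the sea at night",
            "explain binary search"]
# Example 1 subsumes example 0's n-grams, so after picking 1 the
# priority of 0 collapses to zero and the unrelated example 2 wins.
print(graph_select(examples, [0.9, 0.8, 0.7], k=2))  # -> [1, 2]
```

The toy run illustrates the reviewer's point: once an example's n-grams are covered, its priority drops regardless of its quality score.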
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and thoughtful comments. Below, we address the specific points raised:

## 1. Quality Metric

We are sorry about the confusion caused by this description. Equation 2 essentially measures the difficulty of the example, as suggested by [1, 2]. The correct interpretation should be: "A higher value indicates greater difficulty, thus reflecting higher quality." As demonstrated by a recent study, LLMs can achieve better performance by focusing on more challenging examples [3]. We will revise the manuscript to clarify this point.

References: 1. From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning. 2023. 2. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. 2024. 3. Wizardlm: Empowering large language models to follow complex instructions. 2023.

## 2. Broader Scientific Literature

We appreciate your suggestions regarding related work. In the revised version, we will expand our discussion to better contextualize our work within the broader literature, as you suggested.

## 3. Budget Selection

**Our decision to use k=10K as the primary budget size was informed by established works in the field.** Specifically, this choice aligns with recent data selection methods that have demonstrated strong performance using similar budget sizes. For instance, SuperFiltering [1] utilized 7.8K examples from a pool of 52K, AlpaGasus [2] employed 9K examples from 52K, and DEITA [3] selected 10K examples from a total of 300K. By adopting a budget size of 10K, we ensure a fair and meaningful comparison with these baselines. Furthermore, we conducted extensive experiments across multiple budget sizes (1K, 5K, 10K, 50K, 100K, and 200K), as shown in Figure 3. These results demonstrate that quality-focused data selection has an advantage when the budget is small, while diversity-focused methods perform better with larger budgets. Our approach consistently outperforms the baseline methods across all budget sizes, demonstrating its robustness and effectiveness. We will include these additional details in the revised manuscript.

References: 1. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. 2024. 2. Alpagasus: Training a better alpaca with fewer data. 2023. 3. What makes good data for alignment? a comprehensive study of automatic data selection in instruction tuning. 2023.

## 4. Data Diversity Concern

We acknowledge that this could lead to a loss of diversity in the training data, as the same n-grams may represent different concepts in different contexts. However, we believe this is not a significant issue for our method. **As shown in Figures 2a and 2c, GraphFilter demonstrates significantly higher lexical diversity compared to all the baselines and is as semantically diverse as diversity-focused methods (e.g., InsTag).** Furthermore, we would like to clarify that when an n-gram node is removed from the bipartite graph, we lower the rankings of the corresponding examples rather than completely removing them from the training set. This approach allows the model to continue learning from these examples, albeit with lower weights. Additionally, there are two possible ways to mitigate this potential issue. First, we can use a larger n-gram size so that shorter n-grams are less likely to be completely removed. Second, we can allow multiple visits to the same n-gram during the data selection process. Regarding the pre-training stage, we agree with your concern about the potential loss of diversity if the n-gram size is too small. In our preliminary experiments, we found that using n-grams of size 3 is sufficient in the context of SFT. However, larger n-grams may be necessary for pre-training. We will include these discussions in the revised manuscript.

## 5. Influence Functions for Data Selection

Thank you for your suggestion to explore the use of influence functions for data selection. We agree that this is an interesting direction for future research and that this approach could be included as one of the baseline methods. Unfortunately, we are unable to include this method in our rebuttal due to constraints on computational resources and time, as influence-function-based methods typically require fine-tuning the model on the entire dataset first and then computing the Hessian of the loss function for each example. Compared to influence-function-based methods and other baseline approaches, our method is significantly more efficient and scalable, as shown in Table 2, making it suitable for real-world applications with large amounts of data. Furthermore, the subset selected by influence functions is model-dependent and may not generalize well to other models. We will discuss this potential extension in the revised manuscript.
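The corrected reading of the quality metric, a perplexity ratio that measures difficulty (in the style of the IFD score of [1, 2]), can be sketched with the standard library alone, assuming per-token log-probabilities are already available from a model:

```python
import math

def perplexity(token_logprobs):
    """PPL = exp of the average negative log-probability per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def difficulty(resp_logprobs_given_x, resp_logprobs_alone):
    """PPL(y|x) / PPL(y): a ratio close to 1 means the instruction x
    barely helps the model predict the response y (a hard, hence
    high-quality, example); close to 0 means y is easy given x."""
    return perplexity(resp_logprobs_given_x) / perplexity(resp_logprobs_alone)

# Conditioning on x halves the per-token surprisal here, so the ratio < 1:
easy = difficulty([math.log(0.5)] * 4, [math.log(0.25)] * 4)
print(round(easy, 3))  # -> 0.5
```

This makes the corrected interpretation explicit: lower PPL(y|x) relative to PPL(y) lowers the score, signaling an easier example.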
Summary: The paper presents GraphFilter, a data selection approach designed to balance quality and diversity in SFT for LLMs. The key contribution is formulating data selection as a set cover problem and leveraging a bipartite graph structure where sentences are connected to their constituent n-grams. The priority function, which multiplicatively combines quality and diversity scores, guides the iterative selection of high-quality and diverse sentences. The method is evaluated across three model backbones and six benchmarks, outperforming nine baselines in both model performance and computational efficiency. Extensive experiments and ablation studies justify the design choices, highlighting the effectiveness of instruction diversity and the interplay between quality and diversity across different subset sizes. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes: The bipartite graph representation of data is intuitive and allows for effective tracking of n-gram coverage. Theoretical Claims: Yes Experimental Designs Or Analyses: Baseline selection is reasonable, but additional diversity-based methods could be explored. While INSTAG and KMEANS are reasonable diversity baselines, clustering-based approaches (e.g., hierarchical clustering or spectral clustering) could provide more granularity. Supplementary Material: NA Relation To Broader Scientific Literature: Closely related, with enough novelty. Essential References Not Discussed: NA Other Strengths And Weaknesses: The writing is clear and easy to follow. Other Comments Or Suggestions: NA Questions For Authors: I would like to know whether using the coordinates of this figure (MTLD, SKYWORKRM) as direct optimization objectives would lead to better results. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback on GraphFilter. We address your comments and questions below.

## 1. Additional Diversity-Based Methods

We appreciate the reviewer's suggestion to explore additional clustering-based approaches for diversity. Following this recommendation, we conducted supplementary experiments comparing GraphFilter against hierarchical and spectral clustering baselines. For a fair comparison, we implemented these methods using the same procedure as our KMEANS baseline: we embedded instructions using BAAI/bge-large-en-v1.5 (https://huggingface.co/BAAI/bge-large-en-v1.5), clustered them into 10K clusters, and randomly selected one example from each cluster. The results in the table below demonstrate that GraphFilter consistently outperforms all clustering-based methods, including hierarchical and spectral clustering, across all evaluation metrics ($\mu_{BENCH}$, $\mu_{LLM}$, and $\mu_{ALL}$). This further validates the effectiveness of our bipartite graph approach in capturing both quality and diversity compared to traditional clustering techniques.

| | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|
| KMEANS | 48.90 | 41.72 | 46.51 |
| InsTag | 49.93 | 41.72 | 47.19 |
| Hierarchical Clustering | 49.44 | 41.55 | 47.01 |
| Spectral Clustering | 49.68 | 42.03 | 47.12 |
| GraphFilter | 50.55 | 42.79 | 47.97 |

## 2. Using MTLD and SkyworkRM as Direct Optimization Objectives

Thank you for this insightful question. We conducted additional experiments using MTLD and SkyworkRM as direct optimization objectives within our GraphFilter framework. To implement this approach, we replaced our original quality and diversity metrics with these measures: SkyworkRM scores were used to initialize the quality score (u) for each example, while MTLD scores served as the diversity measure (v).
During the iterative selection process, we maintained the bipartite graph structure but updated the MTLD scores after each selection as n-gram nodes were removed. The results below show that our current formulation, with SuperFilter for quality and TF-IDF for diversity, provides a more effective optimization strategy than directly using the SkyworkRM and MTLD metrics. Furthermore, we would like to highlight that our approach is flexible and can easily incorporate other quality and diversity metrics as direct optimization objectives when new state-of-the-art metrics become available.

| Quality(u) | Diversity(v) | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|--|
| SuperFilter | TF-IDF | 50.55 | 42.79 | 47.97 |
| SuperFilter | MTLD | 50.15 | 42.42 | 47.47 |
| SkyworkRM | MTLD | 49.51 | 42.01 | 46.93 |
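Once cluster labels are computed, the clustering baselines above all reduce to picking one example per cluster. A minimal sketch (the label list is illustrative; labels would come from k-means, hierarchical, or spectral clustering of the instruction embeddings):

```python
import random
from collections import defaultdict

def one_per_cluster(labels, seed=0):
    """Pick one example index per cluster, uniformly at random.

    labels: a cluster label per example, from any clustering method.
    Returns a sorted, diversity-focused subset with one index per cluster.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    buckets = defaultdict(list)
    for idx, lab in enumerate(labels):
        buckets[lab].append(idx)
    return sorted(rng.choice(members) for members in buckets.values())

# 5 examples in 3 clusters -> a subset of size 3, one per cluster
print(one_per_cluster([0, 0, 1, 1, 2]))
```

The baselines differ only in how the labels are produced; the sampling step is identical across KMEANS, hierarchical, and spectral clustering.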
Summary: The paper introduces GRAPHFILTER, a data selection method for LLM fine-tuning that balances quality and diversity by modeling the dataset as a bipartite graph of sentences and n-grams. The approach iteratively selects sentences using a priority function combining SUPERFILTER (quality) and TF-IDF (diversity). Experiments across three model backbones and six benchmarks show GRAPHFILTER outperforms nine baselines in performance and efficiency. Key contributions include a novel graph-based formulation, a priority function merging quality/diversity, and empirical validation demonstrating superior results. Claims And Evidence: The claims are supported by comprehensive experiments, including ablation studies, runtime comparisons, and benchmark results. However, the reliance on SUPERFILTER as the sole quality metric raises questions about generalizability. While the paper shows GRAPHFILTER works with other metrics (e.g., PERPLEXITY), it does not thoroughly explore how alternative quality measures (e.g., reward models beyond SUPERFILTER) might affect outcomes. Additionally, the theoretical justification for the set cover approximation could be strengthened by discussing how the priority function impacts the greedy algorithm’s guarantees. Methods And Evaluation Criteria: The bipartite graph framework is well-suited for balancing diversity (via n-gram coverage) and quality. The evaluation criteria (standardized benchmarks, LLM-as-a-judge, and efficiency metrics) are appropriate. However, the paper could better justify the choice of n-gram combinations (unigrams, bigrams, trigrams) and explore sensitivity to n-gram size. The use of the Magpie dataset, while large, may limit generalizability to other domains or data distributions. Theoretical Claims: The paper correctly relates GRAPHFILTER to the set cover problem and cites the greedy algorithm’s approximation ratio. 
However, the analysis assumes uniform n-gram importance, which may not hold in practice (e.g., rare n-grams might be more critical). The theoretical discussion would benefit from addressing how TF-IDF weighting interacts with the coverage objective. Experimental Designs Or Analyses: The experimental design is robust, with multiple baselines, model backbones, and benchmarks. The runtime comparison (CPU vs. GPU baselines) highlights practical advantages. However, the paper does not test GRAPHFILTER on out-of-distribution data or explore how diversity impacts generalization in low-resource settings. Additionally, the reliance on synthetic Magpie data (generated by LLAMA-3-70B) may not reflect real-world data selection challenges. Supplementary Material: The appendices provide necessary details on baselines, hyperparameters, and evaluation setup. However, the lack of a detailed scalability analysis (e.g., performance on 1M+ examples) or discussion of computational bottlenecks (e.g., graph updates for massive datasets) is a gap. Relation To Broader Scientific Literature: The work builds on data selection/curation for LLMs (e.g., SemDedup, DEITA) and diversity-aware methods (e.g., KMEANS, DPPs). However, it does not engage with recent advances in active learning or coresets for LLMs, which also aim to balance quality/diversity. The bipartite graph approach is novel but could be compared to graph-based data pruning methods in other domains (e.g., recommender systems). Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths : Novel integration of quality/diversity via bipartite graphs. Strong empirical results across diverse benchmarks. Efficient CPU implementation, reducing hardware barriers. Weaknesses : Limited exploration of quality metric alternatives (e.g., reward models). N-gram approach may miss semantic diversity captured by embeddings. No analysis of how GRAPHFILTER affects downstream bias/fairness. 
Other Comments Or Suggestions: Clarify how n-gram selection (unigrams, bigrams, trigrams) was optimized. Discuss scalability to billion-scale datasets. Include a sensitivity analysis for the priority function’s multiplicative form (e.g., additive alternatives). Questions For Authors: How does GRAPHFILTER perform with alternative quality metrics (e.g., human annotations)? If results degrade, does this indicate over-reliance on SUPERFILTER? Could semantic diversity (e.g., BERT embeddings) complement or replace n-gram-based diversity? Does the method exacerbate biases in the original dataset (e.g., underrepresented topics)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough assessment.

## 1. Quality Metrics

We would like to clarify that **GraphFilter is designed to be agnostic to the specific quality metric used.** We discovered an inaccuracy in the original Table 4 results. The updated results presented below actually strengthen our claims: GraphFilter maintains strong performance across various quality metrics. These results demonstrate that, while SuperFilter works best in our setting, our approach is not fundamentally dependent on it. **When new quality metrics are introduced, GraphFilter can be easily adapted to incorporate them.** We will update Table 4 accordingly in our revision.

| Quality(u) | Diversity(v) | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|--|
| SuperFilter | TF-IDF | 50.55 | 42.79 | 47.97 |
| Perplexity | TF-IDF | 49.21 | 40.85 | 46.43 |
| ArmoRM | TF-IDF | 49.01 | 41.85 | 46.61 |
| DEITA | TF-IDF | 49.11 | 41.97 | 46.73 |
| X | TF-IDF | 48.94 | 41.87 | 46.58 |
| SuperFilter | X | 49.52 | 41.28 | 46.78 |
| X | X | 48.27 | 40.28 | 45.61 |
| SuperFilter | MTLD | 50.15 | 42.42 | 47.47 |
| SkyworkRM | MTLD | 49.51 | 42.01 | 46.93 |

## 2. N-gram Size

**Our choice of trigrams was based on model performance and efficiency.** As shown in the table below, we conducted experiments with n-gram sizes from 1 to 5 using Llama-3-8B. Our results indicate a significant performance improvement when moving from unigrams (n=1) to trigrams (n=3). Furthermore, the number of n-gram nodes increases substantially with n, as does the runtime. We will include this analysis in our future revision.

| n-gram | # of n-grams | Runtime (hrs) | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|--|--|
| 1 | 0.1M | 2.12 | 49.02 | 41.41 | 46.48 |
| 2 | 1.0M | 2.30 | 49.58 | 42.14 | 47.31 |
| 3 | 2.6M | 2.48 | 50.55 | 42.79 | 47.97 |
| 4 | 4.8M | 3.38 | 50.11 | 42.63 | 47.43 |
| 5 | 7.4M | 4.58 | 50.44 | 42.81 | 47.95 |

## 3. N-gram Importance

We would like to point out that **the importance of n-grams is not uniform but is determined by the TF-IDF re-weighting.** As shown in Table 4, the TF-IDF re-weighting (SuperFilter + TF-IDF vs. SuperFilter + X, and X + X vs. X + TF-IDF) significantly improves the performance of GraphFilter. We will clarify this point in our revision.

## 4. Scalability Analysis

We discussed the implementation details of GraphFilter in Lines 192-205. The brute-force GraphFilter has a time complexity of $O(N)$ per example. To improve scalability, we employed a max-heap (or priority queue) data structure to select the highest-priority examples, reducing the time complexity to $O(\log N)$ per example. Due to limited resources and time, we were unable to evaluate GraphFilter on extremely large datasets in this rebuttal. We will clarify this in our future revision.

## 5. Semantic Diversity

We would like to highlight that **our research demonstrates that lexical diversity through n-grams serves as an effective proxy for semantic diversity.** As shown in Figures 2a and 2c, we demonstrated that GraphFilter exhibits significantly higher lexical diversity compared to all the baselines and is as semantically diverse as diversity-focused methods (e.g. InsTag), **suggesting that our approach effectively captures semantic diversity through lexical diversity.** Furthermore, semantic diversity could be incorporated by extending our priority function to $\phi(u) = \mathrm{Quality}(u) \times \mathrm{LexicalDiversity}(u) \times \mathrm{SemanticDiversity}(u)$. For semantic diversity, we could measure the average pairwise cosine distance between the embedding of the candidate example $u$ and those of all selected examples; a larger distance indicates higher semantic novelty. However, adding semantic distance introduces significant computational overhead, particularly for large datasets. Due to limited resources and time, we were unable to conduct this experiment in this rebuttal. We will explore this direction in our future revision.

## 6. Priority Function

We conducted additional experiments, as suggested. As shown in the table below, the multiplicative priority function outperforms the additive one. We will include this analysis in our revision.

| Priority Function | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|
| Multiplicative | 50.55 | 42.79 | 47.97 |
| Additive | 49.92 | 42.41 | 47.17 |

## 7. Bias

**We do observe bias in the selected examples. This bias stems from the quality-based metrics, rather than our approach.** As discussed in Lines 318-329, we observe that metrics like ArmoRM and Perplexity inherently favor certain types of examples. In contrast, GraphFilter addresses this issue by balancing quality and diversity. Although GraphFilter may inherit some bias from the quality metric, it mitigates this bias by incorporating diversity metrics into the selection process.

## 8. Miscellaneous

Due to the length limit, we will include discussions and experiments on synthetic datasets, the broader literature, out-of-distribution and low-resource settings, and theoretical analysis in our future revision.
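The max-heap selection mentioned under the scalability discussion is presumably a lazy-greedy priority queue; the following is a sketch under the assumption that an example's priority can only decrease as more examples are selected (which holds for coverage-based diversity terms):

```python
import heapq

def lazy_greedy(items, priority, k):
    """Lazy greedy selection: pop the stale maximum, recompute its
    priority, and accept it only if it still beats the next candidate.
    Correct when priorities never increase as the selected set grows."""
    heap = [(-priority(i, []), i) for i in items]  # min-heap on negatives
    heapq.heapify(heap)
    picked = []
    while heap and len(picked) < k:
        neg, i = heapq.heappop(heap)
        fresh = priority(i, picked)
        if not heap or fresh >= -heap[0][0]:
            picked.append(i)                       # still the best: accept
        else:
            heapq.heappush(heap, (-fresh, i))      # stale: reinsert
    return picked

# Toy coverage-gain priority over three "examples":
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
def coverage_gain(i, picked):
    covered = set().union(*(sets[j] for j in picked)) if picked else set()
    return len(sets[i] - covered)

print(lazy_greedy(sets, coverage_gain, k=2))  # -> ['C', 'A']
```

With this scheme only the popped candidates are re-scored per step, which is what brings the per-example cost from O(N) down toward O(log N) in practice.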
Summary: This paper introduces GRAPHFILTER, a method to optimize data selection for training large language models by balancing quality and diversity. Using a bipartite graph and a priority function, it enhances model performance and efficiency. Extensive tests show that GRAPHFILTER surpasses traditional methods, demonstrating the role of well-balanced data selection in improving LLM generalization.

Claims And Evidence: The claim that GRAPHFILTER induces the diversity of the selected data is convincing, due to the nature of the set cover problem.

Methods And Evaluation Criteria: I have the following concerns about the methodology:
1. Maximizing the n-gram coverage indeed ensures some granular lexical diversity of the texts, but the more general and, more importantly, semantic or domain diversity of the texts is not considered. Hence the motivation to apply set cover problems on the n-grams is not that robust.
2. The quality part relies on the previous work SUPERFILTER, leaving the contribution of the paper mostly to the diversity part.
3. The priority function in equation (4) is a direct integration of the quality and diversity scores. A balance between these two might be considered.

Theoretical Claims: No theoretical claims are found.

Experimental Designs Or Analyses: The experiments are comprehensive. However, there are some critical issues:
1. The paper only applies the method to the instructions of the SFT data, which I think is a critical issue. In many cases, the responses of the SFT data are even more important.
2. The training data might be too small to show the significance of the performance scores in the experiments.
3. There are more related baselines that are not considered; see "Essential References Not Discussed" below.

Supplementary Material: I reviewed the appendix.

Relation To Broader Scientific Literature: This paper mostly applies set cover optimization on the n-grams of texts to select LLM SFT data.
The method of using set cover is novel, but I think for general improvement of LLM performance, this data selection method would have limited contribution.

Essential References Not Discussed: The set cover problem is often considered one of the coreset methods. There are other submodular functions that are easy to implement and should perhaps be considered as baselines; for one example of the references, see [1] below. There are other papers that consider the diversity of the data at a higher level, such as the diversity of their quality aspects or topics, as in [2] below. The authors might consider citing this paper and even consider it as a baseline. Another related work is [3] below, which also uses the same diversity-based method as in [2] on the data directly. It might also be cited or considered as a baseline.

[1] Kaushal, V., Ramakrishnan, G., & Iyer, R. (2022). Submodlib: A submodular optimization library. arXiv preprint arXiv:2202.10680.
[2] Li, X., Gao, M., Zhang, Z., Yue, C., & Hu, H. (2024). Rule-based data selection for large language models. arXiv preprint arXiv:2410.04715.
[3] Yang, Y., Wang, H., Wen, M., Mo, X., Peng, Q., Wang, J., & Zhang, W. (2024). P3: A policy-driven, pace-adaptive, and diversity-promoted framework for data pruning in LLM training. arXiv preprint arXiv:2408.05541.

Other Strengths And Weaknesses: The application of set cover is novel and the experiments of the paper are comprehensive. However, there are some critical weaknesses of the paper, as discussed above.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We sincerely appreciate your thoughtful review of our paper.

## 1. Diversity Approach

We would like to highlight that **our research demonstrates that lexical diversity through n-grams serves as an effective proxy for semantic diversity.** As shown in Figures 2a and 2c, we demonstrated that the subset selected by GraphFilter exhibits significantly higher lexical diversity compared to all the baselines and is as semantically diverse as the diversity-focused methods (e.g., InsTag). **This suggests that our approach effectively captures semantic diversity through lexical diversity.**

Semantic diversity could be incorporated by extending our priority function to $\phi(u) = \mathrm{Quality}(u) \times \mathrm{LexicalDiversity}(u) \times \mathrm{SemanticDiversity}(u)$. For semantic diversity, we could measure the average pairwise cosine distance between embeddings of the candidate example $u$ and all previously selected examples; a larger distance would indicate higher semantic novelty. However, adding semantic distance calculations would introduce significant computational overhead, particularly for large datasets. Due to limited resources and time, we were unable to conduct this experiment in this rebuttal. We will explore this direction in our future revision.

## 2. Reliance on SuperFilter

We would like to respectfully clarify that **GraphFilter is designed to be agnostic to the specific quality metric used, requiring only that the metric can be computed on a per-example basis.** We discovered an inaccuracy in the original Table 4 results. The updated results presented below actually strengthen our claims: GraphFilter maintains strong performance across various quality metrics. These results demonstrate that, while SuperFilter works best in our setting, our approach is not fundamentally dependent on it and can effectively leverage different quality assessment methods.
**When new quality metrics are introduced, GraphFilter can be easily adapted to incorporate them, as long as they can be computed on a per-example basis.** Additionally, as suggested by `Reviewer ipfn`, we evaluated GraphFilter with the MTLD score as the diversity metric and SkyworkRM as the quality metric. The results are presented in the updated Table 4 as well. **We observe that GraphFilter is also compatible with different diversity metrics, further validating the robustness and flexibility of our approach.** We will update Table 4 accordingly in our revision.

| Quality(u) | Diversity(v) | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|--|
| SuperFilter | TF-IDF | 50.55 | 42.79 | 47.97 |
| Perplexity | TF-IDF | 49.21 | 40.85 | 46.43 |
| ArmoRM | TF-IDF | 49.01 | 41.85 | 46.61 |
| DEITA | TF-IDF | 49.11 | 41.97 | 46.73 |
| X | TF-IDF | 48.94 | 41.87 | 46.58 |
| SuperFilter | X | 49.52 | 41.28 | 46.78 |
| X | X | 48.27 | 40.28 | 45.61 |
| SuperFilter | MTLD | 50.15 | 42.42 | 47.47 |
| SkyworkRM | MTLD | 49.51 | 42.01 | 46.93 |

## 3. Priority Function

Our approach treats quality and diversity as equally important. We acknowledge that introducing an explicit weighting parameter could provide additional flexibility. We will include this discussion and corresponding experiments in our revised manuscript.

## 4. Instruction-only Application

**It is important to note that our approach does not ignore responses entirely.** When computing quality scores, we consider both the instruction and its corresponding response, ensuring that instruction-response pairs are of high quality. Furthermore, **every method has its own optimal way of being applied.** As demonstrated in Table 5 of our submission, **applying GraphFilter to instructions yields superior performance compared to applying it to responses or to both.** We will include this discussion in our revised manuscript to clarify our design choices.

## 5. Small Dataset

**We followed well-established practices from previous works, many of which select comparable or even smaller proportions of data.** SuperFiltering [1] selects up to 7,800 examples from 52K training examples. AlpaGasus [2] selects 9K examples from 52K training examples. DEITA [3] selects 10K examples from 300K training examples. In our work, we select 10K examples from 300K examples. This proportion is consistent with the scale of data selection in related works and allows us to conduct a fair comparison.

Furthermore, we conducted extensive experiments across multiple budget sizes, as shown in Figure 3. Our approach consistently outperforms the baseline methods across all budget sizes, demonstrating its robustness and effectiveness. We will include these additional details in the revised manuscript.

References:
1. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. 2024.
2. AlpaGasus: Training a better Alpaca with fewer data. 2023.
3. What makes good data for alignment? A comprehensive study of automatic data selection in instruction tuning. 2023.

## 6. Missing Literature

We will incorporate these references in our revised manuscript.

---

Rebuttal Comment 1.1:

Comment: Thank the authors for the response, which addresses some of my concerns. I have raised my score.
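For readers unfamiliar with the TF-IDF re-weighting of n-gram nodes referenced in these rebuttals, one standard reading can be sketched as follows (a hypothetical reconstruction, not the authors' exact formula):

```python
import math
from collections import Counter

def ngram_tfidf(sentences, n=3):
    """Hypothetical TF-IDF weights for n-gram nodes: treat each sentence as
    a document. An n-gram appearing in every sentence gets zero IDF, so
    covering it earns nothing, while rare n-grams are rewarded."""
    def ngrams(tokens):
        return [tuple(tokens[i:i + k])
                for k in range(1, n + 1)
                for i in range(len(tokens) - k + 1)]

    docs = [ngrams(s.split()) for s in sentences]
    tf = Counter(g for d in docs for g in d)        # corpus-level term frequency
    df = Counter(g for d in docs for g in set(d))   # document frequency
    N = len(docs)
    return {g: tf[g] * math.log(N / df[g]) for g in df}

w = ngram_tfidf(["a b", "a c"], n=2)
print(w[("a",)], w[("b",)] > 0)  # -> 0.0 True
```

Under this weighting, the diversity gain of a candidate sentence would be the sum of TF-IDF weights of its uncovered n-grams rather than their raw count.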
Summary: This paper presents GRAPHFILTER, a novel data selection method designed to address the challenge of balancing data quality and diversity in SFT. The core idea is to model the dataset as a bipartite graph where sentences are connected to their constituent n-grams. By using a priority function that multiplicatively combines quality and diversity metrics, GRAPHFILTER iteratively selects sentences. This approach aims to maximize n-gram coverage while taking into account the quality of the data. Extensive experiments on three mainstream models across six benchmarks show that GRAPHFILTER outperforms nine baselines in terms of both model performance and computational efficiency.

Claims And Evidence: This paper made the following main claims, which I think are well supported.
- GRAPHFILTER can achieve a better balance between data quality and diversity compared to existing methods.
- It achieves state-of-the-art performance on multiple benchmarks.
- GRAPHFILTER is computationally efficient.
- The combination of n-grams and the multiplicative priority function is crucial for the success of the method.
- Applying GRAPHFILTER to instructions alone yields the best performance, emphasizing the importance of diverse instructions in SFT.

Methods And Evaluation Criteria: Yes. The methods are novel to me and the evaluation criteria are commonly used.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes. GRAPHFILTER offers a promising approach to balancing quality and diversity in data selection. The experimental validation is strong, but more details are needed regarding the generalizability of the method, the choice of diversity metrics, and the adaptability to different scenarios.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths
1. I think the presented method is simple yet effective. Besides, the computational efficiency is impressive.

Weaknesses
1. There is no mention of error bars (or standard deviations) in the tables.
2. The impact of different subset sizes (e.g., 1K vs. 100K) as well as hyperparameter settings on the balance between quality and diversity needs further exploration.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. Would you consider changing the name GRAPHFILTER to GraphFilter? Writing everything in uppercase looks strange to me.
2. How was the choice of using up to trigrams justified? Would variable or larger n-grams improve the results?
3. What is the sensitivity of the hyperparameters?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely appreciate your thoughtful review of our paper on GraphFilter.

## 1. Generalizability

GraphFilter demonstrates strong generalizability through consistent performance across three model backbones and six diverse benchmarks. The Magpie dataset used in our experiments is a general dataset covering a wide range of tasks. We will clarify these points in our future revision.

## 2. Diversity Metrics

**GraphFilter is designed to be agnostic to the specific quality and diversity metrics used.** We have demonstrated that GraphFilter is compatible with a wide range of quality metrics, as shown in Table 4. Please note that we inadvertently presented incorrect results in Table 4, which we will correct in our future revision.

Furthermore, we conducted additional experiments using MTLD as the diversity metric. During the iterative selection process, we maintained the bipartite graph structure but updated the MTLD scores iteratively as n-gram nodes were removed. As shown in the table below, GraphFilter is flexible enough to accommodate different quality and diversity metrics while maintaining strong performance.

| Quality(u) | Diversity(v) | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|--|
| SuperFilter | TF-IDF | 50.55 | 42.79 | 47.97 |
| Perplexity | TF-IDF | 49.21 | 40.85 | 46.43 |
| ArmoRM | TF-IDF | 49.01 | 41.85 | 46.61 |
| DEITA | TF-IDF | 49.11 | 41.97 | 46.73 |
| X | TF-IDF | 48.94 | 41.87 | 46.58 |
| SuperFilter | X | 49.52 | 41.28 | 46.78 |
| X | X | 48.27 | 40.28 | 45.61 |
| SuperFilter | MTLD | 50.15 | 42.42 | 47.47 |
| SkyworkRM | MTLD | 49.51 | 42.01 | 46.93 |

## 3. Adaptability

GraphFilter can be easily adapted for domain-specific scenarios.
This adaptation can be achieved in two ways: (1) **replacing the default general quality metrics with domain-specific ones**, as discussed in Section 3.3, or (2) **introducing a whitelist of domain-specific terms, allowing n-gram nodes containing these terms to be visited multiple times.** These modifications ensure that domain-specific examples receive higher priority during selection. We will discuss these strategies in our future revision.

## 4. Error Bars

We acknowledge the importance of error bars in our results. Due to limited resources and time, we were unable to run additional experiments with different random seeds and include error bars in this rebuttal. We will address this issue in the future revision.

## 5. Different Subset Sizes and Hyperparameters

To evaluate how GraphFilter performs across different subset sizes, we conducted experiments with Llama-3-8B, as shown in Figure 3. Our results demonstrate that the quality-focused data selection method has an advantage when the budget is small, while the diversity-focused method performs better with larger budgets. Our approach consistently outperforms the baseline methods across all budget sizes, demonstrating its robustness and effectiveness.

We assume that quality and diversity are equally important for data selection, so we did not introduce any hyperparameters into the priority function (Equation 4). We acknowledge that introducing such hyperparameters could further improve the method's adaptability. Due to limited computational resources and time, we were unable to provide the results in this rebuttal. We will include these results in the future version.

## 6. Method Name

We will change the name to "GraphFilter" in the future version to improve readability.

## 7. Choice of Trigrams

**Our choice of trigrams was based on a preliminary study balancing diversity representation, model performance, and computational efficiency.** As shown in the table below, we conducted experiments with n-gram sizes from 1 to 5 using Llama-3-8B. Our results indicate a significant performance improvement when moving from unigrams (n=1) to trigrams (n=3). However, beyond n=3, we observe diminishing or even negative returns. Furthermore, the number of n-gram nodes increases substantially with n (from 0.1M for unigrams to 7.4M for 5-grams), as does the runtime (from 2.12 hours to 4.58 hours). We will include this analysis in our future revision.

| n-gram | # of n-gram nodes | Runtime (hrs) | $\mu_{BENCH}$ | $\mu_{LLM}$ | $\mu_{ALL}$ |
|--|--|--|--|--|--|
| 1 | 0.1M | 2.12 | 49.02 | 41.41 | 46.48 |
| 2 | 1.0M | 2.30 | 49.58 | 42.14 | 47.31 |
| 3 | 2.6M | 2.48 | 50.55 | 42.79 | 47.97 |
| 4 | 4.8M | 3.38 | 50.11 | 42.63 | 47.43 |
| 5 | 7.4M | 4.58 | 50.44 | 42.81 | 47.95 |

## 8. Hyperparameter Sensitivity

In GraphFilter, the primary hyperparameter is the n-gram size, which we set to 3; this choice is detailed in `7. Choice of Trigrams` above. Other hyperparameters, such as learning rate and batch size, are kept consistent across all methods to ensure a fair comparison. Furthermore, we acknowledge the potential benefits of introducing an additional hyperparameter to control the balance between quality and diversity in the priority function (Equation 4); due to limited resources and time, we will include such an analysis in our future revision.
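The growth of the n-gram node count with n can be reproduced in miniature on a toy corpus (our sketch, unrelated to the paper's actual dataset):

```python
def count_ngram_nodes(sentences, n):
    """Number of distinct n-gram nodes (sizes 1..n) in the bipartite graph.
    The node count grows quickly with n, as in the trigram-choice study."""
    nodes = set()
    for s in sentences:
        toks = s.split()
        for k in range(1, n + 1):
            nodes.update(tuple(toks[i:i + k]) for i in range(len(toks) - k + 1))
    return len(nodes)

corpus = ["a b c a b"]
print([count_ngram_nodes(corpus, n) for n in (1, 2, 3)])  # -> [3, 6, 9]
```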
SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization
Accept (poster)
Summary: In Transformer-based models, SageAttention accelerates self-attention through quantization, but its use of INT8 for queries and keys is slower than INT4. Moreover, its acceleration is limited to specific Nvidia architectures due to FP16 computations. To address this issue, this paper proposes a thread-level granularity quantization method that considers GPU architecture and applies outlier smoothing to queries. Furthermore, the study identifies architectural issues in FP16 matrix multiplication accumulation operations and introduces two-level accumulation strategies to mitigate these problems. As a result, SageAttention2 achieves superior performance and acceleration compared to various existing attention mechanisms, including FlashAttention, across diverse models and tasks.

## update after rebuttal

Thank you for the authors' response. After careful review, I believe the experiments and responses are sufficient. Therefore, I have revised the score for this paper.

Claims And Evidence: The proposed method effectively leverages low-bit operations for acceleration, demonstrating its suitability for efficient computation. Additionally, it visualizes and compares the data distribution of queries, keys, and values, providing a solid rationale for determining the granularity of each quantization. Through these analyses, this paper further demonstrates that the proposed model achieves high acceleration while maintaining reasonable performance, as evidenced by comparative examples of actual image generation.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. Experiments were conducted across multiple benchmarks, including text, video, and various other tasks.

Theoretical Claims: The paper does not make significant theoretical claims that require proof verification. Instead, it focuses on empirical analysis and performance evaluation rather than theoretical derivations.
Experimental Designs Or Analyses: The experiments are valid, as SageAttention2 has been compared across various models and tasks against multiple existing attention mechanisms.

Supplementary Material: I have checked the code in the supplementary materials and verified the methods claimed in the paper (e.g., per-thread and per-block quantization).

Relation To Broader Scientific Literature: This paper builds on prior research in self-attention acceleration, particularly SageAttention, which introduced quantization for improved efficiency. However, its reliance on INT8 for queries and keys and FP16 for attention maps limited acceleration benefits to specific Nvidia architectures. By introducing thread-level granularity quantization and extending outlier smoothing to queries, this study refines existing quantization techniques, aligning with prior work on mixed-precision inference and INT4 quantization. Additionally, the identification of FP16 matrix multiplication accumulation issues and the proposed two-level accumulation strategy contribute to ongoing research on numerical precision challenges in GPU-based deep learning. The study's experimental validation across various models, including LLMs, reinforces the effectiveness of quantization-aware strategies in optimizing Transformer inference across different GPU architectures.

Essential References Not Discussed: All essential references have been included.

Other Strengths And Weaknesses:

1. Strengths
• This paper identifies and analyzes the discrepancy between the theoretical accuracy of the proposed quantization method and its actual performance during GPU inference. By addressing this issue, the study demonstrates effective model acceleration while maintaining the intended accuracy.
• This study analyzes the limitations of existing research and proposes improvements, conducting experiments on various models, including LLMs.
The results demonstrate effective acceleration of model inference across a broad range of GPU environments.

2. Weaknesses
• This paper identifies errors arising during the accumulation process in matrix multiplication between the attention map and V. To mitigate this issue, it introduces a two-level accumulation technique. However, as acknowledged by the authors, this approach is not entirely novel. Additionally, the root cause of the problem stems from a design flaw in the Nvidia GPU architecture. Since the proposed method primarily addresses a hardware-related limitation rather than introducing a fundamentally new algorithmic contribution, its impact on advancing the field is limited.
• The proposed method enhances performance compared to SageAttention by reducing the quantization bit width and mitigating the resulting performance degradation. This is achieved by extending outlier smoothing—previously applied only to the key—to the query and by moving the quantization granularity to the thread level. However, the first approach is a straightforward extension of an existing technique rather than a novel innovation. Moreover, the change in granularity does not lead to a substantial improvement over previous methods, making the overall contribution appear insufficient.
• A proper ablation study is needed. The ablation study provided in the appendix only compares TOPS, which focuses solely on computational speed. This is insufficient to fully explain the impact on actual model performance and makes it difficult to evaluate overall trade-offs.
• While the paper explains why per-thread quantization was chosen, it lacks comparisons with other quantization methods.

Other Comments Or Suggestions:
• This paper compares model performance with existing techniques, including Smooth Attention and Hadamard Attention. However, it does not evaluate their inference speed, which would be a valuable addition.
• The figures lack readability and explanation.
For example, the authors mention converting P and V to FP8, but Figure 2 and other visual materials do not clearly illustrate this process.

Questions For Authors:
• SageAttention2 performs well in computations with long sequences. However, can it still provide meaningful speed advantages in cases with short sequences or small batch sizes?
• In the paper, value uses channel-wise granularity, unlike query and key. This appears to be due to value being computed with FP8 precision. Are there any results comparing the effects of using different quantization granularities specifically for value?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer aDo7,

Thank you for your valuable suggestions and questions.

---

> ### Weakness 1

**Reply**: We appreciate the valuable question. We argue that:
1. We first discovered the critical role of accumulator precision in the PV matrix multiplication for attention; this is an essential insight for the attention operator.
2. The technique and the methods mentioned in the paper were developed concurrently (within a two-month interval). Additionally, we were the first to discover and report this hardware issue, and we can provide evidence after the rebuttal.
3. Beyond addressing hardware limitations, the technique can enhance accuracy for efficiency techniques that use reduced accumulator precision in matrix multiplication.

---

> ### Weakness 2

**Reply**: Thank you for your question. We argue that:
1. Smooth Q and Smooth K are distinct. Specifically, Smooth Q is a per-block-level smoothing and introduces an additional matrix-vector multiplication to compute ΔS, which is added in the GPU kernel. Moreover, despite its simplicity, this approach demonstrates exceptional improvement. We argue that a contribution should not be dismissed based on methodological simplicity.
2. The improvement from per-thread quantization over SageAttention's per-block quantization is significant, not marginal. For example, Table 6 clearly shows superior accuracy with per-thread quantization. Also, using per-block quantization for Sage2-4b produces completely blurred videos for CogVideo.
3. Per-thread quantization is a highly innovative approach and is non-trivial, requiring sophisticated algorithm-hardware co-design.

---

> ### Weakness 3

**Reply**: Thank you for your valuable suggestion. We add the accuracy ablation results and end-to-end perplexity of Llama-3.1 alongside the TOPS and will revise Table 18.
| Method | TOPS | Cossim↑ | Relative L1↓ | Ppl.↓ |
|-|-|-|-|-|
| Attention (INT4 + FP8) | 284 | 0.8004 | 0.3906 | 7.963 |
| + Per-thread quantization | 283 | 0.9249 | 0.3127 | 7.459 |
| + Two-level accumulation | 283 | 0.9498 | 0.2731 | 7.345 |
| + Smooth Q | 273 | 0.9946 | 0.06480 | 6.256 |

---

> ### Weakness 4

**Reply**: We appreciate your suggestion about comparing other quantization methods. We want to highlight that our paper already includes comparisons with per-tensor, per-block, and per-token quantization methods in Tables 6 and 15, with discussions in Lines 250–256. The results demonstrate that per-thread quantization achieves accuracy comparable to per-token quantization and outperforms per-tensor and per-block quantization. Furthermore, we provide a speed comparison of different quantization methods, showing that per-thread quantization introduces almost no computational overhead compared to the per-tensor and per-block methods, while per-token quantization is notably slower:

| Quantization | TOPS |
|-|-|
| per-tensor | 286 |
| per-block (line 1094) | 284 |
| per-thread (line 1095) | 283 |
| per-token | 268 |

---

> ### Comment 1

**Reply**: We appreciate the reviewer's suggestion. We evaluate the end-to-end inference latency of Mochi on the L20 below:

| Attention | Latency (s) |
|-|-|
| HadamardAttn | 1198 |
| SmoothAttn | 1208 |
| SageAttention2 | 1190 |

SageAttention2 achieves a similar inference speed to HadamardAttn and SmoothAttn while delivering significantly better accuracy.

---

> ### Comment 2

**Reply**: Thank you for your suggestion. First, V's FP8 conversion is already shown in Figure 2. Second, the P matrix only exists internally within the GPU kernel and is difficult to represent visually in the overview figure; therefore, we mainly describe its FP8 conversion process in detail in Algorithm 1 and Section 3.3. We will revise Figure 2 in our paper to explicitly indicate that P is converted to FP8.

---

> ### Question 1

**Reply**: We appreciate the reviewer's question.
Our speed evaluation already covers a range of sequence lengths from 1,024 to 32,768, demonstrating speedup across all sequence lengths (including 1,024). To further address your concern, we provide additional results for batch size = 1 and sequence length = 1,024 on the RTX 4090, as shown below:

| Attention | TOPS |
|-|-|
| Torch | 10.9 |
| xformers | 94.1 |
| FlashAttn2 | 142.5 |
| SageAttn1 | 255.3 |
| SageAttn2-8b | 329.9 |
| SageAttn2-4b | 352.6 |

The results show that SageAttention2 consistently delivers higher throughput with short sequences and small batch sizes, maintaining a similar speedup to that observed in long-sequence and large-batch scenarios.

---

> ### Question 2

**Reply**: Thank you for your valuable suggestion. First, per-token quantization cannot be applied to V because the quantization must be conducted along the outer axis of $PV$. We compare the accuracy of per-channel, per-block, and per-tensor quantization methods for V, showing that per-channel quantization achieves the best accuracy:

| Quantization | Cossim↑ | Relative L1↓ |
|-|-|-|
| Per-Channel | 0.9946 | 0.0648 |
| Per-Token | ✗ | ✗ |
| Per-Block | 0.9930 | 0.0651 |
| Per-Tensor | 0.9922 | 0.06777 |

---

If you feel your concerns have been resolved, we would greatly appreciate it if you would consider raising the score.
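The granularity comparisons above (per-tensor vs. per-block vs. per-thread/per-token) can be illustrated with a toy row-group quantizer (our sketch, not the paper's kernel; the paper's per-thread scheme additionally matches the scale groups to the GPU's MMA thread layout):

```python
import numpy as np

def quantize_int4(x, group_rows):
    """Symmetric INT4 quantization with one scale per group of rows: a toy
    stand-in for per-tensor (group_rows = all rows) versus finer per-token-
    style granularity. Finer groups isolate outliers, keeping scales tight
    for ordinary rows."""
    out = np.empty_like(x)
    for i in range(0, x.shape[0], group_rows):
        blk = x[i:i + group_rows]
        scale = np.abs(blk).max() / 7.0          # symmetric INT4 range [-7, 7]
        q = np.clip(np.round(blk / scale), -7, 7)
        out[i:i + group_rows] = q * scale        # dequantize to measure error
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 128))
x[0] *= 50.0                                     # a single outlier token
err_tensor = np.abs(quantize_int4(x, 64) - x).mean()  # one scale for everything
err_token = np.abs(quantize_int4(x, 1) - x).mean()    # one scale per row
print(err_token < err_tensor)  # -> True: finer granularity, lower error
```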
Summary: The authors propose SageAttention2, in which they quantize the key and query matrices in the attention computation to INT4 while the softmax outputs and value matrices are quantized to FP8. They show that the quality degradation is manageable in this configuration. Meanwhile, if the key and query matrices are quantized to INT8 instead, there is almost no measurable quality degradation. The key is to perform asymmetric quantization of the key and query matrices.

## update after rebuttal

My score is unchanged after the discussion period. Keeping my score to champion the authors' work.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There aren't any theoretical claims except for simple mathematical derivations. They seem correct.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes. All parts.

Relation To Broader Scientific Literature: Yes. Attention is a widely used operation. This work will be impactful.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:
* The paper is well written.
* The experimental design is comprehensive.
* This work will be the new state-of-the-art attention kernel.

Other Comments Or Suggestions: No other comments.

Questions For Authors:
* Do the authors think that it is possible for the PV computation to be quantized to sub-8-bit precision as well?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: Dear Reviewer r8J3,

Thank you for your valuable question. Below, we address the question raised.

---

> **Question 1.** Do the authors think that it is possible for the PV computation to be quantized to sub-8-bit precision as well?

**Reply**: Thank you for the insightful suggestion. The answer is yes: we have tried quantizing P and V to FP4 using Blackwell GPUs' micro-scaling quantization and obtained preliminary accuracy results across CogVideo layers. Specifically:
- We keep Q and K in INT8 (since Blackwell GPUs don't support INT4).
- We quantize P and V to NVFP4.
- We use our Smooth Q and Smooth K techniques.

| Attention | Cossim ↑ | Relative L1 ↓ |
|-|-|-|
| Sage2-8b | 0.99982 | 0.01573 |
| Sage2-4b | 0.99460 | 0.06480 |
| FP4 PV Attention | 0.99674 | 0.03250 |

Based on these preliminary results, we believe FP4 quantization for P and V is possible and practical.
Summary: This paper makes the attention computation more efficient. It uses INT4 quantization of Q and K instead of INT8 quantization. To enhance the accuracy of INT4, this paper proposes an outlier smoothing strategy, which is well motivated. The overall design and implementation take hardware characteristics into account. The accuracy and efficiency of the proposed method are evaluated on various tasks and settings: on the RTX 4090, the proposed method is much faster than FlashAttention2; on Hopper GPUs, the proposed method matches the speed of FlashAttention3 while delivering much higher accuracy.

Claims And Evidence: All claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.

Theoretical Claims: There are no theoretical claims in this paper. This paper does some mathematical derivations about quantization in Section 3.1, and I have checked their correctness. I have also checked the implementation of SageAttention2 in Algorithm 1.

Experimental Designs Or Analyses: I have checked all experiments presented in the main text. I think the experimental designs are sound and sufficient.
1. It compares with FlashAttention and recently proposed INT4 quantizations under various tasks and settings.
2. The ablation study is sufficient to demonstrate the effectiveness of its key designs, including the Q/K smoothing strategies, Q/K quantization granularities, and the numerical precision choices for P/V.

Supplementary Material: No

Relation To Broader Scientific Literature: Algorithm and hardware co-design.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths: The proposed method takes hardware characteristics into account. The proposed INT4 per-thread quantization is novel, and it is a great example of algorithm/hardware co-design.

Weaknesses: The effectiveness of the proposed Q/K smoothing is evaluated empirically.
The evaluation is sufficient but it might be better to analyze the theoretical benefits of Q/K smoothing. For example, the authors can analyze and compare the quantization error bounds with/without smoothing for specific input distributions. This would provide the paper with stronger theoretical support. Other Comments Or Suggestions: The experimental settings of Figure 5 are not clear. What is the number of heads? Which dataset is used? Questions For Authors: The authors mentioned that the accumulator for the mma(f32f8f8f32) instruction is actually FP22. I am not familiar with the mma instruction. Is the precision mismatch some kind of software "bug"? Or perhaps there are deeper design reasons behind it? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer t1YD, Thank you for your valuable suggestions and questions. Below, we address each point raised. --- >**Weakness1.** The effectiveness of the proposed Q/K smoothing is evaluated empirically. The evaluation is sufficient but it might be better to analyze the theoretical benefits of Q/K smoothing. For example, the authors can analyze and compare the quantization error bounds with/without smoothing for specific input distributions. This would provide the paper with stronger theoretical support. **Reply**: Thank you for the valuable suggestion. We analyze the quantization error as follows: **Proof**: Let $X \in \mathbb{R}^{N \times d}$ be $N$ activation tokens with dimension $d$. Following [1], we suppose that an activation token follows a Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}, \Sigma^2)$, where $\boldsymbol{\mu} = (\mu_1, \mu_2, \ldots, \mu_d)$ and $\Sigma^2$ is a diagonal matrix with $\Sigma^2 = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_d^2)$. Further, we suppose that different tokens $X_i$ are i.i.d. samples from the same distribution. Suppose the absolute maximum value in a quantization group is $M$, and the bit width is $b$; then there are $2^b$ quantization levels. Under the round-to-nearest strategy, the maximum quantization error is $\frac{1}{2} \cdot \frac{2M}{2^b} = \frac{M}{2^b}$, which is proportional to the maximum absolute value in the quantization group. So a smaller absolute maximum value leads to a smaller quantization error. After smoothing, we have: $$ Y_{ij} = X_{ij} - \frac{1}{N} \sum_{k=1}^N X_{kj} $$ $Y_{ij}$ also follows a Gaussian distribution. 
The mean and variance of $Y_{ij}$ can be calculated as follows: $$ \mathbb{E}[Y_{ij}] = \mathbb{E}[X_{ij}] - \frac{1}{N} \sum_{k=1}^N \mathbb{E}[X_{kj}] = \mu_j - \frac{1}{N} \sum_{k=1}^N \mu_j = 0 $$ $$ \mathrm{Var}[Y_{ij}] = \mathrm{Var}[\frac{N-1}{N} X_{ij}] + \sum_{k=1, k\neq i}^N \mathrm{Var}[\frac{1}{N} X_{kj}] = \frac{(N-1)^2}{N^2} \sigma_j^2 + (N-1) \frac{1}{N^2} \sigma_j^2 = \frac{(N-1)}{N} \sigma_j^2 $$ So $Y_{ij}$ has zero mean and a smaller variance compared to $X_{ij}$. Following the properties of the Gaussian distribution, we have: $$ P(|Y_{ij}| < \epsilon) > P(|X_{ij}| < \epsilon),\ \forall \epsilon > 0 $$ So the distribution of $Y_{ij}$ is more concentrated towards 0 than that of $X_{ij}$, i.e., after smoothing. Then we know that $$ P(\mathrm{absmax}(Y_i) < \epsilon) = \prod_{j=1}^d P(|Y_{ij}| < \epsilon) > \prod_{j=1}^d P(|X_{ij}| < \epsilon) = P(\mathrm{absmax}(X_i) < \epsilon) $$ So smoothing makes the distribution of the absolute maximum value in a token more concentrated towards 0, leading to a smaller quantization error. --- >**Comment1.** The experimental settings of Figure 5 are not clear. What is the number of heads? Which dataset is used? **Reply**: We apologize for the lack of clarity in the experimental settings of Figure 5. We used a head size of 32 and a batch size of 4. Since we are benchmarking kernel speed, we follow the same standard practice as FlashAttention1/2/3, i.e., using Gaussian input (with mean 0, standard deviation 1) for floating-point inputs. For integer inputs, we use uniform random values within the representation range: [-128, 127] for INT8 and [-8, 7] for INT4. We will clarify this in the final manuscript. --- >**Question1.** The authors mentioned that the accumulator for the mma(f32f8f8f32) instruction is actually FP22. I am not familiar with the mma instruction. Is the precision mismatch some kind of software "bug"? Or perhaps there are deeper design reasons behind it? **Reply**: Thank you for your question. This is a hardware-level issue, not a software bug. 
The exact reason for this precision mismatch is not publicly documented, but we hypothesize that it is due to chip area constraints. Given the limited space, the designers may have had to make trade-offs, and reducing the accumulator precision for the FP8 tensor appears to have been one such compromise. [1] QLoRA: Efficient Finetuning of Quantized LLMs --- If you feel your concerns have been resolved, we would greatly appreciate it if you consider raising the score.
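As an illustrative aside, the distributional argument in the rebuttal above can be checked numerically. The sketch below uses synthetic Gaussian data with an artificial channel-wise bias standing in for an outlier channel (all sizes and values are assumptions for illustration, not actual model activations), and compares per-token INT4 quantization error before and after subtracting the token-dimension mean:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 256, 64
X = rng.standard_normal((N, d)).astype(np.float32)
X[:, 0] += 50.0  # artificial channel-wise bias acting as an outlier channel

def int4_quant_dequant(A):
    # per-token symmetric INT4 quantization to the range [-7, 7]
    scale = np.abs(A).max(axis=1, keepdims=True) / 7.0
    return np.clip(np.round(A / scale), -7, 7) * scale

# quantize the raw tokens directly
err_raw = np.abs(int4_quant_dequant(X) - X).mean()

# smooth first: subtract the per-channel mean over the token dimension,
# quantize, then add the mean back when reconstructing
mean = X.mean(axis=0, keepdims=True)
err_smooth = np.abs(int4_quant_dequant(X - mean) + mean - X).mean()

print(f"raw: {err_raw:.3f}  smoothed: {err_smooth:.3f}")
```

With the bias dominating each token's absolute maximum, the raw quantization step is large; after smoothing, the step shrinks by roughly an order of magnitude, matching the concentration argument in the proof.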
Summary: This paper proposed several improvements on SageAttention to make it comparable with FlashAttention 3 in terms of speed but better in accuracy. The enhancements mainly focused on enabling lower precision compared to the previous SageAttention, i.e. moving from INT8 to INT4 for Q*K^T and from FP16 to FP8 for P*V. To address the outlier problem in Q*K^T, SageAttention2 extended the smoothing technique used in SageAttention from K-only to both Q and K and showed great improvement in accuracy. In order to further improve accuracy and reduce the overhead from INT4 dequantization, the authors proposed an interesting per-thread quantization scheme which enforces each thread to read and use only one set of Q/K scales. This method shows comparable accuracy with per-token, i.e. better than per-block, and costs almost no overhead. The challenge for using FP8 in the P*V calculation is caused by the Hopper TensorCore design, which uses an FP22 accumulator instead of FP32. To alleviate the impact of FP22, the authors employed the two-level accumulation technique and optionally applied smoothing on V. The resulting SageAttention2 shows very promising results on language, image, and video models. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NA, no theoretical claims. Experimental Designs Or Analyses: Yes, no issues. Supplementary Material: Yes, the entire Appendix. Relation To Broader Scientific Literature: It's an improvement on SageAttention and competitive alternative to FlashAttention2/3. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength 1. describes the key concepts clearly with good supporting data. 2. tested on a wide range of models. 3. demonstrated speed-up and accuracy improvement. Weakness overall, a nice, solid work. Here are a few minor suggestions: 1. 
The authors highlighted the comparison with FlashAttention2/3 several times throughout the paper; however, FlashAttn results were missing in many cases in the main experimental results. For example, only the middle part of Table 2 (for video models) shows FlashAttn3-fp8, but the other two categories only have the SageAttn family. Since it was mentioned in the Abstract, Introduction, and Fig. 1, readers would be expecting a quantitative benchmark with both FlashAttn2 and 3. Especially when SageAttn2 provides two modes, i.e. one is faster (INT4) while the other is more accurate (INT8), a dedicated table for a fair comparison to the FlashAttn family would be beneficial. 2. In Section 3.2, the authors stated that the per-token method would result in "significant overhead" but didn't specify/quantify how serious this problem is. On the other hand, Table 18 only shows that the per-thread method added no overhead. Maybe the authors can elaborate a bit in Sec 3.2 and give readers a better idea how much improvement was made by this technique. 3. In Appendix Table 9, 14, and Fig 18, it wasn't clear which version of SageAttention2 (4b or 8b) was used in the experiments. Other Comments Or Suggestions: typo on Line 115/116 right column, "... preprocessing technique **to by** subtracting the token-wise mean..." Questions For Authors: Please see Weakness above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer KKjK, Thank you for your valuable suggestions and questions. Below, we address each point raised. --- >**W1.** The authors highlighted the comparison with FlashAttention2/3 several times throughout the paper; however, FlashAttn results were missing in many cases in the main experimental results. For example, only the middle part of Table 2 (for video models) shows FlashAttn3-fp8, but the other two categories only have the SageAttn family. Since it was mentioned in the Abstract, Introduction, and Fig. 1, readers would be expecting a quantitative benchmark with both FlashAttn2 and 3. Especially when SageAttn2 provides two modes, i.e. one is faster (INT4) while the other is more accurate (INT8), a dedicated table for a fair comparison to the FlashAttn family would be beneficial. **Reply**: We apologize for any confusion regarding the comparison with FlashAttention2/3. The *"full precision"* row in our tables represents FlashAttention2/3’s unquantized attention performance, making Table 2 a direct and fair comparison between SageAttn2 and FlashAttention2/3. Note that since FlashAttention3 can only run on Hopper GPUs (H100/H20), the speed comparisons with it are naturally limited to these GPUs. We will revise the manuscript to explicitly clarify these points. --- >**W2.** In Section 3.2, the authors stated that the per-token method would result in "significant overhead" but didn't specify/quantify how serious this problem is. On the other hand, Table 18 only shows that the per-thread method added no overhead. Maybe the authors can elaborate a bit in Sec 3.2 and give readers a better idea how much improvement was made by this technique. **Reply**: Thank you for your valuable feedback. We appreciate your point about the lack of clarification on the overhead introduced by the per-token method in Section 3.2. 
We measure the TOPS of per-token INT4 quantization and the result is as follows, with the settings aligned with Table 18: |Quantization|TOPS| |-|-| |per-block (line 1094 in our paper)|284| |per-thread (line 1095 in our paper)|283| |per-token|268| Per-thread introduces about 0.4\% overhead, while per-token quantization results in approximately 6\% overhead, which is 15 times the overhead of per-thread quantization. --- >**W3.** In Appendix Table 9, 14, and Fig 18, it wasn't clear which version of SageAttention2 (4b or 8b) were used in the experiments. **Reply**: We apologize for the ambiguity. On Hopper GPUs (H100/H20), INT4 tensor core is not available. So in Figure 9, the columns of H100 and H20 report the result of SageAttn2-8b, while all other entries of SageAttn2 use the 4b version. For Table 14 and Fig. 18, the experiment was conducted on H100 (as stated in line 1041), so we use SageAttn2-8b and compared it with FlashAttention3-FP8, which has the same bit width, for a fair evaluation. We will clarify this in the final manuscript. --- >**C1.** typo on Line 115/116 right column **Reply**: Thank you for pointing out the typo and we will revise our paper. --- If you feel your concerns have been resolved, we would greatly appreciate it if you consider raising the score. --- Rebuttal Comment 1.1: Comment: Thanks for your clarifications. Please do include (better in earlier paragraphs) the considerations regarding INT4 Tensor Core availability on Hopper. It will remind the readers about the deployment options. --- Reply to Comment 1.1.1: Comment: Dear Reviewer KKjK, Thank you for your valuable suggestion and timely feedback! We will revise our paper to include information about the availability of INT4 Tensor Cores on Hopper GPUs in the earlier paragraphs. Furthermore, we will also revise our paper to include clarifications on all the issues raised in the rebuttal. We hope our reply can address your concerns. 
We would greatly appreciate it if you consider raising the score.
Summary: SageAttention2 introduces a new way to enhance the accuracy and efficiency of attention through a 3-pronged process: firstly, it introduces an INT4 matrix-multiplication technique for query-key and an FP8 matmul technique for attention weights and values; secondly, it proposes a smoothing technique for queries to reduce loss of accuracy due to outliers in INT4 quantization; thirdly, it investigates the reason for loss in accuracy due to FP8 matmul for attention weights and values, and proposes a way to overcome the loss. The main algorithmic contributions are as follows: 1. Providing an efficient INT4 quantization scheme that adapts to thread-level granularity to move from multiple quantization scales in a GPU Warp to a single quantization scale per warp to reduce dequantization overhead. This technique takes advantage of Nvidia's PTX Tensor Core warp-level `mma` instruction layout to partition Q and K thread blocks into quantization groups such that each quantization group shares a single quantization scaling factor to perform dequantization. 2. To reduce the accuracy overhead from loss of precision due to the INT4 range, the paper adapts a technique to smooth Q, similar to the previous technique of smoothing K from the SageAttention paper. This technique subtracts the mean along the token dimension to utilize the INT4 range `[-7, 7]` more uniformly. This reduces the effect of outliers by making them smaller in magnitude, which in turn allows for a significant preservation of accuracy. 3. For `P*V`, the paper chooses FP8 quantization over INT or FP16 due to better numerical representation of P values and the prevalence of FP32 accumulators in a wide variety of GPUs, respectively. 
The authors observe that the loss in accuracy due to E4M3 FP8 quantization of PV (P per-block, V per-channel quantization) is due to the nature of the FP22 accumulator in the CUDA implementation, and they propose a 2-stage FP32 accumulation buffer during the computation of the online-softmax PV matmul to counteract the loss in accuracy caused by the CUDA implementation using an FP22 accumulator. 4. Lastly, the authors propose the same smoothing technique for V by subtracting the mean along the token dimension to counteract the accuracy loss due to FP22 accumulation, but its benefit is only observed when V possesses a channel-wise bias, so this smoothing is kept as optional. Claims And Evidence: Most of the claims made in the paper over accuracy and performance are backed by sufficient evidence. Observations: 1. Smoothing of Q+K seems to be an effective technique for preserving accuracy with INT4/8 matmul of QK. 2. The granular per-thread quantization seems to be on par with the per-token quantization strategy for INT4 QK matmul in terms of accuracy, though enough evidence hasn't been presented on the said overhead of per-token dequantization. Theoretically, per-thread quantization shouldn't have any performance overhead, which is what has been presented in Table 18, and evidence of it being able to handle outliers in preserving accuracy has been tabulated in Table 4. 3. The 2-level FP32 buffer used for FP22 accumulation circumvents the accuracy loss due to the mma(f32f8f8f32) CUDA implementation, though this claim hasn't been backed by sufficient evidence in terms of accuracy over multiple tasks. Tables 7 and 16 fail to mention if this 2-level buffer was employed in the FP8 matmul of PV. Methods And Evaluation Criteria: The paper covers many modalities across multiple tasks involving language, image, and video generation, along with image classification, involving an extensive suite of models that have attention at their core for computation. 1. 
SageAttention2 is benchmarked against multiple renowned efficient attention algorithms in text2text, text2video, and text2image, and compared in terms of the metrics that are widely indicative of accuracy in the literature. 2. Each concept in SageAttention2 is tested separately for average accuracy and worst accuracy, while keeping the other parameters of the attention frozen. These include but are not limited to: ``` a. Smoothing of Q and K, separately and together, comparing against other methods of preserving INT4 accuracy, like SmoothAttn, Hadamard transformation etc. b. Differing the INT4 quantization granularities among per-token, per-thread, per-block, and per-tensor. c. Differing the PV matmul calculation with FP8 E4M3, FP8 E5M2, FP16, and INT8 precisions while keeping the KQ matmul INT4 methodology the same. ``` 3. For performance, varying the different techniques of employing per-thread granularity in quantization of QK, employing the 2-level accumulation strategy for PV matmul, and smoothing of Q. Each of these techniques is ablated against the baseline of INT4 QK (K smoothed), PV FP8 matmul (with default FP32 accumulation.) Theoretical Claims: I checked for correctness of the following equations and their assumptions: 1. Q.K matmul, adjusting for smoothed Q and smoothed K. All conclusions are correct based on the formulations. 2. Online softmax equation adjusting for the 2-level accumulation strategy with FP8 matmul 3. Optional smoothing of V, and adjusting for the `V_mean` addition to the output. 4. Indices calculation for Q and K, along with the scaling factor calculations for the same, with correct assumptions for per-thread grouping of Q and K tokens. 5. Algorithm 1 on Page 4 seems to reflect the text astutely. Experimental Designs Or Analyses: The paper goes over multiple experiments with regard to kernel speed, comparing with xformers, SageAttention, FlashAttention2, and FlashAttention3 (for the Nvidia Hopper architecture). 
The analyses for the experiments are found to be accurate in the text with regard to TOPS. There is some room for improvement over the TOPS claim, as the end-to-end generation latency across LLMs shows that switching to SageAttention2-4b over 8b might not be the best use of hardware (Table 8), given that 8b suffers almost no loss in accuracy over the full-precision attention, and has quite similar end-to-end generation latency compared to 4b. The paper also mentions the metric loss of SageAttention2-4b compared to 8b, which is apparent once the tabular and visual results are inspected. Supplementary Material: I did a quick review of the CUDA and Triton JIT per-thread INT4 kernels provided in the supplementary material. I tried installing the SageAttention package through setup.py for an SM86 3090Ti GPU. I was able to import the SageAttention CUDAExtension without any issues in PyTorch. I ran a quick benchmark over FlashAttention2, and found the results from the paper to be similar and holding up, accounting for the 3090Ti GPU and adjusting for its tensor cores. Relation To Broader Scientific Literature: More and more papers are targeting the GPU hardware by leveraging the instruction set and inherent knowledge of how the GPU executes code blocks. We are seeing more warp-level algorithms, i.e., algorithms designed to reduce any unnecessary communication overhead. Algorithm co-design with innate hardware knowledge is leading to better algorithms that are turning out to be optimal. This paper stands together with a lot of papers that are structuring the methodology of hardware-software co-design, which is a must for a lot of real-world applications. Essential References Not Discussed: Not related to understanding the topics in this paper directly, but it would be nice to get a short reference to the methodology discussed in FlashAttention2. 
FlashAttention2 also implements a methodology to eliminate per-warp communication, which is similar to the computation done in SageAttention2's per-thread INT4 quantization with a single scaling factor to eliminate dequantization overhead. Other Strengths And Weaknesses: This paper provides a bright way towards deploying low-latency transformer models. With a great INT4/8 and FP8 attention mechanism, a lot of real-world, real-time deployments of LLMs could be possible as long as the hardware supports such tensor cores. The SageAttention2 INT4 Q*K matmul algorithm seems highly intuitive, and given the per-thread granularity, it seems to be an optimal way to design quantization groups, which takes into account both performance and accuracy. The paper restricts itself to only one technique to deal with quantization outliers, namely, smoothing. It would be great to see a comprehensive analysis of more methods that don't introduce any overhead, but that could be better suited to handle outliers for a reduced quantization error due to INT4 precision. The fact that the INT8 version of SageAttention2 matches or is on par with full-precision attention in accuracy seems to indicate that there is room for improvement on the QK INT4 matmul front. Other Comments Or Suggestions: 1. Would like to see more comparisons between SageAttention2-4b, SageAttention2-8b, and SageAttention; the visual results on HunyuanVideo and CogvideoX seem to indicate the weakness of the INT4 range. Maybe this is related to loss of information from the INT4 range: since the smoothed Q distribution seems to follow a normal curve, there might be some unrecoverable accuracy due to the INT4 range. 2. Could also include a few benchmarks on speech-related tasks, like audio generation, TTS and/or speech-to-text. 3. The end-to-end generation latency of SageAttention2-4b seems to be on par with SageAttention2-8b for 4090 and L20 GPUs, whereas the TOPS tell a different story. 
When it comes to performance, I suggest relying on end-to-end latency in milliseconds instead of TOPS, since it aligns more with real-world use cases. 4. More detailed ablation studies are needed for the 2-level FP32 buffer accumulation with regard to accuracy, e.g., FP8 E4M3 2-level accumulation vs. baseline FP8 E4M3. Questions For Authors: 1. Is the reason for not including SageAttention2-4b in the H100 and H20 benchmarks the lack of INT4 tensor cores in the Hopper GPUs used? 2. If the above is true, would it be possible to pack 2 INT4 values into INT8 for the above 2 GPU architectures? 3. Would learning or calibrating these per-thread quantization scaling factors help reduce the gap between SageAttention2-8b and SageAttention2-4b, instead of calculating the scaled max-of-absolute values as scaling factors? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 1zQu, Thank you for your valuable suggestions and questions. Below, we address each point raised. --- >### Comment1 **Reply**: Thank you for your valuable suggestion. We compared the accuracy of sageattention, sage2-8b, sage2-4b, and sage2-4b without smooth Q across CogVideo layers (see table below). The limited INT4 representation range causes precision loss, but this is unrelated to Smooth Q. Before smoothing, the distribution was less uniform. While smoothing makes it closer to normal (though not perfectly uniform), it significantly reduces quantization error. |Attention|Cossim ↑|Relative L1 ↓| |-|-|-| | SageAttention|0.99995|0.00733| | Sage2-8b|0.99982|0.01573| | Sage2-4b|0.99460|0.06480| | Sage2-4b (without smooth Q)|0.9498|0.27305| --- >### Comment2 **Reply**: Thank you for suggesting benchmarks on speech-related tasks. In response, we evaluated Qwen-2-Audio 7B, a speech-to-text model, on the ASR task using the Librispeech test split and measured its performance with the WER metric. The results are presented below: |Attention|test-clean ↓|test-other ↓| |-|-|-| |Full-Precision|1.74|4.01| |HadamardAttn|1.77|4.05| |SmoothAttn|1.76|4.01| |SageAttention|1.74|4.02| |Sage2-4b|1.73|3.99| |Sage2-8b|1.72|4.03| We can see that SageAttention2 consistently outperforms the baselines, highlighting its effectiveness in audio-related models and benchmarks. --- >### Comment3 **Reply**: We agree and recognize the significance of end-to-end latency for real-world performance evaluation, which is why we report it in our results. However, we emphasize that the end-to-end speedup depends entirely on: 1) The attention speedup. 2) The proportion of total latency attributed to attention. For example: - If a model spends 50s on attention and 50s on other operations, a **2×** attention speedup reduces latency by **25s** (total: 75s). - A **2.5×** attention speedup yields a **30s** reduction (total: 70s). 
Moreover, quantifying the attention kernel’s FLOPs is essential, as it directly measures computational efficiency and our method’s advancement. This approach aligns with the evaluation standards of FlashAttention 1/2/3. --- >### Comment4 **Reply**: Thank you for your valuable suggestion. We compare the accuracy of Sage2-8b with and without 2-level FP32 buffer accumulation across all layers of CogvideoX: |Model|Cossim ↑|Relative L1 ↓| |-|-|-| |Sage2-8b|0.9997|0.02133| |Sage2-8b (without two-level accumulation)|0.9939|0.17843| --- >### Question1 **Reply**: Thank you for your question. Yes, the reason for not including SageAttention2-4B in H100 and H20 benchmarks is that Hopper GPUs do not have INT4 Tensor Core. --- >### Question2 **Reply**: Thank you for your question. H20 and H100 are both Hopper architecture. Yes, it is technically feasible to pack INT4 into INT8, dequantize them to INT8 in the kernel, and use INT8 Tensor Core ops. However, this approach has significant drawbacks: 1. **Speed Impact**: - It requires additional CUDA Core operations for dequantization and has the same Tensor Core count as INT8. Therefore it won't bring speed gains. 2. **Accuracy Impact**: - INT4 quantization introduces greater accuracy loss than INT8. --- >### Question3 **Reply**: Thank you for your valuable suggestion. We believe that learning the scaling factors would not help reduce the gap. Learned scaling factors are static and input-independent. However, the inputs exhibit significant fluctuations [1, 2], making static scaling suboptimal. For example, if all inputs are smaller than the predetermined scales, the representation range of INT4 is not fully utilized. Moreover, calibrating will compromise true plug-and-play compatibility. A more promising approach may be learning clipping factors [3]. It scales the absolute maximum by a learned ratio, which helps suppress outliers while preserving the representation range. 
--- [1] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration [2] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models [3] OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models --- --- If you feel your concerns have been resolved, we would greatly appreciate it if you consider raising the score. --- Rebuttal Comment 1.1: Comment: ``` We agree and recognize the significance of end-to-end latency for real-world performance evaluation, which is why we report it in our results. However, we emphasize that the end-to-end speedup depends entirely on: The attention speedup. The proportion of total latency attributed to attention. ``` Hi, is it possible to update this analysis in the paper? The analysis on the breakdown of both INT4/8 QK preprocessing and attention kernels in terms of latency (in ms, not TOPS), along with similar analysis on the PV kernels would be really helpful to get a good understanding of the hotspots in end-to-end latency. Please also include the new tables from these rebuttals. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your valuable suggestion! First, we provide a rough analysis of the overhead in milliseconds for a sequence length of 16K, as shown in the table below. We will analyze the INT4/8 QK and FP8 PV preprocessing and the attention kernel latency (measured in milliseconds) in detail in our paper. | Component | Latency| |-----------------------|-------| | Original Attention | 102.3 | | Sage2-8b | 39.5 | | Sage2-4b | 35.6 | | Smooth Q | 3.9 | | Smooth K | 0.49 | | Quant P | 1.3 | | Quant V | 2.5 | | INT4/INT8 Quant QK | 2.7 | --- Furthermore, we will add the Tables from the rebuttals, as well as the new analyses and corresponding experiments, into our paper. --- We hope our reply can address your concerns. We would greatly appreciate it if you consider raising the score.
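As an illustrative aside on the two-level accumulation discussed in this thread (Comment4): the mechanism can be demonstrated with a toy simulation that emulates a reduced-precision accumulator by truncating the FP32 significand to 13 bits. This is a crude stand-in for FP22 (truncation, not the hardware's exact rounding), the nonnegative synthetic weights mimic softmax P values, and the group size of 32 is an illustrative assumption:

```python
import numpy as np

def trunc13(x):
    # emulate a 13-bit-mantissa accumulator by clearing the low 10 bits
    # of the FP32 significand (truncation, not exact FP22 behavior)
    u = np.array([x], dtype=np.float32).view(np.uint32)
    u &= np.uint32(0xFFFFFC00)
    return np.float32(u.view(np.float32)[0])

rng = np.random.default_rng(0)
p = rng.random(1024).astype(np.float32)   # nonnegative, like softmax outputs
exact = p.astype(np.float64).sum()

# naive: every addition passes through the low-precision accumulator
acc = np.float32(0.0)
for v in p:
    acc = trunc13(acc + v)
naive_err = abs(float(acc) - exact)

# two-level: flush the low-precision accumulator into a full-precision
# buffer every 32 additions, so the inner partial sums stay small
buf, acc = 0.0, np.float32(0.0)
for i, v in enumerate(p, 1):
    acc = trunc13(acc + v)
    if i % 32 == 0:
        buf += float(acc)
        acc = np.float32(0.0)
buf += float(acc)
twolevel_err = abs(buf - exact)

print(f"naive: {naive_err:.3f}  two-level: {twolevel_err:.3f}")
```

Keeping the inner accumulator small keeps its truncation quantum small, so the two-level sum is much closer to the exact result; this mirrors the paper's two-level FP32 buffer strategy for the FP22 accumulator.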
Hi-Patch: Hierarchical Patch GNN for Irregular Multivariate Time Series
Accept (poster)
Summary: The paper introduces Hi-Patch, a hierarchical patch graph network designed for IMTS, where variables have different sampling rates. Hi-Patch models both local and global dependencies across different scales. It represents observations as nodes, captures short-term dependencies using intra-patch graphs, and progressively learns global features through inter-patch layers. The final representations are fed into task-specific decoders. Experiments on eight datasets show that Hi-Patch outperforms SOTA models in IMTS forecasting and classification. Claims And Evidence: There are no problematic main claims. Methods And Evaluation Criteria: The model designed in the paper is both reasonable and effective for addressing the IMTS modeling problem. The dataset used is a commonly employed real-world dataset in IMTS modeling, which also provides practical guidance for solving real-world problems. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experiments in this paper are relatively comprehensive and demonstrate the effectiveness of the model. However, the ablation study does not sufficiently highlight the importance of hierarchical information. Additionally, the hierarchical illustrations in the appendix (Figures 11–13) fail to clearly show that the proposed model effectively captures better hierarchical information. Supplementary Material: I have read the whole supplementary material. Relation To Broader Scientific Literature: This paper details efforts to advance IMTS modeling across various scientific domains. The paper provides a new perspective on the modeling of ISMTS to some extent and effectively improves downstream task performance. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. The motivation of this paper is unclear. While it emphasizes the importance of multi-scale information in multivariate time series modeling, there is no direct evidence supporting this claim. 
Even in the ablation study, no dedicated experiments demonstrate the significance of hierarchical information. Additionally, the hierarchical illustrations in the appendix (Figures 11–13) do not convincingly show that the proposed model captures better hierarchical structures. 2. The proposed model involves multiple graph layers, which may impact its time and space complexity. Therefore, it would be helpful if the authors could provide an analysis of Hi-Patch’s computational complexity. 3. The paper requires tuning of several important parameters, including but not limited to the number of layers and patch size. Other Comments Or Suggestions: No. Questions For Authors: What are the advantages of designing hierarchy at the feature level compared to designing it at the raw value level? Specifically, as shown in the paper, the hierarchical structure obtained through aggregation at the feature level may lead to inappropriate information being passed to the next layer, potentially affecting the learning process in higher layers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Responses to Reviewer Ko9v **Q1. The motivation and experimental validation of IMTS multi-scale modeling.** **A1.** 1. **Motivation and Significance of Multi-scale Information** Our work is grounded in the premise that multi-scale information is essential for general time series analysis—as demonstrated by previous studies (e.g., Pathformer, TimeMixer, MSGNet). Since irregular multivariate time series (IMTS) inherently exhibit multi-scale features (for instance, monthly, quarterly, and yearly patterns in the USHCN weather dataset), it is natural to assume that multi-scale information is also critical for IMTS. Moreover, prior work such as Warpformer has empirically verified the benefits of multi-scale modeling for IMTS, while also revealing certain limitations. These intuitive and empirical insights jointly motivate our further exploration into multi-scale modeling for IMTS. 2. **Experimental Validation and Ablation Studies** In our ablation experiments (the “w/o Hie” setting), we set the patch size to span the entire historical window, thereby removing the hierarchical structure so that the model processes only single-scale raw information(Table 3 and Table 11 in Appendix I). The significant performance drop observed (3.90%↓ on MSE and 3.07%↓ on MAE on four datasets) directly confirms the pivotal role of the hierarchical design in extracting multi-scale features. Additionally, Figures 11–13 in Appendix K visually demonstrate the diverse patterns that Hi-Patch can "see" at different layers—a capability that distinguishes our model from many existing IMTS methods. ------ **Q2. Analysis of computational complexity.** **A2.** As detailed in Appendix G.1, the primary computational cost of our method arises from the intra-patch graph layer, whose complexity scales quadratically with the number of observation points (N) per patch. Our focus is on irregular multivariate time series, which are typically characterized by extremely sparse sampling. 
As shown in Table 4, all our datasets exhibit a missing rate exceeding **75%**. In these scenarios, the computational cost decreases sharply due to the limited number of available data points. Our empirical study in Appendix G.2 indicates that our overhead is moderate—**ranking 5th out of 9 methods**—and remains within an acceptable range. Consequently, we contend that the additional overhead is justified by the improved extraction of scarce data patterns. ------ **Q3. Clarification of parameter tuning.** **A3.** 1. **Intuitive Parameter Choice** Key parameters—such as the number of layers and patch size—have clear physical interpretations. Layers correspond to different scales of feature extraction, while the patch size reflects the local time window. These parameters can be set according to the specific task requirements and data characteristics. Our sensitivity analyses (Sections 5.4 and 5.5) demonstrate that the optimal parameter ranges are related to the sample distributions and inherent periodicities. Detailed search ranges are provided in Appendix E to guide practical parameter selection. 2. **Practicality and Generalizability** Although some hyperparameter tuning is required—common in deep learning methods [1, 2, 3]—our approach shows competitive or superior performance compared to state-of-the-art methods. The tuning process further adds flexibility, allowing the model to adapt to a variety of applications. [1]. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research, 13:281–305, 2012. [2]. Practical Bayesian Optimization of Machine Learning Algorithms. NeurIPS, 2012 **Q4. The advantages of designing hierarchy at the feature level.** **A4.** 1. **Carrying Multiple Semantic Cues** A hierarchy built in the raw value space merely propagates numerical value information. In contrast, a feature-level hierarchy embeds additional semantic details (e.g., timestamps, variable identifiers, and sampling density).
This enriched representation enables the model to capture not only numerical variations but also temporal dynamics and inter-variable heterogeneity—an essential aspect for effectively modeling IMTS. 2. **Efficient Single-time Feature Encoding** With a feature-level hierarchy, a single feature encoding is performed at the outset, and subsequent layers focus solely on dependency extraction and fusion. Constructing the hierarchy at the raw value level would require repeated encoding at every layer, thus incurring higher computational cost. 3. **Enhanced Information Transmission and Control** We employ multi-head and multi-time attention mechanisms at the feature level to ensure appropriate weighting and filtering during the aggregation process, thereby mitigating inappropriate information passing. Our ablation experiments (w/o TEAGG) confirm that this design substantially improves high-level feature learning (11.11% on MSE↑ and 5.53% on MAE↑ on four datasets).
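The attention-based aggregation described in point 3 above (weighting and filtering observation-node features when forming a higher-level patch embedding) can be sketched as masked scaled dot-product attention pooling. This is a simplified, hypothetical illustration, not the authors' exact TEAGG implementation; `attention_aggregate` and its fixed query vector are made up for the demo.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_aggregate(node_feats, query, mask):
    """Pool a variable number of observation-node features into one
    patch embedding via masked scaled dot-product attention.

    node_feats: (N, d) features of the observations inside one patch
    query:      (d,)   patch-level query (learnable in practice; fixed here)
    mask:       (N,)   1 for observed points, 0 for missing ones
    """
    d = node_feats.shape[1]
    scores = node_feats @ query / np.sqrt(d)      # (N,) attention logits
    scores = np.where(mask > 0, scores, -1e9)     # missing points get ~0 weight
    weights = softmax(scores)                     # (N,) sums to 1
    return weights @ node_feats                   # (d,) aggregated embedding

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))       # 5 irregular observations, dim 8
mask = np.array([1, 1, 0, 1, 0])      # only 3 points actually observed
patch_emb = attention_aggregate(feats, rng.normal(size=8), mask)
print(patch_emb.shape)                # (8,)
```

The mask is what lets the same aggregation handle patches with arbitrary numbers of observed points, which is the core difficulty of irregular sampling.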
Summary: The paper introduces a graph-based framework called **Hi-Patch** to handle irregularly sampled multivariate time series. The approach divides the time axis into patches of short intervals, capturing local (fine-grained) temporal patterns for densely sampled variables in each patch. It then progressively aggregates and propagates patch-level information through multiple “inter-patch” graph layers, forming a hierarchical structure that extracts coarse-grained temporal and inter-variable correlations for densely and sparsely sampled variables. This bottom-up, multi-scale pipeline allows for comprehensive feature extraction spanning local to global time scales. Empirical results show that Hi-Patch outperforms a variety of baselines in both classification (area under ROC/PR curves) and forecasting (MSE/MAE) tasks. The method’s key contributions include a patch-based intra-graph for local context, a stack of inter-patch graphs for higher-level features, and a final decoding layer that uses these learned representations for downstream predictions. ## update after rebuttal The authors covered most of my concerns in the rebuttal, so I kept the original positive rating. Claims And Evidence: The paper’s primary claims—namely that Hi-Patch achieves state-of-the-art results on irregular multivariate time series forecasting and classification through a novel hierarchical patch-based architecture—are substantiated mainly. The experiments cover multiple datasets of varied nature (e.g., clinical, climate, activity), comparing against both specialized irregular-series baselines and prominent time-series models and typically show consistent improvements in performance metrics (e.g., MSE/MAE, AUROC/AUPRC). 
One minor point to note is that while the method’s patch-based hierarchical graph structure is well-tested under moderate sequence lengths, claims about scalability to massive datasets or exceedingly fine-grained irregularities could benefit from deeper empirical exploration. Overall, the evidence presented—including comparative results, ablation studies, and multi-dataset analysis—convincingly supports the paper’s central claims. Methods And Evaluation Criteria: Yes. The authors use benchmark datasets covering domains where irregular multivariate time series naturally arise (e.g., healthcare, climate, motion). These datasets and tasks (forecasting and classification) are appropriate for evaluating the benefits of the proposed hierarchical patch-based method. Comparisons to both general-purpose time-series models and specialized irregular-series frameworks further demonstrate the paper’s relevance to the problem setting and the robustness of its approach. Theoretical Claims: The submission does not appear to present formal proofs or a rigorous theoretical framework beyond outlining model architectures and derivations (e.g., attention mechanisms, graph-update formulas). Consequently, there are no explicit proofs to check for correctness. The authors support their modeling choices with empirical evidence rather than theoretical guarantees. Experimental Designs Or Analyses: The paper’s experimental design appears methodologically sound—particularly its multi-dataset, multi-task approach. The authors follow established splits (e.g., 6:2:2) and compare against well-chosen baselines across domains. They include ablation studies and variance reporting, further bolstering validity. No critical flaws or inconsistencies in the experimental design or analyses were observed. Supplementary Material: The author provides the running code, which I reviewed and found to be clearly expressed and easy to read. 
Relation To Broader Scientific Literature: The paper extends patch-based approaches like PatchTST—which typically handles regular time series—by leveraging graph-based links to account for the irregular sampling patterns seen in many real-world datasets. This builds on prior GNN research (e.g., tPatchGNN, MTGNN) that models asynchronous inter-variable relationships; however, it is combined with a hierarchical, multi-scale design. The work synthesizes established patching methods in time series analysis with graph architectures that capture local (intra-patch) and global (inter-patch) dependencies, advancing existing literature on irregular time series modeling and multi-scale feature extraction. Essential References Not Discussed: The paper already cites a range of prior works on patch-based modeling (e.g., PatchTST), continuous-time or irregular methods (e.g., Latent ODEs, GRU-D), and multi-scale networks (e.g., MSGNet). From the perspective of irregular time-series GNNs and multi-scale forecasting, no critical, well-known work appears omitted. The paper’s references collectively offer sufficient background for its primary contributions. Other Strengths And Weaknesses: The paper exhibits notable originality by creatively combining patch-based time series segmentation with hierarchical graph neural networks. This approach effectively addresses the challenges of irregular sampling and multi-scale feature extraction. This innovative synthesis bridges ideas from patch representations and continuous-time graph models, filling an essential gap in the literature. The empirical evaluation is comprehensive, spanning diverse datasets and tasks, which underscores the significance of the method across various real-world applications. Additionally, the clarity in the presentation of the model architecture and the experimental setup enhances the paper's overall quality. 
On the downside, the increased complexity of the hierarchical design introduces additional hyperparameters and potential computational overhead, which may affect scalability on massive datasets. Furthermore, while the paper demonstrates substantial performance improvements, more discussion on the interpretability of the learned multi-scale features and a deeper analysis of the method's sensitivity to hyperparameter choices could further strengthen the contribution. Other Comments Or Suggestions: - Overall, the paper is well-written and clearly describes the methodology and experiments. - Minor typographical errors were observed, such as occasional inconsistencies in notation formatting (e.g., varying use of boldface for vectors) and a few grammatical slips that can be easily corrected during revision. - A short discussion on potential limitations or failure cases could further balance the presentation. Questions For Authors: 1. How does the proposed hierarchical framework scale computationally when applied to highly long time series or datasets with significantly higher sampling densities? 2. Can the authors elaborate on any techniques or experiments aimed at interpreting the hierarchical features learned by the model—for instance, identifying which scales or graph connections most strongly influence the final predictions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Responses to Reviewer 2e2R **W1 & Q1. The increased complexity of the hierarchical design may affect scalability in massive datasets. How does it scale computationally when applied to highly long time series or datasets with significantly higher sampling densities?** **A1.** 1. **Complexity in Sparse IMTS** As detailed in Appendix G.1, **the primary computational cost of our method stems from the intra-patch graph layer**, whose complexity is quadratic in the number of observation points (N) within each patch, rather than from the hierarchical design. 2. **Future Scalability on Dense Data** In scenarios involving highly long time series or datasets with much higher sampling densities, we can readily adopt existing scalability techniques to mitigate the overhead of the intra-patch graph. For instance, **edge pruning**—where each node connects only to its nearest k observation points—can reduce the complexity to **O(kN)**. Alternatively, one could improve the attention mechanism (e.g., by incorporating the **ProbSparse Self-Attention** from Informer, which reduces complexity to **O(N log N)**). In these cases, the trade-off between a slight reduction in feature extraction capability and improved scalability would be justified. This extension is part of our planned future work; the current paper focuses on the initial proposal of a multi-scale framework for sparse IMTS. In the revised version, we will add a Limitations section to discuss scalability aspects. ------ **W2 & Q2. More discussions or experiments on the interpretability of the learned hierarchical multi-scale features.** **A2.** Taking the USHCN dataset as an example—comprising climate data from 1,218 centers across the United States for five variables (precipitation, snowfall, snow depth, maximum temperature, and minimum temperature)—we use the first two years (24 months) as historical input to predict future changes. 1.
**Effect of Different Scales on Final Prediction Results** As illustrated in Figure 3(a) in Section 5.4, increasing the number of scales from 1 to 5 (by incorporating scales of 12, 6, 3, and 1.5 months) results in a continuous decrease in MSE, demonstrating the benefit of multi-scale features. Notably, when a 0.75-month scale is added (resulting in 6 scales), the MSE increases, indicating that this extra scale introduces redundancy rather than additional useful information. **This experiment not only identifies the scales that are beneficial for prediction, but also reveals which scales most strongly influence the final predictions through the slope changes.** For instance, the largest drop occurs when the 6-month scale is introduced, highlighting that it is the most important scale for weather prediction. 2. **Discussion on the Interpretability of Learned Multi-Scale Features** As shown in Figure 5(a) in Section 5.5, our model achieves optimal performance with a patch size of 1.5 months, which enables feature extraction at scales of 1.5, 3, 6, and 12 months. **These scales correspond well with key climatological cycles and are largely interpretable.** Figure 12 in Appendix K provides visualizations of the time series at these scales based on our patching approach: - **Scale 1 (Original Scale):** The original scale view retains the full sequence data, exhibiting high local variability. - **Scale 2 (1.5-Month Aggregation):** By aggregating data every 1.5 months, the resulting view smooths short-term fluctuations, highlighting mid-term climate trends. - **Scale 3 (3-Month Cycle):** Aggregating every two observations from Scale 2 yields a 3-month cycle view. Seasonal trends become more evident in this view, with the red and purple temperature curves showing an upward, downward, upward, and downward trend, broadly corresponding to the four seasons.
- **Scale 4 (6-Month Cycle):** Further aggregation at 6-month intervals reveals monsoonal climate trends, where the red and purple temperature curves first rise and then fall, reflecting the impact of summer and winter monsoons. - **Scale 5 (12-Month Cycle):** At a 12-month aggregation, the view reveals annual macro-climate trends, such as overall increases, decreases, or stabilization in climate patterns. We will include this analysis in the appendix in future revisions. ------ **Comment1. Minor typographical errors and grammatical slips.** **A3.** Thank you for your careful review. We will thoroughly proofread and correct these issues in the revised version of the paper. ------ **Comment2. A short discussion on potential limitations or failure cases.** **A4.** We will incorporate a discussion on the scalability limitations of Hi-Patch in light of our response to W1 & Q1. --- Rebuttal Comment 1.1: Comment: The author's response somewhat alleviated my doubts, so I retained the original positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for maintaining your positive rating. We’re glad to have alleviated your concerns and would be happy to clarify any further questions you may have.
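The bottom-up view construction discussed in this rebuttal (each scale obtained by aggregating pairs of patches from the scale below) can be sketched as repeated pairwise pooling. This is a simplified, hypothetical analogue assuming a regular, fully observed series of monthly values (giving 1-, 2-, 4-, and 8-month views, mirroring the 1.5/3/6/12-month hierarchy); Hi-Patch itself aggregates feature embeddings of irregular observations, not raw values.

```python
import numpy as np

def multiscale_views(series, n_levels):
    """Build coarser views by repeatedly averaging adjacent pairs of
    values, mimicking the bottom-up patch aggregation (simplified:
    regular sampling, no missing points, raw values instead of features)."""
    views = [series]
    for _ in range(n_levels - 1):
        s = views[-1]
        s = s[: len(s) // 2 * 2].reshape(-1, 2).mean(axis=1)  # pairwise mean
        views.append(s)
    return views

monthly = np.arange(24, dtype=float)   # 24 months of observations
views = multiscale_views(monthly, 4)   # 1-, 2-, 4-, 8-month views
print([len(v) for v in views])         # [24, 12, 6, 3]
```

Each level halves the number of patches, so a small number of layers already spans short-term fluctuations up to annual-scale trends.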
Summary: This paper introduces a Hierarchical Patch Graph Neural Network (Hi-Patch) for Irregular Multivariate Time Series (IMTS) modeling, where variables have distinct sampling rates and exhibit multi-scale dependencies. Existing multi-scale analysis methods struggle with IMTS due to their assumption of regular sampling, making them ineffective in handling asynchronous and mixed-granularity data. Hi-Patch addresses this by first encoding each observation as a node, capturing local temporal and inter-variable dependencies through an intra-patch graph layer, and then progressively aggregating these nodes into higher-level patch representations using inter-patch graph layers. This hierarchical structure enables multi-scale feature extraction, preserving both fine-grained and coarse-grained patterns across different variable scales. The model is evaluated on forecasting and classification tasks across eight datasets, outperforming state-of-the-art methods. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: No proofs in the paper Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: The proposed Hi-Patch might be applied to IMTS modeling across various domains. Essential References Not Discussed: The key related works are included. Other Strengths And Weaknesses: Strengths: 1. The studied classification and forecasting problems of Irregular Multivariate Time Series (IMTS) have significant applications in various crucial domains, such as healthcare, finance, climate science, and astronomy. 2. The proposed Hi-Patch aims to address several unique challenges posed by IMTS modeling, including irregularity, asynchrony, and mixed granularity. These challenges may not be present and addressed in the widely studied Regular Multivariate Time Series. 3. The experimental results exhibit promising performance in IMTS forecasting task. Weaknesses: 1. 
The Hi-Patch model appears computationally expensive due to its use of graph attention operations on fully connected graphs of all observation nodes across variables. This may limit its scalability for large-scale variables with dense observations. 2. The performance improvement in the classification task seems marginal. Additionally, the PhysioNet and P12 datasets both originate from The PhysioNet/Computing in Cardiology Challenge 2012 [1], making them somewhat redundant. 3. As a crucial driving factor for this study, the motivation of studying multi-scale information in IMTS should be further justified. Why is it an important factor in IMTS modeling? [1] https://physionet.org/content/challenge-2012/1.0.0/. Other Comments Or Suggestions: NA Questions For Authors: see weakness Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Responses to Reviewer ZR6y **W1. Hi-Patch appears computationally expensive, which may limit its scalability.** **A1.** We address this concern from both sparse and dense data perspectives. 1. **Trade-off in Sparse IMTS** As detailed in Appendix G.1, the primary computational cost of our method arises from the intra-patch graph layer, whose complexity scales quadratically with the number of observation points (N) per patch. Our focus is on irregular multivariate time series, which are typically characterized by extremely sparse sampling. As shown in Table 4, all our datasets exhibit a missing rate exceeding **75%**. In these scenarios, the computational cost decreases sharply due to the limited number of available data points. We contend that the additional overhead is justified by the improved extraction of scarce data patterns. Moreover, our empirical study in Appendix G.2 indicates that our overhead is moderate—**ranking 5th out of 9 methods**—and remains within an acceptable range. 2. **Future Scalability on Dense Data** Should the need arise to extend our approach to large-scale dense datasets, existing scalability techniques can be integrated to reduce the overhead of the intra-patch graph. For instance, **edge pruning**—such as connecting each node only to its nearest k observation points—can reduce the complexity to **O(kN)**, or alternatively, leveraging improved attention mechanisms like the **ProbSparse Self-Attention** from Informer can lower the complexity to **O(N log N)**. In these cases, the trade-off between a slight reduction in feature extraction and enhanced scalability would be worthwhile. This extension is among our planned future works. For the current paper, our core contribution remains the introduction of a multi-scale framework for sparse IMTS. We will also add a Limitations section discussing model scalability. ------ **W2. The performance improvement in the classification task seems marginal. 
The PhysioNet and P12 datasets are redundant.** **A2.** 1. **Classification Performance Improvement** Although our method only shows marginal improvements over the best baseline on each individual dataset, it is important to note that **no single baseline consistently outperforms the others across all datasets**. The table below presents the average AUROC and AUPRC for our method and the three most competitive baselines across four datasets. **Our method achieves an average AUROC improvement of 1% and an average AUPRC increase of 2.7%**, which is noteworthy given that the AUROC values are already near 90%.

| | Avg AUROC | Avg AUPRC |
| ---------- | ---------- | ---------- |
| IP-Net | 86.2 | 52.0 (2nd) |
| StraTS | 86.7 (2nd) | 51.1 |
| Warpformer | 86.3 | 50.3 |
| Ours | 87.7 (1st) | 54.7 (1st) |

2. **PhysioNet and P12 Datasets** The P12 dataset comprises three subsets—set-a, set-b, and set-c—each containing 4,000 samples (totaling 12,000 samples). Previous studies [1, 2] have utilized only the set-a subset (namely the PhysioNet dataset in our paper), whereas others [3, 4] employed the full P12 dataset. **By evaluating both configurations, we aim to test the robustness of our method with respect to sample size and distribution.** The consistent performance across these configurations further validates the stability of our approach. [1]. Multi-Time Attention Networks for Irregularly Sampled Time Series, ICLR, 2021 [2]. Warpformer: A Multi-scale Modeling Approach for Irregular Clinical Time Series, KDD, 2023 [3]. Graph-Guided Network for Irregularly Sampled Multivariate Time Series, ICLR, 2022 [4]. Time Series as Images: Vision Transformer for Irregularly Sampled Time Series, NeurIPS, 2023 ------ **W3. The motivation for studying multi-scale information in IMTS.** **A3.** 1. **Foundational Evidence** Prior studies—such as those on Pathformer, TimeMixer, and MSGNet—have demonstrated that multi-scale information is crucial for general time series analysis.
Although irregular multivariate time series (IMTS) are characterized by uneven sampling intervals, they inherently retain multi-scale properties. 2. **Practical Example** For instance, the USHCN weather dataset exhibits monthly, quarterly, and yearly patterns (see Figure 12). Each temporal scale shows distinct patterns that are essential for accurate prediction. This example underlines the importance of multi-scale information in IMTS. 3. **Prior Empirical Validation** Empirical evidence from the prior work Warpformer [2] supports the benefits of incorporating multi-scale information into IMTS modeling. While Warpformer has its own limitations, its success further motivates our deeper exploration into a multi-scale modeling approach for IMTS. Collectively, these points underscore the significance of multi-scale information in IMTS modeling and reinforce the motivation behind our work. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification, I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for taking the time to carefully review our responses. We deeply appreciate your thoughtful evaluation and support.
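The edge-pruning idea raised in this rebuttal (connecting each intra-patch node only to its k nearest observations in time, so the graph has O(kN) edges instead of O(N^2)) can be sketched as follows. This is a hypothetical illustration of the proposed mitigation, not code from Hi-Patch; `knn_edges` is made up for the demo.

```python
import numpy as np

def knn_edges(timestamps, k):
    """Build a directed edge list where each observation node connects
    only to its k nearest neighbours in time. Total edge count is k*N,
    versus N*(N-1) for a fully connected intra-patch graph."""
    t = np.asarray(timestamps, dtype=float)
    n = len(t)
    edges = []
    for i in range(n):
        dist = np.abs(t - t[i])
        dist[i] = np.inf                    # exclude self-loop
        for j in np.argsort(dist)[:k]:      # k closest observations
            edges.append((i, int(j)))
    return edges

ts = [0.0, 0.1, 0.4, 0.9, 1.0]              # timestamps in one patch
edges = knn_edges(ts, k=2)
print(len(edges))                           # 5 nodes * 2 neighbours = 10
```

The brute-force neighbour search here is itself O(N^2 log N); in practice a sorted-timestamp sweep or a spatial index would keep the construction cheap as well.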
Hierarchical Equivariant Policy via Frame Transfer
Accept (poster)
Summary: Hierarchical Equivariant Policy (HEP) enhances hierarchical policy learning by introducing a frame transfer interface, where the high-level agent’s output serves as a coordinate frame for the low-level agent, improving flexibility and inductive bias. It also integrates domain symmetries at both levels, ensuring overall equivariance. HEP achieves state-of-the-art performance in complex robotic manipulation tasks, demonstrating improvements in both simulation and real-world settings. Claims And Evidence: The high-level policy outputs only x, y, z positions, while the low-level policy generates concrete actions. This approach is commonly used in hierarchical planning, where the high-level policy produces key points and the low-level policy generates trajectories. Additionally, the frame transfer mechanism resembles a coordinate transformation to a target-frame-centric representation. The authors should provide more explanation to clarify the novelty of their approach. Methods And Evaluation Criteria: Can the method be applied in scenarios with only image input, without requiring RGB-D inputs? During the evaluation, is a separate policy trained for each task? Also, does the policy incorporate language inputs? How is the Equivariant Diffusion Policy used? Does its sampling procedure differ from that of a standard Diffusion Policy? Theoretical Claims: N/A Experimental Designs Or Analyses: All the baselines use 3D scene representations; a comparison with state-of-the-art image-based visuomotor policies would provide a more comprehensive evaluation. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their response. We respond below: --- > “...The authors should provide more explanation to clarify the novelty of their approach.” We agree with the reviewer that there exist various works on hierarchical policy learning. However, our method is the first to incorporate equivariant learning into a hierarchical policy. This is both novel and leads to sizable performance increases. Additionally, our proposed frame transfer interface ensures translational equivariance and imposes soft constraints on the low-level agent. We refer Reviewer 4 to Reviewer 1’s assessment of our method's novelty. > “Can the method be applied in scenarios with only image input, without requiring RGB-D inputs?” Currently, our approach requires a 3D understanding of the environment, which we obtain using RGB-D inputs. Given the expense and inherent noise associated with RGB-D sensors, an interesting direction for future work would be to extend our framework to scenarios that rely solely on RGB images. Recent advances in depth-estimation foundation models [1,2] may enable the generation of accurate 3D representations from pure image data. Additionally, in tabletop settings where a top-down camera is available, pixel coordinates can be directly mapped to spatial (x,y) positions without requiring explicit depth measurements, thus potentially enabling SE(2) equivariance in the policy. [1] Guo, Y, et al. "Depth Anything: Unleashing the Power of Large-Scale Image-Text Pretraining for Zero-Shot Depth Estimation." arXiv preprint arXiv:2311.16502, 2023. [2] Wen, B, et al. "FoundationStereo: Zero-Shot Stereo Matching." arXiv preprint arXiv:2501.09898, 2025. > “...separate policy trained for each task? ...does the policy incorporate language?” We are evaluating our policy under a single-task setting following prior work [3] for fair comparison.
However, it should be straightforward to operate in a multitask setting by adding a language feature to the observation, as is done in [4,5]. Evaluating the performance in multi-task settings can be an interesting direction for future work. [3] Xian, Z et al "ChainedDiffuser: Unifying Trajectory Diffusion and Keypose Prediction for Robotic Manipulation." CoRL. PMLR, 2023. [4] Ke, J, et al. "3D Diffuser Actor: Policy Diffusion with 3D Scene Representations." CoRL. PMLR, 2024. [5] Shridhar, M, et al. "Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation." CoRL. PMLR, 2023. --- > “How is the Equivariant Diffusion Policy used?...sampling procedure differ from that of a standard Diffusion Policy?” Our method improves upon standard diffusion policy in two key ways. Specifically, our policy is explicitly equivariant [6], meaning that it is automatically able to generalize to new problem instances related by symmetry. This is not inherent in Diffusion Policy [7]. Secondly, our method utilizes a hierarchical decomposition — namely, a high-level and low-level agent — not found in standard diffusion policy. As shown in our experiment result in Table 1 in our response to reviewer 3 cFSf (referenced here due to character limitations) and Table 1 here, these two changes enable our method to significantly outperform standard diffusion policy. [6] Weiler, M. et al. "E(2)-Equivariant Neural Networks for Image Classification and Beyond." ICLR, 2019. [7] Chi,C et al. "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion." RSS, 2023. --- > “...baselines use 3D scene representations; a comparison with state-of-the-art image-based visuomotor policies would provide a more comprehensive evaluation.” As shown in the prior work [9], image-based Diffusion Policy [7] performs poorly in RLBench tasks, where the 3d representation of the workspace is essential. 
We also performed an extra experiment comparing the image-based diffusion policy against our method and we report the result here:

### Table 1. Performance on 5 Tasks from RLBench

| Method (Closed-loop) | Mean | Turn On Lamp | Open Microwave | Push 3 Buttons | Open Drawer | Put Item in Drawer |
|-|-|-|-|-|-|-|
| **Ours** | **79** (+23) | **60** (+32) | **64** (+22) | **37** (+36) | **95** (+41) | **76** (+28) |
| EquiDiff [8] | 57 | 28 | 42 | 1 | 54 | 48 |
| DiffPo (Img) [7] | 2.8 | 4 | 1 | 0 | 7 | 2 |

We follow the hyperparameters from the original work, with the exception that we use RGB images from the same four cameras used in our setup (wrist, front, right shoulder, and left shoulder) and at the same resolution, to ensure a fair comparison. [8] Wang, D, et al. "Equivariant Diffusion Policy." CoRL. PMLR, 2024. [9] Shridhar, M, et al. "Generative Image as Action Models." CoRL. PMLR, 2024.
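The translational-equivariance property of the frame transfer interface discussed in this thread can be made concrete with a minimal sketch: if the low-level agent observes the scene relative to the frame whose origin is the high-level agent's predicted (x, y, z), a translation of the whole scene cancels out. This is an illustrative toy, assuming point-cloud observations and ignoring rotation; `to_highlevel_frame` is made up and is not the authors' implementation.

```python
import numpy as np

def to_highlevel_frame(points, highlevel_xyz):
    """Re-express a point cloud relative to the coordinate frame centered
    at the high-level agent's predicted (x, y, z) target."""
    return points - np.asarray(highlevel_xyz)

scene = np.array([[0.5, 0.2, 0.1],
                  [0.7, 0.2, 0.3]])          # workspace points
goal = np.array([0.6, 0.2, 0.2])             # high-level output
local = to_highlevel_frame(scene, goal)

# Translate the entire scene; the high-level target moves with it,
# so the low-level agent's input is unchanged.
shift = np.array([1.0, -2.0, 0.5])
local_shifted = to_highlevel_frame(scene + shift, goal + shift)
print(np.allclose(local, local_shifted))     # True: translation cancels
```

Because the low-level input is invariant to translations of the scene, the low-level actions expressed in that frame are translation-equivariant by construction, without the network having to learn it from data.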
Summary: The paper proposes to develop an SO(2)×T(3)-equivariant hierarchical policy for imitation learning. The high level proposes the translation target in the canonical world frame, while the low-level policy uses the local frame to diffuse the action. The equivariance is achieved via the frame transfer interface and a voxel representation. Experiments show that the proposed hierarchical policy achieves a ~10% performance improvement when the number of demonstrations is very limited. Claims And Evidence: The claim is well supported by the experiments. Furthermore, I agree that with these design choices, the policy is indeed equivariant. Methods And Evaluation Criteria: As RLBench is the standard benchmark for testing imitation learning algorithms, I believe the result is quite convincing on its capability to generalize well in the low-data domain. Theoretical Claims: I agree that the theoretical claims on the equivariance property are correct. Experimental Designs Or Analyses: The experiment is comprehensive. However, I think related baselines are missing, such as equivariant policies, e.g., Equivariant Diffusion Policy. I would like to understand the difference between having a hierarchical policy and simpler versions. Supplementary Material: I scanned through the proofs, which sound reasonable to me. Relation To Broader Scientific Literature: The work studies methods that allow imitation policies to work well in the low-data domain. This is an important problem for robot learning. Essential References Not Discussed: There should be work on equivariant policies, e.g., Equivariant Diffusion Policy, Wang et al. Other Strengths And Weaknesses: The design of the hierarchical policy is reasonable for an equivariant policy. However, I think an evaluation is needed to show that a hierarchical policy is indeed needed for solving these imitation learning tasks. Other Comments Or Suggestions: None Questions For Authors: 1. Could you show results of other equivariant policies? Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their response. We respond below: > "...related baselines are missing like equivariant policies, e.g., Equivariant Diffusion Policy. I would like understand the difference of having a hierarchical policy and more simple versions" > "There should be work on equivariant policies, e.g., Equivariant Diffusion Policy, Wang et al." > "Could you show results of other equivariant policies?" Thank you for bringing this to our attention. We did provide comparisons with equivariant policies, specifically the Equivariant Diffusion Policy [1], labeled as **"EDP"** in Table 1. However, we acknowledge that this reference may not have been sufficiently clear. In our revised manuscript, we will explicitly clarify that **"EDP"** refers to the Equivariant Diffusion Policy and emphasize this comparison more clearly to avoid confusion. Additionally, we've conducted an ablation study on the hierarchical architecture to better understand its benefits:

### Table 1. Revised Ablation Experiment

| **Method** | **Mean** | **Lamp on** | **Open microw.** | **Push 3 buttons** | **Push button** | **Open box** | **Insert USB** |
|--------------------|---------:|------------:|-----------------:|-------------------:|----------------:|-------------:|---------------:|
| No Hierarchy | 0.51 | 0.28 | 0.42 | 0.01 | 0.96 | 0.99 | 0.38 |
| No Equi No FT | 0.60 | 0.21 | 0.44 | 0.53 | 0.96 | 0.99 | 0.51 |
| No Equi | 0.70 | 0.41 | 0.53 | 0.67 | 0.98 | 0.99 | 0.64 |
| No FT | 0.78 | 0.75 | 0.56 | 0.73 | 0.98 | 0.99 | 0.68 |
| No Stacked Voxel | 0.84 | 0.77 | 0.65 | 0.87 | 0.99 | 0.99 | 0.79 |
| **Complete Model** | **0.94** | **0.95** | **0.82** | **0.99** | **1.00** | **1.00** | **0.90** |

Our findings demonstrate that incorporating a hierarchical structure notably enhances performance, particularly on long-horizon tasks.
--- **[1]** Wang, Dian, Stephen Hart, David Surovik, Tarik Kelestemur, Haojie Huang, Haibo Zhao, Mark Yeatman, Jiuguang Wang, Robin Walters, and Robert Platt. *"Equivariant Diffusion Policy."* Conference on Robot Learning (CoRL). PMLR, 2024.
Summary: The paper proposes a novel Hierarchical Equivariant Policy (HEP) framework for robotic manipulation tasks, combining hierarchical learning with translation and rotation equivariance via a flexible Frame Transfer interface. HEP decomposes tasks into high-level coarse predictions and low-level fine-grained trajectory generation, improving flexibility, sample efficiency, and spatial generalization. Experiments show HEP significantly outperforms existing approaches in simulation and real-world robotic tasks, especially those demanding precise control and long-horizon planning. Claims And Evidence: The experiments support the main claims. The main claims are: - HEP significantly improves robotic manipulation performance in both simulation (RLBench tasks) and real-world experiments compared to state-of-the-art methods. - The Frame Transfer Interface effectively provides flexibility and efficiency in hierarchical policy learning. Empirical evaluations across 30 diverse tasks demonstrate that HEP achieves higher success rates compared to baseline methods. Methods And Evaluation Criteria: The proposed methods and simulation benchmark make sense for the problem. The inclusion of real-world experiments is also a plus. The ablations demonstrate the value of different components. One-shot generalization tests are also interesting. Theoretical Claims: The theoretical claims seem correct under the assumptions stated, although they are impossible to verify due to the deep neural net. Experimental Designs Or Analyses: In open-loop training, the low-level target is constructed by interpolating between the consecutive keyframes. This interpolation may fail in a cluttered environment due to the need for obstacle avoidance. This also introduces one of the main limitations of the experimental setup: the method is tested only on a simple, uncluttered table-top setting. Supplementary Material: I went through the supplementary material. 
Relation To Broader Scientific Literature: The paper’s contributions are well-motivated, and address important limitations of existing approaches. Essential References Not Discussed: Nothing I can think of. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their response. We respond below:

> "The theoretical claims seem correct under the assumptions stated, although they are impossible to verify due to the deep neural net."

We thank the reviewer for raising this important point. While our theoretical claims appear valid under the stated assumptions, we acknowledge the difficulty of directly verifying these assumptions due to the complexity of deep neural networks. To address this concern empirically, we conducted an experiment measuring the equivariance error specifically on C4, a subgroup of SO(2). This allowed us to quantify the difference between the rotated output and the output from a rotated input. The experimental results are summarized in the table below:

**Table 1. Equivariance Error Under Different Rotations**

| Rotation | Equivariance Error |
|----------|--------------------|
| 0° | 0% |
| 90° | 0.013% |
| 180° | 0.006% |
| 270° | 0.009% |

As the table shows, the network is equivariant, modulo numerical error, to rotational transformations. Our network also inherits translational equivariance through the U-Net and frame-transfer architecture.

---

> "In open-loop training, the low-level target is constructed by interpolating between the consecutive keyframes. This interpolation may fail in a cluttered environment due to the need for obstacle avoidance. This also introduces one of the main limitations of the experimental setup: the method is tested only on a simple, uncluttered table-top setting."

We thank the reviewer for the question. In Section 6, **"Robust to Environment Variations"**, we demonstrate that the trained policy is robust to some disturbance objects in the workspace. We conducted an extra experiment evaluating the success rate of executing a block-stacking task under human perturbation mimicking a dynamic environment and include the results here:

**Table 2. 
Success Rate Under Human Perturbation**

| Task | Success Rate |
|------------------|--------------|
| Blocks stacking | 0.8 |

However, the reviewer makes a good point that if the demonstration data does not contain obstacle avoidance while an obstacle is introduced at test time, the trained policy might fail. In future work, we propose to add a trajectory-optimization layer as an additional level of the hierarchy, which would refine the policy for obstacle avoidance.
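The C4 equivariance check described in the first response above can be reproduced on a toy example. The sketch below is illustrative only: `policy` is a hypothetical stand-in for the actual network (a centroid map, which is exactly SO(2)/T(3) equivariant), and `equivariance_error` compares "predict on the rotated scene" against "rotate the prediction", as in the rebuttal's Table 1.

```python
import numpy as np

def rotation_z(theta):
    """3D rotation about the z-axis, i.e. the planar SO(2) subgroup."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def policy(points):
    """Hypothetical stand-in for a learned policy: maps an (N, 3) point
    cloud to a 3D target. The centroid is exactly equivariant."""
    return points.mean(axis=0)

def equivariance_error(f, points, theta):
    """Relative gap between 'rotate then predict' and 'predict then rotate'."""
    R = rotation_z(theta)
    out_of_rotated = f(points @ R.T)   # f(g . x): predict on the rotated scene
    rotated_out = f(points) @ R.T      # g . f(x): rotate the prediction
    return np.linalg.norm(out_of_rotated - rotated_out) / max(
        np.linalg.norm(rotated_out), 1e-12)

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
for deg in (0, 90, 180, 270):
    # For an exactly equivariant map the error is at machine-precision level.
    print(f"{deg:3d} deg -> error {equivariance_error(policy, pts, np.deg2rad(deg)):.1e}")
```

For a real network the error is small but nonzero (as in the rebuttal's percentages) because learned layers are only approximately equivariant under discretization and numerics.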
Summary: The paper introduces Hierarchical Equivariant Policy (HEP), a framework for hierarchical reinforcement learning that integrates equivariance (geometric symmetry) into a two-level policy architecture. The high-level policy outputs a coarse 3D subgoal (keypose), essentially a translation in space representing the next target location in the task. This subgoal is then used to “frame-shift” the low-level policy’s coordinate system via a Frame Transfer interface, meaning the low-level agent perceives the world relative to the high-level’s suggested frame. Claims And Evidence: The paper introduces a novel integration of equivariance into hierarchical policy learning. This is the first work to imbue a two-level (coarse-to-fine) policy with symmetry properties, theoretically proving that the combined high-low policy remains equivariant to translations and rotations. This theoretical contribution is non-trivial and extends prior equivariant RL approaches (which were single-level) to a hierarchical setting. By decomposing the symmetry into a global part (at the high level) and a local part (at the low level), the design cleverly ensures the overall policy is invariant to spatial transformations of the task, which is a new insight with potentially broad impact on how we design policies that generalize across space and orientation. Instead of the high-level simply commanding a fixed goal state for the low-level (as in prior two-stage approaches), it outputs a coordinate frame shift (a 3D translation) that serves as a context for the low-level. This provides a strong inductive bias (the low-level is “anchored” to work towards the subgoal) without rigidly constraining the low-level’s behavior. The low-level agent can thus refine the trajectory locally, handling details and adjustments that the high-level might miss. This hierarchical decomposition bridges the gap between keyframe-based and trajectory-based learning methods – combining their advantages. 
Methods And Evaluation Criteria: HEP demonstrates state-of-the-art results on a wide range of tasks. It consistently outperforms multiple baseline methods, including advanced diffusion-based policies, by a significant margin. A notable strength is the method’s demonstrated ability to generalize beyond its training conditions. The one-shot learning experiment, where HEP learned a task from a single demo and still succeeded 80% of the time on novel object configurations, is a compelling result. This level of generalization and robustness is a major practical strength, as real-world deployments often face variations that are not seen in training. The paper goes beyond simulation and validates the approach on a real robotic system, which strengthens the work significantly. Additionally, the authors include ablation studies that clearly quantify the contribution of each component: removing equivariance, Frame Transfer, or the voxel encoder each drops performance significantly (e.g. removing equivariance alone reduces success by 24%). Theoretical Claims: The low-level policy predicts a sequence of fine-grained actions (trajectory) relative to the subgoal frame instead of absolute coordinates. This design preserves a strong inductive bias (anchoring the low-level to an intermediate goal) while allowing flexibility for the low-level to adjust and refine the trajectory locally. Crucially, the authors incorporate domain symmetries into both levels of the policy: the high-level subgoal selection and the low-level trajectory generation are designed to be equivariant to translations and in-plane rotations (T(3) × SO(2) symmetry). This means if the environment or task is shifted or rotated, the HEP’s policy will produce a correspondingly shifted/rotated action sequence, greatly enhancing spatial generalization. Theoretical propositions (with proofs in the appendix) show that under the given design, the entire hierarchical policy is equivariant to those transformations. 
Experimental Designs Or Analyses: To efficiently handle visual input, HEP uses a stacked voxel grid representation of point clouds (from multi-view RGB-D cameras) processed by a 3D equivariant U-Net, enabling rich 3D features and fast inference. On the experimental side, the paper provides an extensive evaluation of HEP on 30 robotic manipulation tasks from RLBench and several real-world robot tasks. Supplementary Material: Parts E, F, G, and H Relation To Broader Scientific Literature: The paper is related to existing hierarchical methods in robotic manipulation such as Ma et al., 2024; Xian et al., 2023. This paper proposes an approach where the high-level agent predicts a keypose in the form of a coarse 3D location representing a subgoal of the task. This location is then used to construct the coordinate frame for the low-level policy. Essential References Not Discussed: Shao, Jianzhun, et al. "Pfrl: Pose-free reinforcement learning for 6d pose estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. Other Strengths And Weaknesses: • While HEP addresses translational equivariance thoroughly, the high-level Frame Transfer interface handles only translations (T(3)) and not rotations of the coordinate frame. The equivariance to rotations is incorporated in the network (policy) architecture for planar rotations (SO(2)), but the high-level does not explicitly output a desired orientation for the subgoal. This is noted by the authors as a limitation: currently the framework isn’t equipped to specify or leverage rotations in the subgoal, which could limit performance on tasks where orientation of the end-effector is critical. The paper suggests that extending Frame Transfer to include rotation is future work. Until that is realized, the method might struggle or require ad-hoc solutions in scenarios like screwing in a lightbulb or opening a door, where the goal orientation matters as much as position. 
• Another limitation is that the experiments are confined to tabletop robotic manipulation tasks. All tasks in the paper involve a fixed robotic arm interacting with objects on a table or in a small workspace. This leaves open the question of how well the approach scales to other domains (e.g., legged locomotion, navigation, or deformable object manipulation). The authors explicitly note that extending HEP to more complex robots like humanoids is a promising direction, but it remains to be seen if the current design would directly apply or if significant modifications are needed. For example, a humanoid might require a deeper hierarchy or different equivariances. Thus, the generality of HEP beyond manipulation isn’t proven. • HEP is trained via behavior cloning on demonstration data. While the approach is more sample-efficient than prior methods, it still fundamentally requires demonstration trajectories for each new task. This reliance on expert demos could be a bottleneck in scenarios where obtaining demonstrations is time-consuming or costly. As a minor point, hyperparameter sensitivity (e.g., how the choice of the interval m for high-level steps, or the number of diffusion steps, affects performance) is not deeply discussed – presumably these were tuned on a subset of tasks, but it’s not reported, leaving a bit of uncertainty on how robust the training setup is. • While the related work is thorough, there might be recent works on hierarchical RL or imitation (outside of diffusion models) that could be acknowledged for completeness. These are relatively minor and did not significantly hinder understanding or merit of the work. Other Comments Or Suggestions: No Questions For Authors: 1. HEP showed strong results even with very few demonstrations in some cases. Could the authors elaborate on what aspects of HEP enable such one-shot generalization? For example, is it largely the equivariant inductive bias that makes one demo cover many situations via symmetry? 2. 
How much does the hierarchical decomposition alone (even without equivariance) contribute versus a flat equivariant policy? 3. In real-world tests, some failures were due to misalignment from depth sensor noise. How might the system be made more robust to such errors? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We respond below: --- > "HEP addresses translational equivariance thoroughly... currently, the framework isn’t equipped to specify or leverage rotations... could limit performance on tasks where orientation is critical." Thank you for bringing this up! We agree that predicting orientation subgoals at the high level could potentially further improve sample efficiency. However, we would like to clarify that our current low-level agent already predicts SE(3) actions, enabling it to handle tasks requiring complex end-effector orientations, such as door closing, fridge opening, and pot cleaning. That being said, we are actively working on an extension of this paper where the high-level explicitly predicts goal orientations. We will thoroughly investigate and compare the advantages and disadvantages of these two orientation-handling strategies in our follow-up work. > "Experiments... confined to tabletop manipulation... scalability to other domains remains unclear." We appreciate the reviewer's insightful suggestion. Extending HEP to more complex robots, such as humanoids, is indeed a valuable direction for future research. We recognize that the generality of HEP in cross-embodied scenarios remains to be fully established. Nonetheless, we firmly believe that HEP effectively demonstrates two fundamental principles essential for efficient policy learning: special Euclidean group symmetry and hierarchical policy decomposition. These principles are expected to inform and inspire future developments across various robotic domains. > "HEP relies on demonstrations... could be a bottleneck... hyperparameter sensitivity was not thoroughly discussed." We appreciate the reviewer's valuable insight. We agree that dependence on expert demonstrations can be a limiting factor, especially given the time-consuming nature of obtaining demonstrations. 
While this study primarily explores behavior cloning, we believe future research could effectively extend HEP into reinforcement learning frameworks, thus eliminating the necessity for human demonstrations. Regarding hyperparameter sensitivity, we would like to clarify that our approach did not involve any hyperparameter tuning. Specifically, we directly adopted the high-level hyperparameters from Act3D and the low-level hyperparameters from Equivariant Diffusion Policy without any finetuning. We believe this choice underscores the robustness of our method. We will also make this explicit in the final version of the manuscript.

> "Related work is comprehensive... recent hierarchical RL or imitation learning works outside diffusion models should be acknowledged."

Thanks for your valuable advice. We added the following works to our related work:

[1] Wang, C, et al. "MimicPlay: Long-Horizon Imitation Learning by Watching Human Play." CoRL. PMLR, 2023.
[2] Luo, J, et al. "Multi-Stage Cable Routing through Hierarchical Imitation Learning." IEEE Transactions on Robotics, 2024.
[3] Triantafyllidis, E, et al. "Hybrid Hierarchical Learning for Solving Complex Sequential Tasks using the Robotic Manipulation Network (ROMAN)." Nature Machine Intelligence, 2023.
[4] Bagaria, A, et al. "Effectively Learning Initiation Sets in Hierarchical Reinforcement Learning." NeurIPS, 2023.
[5] Shao, J, et al. "Pfrl: Pose-free reinforcement learning for 6d pose estimation." CVPR. 2020

---

### Reviewer’s Questions:

> "HEP showed strong one-shot generalization... Is it largely due to equivariant inductive bias?"

Thank you for bringing this up. We believe the reason is that we use a 3D U-Net as our high level (which is T(3) equivariant), but most importantly, our frame-transfer interface passes the equivariance and generalization ability to our low level, leading to an improved generalization ability of the whole policy. 
> "Contribution of hierarchical decomposition alone (without equivariance) compared to flat equivariant policy?"

This is a great question. We conducted an ablation study on the hierarchical decomposition and, due to the character limit, included it in our response to Reviewer 3 (cFSf); please see Table 1 in that response for more details. As shown in the table, removing the hierarchical decomposition leads to a significant drop in performance, demonstrating the importance of a hierarchical structure, especially on long-horizon tasks.

> "Improving robustness to depth sensor noise?"

We thank the reviewer for raising this important point. While the high sample efficiency of our policy allows us to train directly on real-world data, enabling the policy to adapt to sensor noise to some degree, we agree that introducing additional noise during training (e.g., applying a dropout layer to randomly remove points) could further enhance the robustness of the system to sensor noise. We will explore this idea in future experiments. Thank you again for your insightful feedback!
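The frame-transfer idea discussed in this thread (the low level observing the scene relative to the high-level subgoal) can be illustrated in a few lines. This is a minimal sketch under the assumption of a pure-translation frame shift; `frame_transfer`, `pts`, `goal`, and `t` are hypothetical names for illustration, not the paper's code.

```python
import numpy as np

def frame_transfer(points, subgoal):
    """Express the observation in the local frame anchored at the
    high-level subgoal (a pure translation of the coordinate frame)."""
    return points - subgoal

rng = np.random.default_rng(1)
pts = rng.normal(size=(16, 3))     # hypothetical point-cloud observation
goal = np.array([0.2, -0.1, 0.3])  # hypothetical high-level subgoal
t = np.array([1.0, 2.0, -0.5])     # an arbitrary world-frame translation

# If the scene shifts by t, a T(3)-equivariant high level shifts its
# subgoal by t as well, so the low-level input is left unchanged:
assert np.allclose(frame_transfer(pts, goal), frame_transfer(pts + t, goal + t))
```

This is one way to see why an equivariant high level "passes" its generalization to the low level: the low-level policy never observes absolute world coordinates, only subgoal-relative ones.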
GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention
Accept (poster)
Summary: This paper proposes GHOST, a novel generalized one-shot FGL framework designed to address challenges related to generalizable capability and catastrophic forgetting. The method involves two main components: Dual-Level Aligned Proxy Model and Topology-Conscious Knowledge Retention. In the first component, each client constructs a proxy model that captures local feature and topological information. In the second component, the server aggregates the proxy models from all clients to form a global model. A Topology-Consistency Criterion is applied during global training to stabilize key parameters and retain important topological information, thus mitigating catastrophic forgetting. The framework also incorporates a privacy-preserving mechanism to secure client data. Experimental results demonstrate that GHOST outperforms existing methods and shows resilience to data heterogeneity. Claims And Evidence: This paper provides persuasive support for its claims. One-shot Federated Learning has emerged as an effective approach in environments with limited resources, and there indeed is a gap between One-shot FL and traditional FGL, especially in the generalized setting, as claimed in the paper. The problem-illustration figure clearly presents the phenomenon. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem. The Dual-Level Aligned Proxy Model and Topology-Conscious Knowledge Retention are both reasonable in the one-shot setting. The node classification task is widely used as a downstream task in FGL methods and is practically applicable. Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims in this paper. The dual-level alignment loss can indeed capture fine-grained knowledge, thus achieving a good alignment. Experimental Designs Or Analyses: I have checked the soundness of the experimental designs and analyses in this paper. 
In the experimental part of this paper, the authors conducted the node classification task on seven real-world datasets. The privacy-security part is meaningful and convincing to me. Moreover, the authors provide detailed analyses of the experimental results in Sec. 5.2. Supplementary Material: There is no supplementary material in this paper. Relation To Broader Scientific Literature: The proposed method pushes the boundaries of FGL by offering an innovative generalized one-shot approach, filling gaps left by previous studies and paving the way for more practical applications in FGL. Moreover, the Fused Gromov-Wasserstein distance [1] in the alignment part is novel and effective. [1] Vayer, T., Chapel, L., Flamary, R., Tavenard, R., & Courty, N. (2020). Fused Gromov-Wasserstein distance for structured objects. Algorithms, 13(9), 212. Essential References Not Discussed: There are no related works that are essential to understanding key contributions of the paper, but are not currently cited in the paper. Other Strengths And Weaknesses: - Strengths: 1. The proposed method of generalized one-shot FGL appears interesting and innovative. 2. The paper provides a clear and detailed explanation of the methodology, making the method easy to follow. 3. This paper considers the structural attributes of graph data, introducing the Topology-Consistency Criterion to mitigate catastrophic forgetting, which is an important addition to the field and is well-supported by both theory and experiments. 4. The inclusion of a privacy analysis is a strong point, particularly in FL where data privacy is crucial. The Center-Shifting method provides a practical solution to safeguard client data. - Weaknesses: 1. This paper does not provide implementation details for applying one-shot FL methods to graph data. Specifically, the one-shot baselines used in the experiments, such as FedCVAE and FedSD2C, rely on generative techniques designed for image data, which differs significantly from graph data. 
The paper does not address how graph data are generated in these methods. 2. How does the method perform under higher levels of data heterogeneity? For example, what happens when the Dirichlet distribution parameter α is set to lower values (e.g., α=0.01)? Other Comments Or Suggestions: I have no other comments or suggestions. Questions For Authors: See Weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ***Dear Reviewer TQDx:*** We greatly appreciate your positive feedback on our work, as well as the thoughtful concerns and questions you raised. We have carefully considered each of your comments and provided detailed responses. **Weakness:** **W1: Implementation details for applying one-shot FL methods to graph data.** We sincerely appreciate the reviewer’s suggestion to clarify the implementation details of applying one-shot FL methods to graph data. In our experiments, we follow the overall process of the compared one-shot FL methods, utilizing the encoders and decoders from the corresponding papers to generate graph data features. Additionally, we adopt the k-nearest-neighbor (KNN) method to construct the topological structure for the generated graph data. Specifically, in FedCVAE, we adhere to the Conditional Variational Autoencoder framework as used in the original paper. We input the training data and labels into the encoder to obtain representations in the latent space, which are then fed into the decoder to generate features. The KNN algorithm is subsequently applied to construct the topology. Similarly, in FedSD2C, we follow the original paper’s Autoencoder framework (including both the encoder and decoder) for generating graph data features and use the KNN algorithm for topology construction. By doing so, we retain the key processes and steps of the original baselines while reasonably adapting them to graph data. **W2: Performance under higher levels of data heterogeneity (e.g. $\alpha = 0.01$)** To demonstrate the performance of our method under higher data heterogeneity, we conduct experiments on the CiteSeer, and Coauthor-CS datasets with the Dirichlet distribution parameter $\alpha$ set to 0.01. We select five baselines for comparison: FedAVG, FedNova, FedPub, FedTAD and FedCVAE. The experimental results are shown in Table 1. 
*Table 1: Performance of GHOST under higher data heterogeneity ($\alpha = 0.01$) on CiteSeer and Coauthor-CS datasets.*

| **Methods** | **CiteSeer** | **Coauthor-CS** |
|---------------|--------------|-----------------|
| FedAVG | 23.15 | 11.09 |
| FedNova | 19.12 | 16.94 |
| FedPub | 20.71 | 11.28 |
| FedTAD | 21.21 | 11.19 |
| FedCVAE | 23.53 | 8.97 |
| **GHOST** | **25.32** | **17.61** |

From the table, we can observe that our method maintains the best performance even under higher data heterogeneity. For some datasets, when $\alpha$ is smaller, certain clients have very few nodes, which does not align with real-world scenarios. Therefore, we set $\alpha = 0.05$ in the experiments, as it ensures high data heterogeneity while avoiding overly extreme scenarios that do not reflect real-world conditions.

---

Rebuttal Comment 1.1: Comment: Thank you for the response. The additional experiments under higher data heterogeneity resolve my questions. I will maintain my positive score.

---

Reply to Comment 1.1.1: Comment: ***Dear Reviewer TQDx:*** Thank you for taking the time to revisit our work. We are pleased that our clarifications and additional results have addressed your questions. Your positive evaluation means a lot to us. Best Regards, Authors
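As a minimal sketch of the KNN topology-construction step the authors describe for adapting FedCVAE/FedSD2C to graphs: given decoder-generated node features, build a symmetric k-nearest-neighbor adjacency matrix. The function name `knn_adjacency` and the toy features are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_adjacency(features, k):
    """Symmetric k-nearest-neighbor adjacency matrix from node features,
    mirroring the KNN topology construction described in the rebuttal."""
    n = features.shape[0]
    # Pairwise squared Euclidean distances between feature vectors.
    diff = features[:, None, :] - features[None, :, :]
    d2 = (diff ** 2).sum(axis=-1).astype(float)
    np.fill_diagonal(d2, np.inf)           # a node is never its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :k]   # indices of the k closest nodes
    adj = np.zeros((n, n), dtype=int)
    adj[np.repeat(np.arange(n), k), nbrs.ravel()] = 1
    return np.maximum(adj, adj.T)          # symmetrize: undirected graph

# Toy usage: features a decoder might emit for 4 nodes in two clusters.
feats = np.array([[0.0], [0.1], [5.0], [5.1]])
adj = knn_adjacency(feats, k=1)
```

Symmetrizing with the elementwise maximum yields an undirected graph in which an edge exists whenever either endpoint selects the other as a nearest neighbor.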
Summary: This work focuses on issues of communication overhead, generalizability, and catastrophic forgetting, and proposes a one-shot approach where each client constructs a proxy model. After the alignment, these models are then uploaded to the server to train a global model in an ensemble manner. This work also adopts topology knowledge retention to mitigate catastrophic forgetting. Privacy protection is also a key component in this work, ensuring the security of client data. Experimental results indicate that GHOST outperforms existing methods, showing robust performance in the face of data heterogeneity. Claims And Evidence: This work identifies the challenges of poor generalization ability and catastrophic forgetting and provides clear evidence for them in the introduction. Methods And Evaluation Criteria: The framework figure (Figure 2) gives a clear and vivid elaboration of the overall method. All the modules in the work are closely related to the identified problems. The benchmark datasets are highly relevant to the problem as well. Theoretical Claims: Both the proxy-model alignment part and the topological knowledge retention part are supported by rigorous analyses. The identification of parameters crucial for capturing topological knowledge is interesting, where the topological attributes at the hidden layers are fully explored and captured. I believe topological information is of great significance and is worth highlighting in graph learning. Experimental Designs Or Analyses: The experimental designs and analyses in this work are sound to me. Performance on seven datasets of various scales shows the superiority of this work. The ablation study and sensitivity study are sufficient, providing strong evidence for the contribution of each module. In Appendix D, the authors conduct an experiment with different client scales, showing the resilience of this work. 
However, I have one issue about the results, which I describe in detail in the **Questions For Authors** part. Supplementary Material: No supplementary material in this work. Relation To Broader Scientific Literature: GHOST represents a novel step forward in the field of FGL by presenting a unique one-shot approach. It overcomes some of the key limitations found in previous studies, creating new avenues for future research. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. Novel and well-motivated. This work points out the issue of communication overhead and innovatively bridges the gap between FGL and the one-shot setting. With efficient aligning and knowledge retention, this work effectively addresses the problems left by previous studies. 2. Easy to follow. The paper is well-written with a detailed and exquisite framework visualization. Moreover, the precise descriptions of the modules in the methodology make the overall work clear and easy to follow. 3. Topology-aware. The topology knowledge retention module is meaningful. Graph data differ significantly from image data due to their structural attributes in non-Euclidean space. This work highlights the importance of graph structure and strives to consolidate topological knowledge during training, which makes sense. Weaknesses: 1. Lack of further explanation of the pseudo labels: This work assumes that the pseudo labels have a uniform distribution across all classes. This assumption could be unrealistic in some real-world datasets, where the class distributions might be skewed. 2. Lack of further sensitivity studies: The work lacks an analysis of model performance with different local training epochs for each proxy model. 3. Lack of training details: The work lacks details of the local training epochs for the other traditional FL/FGL or one-shot FL baselines in the experimental section. Other Comments Or Suggestions: None. 
Questions For Authors: Model performance with varying client numbers: I noticed the experiment in Appendix D, which evaluates the model's performance with different client scales. Why does the model perform better with more clients on some datasets (e.g., Coauthor-CS)? With 5 clients, each client should have more training data compared to when there are 10 clients, so why does the model perform worse under the former condition (24.99 for 5 clients vs. 29.91 for 10 clients)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ***Dear Reviewer 9vCa:*** We sincerely appreciate the time and effort you have dedicated to reviewing our paper. We hope that the detailed responses provided below will effectively address your concerns and offer the necessary clarifications.

**Weakness:**

**W1: Lack of further explanation of the pseudo labels.**

We appreciate the reviewer’s valuable comment regarding the assumption of a uniform distribution for the pseudo labels. While we acknowledge that real-world datasets may exhibit skewed class distributions, we believe that the uniform distribution assumption provides several benefits in our context. Firstly, adopting a uniform distribution avoids the need for clients to share class label distributions, which significantly mitigates the risk of data privacy leakage. This aligns with the principles of privacy preservation in Federated Graph Learning, ensuring that sensitive label information is not exposed during the training process. Secondly, the use of a uniform distribution ensures that data from all classes are taken into account, including minor or underrepresented classes. This prevents the model from neglecting the features and information from minority-class data, thus promoting a more balanced learning process. We believe that this design choice strikes a good balance between privacy preservation and model performance, and the effectiveness of this approach is supported by the results presented in the paper.

**W2: Lack of further sensitivity studies on different local training epochs.**

To demonstrate the robustness of our method under different local training epochs, we select the Cora and CiteSeer datasets and vary the local training epochs over [90, 110] with a step size of 5. Experimental results are shown in Table 1 and Table 2. 
*Table 1: Different local training epochs on the Cora dataset.*

| Epochs | 90 | 95 | 100 | 105 | 110 | Avg |
|-----------|-------|-------|-------|-------|-------|-------|
| FedAVG | 30.52 | 30.55 | 30.61 | 30.62 | 30.61 | 30.58 |
| FedTAD | 30.09 | 30.21 | 30.43 | 31.15 | 30.43 | 30.46 |
| FedCVAE | 26.86 | 21.72 | 30.89 | 29.61 | 32.36 | 28.29 |
| **GHOST** | **46.93** | **47.39** | **50.41** | **47.02** | **47.57** | **47.86** |

*Table 2: Different local training epochs on the CiteSeer dataset.*

| Epochs | 90 | 95 | 100 | 105 | 110 | Avg |
|-----------|-------|-------|-------|-------|-------|-------|
| FedAVG | 32.73 | 33.03 | 32.88 | 32.36 | 32.66 | 32.73 |
| FedTAD | 20.52 | 23.30 | 33.86 | 23.75 | 25.62 | 24.72 |
| FedCVAE | 27.19 | 33.56 | 34.76 | 32.13 | 24.19 | 30.37 |
| **GHOST** | **40.90** | **40.52** | **37.75** | **40.62** | **37.68** | **39.49** |

From the tables, we can observe that our method maintains strong performance across different epochs, demonstrating its stability under varying epoch settings.

**W3: Lack of details of training epochs in other baselines**

For all the baselines compared in the experimental section, we uniformly set the local training epochs to 100 to ensure sufficient convergence of the local models. In the one-shot setting, too few local training epochs would prevent the model from adequately learning the local data, making the comparison unfair. Conversely, too many epochs could lead to overfitting to local data, resulting in degraded generalization performance.

**Questions**

**Q1: Model performance with varying client numbers**

**A1:** We appreciate the insightful question regarding model performance under varying client numbers. The observed performance improvement with a higher number of clients, particularly in datasets with high data heterogeneity (e.g., Coauthor-CS), can be attributed to the impact of distributional bias among clients. When the number of clients is small, each client tends to hold a larger amount of data. 
However, in highly heterogeneous settings, this also means that the data distributions across clients can be significantly different, leading to greater heterogeneity in both data volume and feature distribution. Such imbalances can make it more challenging for the global model to effectively capture a generalizable representation, resulting in degraded performance. Conversely, increasing the number of clients leads to a finer-grained partitioning of the data, which can help mitigate extreme distribution shifts among clients. This, in turn, allows the global model to better learn shared patterns across clients, improving overall performance. Additionally, different datasets have their own unique characteristics in terms of data features and structure. As a result, the effect of varying client numbers may manifest differently across datasets, leading to dataset-specific trends in performance. We hope this clarifies the phenomenon and appreciate the opportunity to further discuss our findings. --- Rebuttal Comment 1.1: Comment: I have read the authors' responses. The explanations resolve all my previous confusion, and I've checked other reviewers' feedback as well. I will raise my score and vote for acceptance. --- Reply to Comment 1.1.1: Comment: ***Dear Reviewer 9vCa:*** Thank you for your thoughtful feedback and for taking the time to review our responses. We truly appreciate your constructive insights and your willingness to engage with our clarifications. Your support and recognition of our work mean a lot to us. Best Regards, Authors
Summary: This study tackles challenges such as communication overhead, limited generalization ability, and catastrophic forgetting. It introduces a one-shot strategy where clients independently build proxy models, which are later aligned at both the feature and structural levels and then aggregated on the server to form a global model. To address catastrophic forgetting, the approach incorporates topology-aware knowledge retention. Additionally, this study conducts adequate experiments and addresses privacy and security concerns.

Claims And Evidence: Claims made in this study are supported by clear and convincing evidence. This study precisely identifies the limitations of existing FGL methods and addresses these issues with an effective and convincing approach (i.e. the “Dual-Level Alignment” module).

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the identified problems. Two key modules (the “Dual-Level Alignment” module and the “Topology Knowledge Retention” module) are both highly related to those problems.

Theoretical Claims: I have checked all the theoretical claims.

Experimental Designs Or Analyses: I have checked the experimental designs and analyses. The authors have conducted the node classification task on graphs of various scales and make comprehensive comparisons with traditional FL/FGL and one-shot FL methods. Moreover, the ablation study and the sensitivity study both strengthen the validity of this study. However, I think the study would be stronger if more FGL/one-shot FL methods were compared in the experimental part.

Supplementary Material: This study has no supplementary material.

Relation To Broader Scientific Literature: This study proposes a novel one-shot FGL approach, effectively tackling the identified issues such as limited generalization ability and catastrophic forgetting. The study is meaningful and raises awareness of the communication overhead problem in FGL systems.
I believe one-shot FGL is a promising solution.

Essential References Not Discussed: Essential related works are comprehensively cited in this study.

Other Strengths And Weaknesses:

Strengths:

S1: It is novel and meaningful to propose a one-shot FGL approach. This study precisely identifies the challenges of existing FGL methods and tackles them effectively. The overall approach is well-motivated, and the motivation is also explained with clarity.

S2: The figures illustrating the problem and framework are clear and detailed, and the equations and explanations are reasonable. I believe this is a comprehensive framework for graph data.

S3: The proposed approach flows seamlessly throughout the study. Both key modules and their components are innovative and coherent. The Dual-Level Aligned Proxy Model effectively captures the feature-structural knowledge, and the Topology Knowledge Retention module then integrates the diverse information against data heterogeneity while mitigating catastrophic forgetting.

Weaknesses:

W1: This study uses many notations. To improve clarity, the authors should provide a summary table of the commonly used notations and their definitions.

W2: The authors compare the proposed approach with several traditional FL/FGL and one-shot FGL methods, but additional comparisons with more FGL and one-shot FL methods could strengthen this study.

W3: Equation 4 is designed to align the label distributions of the true and pseudo data, and Equation 5 is designed to align the feature distributions of the true and pseudo data. Essentially they serve the same purpose, so the necessity and motivation for using them simultaneously need to be made clear.

W4: As shown in Equation 6, the optimization objective of fused GW-OT is the transport matrix T. How does Equation 6 serve as a loss and yield gradients on the pseudo feature matrix?

W5: Equation 9 is very confusing; I can’t clearly understand its meaning.
It’s suggested to add more explanations about Equation 9. Other Comments Or Suggestions: The authors should conduct comparison experiments with more traditional FGL and one-shot FL methods. Questions For Authors: Q1: In Section 4.3, the authors propose the Topology-Conscious Knowledge Retention module and compute the Topology-Consistency Criterion to identify crucial parameters for capturing structural knowledge. Can you provide a more detailed explanation of how the Topology-Consistency Criterion reflects the importance of each parameter in learning topological information? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ***Dear Reviewer AgVq:***

Thank you for your thoughtful review and for highlighting important concerns. Below, we provide detailed responses to clarify our proposed approach.

**Weaknesses & Questions:**

**W1: Notation clarity and summarization.**

We sincerely appreciate the reviewer’s valuable suggestion regarding notation clarity. To enhance readability, we will incorporate a summary table of commonly used notations and their definitions in a future version of our work. Thank you for your insightful feedback.

**W2 & Suggestions: Additional comparisons with more FGL and one-shot FL methods.**

Thank you for your valuable suggestion. To further enhance our study, we have conducted experiments incorporating **FGSSL** (FGL) and **FENS** (one-shot FL). The results on six benchmark datasets are presented in Table 1.

*Table 1: Performance comparison including additional FGL and one-shot FL methods on six benchmark datasets.*

| Method | Cora | CiteSeer | Pubmed | Amz-Photo | Coauthor-CS | Ogbn-Arxiv |
|--------|------|----------|--------|-----------|-------------|------------|
| FGSSL | 30.25 | 21.95 | 39.68 | 13.06 | 22.44 | 9.24 |
| FENS | 31.43 | 20.97 | 49.07 | 25.30 | 22.54 | 13.09 |
| **GHOST** | **50.41** | **37.75** | **58.87** | **37.22** | **29.91** | **17.24** |

From the results, we observe that **GHOST** continues to achieve the best performance, further demonstrating its robustness and effectiveness. We greatly appreciate your insightful suggestion, as it has helped reinforce the comprehensiveness of our study.

**W3: Necessity and motivation for using both Divergence Loss (Eq. 4) and Dispersion Loss (Eq. 5).**

Thank you for raising this point. Although both losses act on the feature representations, they capture complementary aspects. The Divergence Loss minimizes the statistical discrepancy between the pseudo and real features, ensuring that overall properties (spread, variance, density) match.
In contrast, the Dispersion Loss preserves the angular (directional) relationships among features, maintaining the geometric configuration. This helps maintain the intrinsic relational patterns among the features regardless of their magnitudes. Together, they provide a comprehensive alignment, as confirmed by our ablation study (Sec. 5.4). **W4: Clarification on Equation 6 as a loss function and its gradient flow.** Thank you for this important question. Although Equation 6 formulates the FGW loss in terms of the transport plan $\mathbf{\Gamma}$, the loss function is inherently a function of both the graph structures and the feature matrices of the real and pseudo graphs. In practice, we use a differentiable optimal transport solver (implemented with a conditional gradient method with optional Armijo line-search) to obtain $\mathbf{\Gamma}$. Consequently, even though $\mathbf{\Gamma}$ is the immediate optimization variable in Equation 6, the overall FGW loss is differentiable with respect to $\hat{\mathbf{X}}$ (and $\hat{\mathbf{A}}$), allowing gradients to flow back to these pseudo graph parameters. Thus, by minimizing the FGW loss, the model not only optimizes the transport plan for aligning the local and pseudo graphs but also adjusts the pseudo feature matrix to reduce the discrepancy in both feature and structural domains. This gradient propagation through the differentiable OT solver ensures that the pseudo feature matrix is updated in a manner that preserves the intrinsic topological relationships while matching the local statistical properties. **W5 & Q1: Explanation of Equation 9 (Topology-Consistency Criterion).** Thank you for highlighting the need for clarification. The Topology-Consistency Criterion quantifies how much each global GNN parameter contributes to preserving learned topological structures. 
We first compute a Coherence Factor $\gamma_{ij}^l$ for connected nodes, which quantifies the similarity of their hidden representations after a transformation by the corresponding layer’s weight matrix $\boldsymbol{\theta}_{\boldsymbol{w}}^l$. Aggregating these values across nodes yields a global topological consistency measure for layer $l$. In Equation 9, the norm of the derivative for each parameter reflects its **“sensitivity”: a higher norm indicates that even a small change in that parameter would cause a significant alteration in the coherence.** In other words, parameters with high sensitivity are deemed more important for capturing the intrinsic topological structure across the diverse graphs in our integrated knowledge set. By computing and then consolidating these sensitivity measures across layers, we obtain a comprehensive criterion $\mathcal{T}^k$ that identifies which parameters are crucial for preserving structural knowledge. This process helps in selectively retaining important parameters during the global training phase, mitigating the risk of catastrophic forgetting.
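To make this concrete, below is a minimal numpy sketch of the sensitivity computation. Note that it uses a simplified dot-product coherence in place of the exact Coherence Factor of Equation 9, and all function names, dimensions, and values are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def coherence(W, H, edges):
    """Simplified Coherence Factor: for each edge (i, j), the dot-product
    similarity of the transformed hidden representations (W h_i) . (W h_j),
    summed over all edges."""
    Z = H @ W.T                       # row i holds (W h_i)
    return sum(Z[i] @ Z[j] for i, j in edges)

def sensitivity(W, H, edges):
    """Per-parameter sensitivity: magnitude of the gradient of the coherence
    w.r.t. W.  For the dot-product coherence above, the gradient is
    sum over edges (i, j) of  W (h_i h_j^T + h_j h_i^T)."""
    G = np.zeros_like(W)
    for i, j in edges:
        hi, hj = H[i][:, None], H[j][:, None]
        G += W @ (hi @ hj.T + hj @ hi.T)
    return np.abs(G)                  # larger entry => more crucial for topology

# Toy example: 4 nodes with hidden dimension 3, a 2x3 layer weight, 3 edges.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))           # hidden representations, one row per node
W = rng.normal(size=(2, 3))           # layer weight matrix theta_w
edges = [(0, 1), (1, 2), (2, 3)]
T_crit = sensitivity(W, H, edges)     # same shape as W
```

Parameters with large entries in `T_crit` are the ones whose perturbation would most alter the coherence, and would therefore be selectively retained during the global training phase.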
Adaptive Data Collection for Robust Learning Across Multiple Distributions
Accept (poster)
Summary: This paper considers a multi-round decentralized training method with a limited annotation budget to label data from multiple distributions at various locations.

Claims And Evidence: Yes, the claims have theoretical proofs besides empirical experiments.

Methods And Evaluation Criteria: Yes, the paper shows results in multiple domains, such as classification tasks.

Theoretical Claims: The paper mainly leverages existing results from bandit algorithms.

Experimental Designs Or Analyses: Yes, I checked all the results and raised my question in the "questions" section.

Supplementary Material: I scanned through the proofs and the additional results.

Relation To Broader Scientific Literature: This paper is an add-on to existing bandit and decentralized data collection problems.

Essential References Not Discussed: In other application domains, there is a line of work on decentralized data collection with theoretical guarantees on the optimality gap. Some use optimization-based methods, and some use submodular methods, which have fairly straightforward proofs on the optimality gap. Please compare them in the paper and add them as baselines.

[1] When data acquisition meets data analytics: A distributed active learning framework for optimal budgeted mobile crowdsensing. Qiang Xu; Rong Zheng, INFOCOM 2017
[2] Decentralized Data Collection for Robotic Fleet Learning: A Game-Theoretic Approach. Oguzhan Akcin, Po-han Li, Shubhankar Agarwal, Sandeep P. Chinchali, CoRL 2022
[3] Fleet Active Learning: A Submodular Maximization Approach. Oguzhan Akcin, Orhan Unuvar, Onat Ure, Sandeep P. Chinchali, CoRL 2023
[4] Distributed Submodular Maximization. Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause. JMLR 2016

Other Strengths And Weaknesses:

# Strengths
1. The proposed method can be combined with other existing bandit algorithms.
2. The paper clearly states the assumptions and the claims.

# Weaknesses
1.
The IID assumption (Assumption 2.6) is not entirely realistic, since in the real world data distributions are correlated when locations are close enough (following the example in the intro). Please justify this.
2. The paper mainly leverages existing results on bandits for its data collection problem.

Other Comments Or Suggestions: N/A.

Questions For Authors:
1. In Table 1, UCB-OGD is not actually on par with DBAL, since the mean accuracy of DBAL is much better. What, then, is the point of proposing UCB-OGD? Maybe try them on a more complicated dataset (ImageNet) to demonstrate their capability.
2. The definition of data sources is strongly related to the core definition of a task. In the TestBed experiment, the cameras are correlated in ways we do not even know, and the algorithms still work. This adds complexity to the setting and is worth investigating. Can you do the same with the other two experiments, such that the objects appearing in the data sources are correlated, and conduct a performance analysis on them? This may also reflect on the assumptions made earlier.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer WqSu, Thank you for your questions. We would like to further clarify the problem and our contributions.

**Problem Definition and Connection to Decentralized Data Collection:** The adaptive data collection problem is indeed related to many topics in active learning (AL), including the suggested line of work on decentralized data collection. In particular, the theoretical guarantees of decentralized data collection are often achieved by leveraging *predefined* data quality metrics, such as information functions [1], submodular functions [2,4], or a target data distribution [3]. Many centralized AL algorithms (e.g. uncertainty- or entropy-based sampling) share similar intuitions by selecting the most relevant data to annotate using such metrics, which we compare as baselines in *Section 5*. However, the robustness of real-world applications is often measured in terms of their worst-case performance rather than the quality of the data itself. Therefore, it is more common in robust learning to directly minimize the worst-case loss, which is the *minimax objective* employed by our paper. We further discuss the advantages of the proposed framework in the following paragraph. We will also make sure to add the suggested references to the related work of our paper.

**Advantage of The Proposed Framework (question 1):** One of the major advantages of the proposed framework is its flexibility: it applies to a wide range of different tasks without relying on predefined data quality metrics (as in decentralized data collection). This is especially useful in data-scarce scenarios where a target dataset is absent, or in complex tasks (e.g. object detection or VLM training) where the data quality function is hard to define.
Although numerous AL algorithms exist for specific tasks such as classification and detection, to the best of our knowledge, there isn't a general framework of adaptive data collection with *provable minimax regret bounds* that incorporates DL models. In fact, we demonstrated in the experiments (*Section 5*) that the same algorithmic framework of UCB-OGD can be utilized to train robust DL models for classification, object detection, and vision-language modeling (VLM), for which we have provided experimental results in *[Figure 5](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig5-alcurve-vqav2.pdf)*. We kindly refer to our response to reviewer 7Pps for further descriptions of these results. Therefore, although UCB-OGD shows lower mean accuracy than AL on the CIFAR10 dataset, it achieves similar minimum accuracy (which is our main performance metric), while also yielding superior performance in object detection and VLM tasks (*Table 1* and new *[Figure 4](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig4-alcurve-voc2012.pdf)*).

***I.I.D.* Assumption (weakness 1, question 2):** In *Assumption 2.6*, independence is only assumed *within* each data source, i.e. we assume that the images collected from a single camera are *i.i.d.*, while no assumptions are made with respect to the relationships between the data distributions of different cameras. This is the standard assumption of bandit problems, where the per-arm rewards are required to be *i.i.d.*, but different reward distributions can be arbitrary. In the smart city intersection example, *Assumption 2.6* states that the images from different cameras can be correlated *spatially* (when the locations are close) but not *temporally*. This is the case for the testbed dataset, since the images are sporadically collected during an extended period of time with minimal temporal dependence.
We also note that the data sources defined in the VOC2012 experiment indeed admit correlations due to overlapping object classes (details in *Section 5*). **Our Novelty and Contributions (weakness 2):** While OGD and UCB are both established algorithms, we believe the combination of these techniques for the studied problem is novel. We want to emphasize that UCB has been proposed for *stationary stochastic bandits*, while our problem is a *non-stationary contextual bandit* where vanilla UCB is not applicable (details in *Section 4*). In this paper, we prove that, by maintaining an adaptively collected dataset and leveraging its information, UCB (and other stationary bandit algorithms like $\epsilon$-Greedy) can be combined with online optimization algorithms such as OGD to provide *provable minimax regret bounds* despite the non-stationary nature of our problem. We further note that the proposed framework can be incorporated with various bandit and optimization algorithms tailored to different problems at hand. We believe that our paper shows that adaptive data collection with robust theoretical guarantees can be a promising approach applicable to a wide range of tasks. We appreciate your suggestions and hope that the answers above could address your concerns. We welcome any further questions or feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I have increased the score to reflect the additional experiments and clarifications of the authors.
Summary: This paper proposes a framework for adaptive data collection aimed at robust learning in multi-distribution scenarios under a fixed data collection budget. The proposed algorithm dynamically selects a data source for sampling in each round, updates the model parameters using gradient descent, and repeats the process to minimize the expected loss across all data sources. While the paper addresses an interesting problem and provides theoretical guarantees, the lack of significant technical contribution, unclear motivations, and questionable performance metric design weaken its overall impact. The authors should provide stronger justification for their approach, clarify the baseline comparisons, and consider tailoring their algorithm to better address the specific challenges of adaptive data collection in multi-distribution scenarios. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: No Supplementary Material: No Relation To Broader Scientific Literature: The paper tackles an important and practical problem—adaptive data collection for robust learning across multiple distributions. This topic is relevant to applications in domains where data is collected from diverse sources with limited budgets. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: This paper tackles an interesting problem and provides theoretical guarantees. Weaknesses: - The proposed method appears to be a direct application of existing approaches—minimizing the maximum loss via UCB sampling combined with OGD. This makes the novelty and technical contribution of the paper seem limited, as no significant algorithmic innovation or optimization is introduced. - The paper does not clearly articulate why adaptive data collection is necessary or advantageous in this specific context. 
The problem studied seems to deviate from the original objective of minimizing the expected loss across multiple distributions.
- The algorithm relies on standard stochastic gradient descent without any specific optimization tailored to the adaptive data collection problem in multi-distribution scenarios. This further limits the novelty of the proposed approach.

Other Comments Or Suggestions: See my questions.

Questions For Authors:
1. What is the motivation for considering adaptive data collection in this specific setting? How does this approach improve over simpler methods that do not rely on dynamic sample selection?
2. The regret is defined based on $T\min_\theta \max_k \mu_k(\theta)$. However, if we do not employ dynamic sample selection, the baseline should be $T\min_\theta \frac{1}{K}\sum_k \mu_k(\theta)$. It seems that comparing with this baseline could better demonstrate the advantage of your adaptive data allocation.
3. Uniform sampling from all distributions seems to overlook their varying importance. How does the method take into account the relative significance of different distributions in the overall objective function?
4. The algorithm appears to be a straightforward integration of UCB and OGD. Are there any specific innovations or adaptations made to optimize the method for robust learning across multiple distributions?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer p2BW, Thank you for your questions. We would like to further clarify the intuitions of the problem and our contributions.

**Necessity of Adaptive Data Collection (Q1):** In our motivating example of vehicle detection in a smart city intersection (*Section 1*), DL models such as SSD and YOLO require a large amount of annotated data for training. However, annotation for complex tasks such as object detection can be quite expensive, and is often subject to budget or resource constraints in a real-world application like ours. Therefore, it is essential to understand how the limited data budget can be utilized *efficiently* to maximize the performance. This is also the core idea of active learning (AL), which we add as baselines for experimental comparisons. The improvement of the proposed algorithms (Eps-OGD and UCB-OGD) over static (or uniform) sample selection (Rand-OGD) is two-fold:
- *Data Efficiency:* Models trained on adaptively collected data are able to achieve better performance using the same amount of annotated samples. This is shown in Table 1 and illustrated by the additional results we provided in an *[anonymous Github](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F)*. We kindly refer to our response to reviewer 7Pps for further descriptions of these results.
- *Robustness:* Adaptive data collection is especially suited for distributionally robust learning (minimax objective). In terms of our main metric, the minimum class-wise accuracies for classification and the minimum mAP of each data source for detection, the advantage of the proposed algorithms (and other AL baselines) is significant over Rand-OGD, which uses uniform sampling.

**Motivation of Minimax Objective (Q2):** One of the main reasons we consider the minimax objective is that many real-world applications (including our smart intersection example) are *safety-critical*, i.e.
we are more concerned with the *worst-case* detection performance among all cameras instead of their average performance. This is essential because the quality of the data and the model accuracy can be largely affected by the specific camera hardware or its geometric location. This motivates our choice of a *minimax objective*, which is standard in robust learning literature. In this context, it is natural to minimize the worst-case loss among all data sources, i.e. $\min_\theta\max_k\mu_k(\theta)$, rather than focusing on the mean loss. In fact, as we show in *Theorem A.4*, algorithms that uniformly collect samples from all data sources (such as Rand-OGD) eventually converge to the optimum of the *mean objective*, i.e. $\min_\theta\frac{1}{K}\sum_{k=1}^K\mu_k(\theta)$, under the same assumptions stated in the paper. We also note that many active learning baselines are also more focused on the mean objective rather than the minimax objective studied by our problem. We believe our proposed framework can fill this important gap of building robust real-world applications under a limited data budget. **Accounting for Distributional Heterogeneity (Q3):** It is indeed the case that a uniform sampling scheme overlooks the heterogeneous nature of the loss distributions of different data sources, which is why Rand-OGD is only suited for the mean objective. On the other hand, the proposed algorithms like UCB-OGD and Eps-OGD incorporate bandit sampling, which learns the importance of different data distributions from the collected samples in the training set. This provides powerful information about which data source is suffering from worse performance and guides the algorithm to obtain more training samples from it. In this way, algorithms like UCB-OGD and Eps-OGD are able to behave *robustly* and eventually converge to the optimum of the minimax objective (*Theorem 3.1*). 
**Our Novelty and Contributions (Q4):** While OGD and UCB are both established algorithms, we believe the combination of these techniques for the studied problem is novel. We want to emphasize that UCB has been proposed for *stationary stochastic bandits*, while our problem is a *non-stationary contextual bandit* where vanilla UCB is not applicable (details in Section 4). In this paper, we prove that, by maintaining an adaptively collected dataset and leveraging its information, UCB (and other stationary bandit algorithms like $\epsilon$-Greedy) can be combined with online optimization algorithms such as OGD to provide *provable minimax regret bounds* despite the non-stationary nature of our problem. We further note that the proposed framework can be easily extended to various bandit and optimization algorithms for different problems at hand. We believe that our paper shows that adaptive data collection with robust theoretical guarantees can be a promising approach applicable to a wide range of tasks. We appreciate your insights and hope that the answers above could address your concerns. We welcome any further questions or feedback.
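To illustrate how the bandit sampler and the online optimizer interact, the following is a minimal, self-contained numpy sketch of a UCB-OGD-style loop on a toy problem (a scalar model with quadratic per-source losses). The step size, confidence width, source distributions, and the `draw_sample` helper are illustrative assumptions, not the actual algorithm configuration or guarantees from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.array([-2.0, 0.0, 3.0])    # K = 3 data sources
K, T, eta = len(centers), 3000, 0.05    # sources, annotation budget, OGD step size

def draw_sample(k):
    """Collect and annotate one fresh sample from source k (x ~ N(c_k, 0.1))."""
    return centers[k] + rng.normal(scale=0.1)

theta = 0.0                              # scalar model; loss(theta, x) = (theta - x)^2
data = [[] for _ in range(K)]            # adaptively collected dataset per source

for t in range(T):
    if t < K:
        k = t                            # seed each source with one sample
    else:
        # UCB over the current model's empirical loss on each source's
        # collected data: target the source that may be performing worst.
        means = np.array([np.mean([(theta - x) ** 2 for x in d]) for d in data])
        bonus = np.sqrt(2 * np.log(t + 1) / np.array([len(d) for d in data]))
        k = int(np.argmax(means + bonus))
    x = draw_sample(k)
    data[k].append(x)
    theta -= eta * 2 * (theta - x)       # OGD step on the newly annotated sample

worst_loss = max((theta - c) ** 2 for c in centers)   # minimax objective value
```

With these toy sources, the minimax optimum sits midway between the two extreme centers, and the sampler concentrates its budget on the two hard sources while largely ignoring the easy middle one. This is the qualitative behavior (robust convergence to the minimax optimum rather than the mean optimum) that the minimax regret bound captures.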
Summary: This paper proposes a framework for adaptive data collection and model training considering multiple data distributions under a fixed annotation budget, where the goal is to come up with an optimized model that can perform well on all distributions. Through the integration of the upper-confidence bound (UCB) for effective sample collection and online gradient descent (OGD) for model optimization, the paper proposes a theoretically guaranteed approach called UCB-OGD that shows superior performance on benchmarks compared to existing active learning and random sampling strategies.

Claims And Evidence:
1. This paper lacks key baselines in the active learning domain [1,2]. The authors should compare their approach with these baselines in the evaluation. If a direct comparison is not feasible, they should at least discuss these methods in the related works section. Additionally, the authors should explore a sampling strategy based on evidential learning uncertainty and report the performance improvements achieved over this approach [3]. To facilitate comparison, they could consider a pool consisting of samples from all data sources and select samples with the highest uncertainty based on evidential learning uncertainty from that pool.
2. The pure entropy-based sampling strategy is not included in the comparison. The authors should include its performance in Table 1. To select samples, they may consider using a single pool of samples from all the data sources under consideration.
3. The evaluation is limited to computer vision tasks. To increase the impact of this work, the authors should consider natural language tasks (e.g., from the VLM space [4]) and demonstrate how the proposed technique reduces the number of samples from each data source while maintaining comparable performance to using all samples from all data sources.

**References**
1. Ash et al. “Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds”. ICLR2020
2. Kirsch et al.
“BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning”. NeurIPS2019
3. Sensoy et al. “Evidential Deep Learning to Quantify Classification Uncertainty”. NIPS2018
4. Liu et al. “Visual Instruction Tuning”. NeurIPS2023.

Methods And Evaluation Criteria:

**Proposed Methodology**: Although the authors provide a detailed theoretical proof for the proposed technique, its novelty is limited. Upper-confidence bound (UCB) and online gradient descent (OGD) are well-established concepts in machine learning, and this paper primarily adapts these techniques for active learning. Specifically, it employs UCB-based uncertainty to guide data source selection.

**Evaluation Criteria**:

Strength: This paper has considered multiple computer vision tasks, including classification and object detection, with multiple datasets.

Weakness: To enhance the impact of this work, the authors should consider natural language tasks (e.g., from the VLM space [4]) and showcase how this technique reduces the number of samples needed from each data source while achieving performance comparable to using all samples from all data sources.

Theoretical Claims: I have checked the correctness of the theoretical claims made in the main paper as well as the corresponding proofs in the supplementary materials. I don't see any issue in the theoretical claims.

Experimental Designs Or Analyses: Yes, I have reviewed the experimental designs and analysis in the main paper (Experimental Results section). The dataset design and analysis appear valid. However, my concerns are as follows: (a) the paper lacks important active learning baselines, (b) the evaluation is limited to computer vision tasks, (c) entropy-based sample selection results are missing from the table, and (d) the active learning (AL) curve is absent.
Including an AL curve, with performance (accuracy) on the y-axis and the number of training samples on the x-axis, would provide a clearer comparison of different baselines, including the proposed method, and better illustrate its effectiveness.

Supplementary Material: Supplementary material is focused mostly on the theoretical proofs, implementation details, and additional results. I have reviewed all of them.

Relation To Broader Scientific Literature: In the field of active learning, selecting informative samples within a fixed annotation budget is crucial, especially in critical domains like healthcare and road traffic. Unlike prior work, this paper introduces a novel upper-confidence bound (UCB)-based approach for selecting data distributions to enable effective sampling within the given budget. Additionally, the paper provides extensive theoretical analysis to demonstrate how the proposed technique outperforms existing methods. Furthermore, in addition to standard benchmarks, the paper evaluates the approach on a real-world urban intersection dataset, which could serve as an important testbed for future active learning techniques.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: I have reiterated Strengths and Weaknesses as follows:

**Strengths**:
1. This paper presents a comprehensive theoretical proof demonstrating the effectiveness of the proposed technique in achieving strong performance across K data sources with effective minimax regret across these sources.
2. The experiments are conducted across multiple tasks, including classification and object detection.
3. Demonstrates the effectiveness of the proposed technique by evaluating its performance on real-world vehicle detection at urban intersections.

**Weaknesses**:
1. This paper lacks key baselines in the active learning domain [1,2]. The authors should compare their approach with these baselines in the evaluation.
If a direct comparison is not feasible, they should at least discuss these methods in the related works section. Additionally, the authors should explore a sampling strategy based on evidential learning uncertainty and report the performance improvements achieved over this approach [3]. To facilitate comparison, they could consider a pool consisting of samples from all data sources and select samples with the highest uncertainty based on evidential learning uncertainty from that pool. 2. Although the authors provide a detailed theoretical proof for the proposed technique, its novelty is limited. Upper-confidence bound (UCB) and online gradient descent (OGD) are well-established concepts in machine learning, and this paper primarily adapts these techniques for active learning. Specifically, it employs UCB-based uncertainty to guide data source selection. 3. The pure entropy-based sampling strategy is not included in the comparison. The authors should include its performance in Table 1. To select samples, they may consider using a single pool of samples from all the data sources under consideration. 4. The authors should consider providing an active learning (AL) curve, with the y-axis representing performance (accuracy) and the x-axis showing the number of samples used during training. Comparing different baselines, including their proposed method, would provide a clearer understanding of the technique's effectiveness. 5. The evaluation is limited to computer vision tasks. To increase the impact of this work, the authors should consider natural language tasks (e.g., from the VLM space [4]) and demonstrate how the proposed technique reduces the number of samples from each data source while maintaining comparable performance to using all samples from all data sources. **References** 1. Ash et al. “Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds”. ICLR2020 2. Kirsch et al. 
“BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning”. NeurIPS2019 3. Sensoy et al. “Evidential Deep Learning to Quantify Classification Uncertainty”. NIPS2018 4. Liu et al. “Visual Instruction Tuning”. NeurIPS2023. Other Comments Or Suggestions: NA Questions For Authors: Please refer to the Other Strengths and Weaknesses section Code Of Conduct: Affirmed. Overall Recommendation: 3
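For concreteness, the pooled entropy-based selection the reviewer describes in weaknesses 1 and 3 can be sketched as follows (an illustrative sketch only, not code from the paper under review; the function name and data are hypothetical):

```python
import numpy as np

def entropy_select(probs, budget):
    """Select the `budget` samples with the highest predictive entropy
    from a single pool drawn from all data sources.

    probs: array of shape (num_samples, num_classes) holding softmax
    outputs; returns pool indices, most uncertain first.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[-budget:][::-1]

# A uniform prediction (maximal uncertainty) should be ranked above
# a confident one.
pool = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [1/3, 1/3, 1/3],     # uniform   -> highest entropy
    [0.6, 0.3, 0.1],     # moderate
])
chosen = entropy_select(pool, budget=2)  # -> indices [1, 2]
```

The evidential-uncertainty selection of reference [3] would follow the same pattern, swapping the per-sample entropy score for an evidential uncertainty measure.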
Rebuttal 1: Rebuttal: Dear reviewer y1y9, Thank you for your detailed feedback. We have provided additional experimental results in an *[anonymous Github](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F)* and further clarified the results and our contributions below. **Additional Active Learning (AL) Baselines (weakness 1,3,4):** We add the results of BADGE algorithm (suggested reference [1]) and entropy-based sampling (*Munjal et al., 2022*) on the CIFAR10 dataset, summarized with other algorithms using an AL curve in *[Figure 3](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig3-alcurve-cifar10.pdf)* as suggested by the reviewer. In terms of our main metric, the minimum class-wise accuracies are 39.5 for BADGE and 53.0 for entropy-based sampling. The latter is comparable to the results of 52.7 by DBAL and 52.3 by UCB-OGD. We also provide the AL curve on the VOC2012 dataset in *[Figure 4](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig4-alcurve-voc2012.pdf)*, which shows superior performance of our algorithm compared to MDN (*Choi et al., 2021*), a SOTA AL baseline. The BatchBALD algorithm (suggested reference [2]) is not tested for now due to its significant computation overhead given the limited time frame. We will also make sure to add the suggested references to the related work of our paper. The main takeaway of these comparisons (also *Table 1* and *Figure 2* of the paper) we want to emphasize is that: - AL algorithms are *not necessarily* robust learners. They can be sensitive to initializations, especially for complex tasks like object detection (*Figure 2* of the paper and new *[Figure 4](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig4-alcurve-voc2012.pdf)*). - On the other hand, our proposed framework can achieve *similar or superior* performance compared to AL baselines for both classification and detection. 
Meanwhile, the framework is flexible to apply to a wide range of different tasks, since the proposed algorithm solely relies on the losses when making sampling decisions. **Advanced Sampling Strategies (weakness 1):** While the proposed algorithm can potentially benefit from employing more advanced sampling strategies, provable theoretical guarantees for a general adaptive data collection framework that can incorporate DL models remain elusive even under the simple setup that we discussed in the paper (where the algorithm only selects and optimizes one data source with the highest plausible loss in each round). The outline of the proposed algorithms and the implementations have been set up to resemble the theoretical claims as closely as possible, at the price of having more basic algorithmic components (e.g., a simpler sampling scheme). Nonetheless, the experiments already demonstrate the capability of our proposed framework under these basic setups. **VLM Experiments (weakness 5):** We finetune a SmolVLM-256M-Base [1] model on a subset of the VQAv2 [2] dataset and provide the AL curve of per-token accuracy vs. the number of training samples in *[Figure 5](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig5-alcurve-vqav2.pdf)*. We kindly refer to our response to reviewer 7Pps for further details on the experimental setup. Although the time frame is very limited for experiments on a larger scale, we believe the results are able to demonstrate that our proposed framework can be applied to complex tasks like vision-language modeling, where the training data incorporates various modalities. We will add more detailed experiments in the final version should the paper be accepted. **Our Novelty and Contributions (weakness 2):** While OGD and UCB are both established algorithms, we believe the combination of these techniques for the studied problem is novel.
We want to emphasize that UCB has been proposed for *stationary stochastic bandits*, while our problem is a *non-stationary contextual bandit* where vanilla UCB is not applicable (details in *Section 4*). In this paper, we prove that, by maintaining an adaptively collected dataset and leveraging its information, UCB (and other stationary bandit algorithms like $\epsilon$-Greedy) can be combined with online optimization algorithms such as OGD to provide *provable minimax regret bounds* despite the non-stationary nature of our problem. We further note that the proposed framework can be easily extended to various bandit and optimization algorithms for different problems at hand. We believe that our paper shows that adaptive data collection with robust theoretical guarantees can be a promising approach applicable to a wide range of tasks. We appreciate your suggestions and insights, and hope that the updates above could address your concerns. We welcome any further questions or feedback. **References:** [1] Marafioti et al., “SmolVLM: Redefining small and efficient multimodal models”. 2025. [2] Antol et al., "VQA: Visual Question Answering". ICCV 2015. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing a comprehensive rebuttal within a short timeframe. I have increased my score to reflect their efforts, particularly in (a) presenting baseline results for other methods (even though some outperform the proposed approach), and (b) providing results for the VLM.
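To make the high-level recipe in this rebuttal concrete, here is a toy sketch of the general pattern it describes: a UCB-style index over per-source losses decides which data source to sample, and an online gradient step updates the model on that source. This is a simplified illustration under assumed details (scalar parameter, known gradients), not the authors' UCB-OGD algorithm:

```python
import numpy as np

def ucb_source_training(loss_fn, grad_fn, theta, num_sources, rounds,
                        lr=0.1, c=1.0):
    """Toy loop: each round, pick the source whose estimated mean loss
    plus an exploration bonus is largest, then take a gradient step."""
    counts = np.zeros(num_sources)
    mean_loss = np.zeros(num_sources)
    for t in range(1, rounds + 1):
        bonus = c * np.sqrt(np.log(t + 1) / np.maximum(counts, 1))
        bonus[counts == 0] = np.inf  # sample every source at least once
        k = int(np.argmax(mean_loss + bonus))
        loss = loss_fn(k, theta)
        counts[k] += 1
        mean_loss[k] += (loss - mean_loss[k]) / counts[k]  # running mean
        theta = theta - lr * grad_fn(k, theta)  # OGD step on source k
    return theta, counts

# Two quadratic "sources" with different optima; the loop balances them.
targets = np.array([0.0, 4.0])
loss_fn = lambda k, th: (th - targets[k]) ** 2
grad_fn = lambda k, th: 2.0 * (th - targets[k])
theta, counts = ucb_source_training(loss_fn, grad_fn, theta=10.0,
                                    num_sources=2, rounds=200)
```

Each gradient step contracts the parameter toward the selected source's optimum, so the final parameter settles between the two optima while every source keeps being revisited.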
Summary: This paper presents a new online framework (UCB-OGD) for data collection and model training in a multi-distributional setting with a constraint on sample labeling. In particular, the proposed UCB-OGD is shown to achieve sublinear minimax regret, with a matching lower bound showing that the guarantee is asymptotically tight, ensuring performance competitive with traditional active learning methods. Additionally, the paper presents a well-designed experimental setup to validate UCB-OGD's effectiveness. Claims And Evidence: The paper claims theoretical guarantees for UCB-OGD alongside claims of empirical improvements in a multi-distributional setting with a budget on sample labeling. In this regard, the reviewer found the evidence convincing, with theoretical analysis drawing on both traditional bandits and online optimization. Additionally, the experimental evaluations are intuitive and, from the reviewer's perspective, fair. Methods And Evaluation Criteria: The reviewer found the proposed UCB-OGD method to be an intuitive method for addressing the raised active learning problems. Subsequently, the experimental setup is well-designed and the comparative methods chosen are also largely appropriate and fair. Theoretical Claims: The authors provide extensive theoretical analysis showing that their proposed UCB-OGD algorithm achieves sublinear minimax regret. The paper also provides a lower bound showing no algorithm can do asymptotically better, which matches known multi-armed bandit theory. The authors also clearly state their theoretical assumptions (i.i.d. sampling, loss convexity in the model parameters, and bounded loss with Lipschitz continuity). To the reviewer's knowledge, the theoretical proofs and claims are compelling with no visible errors. Experimental Designs Or Analyses: The authors evaluate UCB-OGD using a well-designed proposed setting with three distinct tasks.
However, the reviewer would like to see more high-resolution datasets, in particular for the classification tasks. Supplementary Material: Beyond the additional proof details presented in the appendix, the authors provide no additional supplementary materials. Relation To Broader Scientific Literature: Algorithmically, UCB-OGD is built on existing algorithms and does not provide many novel ideas to the broader field. However, the reviewer does find the proposed framework on data collection, bridging ideas from active learning, multi-armed bandits, and online optimization, to be novel and potentially very helpful for the broader active learning field. Essential References Not Discussed: The reviewer is unaware of any necessary references which are omitted. Other Strengths And Weaknesses: See sections above. Other Comments Or Suggestions: Minor typos of note: - line 373 "Detailed of data source assignment is given in Appendix B" -> "Additional details of data source assignment are given in Appendix B" Questions For Authors: See the "Experimental Designs Or Analyses" section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer 7Pps, Thank you for your feedback. We have provided additional experimental results in an *[anonymous Github](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F)*. There are three active learning curves (model performance *vs.* number of annotated samples) for three different tasks. **Vision-Language Modeling:** We finetune a SmolVLM-256M-Base model [1] on a subset of the VQAv2 dataset [2] for visual question answering using Rand-OGD and UCB-OGD, respectively. We partition the dataset into three data sources based on the type of the questions (i.e., yes/no questions, numerical questions, and descriptive questions). The active learning curve of per-token accuracy *vs.* the number of training samples is illustrated in *[Figure 5](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig5-alcurve-vqav2.pdf)*. Although the time frame is very limited for experiments on a larger scale, we believe the results are able to demonstrate that our proposed framework can be applied to complex tasks like VLM where the training data incorporates various modalities. We will add more detailed experiments in the final version should the paper be accepted. **Classification and Object Detection:** For the experiments on CIFAR10 and VOC2012 in the paper (*Section 5*), we summarize the performances of the proposed algorithms compared to several active learning baselines in *[Figure 3](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig3-alcurve-cifar10.pdf)* and *[Figure 4](https://anonymous.4open.science/r/icml2025-adaptive4robust-CB0F/fig4-alcurve-voc2012.pdf)*, respectively. The main takeaway we want to emphasize is that, while active learning algorithms can be *task-specific* and are *not necessarily* robust learners, our proposed framework is flexible to apply to a wide range of different tasks and is able to behave robustly.
The flexibility is due to the fact that the proposed algorithm solely relies on the losses when making sampling decisions. We appreciate your insights and welcome any further questions or feedback. We will be continuously seeking to add further experiments in support of our theoretical claims. **References:** [1] Marafioti et al., "SmolVLM: Redefining small and efficient multimodal models". 2025. [2] Antol et al., "VQA: Visual Question Answering". ICCV 2015.
Identifying Neural Dynamics Using Interventional State Space Models
Accept (poster)
Summary: The paper introduces interventional state space models (iSSM), a novel framework designed to predict neural responses to novel perturbations, addressing the limitations of traditional state space models (SSM) which capture statistical associations without causal interpretation. The authors establish the identifiability of iSSM. The model is validated through simulations of the motor cortex and applied to biological datasets, including calcium recordings from mouse ALM neurons and electrophysiological recordings from macaque dlPFC. The results demonstrate that iSSM outperforms traditional SSMs in terms of parameter identifiability and predictive power. ## update after rebuttal Thanks for the detailed response and the additional sections on reproducibility, inference, and computational complexity. I also appreciate the short discussion on the optimal latent dimension. These changes address my feedback and strengthen the paper. My evaluation remains positive, and I’ll keep my original score of 4 (Accept) unchanged. I encourage the authors to further explore testing the optimal latent dimension in future work. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proof of Theorem 3.4. It seems correct. Experimental Designs Or Analyses: Yes Supplementary Material: I checked the proof of Theorem 3.4 Relation To Broader Scientific Literature: The paper could be related to causal discovery and control of dynamical systems. Essential References Not Discussed: - Other Strengths And Weaknesses: ## Strengths - Theoretical Contribution: The paper provides a theoretical foundation for the identifiability of iSSM, which is crucial for causal inference in general especially when the goal is to design some sort of control strategy to steer the system towards a desired state. - Empirical Validation: The model is tested on both synthetic and real-world datasets, showing generalization capabilities and parameter recovery. 
- Practical Relevance: The application of iSSM to real biological datasets demonstrates its potential utility in neuroscience research, particularly in understanding neural dynamics under perturbations. Although the current state of technology may not allow granular interventions on latent dimensions, the results of this work could be of greater practical use once such interventional capabilities become possible. ## Weaknesses - Nonlinearity Limitation: The current model focuses on linear dynamics, and while the inference model can capture nonlinearities, explicitly modeling nonlinear dynamics in the latent space remains a limitation. - Complexity of Implementation: The implementation details and computational complexity of iSSM are not thoroughly discussed, which could be a barrier for adoption by other researchers. - Limited Scope of Interventions: The paper primarily assumes that interventions force the neural state out of its low-dimensional attractor manifold. I think this assumption and its consequences need to be understood more deeply, as they seem to be critical for the results of this work. Other Comments Or Suggestions: - Questions For Authors: - Although the emission model introduces nonlinearity to the model, the dynamics in the latent space is linear and cannot express qualitative behaviours that only nonlinear dynamical systems may have, e.g. limit cycles. This might be a guide for where not to use this model. Can you provide examples of neural circuits for which the linear dynamics is insufficient to exhibit the underlying dynamics regardless of how expressive the emission model is? - It seems the latent dimension is crucial for this model as those dimensions are supposed to correspond to biologically meaningful behaviour and to be physically intervenable.
The stated observation in the paper “increasing the number of latents always provides better test accuracy” seems counterintuitive as there should be a number of dimensions that corresponds to the physical properties of the system and beyond that there is a risk of overfitting. Could you elaborate on this observation and any reasoning behind it? - I recommend including pseudocode in the appendix and providing a more detailed discussion of the implementation. An ablation study would also be useful to better understand the authors' design choices, e.g. why an LSTM was chosen as the recognition network over other options. The authors have mentioned they have done cross-validation to find the optimal set of hyper-parameters, but it's unclear which hyperparameters were involved in that search and with which range. It is also unclear how the hyperparameter search range changes with the problem at hand. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate you and the other reviewers for your time and thoughtful evaluation of our work. We found the feedback to be highly constructive, as well as both fair and encouraging. Below we address your specific questions: **Nonlinearity limitation:** We completely agree with the reviewer that linear dynamics introduce a limitation into the model. However, developing theoretical results for more complex models is quite challenging. To our knowledge this is the first paper that develops such results for a non-LDS noisy SSM. We hope this brings the causal inference and neuroscience communities closer together and inspires the development of more theoretical results for more complex models. **Complexity of implementation:** Thanks for pointing this out; we have included a new section in the supplementary material detailing the inference and its computational complexity. To summarize the section, we find that our algorithm scales as $\mathcal{O}(TN^2 + TD^2)$ where $T$ is the length of the trajectories, $D$ is the latent dimension, and $N$ is the number of neurons. Therefore our inference is linear in time and quadratic in the number of neurons. **Limited scope of interventions:** The existing evidence for interventions forcing the neural state outside of the observational trajectories is discussed in both prior theoretical and experimental papers [1, 2, 3]. That said, we believe that collectively as a field our understanding of how interventions affect the neural states is quite limited. The model we present here is among the first models that incorporate interventions into the modeling framework and therefore we believe it's critical to develop interventional models to better understand how interventions influence neural dynamics. [1] Jazayeri, M. and Afraz, A. Navigating the neural space in search of the neural code. Neuron, 93(5):1003–1014, 2017. [2] O’Shea, D. J., Duncker, L., Goo, W., Sun, X., Vyas, S., Trautmann, E.
M., Diester, I., Ramakrishnan, C., Deisseroth, K., Sahani, M., et al. Direct neural perturbations reveal a dynamical mechanism for robust computation. bioRxiv, pp. 2022–12, 2022. [3] Daie, K., Svoboda, K., and Druckmann, S. Targeted photostimulation uncovers circuit motifs supporting short-term memory. Nature neuroscience, 24(2):259–265, 2021. **Expressivity of the emissions:** While we agree that linear dynamics is a limiting factor of the model, we would like to point out that adding nonlinear emissions to the linear dynamics changes its expressivity quite drastically. We reference the Koopman theory in the paper, which posits that a (possibly infinite dimensional) linear dynamical system followed by a nonlinear emission is sufficient to express "any" autonomous dynamical system. The catch here is of course "infinite dimensional" but intuitively as the latent dimension grows larger the expressivity of the model also increases. **Increasing latent dimension:** You are absolutely correct that if we assume low-dimensionality of the data then increasing the latent dimension is counter-intuitive. We would like to clarify that while we think the optimal latent dimension crucially depends on the task and dataset, we think of interventions as kicking the state of the system outside of its low-dimensional manifold increasing the dimensions spanned by the trajectories. Therefore the conjecture we've included in the discussion section is only valid for large datasets where diverse and numerous interventions are performed. But we have only presented this argument as a conjecture and we agree that experimental validation is needed for confirming it. **Discussion of the implementation:** Thank you for the suggestion, we added a new section in the supplementary material on the details of our variational inference algorithm. We have also included our code package in the submission which follows standard code development practices. 
**LSTMs, hyperparameters:** Using LSTMs in the variational posterior has become a standard practice in the field (e.g. [1]). We did experiment with vanilla RNNs but they drastically underperformed LSTMs. We will include these results if the paper gets accepted. Regarding the hyperparameter search, our two hyperparameters are the sparsity of $\boldsymbol{B}$ denoted by $s$ and the latent dimension $D$. The cross-validation and the range of parameters are included in Fig. 3C, 4D. We used a logarithmic axis for $s$ to cover a wide range of possibilities, and for $D$ we increased it up to the point where we see a drop in validation performance. Regarding how the hyperparameters change w.r.t. the task at hand, this question is highly problem specific. We think these values are highly affected by the details of the experimental setup such as the task, type of recording, animal model, etc. [1] Krishnan, Rahul G., Uri Shalit, and David Sontag. "Deep Kalman Filters." arXiv preprint arXiv:1511.05121 (2015). Thanks again for your time; we look forward to your final assessment. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I appreciate the authors' efforts in addressing my questions and incorporating clarifications and additional material into the revision. I find this work to be a valuable contribution to the community, and my overall assessment remains positive. As my initial score already reflects this opinion, I will keep it unchanged. I encourage the authors to expand on the reasoning behind their conjecture regarding the optimal latent dimension and suggest concrete ways it could be tested in future work. --- Reply to Comment 1.1.1: Comment: Thank you for the encouraging words. We have included new sections in our appendix on (1) reproducibility details (2) inference details (3) computational complexity. In addition, we will include a short discussion on the optimal dimension. We appreciate your time and feedback.
Summary: The paper introduces Interventional State Space Models (iSSM), a class of causal state-space models designed to identify neural circuit dynamics and predict responses to causal manipulations, addressing the limitations of traditional state-space models (SSMs) which lack causal interpretability. By explicitly modeling perturbations as interventions on latent neural dynamics, the authors demonstrate theoretically that iSSMs achieve identifiability of latent dynamics under suitable conditions. Empirically, they validate their model using simulated motor cortex dynamics and biological datasets involving calcium imaging from mouse ALM neurons during targeted photostimulation, and electrophysiological recordings from macaque dlPFC during micro-stimulation. Across these examples, iSSM consistently outperforms conventional observational SSMs in reconstructing neural activity, demonstrates robustness to initialization, and effectively generalizes to predict neural responses under novel interventions. Claims And Evidence: The main claims are well-supported by both theoretical arguments and empirical results. Methods And Evaluation Criteria: Yes, the paper clearly demonstrates alignment between the proposed methods, evaluation approaches, and the key problems it addresses. The authors provide theoretical guarantees showing that iSSMs achieve identifiability under suitable assumptions (e.g., bounded completeness, piecewise linearity, sufficient interventions), thus enabling accurate predictions under novel perturbations. They empirically validate these claims using synthetic motor cortex simulations and biological datasets (calcium imaging in mice and electrophysiology in macaques), showing that iSSM consistently outperforms standard observational models (SSM) in reconstructing neural activity, recovering latent dynamics, and ensuring robustness of learned parameters across initializations. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. 
I have the following questions: - In Figure 3.B, what is the difference in latent activities between correct and incorrect trials? - The goal of Section 4.3 is to evaluate iSSM's ability to generalize to unseen interventions. Could you explain more about how this is achieved via Figure 4? Supplementary Material: Yes, I reviewed the supplementary material provided in the appendices, including additional experimental results and a detailed proof for the theoretical identifiability claims. Relation To Broader Scientific Literature: The paper’s key contributions build upon and directly address limitations highlighted in prior literature: traditional SSMs effectively capture statistical correlations but fail to offer causal interpretability. This work specifically incorporates theoretical insights on the identifiability of nonlinear causal dynamical systems and extends them by allowing observational noise, a crucial step for realistic biological modeling. Essential References Not Discussed: Yes. Other Strengths And Weaknesses: Strengths: - Combining causal inference concepts with traditional state-space modeling techniques, bridging a critical gap between purely statistical and causal modeling approaches. - Advances theoretical understanding by providing formal identifiability conditions, clearly extending prior theoretical work to the specific and challenging domain of neural dynamics. Weakness: - Only a standard SSM is used as a baseline. The authors should consider including additional baseline models specifically designed for causal dynamical systems to better demonstrate the superiority of iSSM. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate you and the other reviewers for your time and thoughtful evaluation of our work. We found the feedback to be highly constructive, as well as both fair and encouraging. Below we address your specific questions: **Correct vs. incorrect trials:** Please notice the latents closer to the end of the trial, where the correct and incorrect latents start to diverge. We noticed that this is the time when the animal is performing its action selection. This is relevant for us because ALM, the area where the recordings are performed, is thought to be involved in action selection. Therefore our results suggest that the causal latents discovered by the model distinguish between the action selection mechanisms for the correct vs. incorrect trials. **Generalization to unseen interventions:** As described in the last paragraph of section 4.3, we divide the data into train and test trials with different interventions performed in each set. Our results (Fig. 4C, D, E) show that the test reconstruction accuracy is on par with training reconstruction accuracy, supporting the argument that the model is able to generalize to test interventions. **Compare against other baselines:** Thank you for pointing out the weakness of the paper. We want to clarify that our contribution is not developing a new latent variable model; rather, it is adding the interventional components to existing SSMs. To the best of our knowledge, this is the first causal dynamical systems model that incorporates latents and observational noise. Therefore we are not aware of other alternatives to use for our comparisons; we are happy to include new comparisons if the reviewer has specific methods in mind. Thanks again for your time; we look forward to your final assessment. &nbsp; ### **Further responses to reviewer xGnX** **Explanation of dynamic attractor:** We borrowed the term "dynamic attractor" from the previous literature on this toy example [1].
In our understanding, a ring attractor in the neuroscience literature implies a specific latent mechanism where the spatial information is encoded in the connectivity pattern of the network. Here the dynamic attractor has non-rotational latents that are nonlinearly transformed to generate rotational dynamics in the observational space. Indeed, the dynamical structure generated in the observational space is a ring attractor, but we remained consistent with the terms in the literature. You are correct that in the non-noisy setting $\boldsymbol{x}=(0,1)$ or equivalently $\boldsymbol{y}=(0,0)$ is an unstable fixed point of the model, so in the absence of noise the state will not move away from it. However, except for that pathological case, for all other settings of $\boldsymbol{x}$, $\frac{d\boldsymbol{x}}{dt}$ will be nonzero and the model does not require perturbation to move along the circle. [1] Galgali, A. R., Sahani, M., and Mante, V. Residual dynamics resolves recurrent contributions to neural computation. Nature Neuroscience, 26(2):326–338, 2023. **Positive observations in Fig. 2C**: Thank you for your observation. This is a plotting issue, and the y-axis in the plots is not meaningful. We will fix that in the final revision (if accepted). We min-subtract the data in each dimension to stack different axes of the latents on top of each other. The state plots (Fig. 2A, B) show the exact numbers corresponding to the trajectories. **Sparsity penalty on $\boldsymbol{B}$**: We mention in lines 132-134 (right column) that the sparsity penalty is implemented by placing a Laplace prior on the elements of $\boldsymbol{B}$ with a scale parameter $s$. In our real data experiments we do cross-validation over $s$ (the columns of Fig. 3C and Fig. 4D). For synthetic data we don't use a sparsity prior.
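As a side note for readers, the Laplace prior on the elements of $\boldsymbol{B}$ mentioned above corresponds, under MAP estimation, to adding an L1 penalty with weight $1/s$ to the loss. A minimal numerical sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def laplace_neg_log_prior(B, s):
    """Negative log-density of independent Laplace(0, s) priors on the
    entries of B, dropping additive constants: sum_ij |B_ij| / s."""
    return np.abs(B).sum() / s

B = np.array([[0.5, -1.0],
              [0.0, 2.5]])
penalty = laplace_neg_log_prior(B, s=0.5)  # sum |B| = 4.0, so 4.0 / 0.5 = 8.0
```

Smaller scale parameters $s$ penalize nonzero entries more strongly, which is why cross-validating over $s$ controls the sparsity of $\boldsymbol{B}$.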
**Injective $f$**: While we don't explicitly enforce that the function $f$ is injective, this function maps from a low-dimensional latent space to a high-dimensional observational space. So if this function fits the data well, then it is often injective without explicitly enforcing it, as long as none of the latent dimensions collapses and goes unused --- namely, many different observations being mapped to the same latent variable value. We can often empirically check that this issue does not occur. **Reconstruction accuracy:** The reconstruction accuracy is defined as the correlation coefficient between true and reconstructed signals (latents or observations). Notice that, except when the stimulation count is 0, the observational reconstruction accuracy is always close to 1 for both models. However, the latent reconstruction accuracy, which directly measures the identifiability of the latents, is only close to one for iSSM as the stimulation count increases. This is the main signature that the interventional component added to the model indeed leads to identifiability. The latent dimension for all the results in this figure is set to the ground-truth value of 2. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your time; please feel free to check our new results included in response to reviewer **xGnX**. We hope our new results improve the quality of the paper.
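The correlation-based reconstruction accuracy described above can be sketched as follows (one plausible reading: Pearson correlation per dimension, averaged; the exact averaging scheme is an assumption, not the authors' code):

```python
import numpy as np

def reconstruction_accuracy(true, recon):
    """Mean Pearson correlation between matching dimensions of true and
    reconstructed trajectories, each of shape (T, D)."""
    rs = [np.corrcoef(true[:, d], recon[:, d])[0, 1]
          for d in range(true.shape[1])]
    return float(np.mean(rs))

t = np.linspace(0.0, 2.0 * np.pi, 100)
true = np.stack([np.sin(t), np.cos(t)], axis=1)
# Correlation is invariant to per-dimension scale and shift, so an
# affinely transformed reconstruction scores 1.
score = reconstruction_accuracy(true, 2.0 * true + 1.0)
```

Note that scale/shift invariance makes this metric insensitive to affine mismatches, which is one reason latent-reconstruction correlation is a natural identifiability diagnostic (latents are typically recoverable only up to such transformations).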
Summary: The authors propose iSSM, which is a linear dynamical systems model that accounts for causal perturbations to the neural population activity. The authors apply this model to two synthetic datasets inspired by literature on motor cortex dynamics, and apply their model to a variety of real neural population datasets that include perturbations. Claims And Evidence: The authors present theoretical results for the identifiability of iSSM. The authors claim that the “framework is general and can be applied to different types of SSMs, in this paper we focus on adding interventional components to the SSMs with linear dynamics and nonlinear observations.” However, I don’t think there was enough information in the paper to conclude to what extent the authors’ results apply. For example, can we replace the dynamics matrix A with a neural network and would the results still hold? How about if both matrices A and B are replaced with neural networks such that we have completely nonlinear dynamics x_{t+1} = g(x_t, u_t), where g is an MLP? I think the authors shouldn’t say persistent activity is a hallmark of short-term memory (Line 325), especially when they reference Goldman, 2009. The idea in Goldman, 2009 is precisely that we can memorize without persistent neural activity. Methods And Evaluation Criteria: I couldn’t find sufficient details of how inference was done in the model, either in the main text or the Appendix. For example, it would be helpful to have in the Appendix the details of the LSTM architecture, how the number of units in the LSTM was chosen, whether it is bidirectional or unidirectional, and whether it is an LSTM that runs forward or backward in time. I also couldn’t find sufficient details on how the synthetic datasets were generated. For example, it was not clear what a_1 and a_2 were set to be. How many trajectories were generated for this synthetic dataset? What was the initial condition of the dynamics?
How was the dynamical system perturbed in this simulation? Without these details (sufficient for readers to reproduce the authors’ results), it is difficult to evaluate the authors’ claim. The authors claim that in the synthetic datasets, iSSM can recover the true latents and the underlying dynamics. However, the analyses lacked important details. For example, the authors show gray arrows (flow fields) in Figure 2A and B. Are the arrows both from the ground truth data or are they model-inferred? The flow fields for true and inferred seem identical. A clarification on this would be helpful. I would like to see how well iSSM recovers the flow fields, not just the trajectories. It was also not mentioned in the paper (both main text and Appendix) how SSM is defined. I think the definition of what SSM is should be in the main text. Is it a linear dynamical system (LDS) model with nonlinear observations similar to Gao et al., 2016? What inference method was used for SSM? Is it similar to the inference procedure for iSSM? Furthermore, I am uncertain whether this is the right type of synthetic dataset because it seems that the number of observations here is only two, whereas in the real datasets that the authors look at, the number of neurons tends to be higher. Is “Dynamic Attractor” a ring attractor? The flow fields shown in Figure 2A1 and 2B1 seem to suggest this. However, if this is a ring attractor, I am confused because in Equation (6), dx/dt = 0 if and only if x_1 = 0 and x_2 = 1. This means that y_t = [cos(1); 0] for all t. However, the trajectory y_t is not a circle, contrary to what is shown in Figure 1A1 and 2B1. I think it is not clear to me how the flow field plots are generated. For “Dynamic Attractor”, is there a way for the neural state to move without perturbations? Rotational dynamics are seen in neural data even when there are no optogenetic or electric stimulations.
In Figure 2B, the observation states are sometimes negative, but in Figure 2C the observations and latents are always positive. Is the trajectory in Figure 2A-B not the same as C? A clarification on how many trajectories are generated in total in this dataset would be helpful here as well. It was not mentioned in the paper how the sparsity penalty on B was implemented. How was the scale parameter s chosen? What was s in the experiments? Theoretical Claims: I did not check the proofs of the theoretical claims. However, I would like to note that one of the assumptions is that the function f is an injective function. It was not clear how this was enforced during training of the authors’ model. Experimental Designs Or Analyses: It was not clear how reconstruction accuracy was calculated in Figure 2. I am also surprised that using this metric, reconstruction accuracy is close to 0 for both iSSM and SSM in Figure 2D. It was not clear how the error bar is computed. For this analysis, are the authors assuming that the latent dimensionality is 2? I think it is concerning that there are not enough details in the experiments to reproduce the results. For example, for the real datasets, how was the latent dimensionality chosen? What was the hyperparameter space? How were the hyperparameters optimized? How was the training and test split made? When the authors mention they do cross-validation, how many folds? (e.g., “To find the optimal hyperparameters, we performed cross-validation.” (Line 379) How many folds were used? What fold was used for showing the result in Fig 4D?) Consistency score is defined only with respect to B. Have the authors evaluated consistency for A? In Figure 3, the mean latents for correct and incorrect trials seem very similar. Given that the reconstruction accuracies for both SSM and iSSM are less than ~0.5, I’m not sure if the latent trajectories here should be trusted.
To have some confidence, I think it would make sense to fit this dataset to an established model such as LDS with linear observations to see how SSM and iSSM perform with respect to this LDS baseline. Is the data relevant to Figure 4 spike trains? It is odd that the plots in Figure 4A are in terms of firing rates, and not binned spike counts. Do the authors assume Poisson observations or Gaussian? Supplementary Material: I reviewed some parts of Appendix A. Relation To Broader Scientific Literature: A major strength of the paper is that the idea of incorporating interventional data into SSMs is interesting, and, if it works, would be an important contribution to the field. As the authors have pointed out, inferring latent dynamics from neural population activity is difficult, and if we have interventional data that can further constrain the model to learn the correct dynamics, this would help us understand neural computation. However, the experimental evidence supporting this idea is weak. Essential References Not Discussed: I can’t think of essential references not cited. Other Strengths And Weaknesses: Please see above. Other Comments Or Suggestions: I have no other comments or suggestions. The paper was written clearly. ## update after rebuttal I sincerely thank the authors for the detailed response. I could access the link and look at the details on estimated flow fields, 6d synthetic experiments, and details on initialisation, hyperparameters and inference (though I believe full details on the inference should be available in the Appendix for this to be published). **Dynamic Attractor:** If the equation should be $\frac{dx}{dt} = [a_1x_1, a_2(1-x_1)]^\top$ instead of $\frac{dx}{dt} = [a_1x_1, a_2(1-x_2)]^\top$, then I don't think there is a fixed point in this system if $a_1 = -20$ and $a_2 = 1.2$ as stated in the shared link. This is inconsistent with the authors' claim that the system is a dynamic attractor.
**Sparsity on B:** I re-read the Appendix of the original submission and could not find information about the loss function. The loss shows up in 3.2, but it doesn't have any details on regularization. The shared link says the regularization coefficient was found using cross-validation, but no more details could be found. **Rotational Dynamics:** Should we be worried that the red flow field looks more like a line attractor, whereas for the black arrows, there doesn't seem to be a line of fixed points? Have the authors compared the eigenvalues? **Reconstruction accuracy:** It makes sense that if the latents are not identifiable, the correlation could be close to 0 for the latents (D1), but it is still puzzling to me why the neural reconstruction (D2) is also close to 0. This is also partly why I think having benchmarks to models like LDS (with linear emission) could be helpful. Does the LDS show that the neural reconstruction accuracy is close to 0? These remaining unresolved questions raise concerns about the validity of the analysis, and I will keep my score as is. Questions For Authors: 1. The authors mention that “models that are built upon observational data are not able to capture neural dynamics outside of the low-dimensional space.” However, it is not clear to me whether this method can model dynamics that are outside the latent x. How do the authors model dynamics that are outside the intrinsic manifold? 2. Can iSSM model inputs that are not interventional as well? If so, how does that affect its performance in comparison to SSM? 3. Could we have used something other than LSTM for the posterior? I wonder if there are any alternative ways of doing inference. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely appreciate you and the other reviewers for your time and thoughtful evaluation of our work. We found the feedback to be highly constructive, fair, and encouraging. Below we address your specific questions: **Generalizing to other SSMs:** Thank you for bringing this up. Indeed, we present our theoretical results for linear dynamics and nonlinear observations. The algorithmic part of the paper is readily applicable to more complex generative models. Since we’re using variational inference, which is agnostic to the underlying model, changing the components of the generative model (i.e., dynamics, emissions, and noise model) will not impact the inference. For the identifiability results, the key idea of the proof lies in independence testing: the intervened latents at time t+1 are independent of the others at the previous time step t, and this independence leads to the identifiability of the latents. The linearity assumption in the latent space makes this argument easy, since we only need to consider the covariance. That said, this argument is readily generalizable to other settings where $x_{t+1}$ and $x_{t}$ are nonlinearly related (e.g., a polynomial function or a ReLU function up to sparsity constraints), as long as the statistical independence of the intervened latents is still sufficient to identify the latents. However, our goal for this paper is to lay the foundation for applying interventional models to neural datasets, and we leave these developments for future work. **Hallmark of short-term memory:** We thank the reviewer for noticing this. While we agree that multiple mechanisms have been proposed for working memory, here we’re using "persistent activity" loosely, without any implications about the underlying mechanism. Notice that even in Goldman 2009, the proposed mechanism still explains the observed persistent activity.
We will clarify in the revised manuscript that the observed persistent activity should not be confused with the underlying mechanism generating it. **Reproducibility:** Thank you for pointing this out. While we have not detailed the specifics of our inference scheme, we’re using standard variational inference tools. That said, we added a new section to the supplementary about the details of inference and computational complexity for completeness. Additionally, we realize that we’ve omitted some important reproducibility details; therefore, we added a new section to the supplementary on this. Tables [1](https://f.uguu.se/qDwlATSB.png) and [2](https://d.uguu.se/jCNcSZzj.png) summarize these results. **Recovering true latents:** Thank you for this great observation. The arrows presented are all from the ground-truth model; we have not shown the dynamics inferred by the model. We will include this in the revised manuscript. That said, notice that it’s not quite straightforward to visualize the inferred dynamics. The dynamics inferred by the model are encoded in the variational LSTM and the fitted emission. There are two ways of visualizing the inferred dynamics. (1) Using the inferred A: this is not ideal because of the variational gap. (2) Using the emission & variational posterior: this approach requires us to generate a grid in the observation space. However, due to the use of LSTMs, the variational posterior is context-dependent. We’re open to the reviewer’s suggestions for visualizing the dynamics, but so far we haven’t found an appropriate visualization. **Definition of iSSM:** The SSM and iSSM in all experiments share the exact same characteristics (i.e., same latent dimension, emission & inference architecture, step size, etc.) except for the interventional components, where SSM models interventions as an additive term.
We did this to focus on the utility of interventional models while accounting for all other confounding factors and bringing the two models onto the same footing. In the code, there is an `interventional: bool` flag which, if set to `False`, models the interventions in an additive way. For the Poisson experiment (Fig. 4) the model is equivalent to Gao et al. 2016, except that they did not include any inputs to the model. For the Normal experiment (Fig. 2, 3) the model departs from Gao et al. 2016 in terms of the observational noise model. **Low-d synthetic experiments:** The synthetic experiment in Fig. 2 is mainly presented to provide intuition. It’s an example where latents are clearly defined from a behavioral standpoint and the nonlinear emission is required to produce rotational trajectories. We think it’s an intuitive example for the computational neuroscience audience because it’s a toy version of two prevalent hypotheses about rotational dynamics in the motor cortex. The synthetic example in Supp. Fig. 7 uses higher-dimensional latents. We used the remaining space in the responses to reviewers **3PQB** and **kBw8** to address your remaining concerns. Thanks again for your time; we look forward to your final assessment. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. **Generalizing to other SSMs:** Thank you for clarifying that the theoretical results only hold for linear dynamics. **Hallmark of short-term memory:** Thank you for the clarification. This makes sense. **Reproducibility:** I was not able to access Tables 1 and 2 on my end. Could you please share different links? **Recovering true latents:** Could the authors elaborate on what they mean by the variational gap and how the variational posterior is context-dependent?
Without full details on the inference method and precise definitions of the terms, it is hard to evaluate correctness, but based on the info that I currently have, I think visualizing inferred dynamics using (1) is a valid way and more standard than (2). Some quantitative metric showing that the true and inferred flow fields match would be necessary for publication. **Definition of SSM:** The definition of SSM makes sense in that they are equivalent to iSSM, except for modeling interventions as additive. However, I’m puzzled by the authors’ statement that it is equivalent to Gao et al. for Fig. 4. If this were so, the authors should state it in the paper. Gao et al. doesn’t use LSTMs for the posterior whereas the authors’ model does, so they are not exactly equivalent. **Low-d synthetic experiments:** What about low-dimensional latents with a large number of neurons? This is the kind of synthetic dataset used for validating, e.g., LDS, rSLDS, and LFADS in the literature, so having them would be great. **Dynamic Attractor:** The Dynamic Attractor in Equation (S8) of [1] can generate a ring attractor because (S8) is in polar coordinates. For Equation (6), I don’t think the transformation gives a ring attractor, but a point attractor, as pointed out in my previous review. I think my questions above regarding what a_1 and a_2 were set to were missed by the authors. I’m puzzled by how it is possible to generate flow fields like those in Figure 2, which depict a dynamic attractor like (S8) of [1], with the authors’ Equation (6). If it is indeed the case that dx/dt is non-zero at points other than the origin and the model doesn’t require perturbations to move along the circle, then I think this is inconsistent with the flow field that the authors plot. **Positive observations in Fig. 2C:** If what the authors say is true, is it possible to learn the correct flow field with this low number of trajectories? **Sparsity on B:** Thank you for the clarification.
Does this mean there is an l1 regularization term in the loss function? Having more details in the supplement would be great. **Injective f:** Yes, I agree with the authors that often f can be injective without enforcing it to be injective. The authors mention they can check this doesn’t occur, but have the authors verified that, for the models shown in the results, the learned f is injective? **Reconstruction accuracy:** Is it the correlation coefficient and not the coefficient of determination? **Question 3:** The authors did not address my question of whether the LSTM is running forward/backward, what the size of the LSTM is, etc. Could the authors provide references to back up the claim that “Using LSTMs as the variational family has become standard in the literature of SSMs”? The authors’ response did not answer why, for a stimulus count of 0, the reconstruction accuracy is so low for both SSM and iSSM. Other questions that I had in my original review below were not sufficiently addressed by the authors: >I think it is concerning that there is not enough details in the experiments to reproduce the results. For example, for the real datasets, how was the latent dimensionality chosen? What was the hyperparameter space? How was the hyperparameter optimized? How was the training and test split made? When the authors mention they do cross-validation, how many folds? (e.g., “To find the optimal hyperparameters, we performed cross-validation.” (Line 379) How many folds were used? What fold was used for showing the result in Fig 4D?) >Consistency score is defined only with respect to B. Have the authors evaluated consistency for A? >In Figure 3, the mean latents for correct and incorrect trials seem very similar. Given that the reconstruction accuracies for both SSM and iSSM are less than ~0.5, I’m not sure if the latent trajectories here should be trusted.
To have some confidence, I think it would make sense to fit this dataset to an established model such as LDS with linear observations to see how SSM and iSSM perform with respect to this LDS baseline. >It is odd that the plots in Figure 4A are in terms of firing rates, and not binned spike counts... Could the authors clarify why this is in firing rates? The authors’ response raises further concerns about the validity of the analyses and rigour, and I believe the paper as is does not meet the standards for publication. I adjusted my score to reflect this. --- Reply to Comment 1.1.1: Comment: Thank you for engaging in the conversation; below we include the answers to your additional questions. **Table 1, 2**: We apologize for this; the links were generated by a free website and expired shortly after. Here's the new [link](https://anonymous.4open.science/r/issm-figs-A800/repro.pdf) to both tables. **Recovering true latents**: We included the flow fields in the latent space [here](https://anonymous.4open.science/r/issm-figs-A800/flow-fields-x.png). While the fields aren't perfectly recovered, the general direction of the arrows is consistent. **Definition of SSM**: We misunderstood the original question as asking whether the generative models of iSSM and PfLDS are equivalent under certain configurations. When iSSM uses Poisson observations and excludes inputs, its generative model matches that of Gao et al.'s PfLDS. However, their inference methods and emission architectures differ. In the experiments, all factors were controlled using our implemented codebase, isolating the comparison to interventional versus additive inputs. **Low-d synthetic experiments**: We have included a new [figure](https://anonymous.4open.science/r/issm-figs-A800/motor-6d.png) in the rebuttal which projects the latent dynamics in the synthetic motor experiment into higher dimensions (6 dimensions).
We will include more elaborate high-dimensional results in the appendix if we get accepted. **Dynamic Attractor**: We identified a small typo in the equation for the Dynamic Attractor. Indeed, the equation should be $\frac{dx}{dt} = [a_1 x_1, a_2 (1-x_1)]^{\top}$. All our results and flow fields are based on the correct equation; in fact, the flow fields are programmatically generated by taking a grid and transforming it according to the dynamics. **Sparsity on B**: Indeed, an independent Laplace prior on the elements of $B$ translates into $l_1$ regularization on the matrix elements when computing log-likelihoods. The information about the loss function is already included in the supplementary; we're happy to include a summary of the inference if the reviewer requests. **Injective f**: The injectivity of $f$ requires that different $x_t$'s are mapped to different $f(x_t)$'s. We check this by verifying that the gradient of the learned $f(\cdot)$ is nonzero almost surely. Moreover, the identification of the latents in synthetic experiments suggests that the map is indeed injective. **Reconstruction accuracy**: This is correct; we have included references in the response to reviewer **3PQB** on the usage of this metric in the prior literature; please check our response above. **Question 3**: The LSTM always runs forward. The size of the LSTM is included in the new reproducibility tables linked above. **Stimulus count of 0**: We had to skip some questions due to character limits. When the stim count is nonzero, the observational reconstruction accuracy is always close to 1 for both models. However, the latent reconstruction accuracy, which directly measures the identifiability of the latents, is only close to one for iSSM as the stim count increases. This is the main signature that the added interventional component of the model indeed leads to identifiability. When the stim count is zero, the model does not have access to interventional data and therefore is not able to identify the latents.
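For concreteness, here is a minimal sketch of the correlation-based reconstruction accuracy described above. The rebuttal only specifies "the correlation coefficient between true and reconstructed signals"; averaging across dimensions and taking absolute correlations are our assumptions for this illustration.

```python
import numpy as np

def reconstruction_accuracy(true_signals, recon_signals):
    """Mean absolute Pearson correlation between matched true and
    reconstructed dimensions (rows = time points, columns = dimensions)."""
    true_signals = np.asarray(true_signals, dtype=float)
    recon_signals = np.asarray(recon_signals, dtype=float)
    corrs = [np.corrcoef(true_signals[:, d], recon_signals[:, d])[0, 1]
             for d in range(true_signals.shape[1])]
    return float(np.mean(np.abs(corrs)))

# A reconstruction that differs only by a per-dimension scale and offset
# still scores 1, so the metric measures identifiability up to such maps.
rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 2))
print(reconstruction_accuracy(latents, 3.0 * latents + 1.0))  # 1.0 up to float error
```

This matches the behavior described in the rebuttal: a perfectly identified latent trajectory scores close to 1, while unidentifiable (effectively unrelated) latents score near 0.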
**Reproducibility information**: We have included the reproducibility information in the tables; please let us know if you have further concerns. **Consistency score for A**: We only computed the consistency score to assess the robustness of the latent-to-perturbation-site relationships. Since the perturbation site IDs are fixed, we only need to account for permutations of the latents by aligning the columns of $B$. However, to compute a consistency score for $A$ we would need to permute the full matrix to account for permutation invariances; in the limited rebuttal time we were not able to perform this experiment, but if accepted we will include it in the final revision. **Latents for correct and incorrect trials**: Please notice that the latents start to diverge close to the end of the trial, which hints at the type of information being encoded in the ALM region. The timing at which the latents diverge corresponds to action selection, which is thought to involve ALM. Therefore we believe the divergence of the signals is neuroscientifically meaningful and interesting. Regarding the comparison with LDS, please notice that the model we’re fitting generalizes LDS. Therefore, if LDS is a better fit to the dataset, the inference should learn a simple linear emission. **Figure 4A firing rates**: We apologize for mislabeling the figure; the data shown in Fig. 4A are indeed spike counts and not firing rates. We hope our new results clarify our contributions and address your concerns. We believe our theoretical and algorithmic contributions are relevant to both the neuroscience and causal dynamical systems communities, and we would therefore appreciate your reconsideration of our score.
Summary: This paper provides an extension of the state space model (SSM) to an interventional SSM (iSSM). iSSM is able to causally identify and infer external inputs as interventions on the neural dynamics, while also inferring the latents and reconstructing the observations accurately. Methods, assumptions, and derivations are clearly presented. These good properties are supported by one synthetic and three real-world experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked the claims in detail up to Theorem 3.4. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: Related to a series of SSMs, which are a class of commonly used tools for analyzing neural data and neural dynamics in computational neuroscience. Essential References Not Discussed: / Other Strengths And Weaknesses: * The presentation is great. * The experiments are comprehensive and scientifically interesting. Other Comments Or Suggestions: / Questions For Authors: * What is the definition of reconstruction **accuracy**? People usually use test likelihood, rather than accuracy, to evaluate observation data, at least in many machine learning applications and especially in the neuroscience field. * Is the reconstruction accuracy based on the optimal latent $\boldsymbol x_t$ inferred by the variational distribution? * Have the authors thought more about the choice of variational distribution and its validity? An LSTM has only unidirectional information flow, which might not be powerful enough to serve as the variational distribution. See [this](https://proceedings.neurips.cc/paper/2007/hash/2bcab9d935d219641434683dd9d18a03-Abstract.html) and [this](https://openreview.net/pdf?id=2FKzbEE24s). * Since the SSM model used in this paper is essentially a nonlinear LDS, and there are many LDS variants mentioned at the beginning of this paper, such as SLDS, is it possible to compare iSSM with SLDS?
Comparing with the simple LDS seems a bit weak. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate you and the other reviewers for your time and thoughtful evaluation of our work. We found the feedback to be highly constructive, fair, and encouraging. Below we address your specific questions: **Definition of reconstruction accuracy:** We noticed that the definition of the reconstruction accuracy is missing in the paper. The reconstruction accuracy is defined as the correlation coefficient between the true and reconstructed latents, which can only be computed for synthetic experiments with ground-truth latents. It’s a common metric used in the causal representation learning literature for assessing the identifiability of the latents [1,2,3]. [1] Khemakhem, I., Kingma, D., Monti, R., & Hyvarinen, A. (2020, June). Variational autoencoders and nonlinear ICA: A unifying framework. In International Conference on Artificial Intelligence and Statistics (pp. 2207-2217). PMLR. [2] Khemakhem, I., Monti, R., Kingma, D., & Hyvarinen, A. (2020). ICE-BeeM: Identifiable conditional energy-based deep models based on nonlinear ICA. Advances in Neural Information Processing Systems, 33, 12768-12778. [3] Song, X., Li, Z., Chen, G., Zheng, Y., Fan, Y., Dong, X., & Zhang, K. (2024). Causal temporal representation learning with nonstationary sparse transition. Advances in Neural Information Processing Systems, 37, 77098-77131. **Other variational distributions:** This is a great suggestion; we thank the reviewer for pointing this out. LSTMs have been used in the previous SSM literature [1]. We tried vanilla RNNs and LSTMs, and LSTMs were significantly better. The performance enabled by LSTMs is already sufficient for the identifiability experiments we show in the paper. Unfortunately, due to the limited time we will not be able to provide new results using other variational families, but if they prove beneficial we will incorporate the new results in the final manuscript upon acceptance.
[1] Krishnan, Rahul G., Uri Shalit, and David Sontag. "Deep Kalman Filters." arXiv preprint arXiv:1511.05121 (2015). **Compare against other SSMs:** In all experiments where we compare SSM with iSSM, we use the exact same model, i.e., linear dynamics and nonlinear emissions. The only difference between SSM and iSSM is the interventional formulation. In addition, the focus of our experiments is identifiability, as opposed to mere reconstruction of the data. That said, one can develop new iSSMs with different types of dynamics and emissions using the same framework, such as nonlinear dynamics and nonlinear emissions. Notice that our inference method will still be effective in other scenarios; only the generative model needs to account for the new components. This work attempts to lay the foundation for applying interventional models to neural datasets. While our theoretical results currently do not support nonlinear dynamics, we’ll consider developing theoretical results for more complex models in future work. Thanks again for your time; we look forward to your final assessment. &nbsp; ### **Further responses to reviewer xGnX** **Question 1**: Our main intuition is that the perturbations kick the state of the system outside of the "observational manifold" and force the system to explore regions in the state space that are not visited in the observational regime (we depict this in the schematic in Fig. 1). In addition, our theoretical results suggest that if the true generative model of the data follows our modeling assumptions, using interventional models allows for identifying all model components, which theoretically leads to out-of-distribution generalization. However, in reality the true generative model of the data is far more complex, and our model components only provide an approximation to their true counterparts. **Question 2:** iSSM is equivalent to SSM in the absence of interventional data.
If $\boldsymbol{u}_t = 0$, then the term that involves $\boldsymbol{u}_t$ is ignored and we recover the observational SSM model. It’s only in the presence of interventional inputs, where the latent nodes dissociate from their parents in the graphical model, that we obtain all the benefits we exploited (enabling the model to visit new states, changing the graphical model of the data to provide more information about the model, etc.). **Question 3:** Yes. Using LSTMs as the variational family has become standard in the SSM literature. Some benefits of LSTMs include: (1) they are causal and therefore respect the arrow of time; (2) they are context-dependent and therefore integrate information from far-away time points to inform the model predictions; (3) they are data-efficient and scale well with dimensionality and time. Due to these benefits, and since we observed sufficiently good results, we did not experiment with alternatives. The only alternative we tried was vanilla RNNs, which drastically underperformed LSTMs. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I'll keep my recommendation score. --- Reply to Comment 1.1.1: Comment: Thank you for your time; please feel free to check our new results included in the response to reviewer **xGnX**. We hope our new results improve the quality of the paper.
Efficiently Serving Large Multimodal Models Using EPD Disaggregation
Accept (poster)
Summary: --- score updated from 3 (weak accept) to 4 (accept) after the rebuttal. --- Recently, disaggregated inference was proposed, using separate nodes for prefill and decoding while serving LLMs. This makes it easier to control SLO metrics such as inter-token latency or time to first token. The paper generalizes this setting from LLMs to multimodal models, proposing to additionally disaggregate the encoding phase of images or videos. This idea is very relevant and timely, but also conceptually straightforward. The paper conducts many studies ablating the different aspects of the solution, convincingly making a case for it. The key innovations are a/ asynchronous token transfer between the workers, b/ intra-request parallelism, which further speeds up encoding by running the independent image encodings of a single request across workers, c/ an algorithm to allocate the resources between encoding/prefill/decode, and d/ an approach to dynamically switch the roles. Claims And Evidence: Yes, the empirical claims are supported by experimental results. Methods And Evaluation Criteria: Yes, the evaluation mostly makes sense. I would propose to also look at cost per served request. One criticism of disaggregation is that it is not the most cost-efficient solution. This is fine, and the authors are not necessarily claiming it is, but it could be clearly discussed as a limitation, with data illustrating when this happens. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments seem mostly adequate; see also above. However, the code is not provided. In particular, for the optimized resource allocation it is left unclear how the algorithm works. Supplementary Material: Briefly, to check whether there are more details on the resource allocation. Relation To Broader Scientific Literature: Prior work is appropriately discussed.
The current submission is a timely fit for this evolving field. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The major weakness is that no code is provided. Especially for the optimized resource allocation and the dynamic role switching, this is crucial IMO. There are not enough details provided to even reproduce the results of the paper. Since this paper mainly proposes an elaborate, efficient software system for a rather simple scientific problem, I consider code and reproducibility crucial for this work. Other Comments Or Suggestions: N/A Questions For Authors: Can you please include the code in your rebuttal? Then I will support acceptance. Code Of Conduct: Affirmed. Overall Recommendation: 4
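The intra-request parallelism idea (innovation b/ above) can be illustrated with a toy latency model. This is a minimal sketch; the stage timings and worker counts below are illustrative assumptions, not measurements from the paper:

```python
# Toy latency model of multimodal serving; all timings below are
# illustrative assumptions, not measurements from the paper.

def encode_time(n_images, per_image_s, n_encode_workers):
    """With intra-request parallelism, the images of one request are
    spread across encode workers instead of queued on a single one."""
    waves = -(-n_images // n_encode_workers)  # ceil division
    return waves * per_image_s

def ttft(encode_s, prefill_s):
    """Time to first token = encode + prefill (queueing ignored)."""
    return encode_s + prefill_s

# 8 images at 0.25 s each, prefill at 0.4 s:
serial = ttft(encode_time(8, 0.25, 1), 0.4)    # 2.0 s encode -> 2.4 s TTFT
parallel = ttft(encode_time(8, 0.25, 4), 0.4)  # 0.5 s encode -> 0.9 s TTFT
print(serial, parallel)
```

Under this toy model, adding encode workers only shrinks the encode term of TTFT, which is why stage-specific resource allocation matters.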
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments, recognition of the importance and timeliness of our work, and the positive assessment of our empirical validation. Below we address the main concerns raised: --- ### **Q1: Cost per served request and disaggregation tradeoffs** > _"I would propose to also look at cost per served request. One criticism of Disaggregation is that it is not the most cost-efficient solution... it could be clearly discussed as a limitation with also data illustrating when this happens."_ **Response:** We appreciate this insightful comment and fully agree that **disaggregation is not always the most cost-efficient approach**, particularly in scenarios where **tight Service-Level Objectives (SLOs) are not required** and **compute resources (e.g., GPUs) are limited**. For instance, in **offline batch processing**, where throughput is the sole objective, aggregated architectures can achieve **higher GPU utilization** and benefit from **lower inter-stage communication overhead**, making them more cost-effective. We will **explicitly discuss this limitation in the revised version** and provide a more nuanced view of when disaggregation may or may not be cost-efficient. That said, recent work such as [1] demonstrates that **disaggregation can sometimes yield even better throughput**, especially when the **compute requirements of the stages are significantly different**. By optimizing batching, parallelization, and scheduling strategies, systems like DeepSeek-V3 (which disaggregates prefill and decode) achieve better throughput. Our **Throughput in Offline Settings** analysis in the Supplementary Material supports this finding, showing that disaggregation can reduce computation bubbles and improve utilization when stages are well balanced. 
However, in **cloud-scale, interactive serving systems**, where **individual SLO guarantees (e.g., time-to-first-token, time-per-output-token)** become crucial for user experience, disaggregation is critical for cost efficiency. In such cases, **ensuring SLOs with fully aggregated systems requires significant over-provisioning**, which can lead to **higher overall cost** than disaggregated setups. Disaggregation allows each stage to be optimized independently, reducing interference between stages and yielding better control of per-request statistics like TTFT and TPOT. This is evident from the experiments in Section 4.1: EPD has better SLO attainment than aggregated setups for the same number of GPUs. Hence, for aggregated setups to match the SLOs of EPD, they need to overprovision, resulting in higher overall cost. [1] DeepSeek-AI. DeepSeek-V3 Technical Report. arXiv:2412.19437, 2025. --- ### **Q2 & Q3: Lack of code release, especially for resource allocation and role switching** > _"The major weakness is that no code is provided. Especially for the optimized resource allocation and the dynamic role switching... I consider code and reproducibility crucial for this work."_ **Response:** We understand and agree with the reviewer that **code and reproducibility are vital**, especially for software systems that propose architectural and scheduling innovations. 
We had planned to release the code as part of the camera-ready version (post-acceptance), but in light of the reviewer’s concern, we have **made the code available during the review period** at the following anonymous link: > **Anonymous Code Repository:** [link](https://drive.google.com/drive/folders/1cEyGCPw54EkgBjZs73m-SZq51M2etD0X) (released under Apache license) This release includes: - The **full implementation of EPD**, including stage clusters, worker orchestration, cache management, and bridges - Our **custom resource allocator** with multiple backends (runtime, simulation) - The **dynamic role switching mechanism**, including metrics collection and migration orchestration - Scripts for **reproducing main experimental results** - **Readme file** located at <project_root>/README.md We hope this supports the reviewer’s confidence in the rigor and reproducibility of our work. ---
Summary: To address the negative impact of the multimodal encoding stage on key Service Level Objectives (SLOs), this paper proposes the Encode-Prefill-Decode (EPD) Disaggregation framework, which allocates the encoding, prefill, and decode stages to independent computing resources. Specifically, this work introduces: (1) A new mechanism for caching multimedia tokens to improve transmission efficiency. (2) A novel method for parallelizing encoding loads within the same request. (3) A resource allocation module optimized for disaggregated inference. (4) A dynamic role-switching approach to adapt to changing workload characteristics. Experiments on MiniCPMv-2.6, InternVL2-8B, and InternVL2-26B demonstrate that, compared to non-disaggregated methods, the proposed framework significantly improves memory efficiency, batch processing capacity, image handling capability, KV cache utilization, TTFT, and E2ETP. Overall, this paper presents a clear motivation, an intuitive and well-founded approach, and compelling experimental results, demonstrating strong practical applicability. Claims And Evidence: This paper claims three key contributions: Intra-Request Parallelization (IPR), Resource Allocation Optimization Modeling, and Dynamic Role Switching. After reading the full paper, I think all claims are clear and well-supported by evidence. Methods And Evaluation Criteria: The proposed method is reasonable and its effectiveness is validated using mainstream models and datasets. Theoretical Claims: This paper primarily focuses on engineering-oriented inference optimization and lacks theoretical foundations. However, I believe this does not diminish its value. 
Experimental Designs Or Analyses: The experiments in this paper primarily focus on the following four aspects: (1) End-to-end SLO evaluation (EPD significantly outperforms the baselines). (2) Computational resource utilization efficiency (comparison of maximum supported images and batch sizes). (3) Ablation studies on key modules (IPR, DRS, and optimizer). (4) Analysis of TTFT and TPOT (key inference performance metrics comparison). However, some in-depth analyses are missing in this paper. For example, why does TPOT perform better in the w/o Opt setting compared to w Opt in Table 4? Similarly, why is TTFT lower in the w/o Switch setting in Table 5? I hope the authors can provide reasonable explanations for these phenomena. Additionally, the paper only conducts experiments on the NextQA dataset. How does the proposed method perform on other more popular datasets, such as Video-MME? More comprehensive evaluations would strengthen the validity of the results. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you! We are glad to hear that the **motivation, design, and experimental results were found to be compelling**. Below we address the specific questions raised. --- ### **Q1: Why does TPOT perform better in the w/o Opt setting compared to w Opt in Table 4?** Thanks for pointing this out; here is the clarification. The optimization strategy may sometimes trade off TPOT to improve the global objective -- goodput. In the experiment from Table 4, for EPD with an optimizer, the optimizer's objective is solely to maximize the goodput metric. Therefore, the small regression in average TPOT is due to the optimizer automatically finding a better tradeoff between average TTFT and TPOT, given the workload statistics of encoding, prefill, and decoding. Specifically, the optimizer sacrifices average TPOT only slightly (from 0.025s to 0.031s) while significantly reducing average TTFT (from 4.48s to 2.12s). The final TTFT value of 2.12s falls well below the 3.9s requirement (recall that our experiment setup requires TTFT and TPOT to be at most 3.9s and 0.06s, respectively, as shown in Table 6). --- ### **Q2: Why is TTFT lower in the w/o Switch setting in Table 5?** Thank you for this observation, we’re happy to clarify. The **lower TTFT in the w/o Switch setting** is a direct consequence of its **static resource allocation**: the system remains locked in the initial **5E1P2D** configuration, which was optimized offline for short outputs (50 tokens). This setup overprovisions the **encoding stage**, resulting in a slightly lower TTFT of **1.33s** vs **1.42s** for EPD. However, this comes at a significant cost. The static configuration fails to adapt when the workload shifts to **longer output requests (500 tokens)**, and decoding becomes the primary bottleneck. As a result, the **TPOT deteriorates sharply to 0.12s**, more than **2.4× worse** than EPD (**0.05s**). 
By contrast, the **full EPD system**, equipped with **dynamic role switching**, detects this workload shift and **migrates 3 encoding workers to decoding**, transitioning to a **2E1P5D** configuration. While this reduces encoding capacity and leads to a slightly higher TTFT (**1.42s**), it **significantly improves decoding throughput**, yielding a much lower TPOT and overall faster response. Ultimately, this leads to a **2.2× improvement in average end-to-end latency**: **28.01s for EPD vs. 61.10s for w/o Switch**. This tradeoff highlights the strength of our dynamic reconfiguration strategy: it **prioritizes end-to-end latency** by resolving the true bottleneck in real time, even if that means marginally slowing a particular stage. --- ### **Q3: How does the method perform on other datasets like Video-MME?** Thank you for the thoughtful suggestion. To strengthen the paper, we conducted additional experiments on the **Video-MME benchmark**, which complements NextQA (open-ended questions) by focusing on multi-choice video QA with diverse video lengths. We evaluated **SLO attainment** (TTFT ≤ 3.1s, TPOT ≤ 0.025s) on 100 randomly sampled examples using **MiniCPM-v2.6**, with 64 uniformly sampled frames per video, adhering to the MiniCPM frame setting reported on the Video-MME leaderboard. Results are shown below: **Table 1: SLO Attainment Rate (%) ↑ under Different Request Rates** | Method \ Rate (req/s) | 0.5 | 1.0 | 1.5 | 2.0 | 2.25 | 2.5 | 3.0 | |------------------------|------|------|------|------|------|------|------| | **vLLM** | 70 | 52 | 46 | 29 | 32 | 21 | 3 | | **DistServe** | 72 | 57 | 31 | 28 | 24 | 19 | 0 | | **EPD (ours)** | **99** | **100** | **99** | **87** | **65** | **43** | **9** | **EPD consistently outperforms vLLM and DistServe** across all rates, demonstrating strong generalization to temporal multimodal workloads. This highlights the effectiveness of our proposed EPD beyond the NextQA dataset for temporal multimodal inputs like video. 
We also evaluated **TTFT latency** under varying video lengths (8–64 frames), at 1 req/s: **Table 2: TTFT (s) ↓ under Different Frame Counts** | Method \ # Frames/Video | 8 | 16 | 32 | 64 | |------------------------|------|------|------|------| | **vLLM** | 0.42 | 0.82 | 1.59 | 3.11 | | **DistServe** | 0.42 | 0.81 | 1.54 | 3.08 | | **EPD (ours)** | **0.24** | **0.30** | **0.49** | **1.00** | EPD **consistently achieves significantly lower TTFT latency** than both vLLM and DistServe across all frame counts. Moreover, the **performance gap widens with increasing video length**—for instance, at 8 frames, EPD reduces latency by **42.9%**, and at 64 frames, the reduction reaches **67.5%** compared to DistServe. These results highlight EPD’s **superior scalability and robustness** under increasingly demanding video processing workloads. --- Additionally, feel free to explore our released code at this [link](https://drive.google.com/drive/folders/1cEyGCPw54EkgBjZs73m-SZq51M2etD0X). ---
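As a rough sanity check of the role-switching tradeoff explained in Q2 (Table 5), the reported end-to-end gap follows almost entirely from simple latency arithmetic; the small residual versus the measured averages is presumably queueing. A sketch, using only the TTFT/TPOT numbers quoted above:

```python
# Approximate end-to-end latency from the Table 5 numbers quoted in Q2.
# e2e ≈ TTFT + TPOT * output_tokens (a simplification that ignores queueing).

def e2e(ttft_s, tpot_s, out_tokens):
    return ttft_s + tpot_s * out_tokens

static = e2e(1.33, 0.12, 500)   # w/o Switch (5E1P2D): ~61.3 s vs. 61.10 s reported
dynamic = e2e(1.42, 0.05, 500)  # EPD (2E1P5D): ~26.4 s vs. 28.01 s reported
print(static, dynamic, static / dynamic)
```

The back-of-the-envelope ratio matches the reported ~2.2× improvement: paying 0.09 s of TTFT to remove the decode bottleneck dominates for 500-token outputs.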
Summary: After rebuttal: Thank the authors for the detailed comments. I'm keeping my recommendation of 3. ---------------------------------- The authors propose a novel Encode-Prefill-Decode (EPD) disaggregation framework for Large Multimodal Model (LMM) inference. The proposed approach decouples the encoding and prefill stages, so that GPU resources can be allocated more efficiently based on the jobs' encode, prefill, and decode stages. The EPD framework includes intra-request parallelization (IPR) to shard a request into independent sub-requests, an optimizer to handle resource allocation, and a dynamic role switching mechanism to monitor the bottlenecks and switch the role of a GPU if needed. Evaluations are conducted on MiniCPM-V 2.6, InternVL2-8B, and InternVL2-26B using synthetic workloads and the NextQA benchmark dataset on a cluster of 8x NVIDIA A100 GPUs, achieving a 71% reduction in time to first token (TTFT) and a 57% improvement in end-to-end throughput (E2ETP). Claims And Evidence: Overall I find the claims clear and convincing. The authors propose the EPD framework to address the LMM inference efficiency issue brought by the additional encoding stage to process raw multimodal inputs (compared to LLMs), which could require substantial GPU resources to encode them into tokens. The experiments show that the proposed EPD framework can improve multiple memory and latency metrics. Ablation studies are also conducted to show the effectiveness of each component. I am not a big fan of the term Large Multimodal Model being used in the paper (I understand that InternVL calls themselves LMM). I think Vision Language Models would be more appropriate, as the advantage of the proposed EPD framework is largely based on the efficiency of the image encoding stage enabled by parallelism. We don't know if the same efficiency can be achieved for other modalities, e.g., audio. 
Methods And Evaluation Criteria: The authors use both synthetic datasets and the NextQA benchmark dataset for evaluation, which makes sense to me. Metrics such as the supported number of images per request, the batch size, TTFT, TPOT, SLO attainment rate, and goodput are used to evaluate the proposed EPD framework, and they are relevant to the claims being made. Theoretical Claims: N/A. This is an application paper. Experimental Designs Or Analyses: The authors mainly compare the proposed approach with DistServe [Zhong et al., 2024], which implements the prefill-decode (PD) disaggregation framework for LLMs (so no/little encoding compared to this paper). Overall the experimental designs are sound to me. In multiple experiments the authors use a resolution of 4032 x 3024, which arguably favors the proposed EPD approach due to the encoding cost. Supplementary Material: I reviewed Appendix A and B for optimizer and experiment implementation details. Relation To Broader Scientific Literature: I'm not an expert in this area. The paper can feel like a natural extension of the DistServe framework (plus Intra-Request Parallelism) to LMMs, except that the proposed EPD framework additionally takes the encoding stage into account. To me the results are sound and decent but not surprising. A lot of this is based on the fact that image and video inputs can be patchified and parallelized efficiently, and I wonder if this still holds for more sequential types of modalities. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: The contributions of the paper are decent but not surprising. To be fair, I feel VLM would be more appropriate in the title compared to LMM, as the advantage of the proposed EPD framework is largely based on the efficiency of the image encoding stage. We don't know if the same efficiency can be achieved for other modalities, e.g., audio. Other Comments Or Suggestions: N/A. 
Questions For Authors: - Regarding Eq. (2), I think this falls into formalism and I cannot get much insight from it. I wonder if the authors can provide a concrete example. How should one use the formula in practice? How would the authors define the search space in their experiments? - Have the authors considered other modalities such as audio in the evaluation? If not, do you think the proposed EPD framework can be generalized to other modalities? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we address the specific concerns. --- ### **Q1: Use of the Term “LMM” and Generalization Beyond Vision-Language Models** We deliberately use the term **LMMs** instead of “VLMs” to reflect the **general applicability of EPD across modalities**, not just vision. EPD is designed for any model that consists of a modality-specific encoder followed by an LLM decoder. This includes **audio**, **video**, and other input types where encoders operate independently on local patches or chunks. Even for sequential modalities like audio or video, temporal structure is typically **reconstructed by the LLM using positional embeddings (e.g., RoPE)**—not by the encoder—making **intra-request parallelism and stage disaggregation effective across modalities**. To support this, we have extended our experiments to the **audio domain**, using the `ultravox-v0_3` model (`LLaMA3.1-8B` backbone) in an **online, encode-heavy setting** with **24 audio files per request**. Table 1 shows that EPD **outperforms both vLLM and DistServe across request rates**, achieving **higher SLO attainment** and **higher goodput**. **Table 1. Online Audio Benchmarking: SLO Attainment Rate (↑)** *SLO targets: TTFT ≤ 2.0s, TPOT ≤ 0.025s* | Method (Config) \ Rate (req/s) | 0.1 | 0.25 | 0.5 | 1 | 1.1 | 1.15 | Goodput (r/s) | |------------------------|------|------|------|------|------|------|------| | **vLLM** (4D) | 0.99 | 1.00 | 0.99 | 0.91 | 0.87 | 0.87 | 1.01 | | **DistServe** (3P1D) | 0.99 | 0.94 | 0.89 | 0.72 | 0.69 | 0.68 | 0.45 | | **EPD (ours)** (2E1P1D) | **0.99** | **0.99** | **1.00** | **0.96** | **0.93** | **0.93** | **1.16** | These results, along with our video experiments (see response to Reviewer QF6q), confirm that **EPD generalizes effectively beyond vision**, justifying the broader **“LMM”** terminology. We will make this clearer in the final version. 
--- ### **Q2: Perceived as a Natural Extension of DistServe with Added Encoding** Our system is inspired by DistServe’s disaggregation approach, but **disaggregating encoding from prefill** presents **unique challenges and opportunities** beyond prefill-decode separation. - First, we identify **Intra-Request Parallelism (IRP)** as crucial for reducing TTFT in encoding-heavy workloads. Otherwise, the benefits of disaggregation can be offset by communication overhead, since both E and P stages have similar characteristics (compute-bound). - Second, **optimized resource allocation** is essential, as different LMMs exhibit diverse compute profiles (e.g., LLaVA is prefill-heavy, while MiniCPM is encoding-heavy), requiring stage-specific tuning of resources. - Third, in dynamic workloads, growing **pipeline imbalance can nullify IRP gains**, making **dynamic role switching** vital to maintain high throughput and efficiency. These system-level insights are foundational to the design and performance gains demonstrated by EPD. --- Additionally, feel free to explore our released code at this anonymized [link](https://drive.google.com/drive/folders/1cEyGCPw54EkgBjZs73m-SZq51M2etD0X). --- ### **Q3: Formalism of Equation (2), concrete example, and Search Space** Here is an explanation, and we will expand on it in the appendix and include examples in the released code. Equation (2) is a **general formulation** for expressing optimization objectives in a disaggregated inference setting. It captures the trade-off between system performance (e.g., throughput or SLO attainment) and deployment cost, and is intentionally designed to flexibly adapt to different deployment scenarios. Below, we explain two practical real-world use cases: **1. Offline Optimization:** Consider a small business that wants to deploy with 8 GPUs; the user’s goal is to maximize **goodput** under a fixed hardware budget. 
In this case, the cost term in Equation (2) is constant (since all GPUs are active), and the objective simplifies to maximizing throughput. The user collects historical workload traces and feeds them to our resource allocation module. The optimizer searches over: (1) number of instances per stage (E/P/D), (2) degree of parallelism (IRP, TP, PP), (3) batch sizes, and (4) scheduling strategies. This yields the best static deployment setup for the given workload. **2. Online Serving with Autoscaling:** In dynamic cloud settings, Equation (2) incorporates both **performance (e.g., SLO attainment)** and **cost (e.g., number of active workers)**. The objective might be to balance **SLO attainment** (e.g., 95% of requests meeting latency targets) against **cost** (e.g., number of active GPU workers). The system collects request traces periodically and re-solves Equation (2) to adapt to changes in workload—for example, shifting from 3E2P3D to 2E1P5D when decoding becomes the bottleneck. ---
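The offline use case above can be made concrete with a toy brute-force search over (E, P, D) splits of 8 GPUs, in the spirit of Equation (2). The `goodput` model below is a stand-in we invented for illustration; the actual optimizer consumes workload traces and also searches parallelism degrees, batch sizes, and scheduling strategies:

```python
from itertools import product

# Toy stand-in for the performance model: the per-request stage times are
# invented numbers (0.5 s encode, 0.3 s prefill, 0.05 s decode per request).
def goodput(e, p, d, enc_s=0.5, pre_s=0.3, dec_s=0.05):
    # A pipeline's sustainable rate is capped by its slowest stage (req/s).
    return min(e / enc_s, p / pre_s, d / dec_s)

def best_split(total_gpus=8):
    # Enumerate every (E, P, D) assignment with at least one worker per stage.
    candidates = [(e, p, d)
                  for e, p, d in product(range(1, total_gpus), repeat=3)
                  if e + p + d == total_gpus]
    return max(candidates, key=lambda c: goodput(*c))

print(best_split())  # (4, 3, 1) under these invented stage times
```

Swapping in decode-heavy stage times shifts the optimum toward more D workers, mirroring the 3E2P3D to 2E1P5D shift described above.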
GSM-$\infty$: How Do your LLMs Behave over Infinitely Increasing Reasoning Complexity and Context Length?
Accept (poster)
Summary: This paper introduces a new benchmark for testing the performance of LLMs on long-context reasoning. In particular, the authors start from the abstract computational graphs of the GSM-8K problems and develop the GSM-Infinite benchmark. With the constructed benchmark, the authors observe a consistent sigmoidal decline in reasoning performance as the complexity increases. Claims And Evidence: This paper argues that a long-context reasoning benchmark should offer controllable and scalable complexity, natural noise, and support for infinite data generation. The first two points are somewhat realized in GSM-Infinite, but the third point is less clear. The authors have only expanded based on GSM-8K as a seed, and it is not evident whether the proposed method can be applied to various types of mathematical benchmarks. Methods And Evaluation Criteria: The methods and evaluation criteria are solid. I did not find any obvious flaws in the method and experiment sections. Theoretical Claims: Not applicable. This paper does not contain any theoretical claims. Experimental Designs Or Analyses: The experimental designs based on GSM-Infinite are sound. I also checked the experimental details in the appendix, and I did not notice any obvious flaws. Supplementary Material: I have carefully reviewed the appendix, including the related works, detailed experimental setup, full results, and additional analysis experiments. Relation To Broader Scientific Literature: This paper proposes GSM-Infinite, a synthetic dataset for testing reasoning capabilities over long contexts. It serves as a complement to existing mathematical benchmarks and can test a model's reasoning abilities in longer contexts or with introduced noise. Essential References Not Discussed: I did not notice any issue of missing references. 
Other Strengths And Weaknesses: This paper proposes GSM-Infinite, which synthetically constructs long-context questions based on GSM-8K as seed problems. Neither the dataset itself nor the experiments based on the dataset have significant issues. However, as the authors discussed in Appendix A.4, there are many synthetic methods for expanding seed questions into long-context reasoning tasks with controllable quality. The pipeline designed by the authors is also rather tailored to GSM-8K, which raises concerns that the novelty and significance of this paper are limited. Therefore, I consider this paper to be at the borderline level. Other Comments Or Suggestions: It appears that the construction of GSM-Infinite is based on computational graphs, which can be applied to a broader range of mathematical and reasoning problems. Have you considered constructing multi-source datasets under this method to build “infinite” datasets, which could lead to more universal conclusions? Questions For Authors: Have you manually checked the synthesized questions? Are there any unnatural cases? Have you conducted any statistics on the diversity and quality of the questions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Concern about Practicality of GSM-Infinite** We appreciate the reviewer’s concern about the practicality of GSM-Infinite, as this question is crucial for understanding why we believe GSM-Infinite is helpful for researchers. We would like to respectfully clarify that, unlike the “many synthetic methods for expanding seed questions into long-context reasoning tasks with controllable quality,” GSM-Infinite introduces two techniques that offer important additional edges over past benchmarks. First, unlike prior work that expands GSM-8K seeds, our test examples are generated entirely from scratch using graph-based generation—no GSM-8K questions or data are used (Section 3, Pages 3–5). GSM-8K is saturated in both quantity and difficulty (Figure 5(a), Page 4), with only ~10 examples per op>8. In contrast, GSM-Infinite offers >500 test cases per op, scaling up to op=200+, saturating SOTA LLMs. Second, in prior long-context reasoning benchmarks built by expanding seeds, the noise can often be filtered out by retrieval (RAG), making them poor tests of long-context LLMs (Figure 4(a), Page 4). Our graph-based expansion produces problems where noise is indistinguishable from essential information (Figure 4(b), Page 4), rendering RAG ineffective. GSM-Infinite scales both reasoning and context length while being fully LLM/human-free. Moreover, we believe that GSM-Infinite offers the reasoning community a new tool that evaluates LLMs solely on reasoning over complex graph traversal, without testing their memorization of subject-specific knowledge. Recent LLMs often report metrics on the MATH or AIME datasets, since prior datasets such as GSM-8K have been saturated. Although these datasets have strong reasoning complexity, solving their test examples convincingly requires a lot of math-related theorem knowledge in LLMs’ memory. We show in (Table 1, page 7) that even powerful reasoning models see their performance decay to zero as we gradually increase the ops. 
GSM-Infinite offers an alternative reasoning benchmark that is drastically less reliant on LLMs’ memory, evaluating all LLMs solely on their ability to traverse a complex computational-graph problem, with fine-grained difficulty control for study. **Extension to a multi-source “infinite” dataset** Thank you for this insightful suggestion. Our current implementation, GSM-∞, focuses specifically on modeling the explicit arithmetic operations (+,-,×,÷) and implicit hierarchical relationships found in GSM-8K using computational graphs. The primary goal was to create a scalable benchmark for grade-school math reasoning with controllable complexity and context length. While the graph representation is potentially applicable to a broader range of problems, our current work does not aim to build a universal "infinite" dataset covering all reasoning types. Extending this methodology to model other complex structures, like code syntax trees or spatial relationships, is a promising direction for future research. **Dataset Quality Inspection and Overview** Questions were constantly evaluated during development via manual inspection and by feeding them to SOTA LLMs to ensure they were natural, solvable, and LLM-understandable. SOTA models demonstrate high accuracy (>88% for o3-mini on Hard op 30) on zero-noise, lower-op problems, indicating fundamental understandability. |op|Forward|Reverse|Score| |---|---|---|---| |10|1.00|0.96|0.98| |20|0.98|0.90|0.95| |30|0.94|0.84|0.89| Errors observed stemmed from model reasoning limitations (e.g., confusing concepts, hallucination, misinterpreting relationships), not ill-formed prompts. We collect examples in https://docs.google.com/document/d/1WP3ygB67yNUS-iliSYYjFRY4Ls81yIVcRVxpbTVTZ0o/edit?usp=sharing Part 4. Moreover, we present a table giving an overview of the composition of the dataset (submitted version). 
| Subset | Template(s) |
|---|---|
| **Symbolic** | Uniform template with variable names “V_number” |

| **Medium and Hard** | **Crazy Zootopia (80%)** | **Teachers and School (10%)** | **Movie Awards (10%)** |
|---|---|---|---|
| Forward (50%) | 40% | 5% | 5% |
| Reverse (50%) | 40% | 5% | 5% |

The amount of variation permitted in each of our templates differs: it is richest for "Crazy Zootopia" and slightly more limited for the other two templates. We use an uneven partition of templates for the ensemble of our dataset. We are actively expanding the number of available templates for GSM-Infinite, now 10 (up from 3 at submission time), which will be released once published. **Please note that the template proportions are fully reconfigurable, and switching to a different template does not lead to large differences in performance.** For detailed ablation experiments on templates, please refer to (Appendix F, pages 19-20).
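To illustrate what graph-based, from-scratch generation can look like for the Symbolic subset's uniform “V_number” template, here is a minimal sketch of our own; it is not the authors' pipeline (real GSM-Infinite problems also add hierarchical relationships and noise nodes for context-length scaling):

```python
import random

# Minimal from-scratch generator over a random computation DAG; this is an
# illustrative reconstruction, not GSM-Infinite's actual generation code.
def generate_problem(num_ops, seed=0):
    rng = random.Random(seed)                  # deterministic per seed
    values = {"V_0": rng.randint(1, 9)}        # root fact of the graph
    lines = [f"V_0 = {values['V_0']}."]
    for i in range(1, num_ops + 1):
        pool = sorted(values)
        a, b = rng.sample(pool, 2) if len(pool) > 1 else (pool[0], pool[0])
        op = rng.choice(["+", "*"])
        values[f"V_{i}"] = values[a] + values[b] if op == "+" else values[a] * values[b]
        lines.append(f"V_{i} = {a} {op} {b}.")
    return " ".join(lines), f"What is the value of V_{num_ops}?", values[f"V_{num_ops}"]

context, question, answer = generate_problem(num_ops=5)
print(context, question, "Answer:", answer)
```

Because the graph is generated first, the op count (reasoning complexity) and the number of extra nodes (context length) can be dialed independently, which is the controllability the rebuttal emphasizes.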
Summary: The paper introduces ​GSM-∞, a synthetic benchmark for evaluating long-context LLMs on grade-school math problems. It generates problems with ​controllable complexity (via computation graphs) and ​arbitrary context length (via "spider topology" noise). The paper finds that: 1. ​Sigmoid performance decay as reasoning complexity increases. 2. Reverse problems (backward reasoning) are harder than forward ones. 3. ​Exponential inference cost yields only linear performance gains. Claims And Evidence: Mostly yes. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. The paper tests 18 models on zero-noise tasks and 10 on long-context. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper is related to GSM8K, RULER, and LongBench. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper isolates reasoning from retrieval. 2. The ablation analysis is comprehensive. Weaknesses: 1. Lack of ablation study on different graph structures. 2. Limited Analysis when comparing models. 3. Insufficient discussion about compute costs. Other Comments Or Suggestions: 1. The explanation of the computation graph approach can be improved by introducing more detailed breakdowns. 2. Add more qualitative analysis of model errors. Questions For Authors: 1. In section 3.2, how different distributions of noise impact performance? Can a carefully designed RAG system skip it? 2. How different types of graph structures impact model performance? 3. The paper finds that LLMs perform worse on reverse problems than forward ones. Whether it is due to lack of training or data in the training set. Can it be effectively improved by providing more similar data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Additional figures and tables in the link (anonymous): https://docs.google.com/document/d/1WP3ygB67yNUS-iliSYYjFRY4Ls81yIVcRVxpbTVTZ0o/edit?usp=sharing (referred to as **Sup**) **1. No ablation on different graph structures.** Thank you. We agree studying the impact of graph structure is valuable. We ran Qwen 2.5 7B IT on 40k Symbolic (op=9) examples to analyze the relationship between graph depth and accuracy. Depth is the max node distance from the root; mean depth is the average across nodes. We categorized samples by max depth (Depth=2 to 6). Results confirm deeper graphs are harder. Accuracy decreases as depth increases: |Depth|2|3|4|5|6| |---|---|---|---|---|---| |Accuracy|.438|.386|.347|.317|.257| Further analysis shows accuracy decreases as *mean depth* increases. When mean depth is similar, max depth has little impact, suggesting overall complexity (mean depth) is the primary factor. A plot confirms this trend across depth settings: **Sup** Part 1. **2. Limited Analysis between models.** Acknowledged. The data below are from the Hard subset, based on Table 1 (p7): * **Reasoning Models:** Show significant gains over non-reasoning counterparts with identical parameters/architecture. |op|DeepSeek R1|DeepSeek V3|Qwen QWQ 32B Preview|Qwen 2.5 32B Instruct (non-reasoning)| |---|---|---|---|---| |10|.95|.79|.64|.61| |20|.94|.60|.60|.36| |30|.95|.36|.26|.12| |40|.92|.16|.13|.04| |AUC|8573.8|2407.9|2846.2|1399.8| * **Model Size:** Within families, larger models perform better. |op|Qwen2.5 72B Instruct|Qwen2.5 7B Instruct|Llama 3.1 70B Instruct|Llama 3.1 8B Instruct| |---|---|---|---|---| |5|.95|.59|.73|.52| |10|.82|.29|.64|.40| |15|.76|.18|.49|.19| |20|.64|.05|.33|.07| |25|.49|.03|.26|.06| |30|.37|.00|.04|.02| |AUC|2016.375|618.500|1205.250|606.500| * **Architecture:** Current hybrid models (Linear Attention/SSM) lag behind comparable Transformer models. 
|Models|Architecture|Parameter Size|AUC|
|---|---|---|---|
|Qwen-2.5-32B-Instruct|Transformer|32B|1405.055|
|MiniMax-Text-01|Hybrid (Linear Attention)|456B MoE (45.9B/token)|1178.510|
|Jamba-1.5-Large|Hybrid (SSM)|398B MoE (98B/token)|466.400|

**3. No cost discussion.** We compute the cost for Llama-3.1-70B below as a reference. We evaluated 100+ examples per op across 3 subsets (we use 100 for this estimate), and up to 40-55 ops per subset (sampling frequency decreases after op 30). The total zero-noise evaluation (~9300 examples) requires ~9.8M input / ~39.2M output tokens. Using competitive (DeepInfra) API pricing, costs are:

|Context|Cost|
|---|---|
|Zero|$12.94|
|8K|$5.83|
|16K|$7.49|
|32K|$12.60|
|**Total**|**$30.43**|

Larger/reasoning models cost more. With additional tables and elaboration in **Sup** Part 2, we found that using strides (evaluating every N ops) up to 4 has minimal impact (<3% AUC change) on results. Reducing the number of samples per op (e.g., 100 or 50) also yields similar AUC, allowing further cost reduction. **4. Need more detailed graph generation process.** Acknowledged. We will make sure to add greater and finer step-by-step detail about the generation process in the revised version of the paper. **5. No model error discussion.** We analyzed errors for Llama-3.1-8B and Qwen-2.5-7B on op=5 problems. Examples are available in **Sup** Part 4. Common error types:

|Error Type|Example Indices (Internal)|
|---|---|
|Confuse similar concepts|(1), (8), (10)|
|Hallucinate concepts not in question|(2), (4), (9)|
|Distracted by irrelevant variables|(3)|
|Misinterpret relationships|(5), (6), (7)|

**6. Noise distribution impact & RAG.** Our ablation (Sec 5.3, Fig 7c/d) shows model performance is robust to noise types, but RAG struggles significantly with our "Spider Topology" noise. Our RAG baseline is strong (all-mpnet-base-v2 retriever, Llama-3.1-70B decoder, 2k token budget, strong benchmark performance). RAG fails because it uses contextual (semantic similarity) search.
Spider Topology adds irrelevant nodes that are semantically close to and connected to core graph nodes. RAG's fixed budget means noise can displace relevant chunks. RAG retrievers evaluate chunk relevance contextually, lacking the deep reasoning needed to identify truly necessary information, which attention layers possess. Contextual information alone is insufficient. **7. Forward vs. Reverse Problems.** The difference isn't the arithmetic ops but the implicit relationships (Sec 4.3, Fig 6). Forward problems use class-instance/hierarchical dependencies (concrete -> abstract). Reverse problems provide abstract values to find unknown instance values (abstract -> concrete). Most LLMs perform better on forward problems (App E), hypothesized to be due to training-data bias towards constructive logic. We fine-tuned Qwen-2.5-500M with 1.3M generated problems (equal forward/reverse). The base model showed forward > reverse initially. Fine-tuning significantly boosted overall performance, with reverse slightly outperforming forward, confirming training data can address this. Detailed tables before and after fine-tuning are in **Sup** Part 3.
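The forward (concrete -> abstract) versus reverse (abstract -> concrete) traversal described above can be sketched in a few lines. This toy example is our own illustration, not the paper's generator; the two-instance zoo graph and function names are hypothetical:

```python
# Toy illustration (not the paper's generator) of forward vs. reverse
# reasoning over a tiny class-instance graph: concrete instances
# ("pigs", "sheep") aggregate into an abstract total ("animals").

def solve_forward(instances):
    """Forward logic: concrete instance values -> abstract total."""
    return sum(instances.values())

def solve_reverse(total, known_instances):
    """Reverse logic: abstract total -> the single unknown instance value."""
    return total - sum(known_instances.values())

if __name__ == "__main__":
    # Forward: #pigs = 2, #sheep = 4 -> total animals = 6
    print(solve_forward({"pigs": 2, "sheep": 4}))  # 6
    # Reverse: total = 6, #pigs = 2 -> #sheep = 4
    print(solve_reverse(6, {"pigs": 2}))           # 4
```

The reverse direction requires an implicit subtraction (total minus known instances), which matches the rebuttal's point that reverse problems introduce implicit minus/division operations absent from forward problems.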
Summary: This paper introduces GSM-∞, a synthetic long-context reasoning benchmark generated entirely by an automatic system with fine-grained control over complexity and information density. Specifically, it generates the benchmark by modifying operations like "+ -" in the computational graphs of existing benchmarks, which makes the generation method easily scalable. Extensive experiments on this benchmark further lead to several findings that are useful for further research in this direction. ## update after rebuttal In the rebuttal, the authors answered my questions and provided an analysis that addresses my previous concerns. Therefore, I increased my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked the correctness of such proofs and claims. Experimental Designs Or Analyses: Yes, I checked the experiments in the corresponding sections. Supplementary Material: Yes, I checked the supplementary material. Relation To Broader Scientific Literature: The key contribution of the paper is related to a certain range of literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The proposed benchmark generation method is directly applied to the computational graphs of existing benchmarks, which makes it easily scalable in an automatic way. 2. The paper is written with good flow, and experimental results have demonstrated the effectiveness of the proposed benchmark. 3. Several interesting findings, such as the discussion on repeated sampling, are presented. Weaknesses: 1. There are no obvious weaknesses in this paper, except for a few typos that are listed below. Other Comments Or Suggestions: 1. Line 209, ” ->“ 2. Line 431, the reference is not working Questions For Authors: Have the authors tried other reasoning relationships that can be obtained through the graph, like entailment? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to other reasoning relationship question** We want to express our great appreciation to the reviewer for the positive feedback on GSM-Infinite. The question raised is also highly insightful and greatly appreciated. We want to clarify that the goal of GSM-Infinite is to model all the relationships that appear in GSM-8K with a graph representation, an abstraction that is easier to manipulate and perturb for scaling up reasoning complexity and context length. GSM-Infinite models all explicit operations (plus, minus, multiply, and divide) and the implicit operations with hierarchical instance dependency. For example, “pigs” and “cats” are specific instances under the class “animal”, which GSM-Infinite can model well. (Is that the entailment relationship you are referring to?) On the other hand, GSM-Infinite by no means captures all the relationships possible through graph representations. We believe exploring these alternative complex relationships through a graph is also highly interesting and insightful. We will leave these explorations of further complex graph-to-natural-language mappings to future work.
Summary: The paper introduces a new benchmark designed to evaluate the reasoning capabilities of Large Language Models (LLMs) in long and complex contexts. The benchmark is inspired by the abstraction of GSM-8K problems as computational graphs and introduces a methodology to generate grade-school math problems with infinitely scalable difficulty and context length. The authors evaluated existing LLMs, revealing a consistent sigmoid decline in reasoning performance as complexity increases, and an inefficiency in performance gains relative to inference computation scaling. Claims And Evidence: Mostly yes. The first remaining concern is that the current dataset fits a rather simple pattern. It is unclear whether optimizing the prompt or enabling certain tool use, like code execution, would change the performance and the corresponding conclusions. Some failure-mode analysis of a few LLMs would be helpful to understand the possible flaws of the dataset. The second concern is that it is unclear how the proposed dataset reflects real use cases of reasoning models. Methods And Evaluation Criteria: No concerns. Theoretical Claims: The authors claimed infinite length and mentioned a reduced probability of multiple solutions, but some formal analysis of the cost of generating a valid question w.r.t. length and the actual probability of such cases should be provided. Experimental Designs Or Analyses: Please check above for details. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: Please check above for details. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Concern that GSM-Infinite is not practical** We appreciate the reviewer’s concern about GSM-Infinite’s practicality. While GSM-8K has been a standard for evaluating LLM math reasoning, its difficulty has become saturated (Figure 5(a), Page 4), which is why newer models have moved on. GSM-Infinite builds on GSM-8K by abstracting its core operations into graph representations and using graph perturbation to scale both reasoning complexity and context length. Our pipeline is entirely LLM- and human-free, generated independently of GSM-8K (Section 3, Pages 3–5). By scaling up to op=200 and above, we saturate SOTA LLMs and capture their full spectrum of reasoning ability for study. On the other hand, recent LLMs often report metrics on the MATH or AIME datasets. Although these datasets have strong reasoning complexity, solving their test examples convincingly requires substantial math-theorem knowledge stored in the LLM's memory. GSM-Infinite offers an alternative reasoning benchmark that is drastically less reliant on the LLM's memory, evaluating all LLMs solely on their ability to traverse a complex computational-graph problem, with fine-grained difficulty control for study. Because of space limitations, we provide five examples at op=5 for both Llama-3.1-8B-Instruct and Qwen-2.5-7B-Instruct at the following link: https://docs.google.com/document/d/1WP3ygB67yNUS-iliSYYjFRY4Ls81yIVcRVxpbTVTZ0o/edit?usp=sharing. We select a smaller operation setting with a smaller model to keep the complexity of the problems low enough for practical human inspection. Below is a summary of error causes and the corresponding indices of the examples.
| Error Type | Index of the examples |
|---|---|
| Confuses two similar but distinct concepts in the question | (1), (8), (10) |
| Hallucinates a concept that the question does not mention | (2), (4), (9) |
| Gets distracted by unnecessary variables in the question | (3) |
| Misinterprets the relationship mentioned in the question | (5), (6), (7) |

**Generation Throughput** During generation, we strictly constrain the number range on the solution path and disallow negative or floating-point numbers in arithmetic operations. These constraints make generating the Hard subset of GSM-Infinite significantly slower than Symbolic and Medium, as Hard requires heavy use of aggregation and of implicit multiplication and addition when op is large, which quickly inflates numbers and reduces flexibility compared to Symbolic, where operations can be mixed more freely. Despite this, our system achieves far superior throughput on the Hard subset compared to any LLM- or human-in-the-loop approach. Using 16 threads on an M3 Pro, we reach 1.1M tokens/s for examples averaging 1k tokens (op=50), a speed that is hard to match with LLM-based generation.

| op | Throughput (Valid Examples/s) |
|---|---|
| 10 | 5510.88 |
| 20 | 6747.20 |
| 30 | 1919.68 |
| 40 | 2228.00 |
| 50 | 1149.92 |

As the op number goes up, throughput goes down because the chance that a sampled operation passes the strict rejection-sampling criteria decreases, causing some wasted computation. Just for reference, the much more flexible Symbolic subset can generate problems at 30k examples/s with 16 threads on CPU, since we have more precise control of every operation in the Symbolic benchmark.
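The rejection-sampling constraints described above (bounded values, no negatives, no floats) can be sketched as a resampling loop. This is our own illustrative reconstruction, not the authors' released generator; the function name, bounds, and operand ranges are all assumptions:

```python
import random

# Illustrative sketch (our reconstruction, not the authors' generator) of
# rejection sampling for one arithmetic step: keep resampling until the
# result is a non-negative integer within the allowed range.

def sample_valid_step(left, max_value=1000, max_tries=100):
    """Sample an operation applied to `left` whose result passes all checks."""
    for _ in range(max_tries):
        op = random.choice(["+", "-", "*", "/"])
        right = random.randint(1, 20)
        if op == "+":
            result = left + right
        elif op == "-":
            result = left - right          # may go negative -> rejected below
        elif op == "*":
            result = left * right
        else:
            if left % right != 0:          # would produce a float -> reject
                continue
            result = left // right
        if 0 <= result <= max_value:       # no negatives, bounded range
            return op, right, result
    return None  # candidate rejected entirely: wasted computation

print(sample_valid_step(12))
```

Rejected candidates are where the "wasted computation" mentioned in the rebuttal comes from: as op grows, more sampled steps fail these checks and throughput drops.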
We think the need for strong rejection-sampling criteria is an implementation challenge; a more careful and more efficient graph-control implementation is certainly possible and likely to lead to further improvements in throughput.

|Context Length|Throughput (Valid Example/s)|
|---|---|
|Zero noise|6747.20|
|8K|3896.48|
|16K|3932.96|
|32K|3898.08|
|64K|3917.60|
|128K|3954.18|

The table above shows throughput at op=20 for increasing context lengths. The sudden drop in throughput from zero noise to 8k occurs because, when injecting noise up to a specific context-length requirement, we must first tokenize the zero-noise example and calculate how many noise statements are needed. This extra tokenization step causes a slowdown in generation throughput. On the other hand, since we impose far less strict checking on the noise statements, generating noise is highly efficient and parallelizable across multiple threads. This demonstrates our generator's strong context-length flexibility. --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the rebuttal, but the practical value of the proposed benchmark is still concerning for the reviewer. Although the authors argue that GSM-8K has been widely used, the reviewer still finds the proposed dataset distant from real use cases of long reasoning processes, where much broader knowledge or skill is needed and it is intellectually challenging to reach certain intermediate steps. This is not the case with the currently proposed evaluation, which is what the reviewer summarized as "far from real use cases". The reviewer keeps the score because the current simple pattern could still be useful to probe long-context ability. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's follow-up comments. We would kindly rebut that synthetic long-context benchmarks are highly popular today (Needle-in-a-haystack, RULER, etc.).
These tasks evaluate fundamental abilities of the LLM that are needed for more realistic downstream tasks. Needle-in-a-haystack, for example, evaluates the fundamental ability of an LLM to retrieve a certain chunk in the context given the query. Failure to do so suggests the LLM is fundamentally unable to tackle tasks that are more realistic and intellectually challenging. **The synthetic tasks have helped many recent LLM developments better benchmark their models' long-context abilities and have become universally accepted by frontier LLMs for demonstrating long-context ability** (e.g., Google Gemini 1.5 and later models). For GSM-Infinite, we first identify that although models are getting perfect scores on Needle-in-a-haystack, RULER, LongBench, etc., **this is far from claiming that the models are fundamentally strong at long-context tasks**. As we have shown (Section 2, page 3), across most popular long-context benchmarks, RAGs, which are much cheaper at inference than LLMs, have performance on par with, or even better than, LLMs. We need a harder benchmark that evaluates the true value of LLMs over cheap RAG methods to further facilitate long-context LLM development. **We have also shown that constructing a harder benchmark isn't straightforward**: it is non-trivial to convert short-context hard problems into long-context ones by injecting random noise, as that would even help RAGs beat long-context LLMs (Figure 7(d), page 8). Following the heavily popular prior synthetic long-context benchmarks, we also want to design benchmarks that evaluate the fundamental reasoning capability of these LLMs. **We intentionally ensure that solving the problem correctly doesn't require domain-specific knowledge or high-level tool use.** Therefore, we can know for certain that a mistake on the benchmark is caused by a failure of fundamental information extraction or understanding rather than unfamiliarity with knowledge from pretraining.
Grade-school math can be abstracted into fundamental operations and relationships (confirmed by previous work [1]). We take these operations and develop a scalable generator that can map all relationships that appear in GSM-8K into generated problems that are understandable by both LLMs and humans. Since the operations are well-defined, we can evaluate LLMs' behaviors with greater precision. The generator is also crucial for developing long-context filler text that is deeply relevant to the main problem, such that RAGs are unable to filter it out (Figure 4(c), page 4). Admittedly, we fully agree that benchmarks requiring broad knowledge and challenging skills are valuable. **However, we want to point out that naively collecting this real-world data is impractical at large scale, as we have discussed when citing SWE-bench and DafnyBench.** These required a huge amount of human engineering for cleaning and deduplication while still being limited in diversity and quantity. We argue that synthetic generation offers another possible path towards these benchmarks, given its high controllability and flexibility. GSM-Infinite offers a solid step towards that goal. [1] Ye, T., Xu, Z., Li, Y., & Allen-Zhu, Z. (2024, July). Physics of language models: Part 2.1, grade-school math and the hidden reasoning process. In The Thirteenth International Conference on Learning Representations.
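The graph-to-text mapping described above can be sketched as follows. This is our own illustration, not the released generator; the node format, wording templates, and function name are all assumptions:

```python
# Minimal sketch (our illustration, not the released GSM-Infinite generator)
# of mapping a computation graph to a natural-language problem. Each node is
# either a given quantity or an aggregation over earlier nodes; sentences are
# emitted in topological order, and the answer comes from evaluating the graph.

graph = [
    ("pigs",    "given", 2),
    ("sheep",   "given", 4),
    ("animals", "sum",   ["pigs", "sheep"]),  # implicit addition (class-instance)
]

def render_and_solve(graph, query):
    values, sentences = {}, []
    for name, kind, arg in graph:
        if kind == "given":
            values[name] = arg
            sentences.append(f"The number of {name} is {arg}.")
        elif kind == "sum":
            values[name] = sum(values[a] for a in arg)
            sentences.append(f"The number of {name} is the total of "
                             + " and ".join(arg) + ".")
    sentences.append(f"What is the number of {query}?")
    return " ".join(sentences), values[query]

problem, answer = render_and_solve(graph, "animals")
print(problem)
print(answer)  # 6
```

Because the generator also evaluates the graph, every emitted problem carries a label that is correct by construction, which is the software-only guarantee the rebuttal emphasizes.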
Summary: This paper points out issues in existing long-context reasoning benchmarks and addresses them by designing a new benchmark called GSM-Infinite. The main method is to construct computation graphs for problems in GSM8K and generate question-answer pairs with user-definable question difficulty measured in reasoning steps. The main findings are that LLMs' performance decays with reasoning complexity and with the addition of noise. They struggle more on backward-thinking tasks and benefit from more inference steps. For model performance, the authors find that reasoning models do better than non-reasoning LLMs. Claims And Evidence: Claims they made: 1) LLM performance degradation can be modeled using a sigmoid function. 2) Reverse problems are harder for LLMs to solve. 3) LLMs' performance degrades as the context grows long and as noise is added. All of them are supported by experimental results. Methods And Evaluation Criteria: Their evaluation method for different LLMs with different context windows makes sense. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: See Questions. Supplementary Material: I read about their RAG setup, their repeated-sampling experiment setup, and the related works section. Relation To Broader Scientific Literature: I think this paper is related to the long-context LLM reading comprehension literature and also multi-hop reasoning benchmarks. Essential References Not Discussed: This paper does not provide a discussion of the multi-step reasoning literature. There are several classical benchmarks which should be included in the paper: [1] Sinha, Koustuv, et al. "CLUTRR: A diagnostic benchmark for inductive reasoning from text." arXiv preprint arXiv:1908.06177 (2019). [2] Yang, Zhilin, et al. "HotpotQA: A dataset for diverse, explainable multi-hop question answering." arXiv preprint arXiv:1809.09600 (2018). [3] Trivedi, Harsh, et al. "♫ MuSiQue: Multihop Questions via Single-hop Question Composition."
Transactions of the Association for Computational Linguistics 10 (2022): 539-554. Other Strengths And Weaknesses: The authors did a good job explaining details, e.g., how to build a computation graph for reasoning problems, including how to handle different operations and how to add noise. However, I feel that the paper lacks a good explanation of how the whole pipeline works. See Questions. Other Comments Or Suggestions: Some of the references are repeated or have inconsistent formatting. E.g., Kamradt, G. Needle in a haystack - pressure testing llms, 2023. URL https://github.com/gkamradt/LLMTestNeedleInAHaystack/tree/main. is repeated. Chieh. et al. 2024a and 2024b are the same. Questions For Authors: It is possible that I missed the relevant part, but I am confused about the RAG setting: what is treated as a single "article"? Is it a single graph? Why is it required that LLMs have an 8k context minimum? Are you putting in all the available graphs and generating questions in one shot? What is the intuition behind "reverse problems are harder to solve"? After the problems are generated properly using the computation graph, shouldn't solving division and subtraction problems be the same for the models as solving the addition and multiplication types? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Three missing citations** We greatly appreciate the comments. We acknowledge that more discussion of previous multi-hop commonsense reasoning benchmarks **will be added in a later version of the paper**. However, these three papers address problems different from ours. Firstly, HotpotQA [2] and MuSiQue [3] concern short-context 2- to 4-hop commonsense reasoning. Also, huge human labor is required to collect them, limiting scaling in quantity or reasoning complexity. Moreover, these two have been converted into long-context problems incorporated in RULER ([2]) and LOFT ([2] and [3]), which are cited and heavily studied (Section 2, page 3). The added noise in both can be effectively filtered by RAG, making them unsuitable for long-context LLM evaluation. Secondly, CLUTRR [1], though it predates LLMs, similarly generates problems of family-relationship deduction from graphs, but with 3 big distinctions. (1) Despite the similarity in methods, CLUTRR is heavily tailored to family-relation deduction, especially specialized to kinship graphs, and thus severely limited in scope. (2) Human annotation is essential to making CLUTRR, while GSM-Infinite relies completely on software, with no LLM or humans involved. The lack of any guarantee of the absolute correctness of the provided labels is a crucial drawback, as humans make mistakes. Specifically, we found that a minor part of the dataset is mislabeled. One example below. *Label: sister Question: Given the story, please provide the relationship between two selected people. Story: [James] bought a new dress for his daughter [Lynn]. [Theodore] and his son [Wesley] went to look at cars. [Theodore] ended up buying the Mustang. [James] took his grandson [Theodore] to the baseball game. Question: What is the relationship between 'Wesley' and 'Lynn'?* The corrected label is that Wesley is the grand-nephew of Lynn. (3) Having either an LLM or a human in the loop also leads to huge difficulty in scaling up the reasoning complexity.
Below, we evaluate the 2to10 test set given by CLUTRR with various LLMs.

| Models | Accuracy |
|---|---|
| Llama 3.1 8B Instruct | 0.64 |
| GPT-4o | 0.7 |

This lack of scaling prevents the evaluation of the full spectrum of behaviors. GSM-Infinite generates op=200 problems (and more if needed) to capture the full spectrum of SOTA models' reasoning performance. **RAG clarification** We follow the convention of the context-level RAG method [1]. We solely use the input context as the data store (no external source). We segment the input text into chunks (roughly one sentence each) using the NLTK package. The retriever first rates each chunk based on its relevance to the query, and then the top-k scoring chunks are selected as the input to the decoder model. k is determined by the 2k-token budget in our experiments. [1] Lewis, P. Retrieval-augmented generation for knowledge-intensive nlp tasks. (2020) **Dataset generation with graph clarification** During generation, no human/LLM is in the loop, so "one-shot" generation is irrelevant to our setting. Each problem, regardless of context length, is generated from one computational graph (not multiple graphs). Natural-language problems are directly mapped from the graph using our generator software. For a detailed explanation and reference, please see (Figure 6, page 6) and (Sections 3.1 and 3.2, pages 3-5). **Forward vs. reverse observation** The difference between forward and reverse problems isn't the arithmetic operations. Referring to (Section 4.3, page 6), the explicit plus, minus, multiply, and divide operations appear in both forward and reverse problems. **The difference is the implicit operations/relationships contained.** The implicit addition and multiplication express class-instance and hierarchical instance dependency, respectively.
Problems in forward logic only contain these two implicit operations: you can notice in the computation graph (Figure 6, page 6) that the more concrete and detailed instances are computed first, and then, using them collectively, more general and summarizing variables are computed. Reverse problems, however, contain the implicit minus and division operations: given the values of the abstract variables, the LLM needs to compute the unknown instance variables. Here is a simplified example. Forward Problem - #pigs in Zoo is 2. #sheep in Zoo is 4. Assume the animals not mentioned are of 0 quantity. What is the #total animals in Zoo? Solution: We go from #pigs in Zoo (instance) -> #sheep in Zoo (instance) -> total animals (abstract). Reverse Problem - #pigs in Zoo is 2. #sheep in Zoo is non-zero. #total animals in Zoo is 6. #sheep in Zoo? Solution: We go from total animals (abstract) -> #pigs in Zoo (instance) -> #sheep in Zoo (instance). We show a detailed study in (Appendix E, page 18). We hypothesize that models' usual strength on forward problems stems from the fact that humans naturally write in constructive (forward) logical ordering more often, leading to naturally less training data for reverse logic. --- Rebuttal Comment 1.1: Comment: Thanks for the reply and clarification. My concerns are mostly addressed and I will increase my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for your positive review of GSM-Infinite!
EasyInv: Toward Fast and Better DDIM Inversion
Accept (poster)
Summary: This paper introduces **EasyInv**, an additional mixture operation integrated into the diffusion-model-based image editing pipeline, aiming to **enhance inversion accuracy** and **improve computational efficiency**. ## update after rebuttal 1. I acknowledge the approximation of the Kalman filter and the additional experiments, and thus raise the score to 2.5. 2. However, the inversion visual effect in Figure 2 is not that satisfactory, so I cannot give a higher score. 3. I still have doubts about analogizing the learned score to Kalman noise; however, if it works in practice, that is fine, and an intuitive starting point may deviate from the underlying true mechanism. Claims And Evidence: **My concerns about the intuition behind EasyInv's Kalman filter analogy:** 1. **Over-Simplification of the Kalman Gain** - The original Kalman gain matrix dynamically adapts to uncertainty estimates, but in EasyInv, it is reduced to a **constant value**. - This simplification strips away the core adaptive mechanism of Kalman filters, raising questions about whether the analogy holds theoretical rigor. 2. **Questionable Noise Equivalence Assumption** - The derivation from Equations (12)-(13) to (14) implicitly assumes $v_k = w_{k-1}$, where: - $v_k$: measurement noise (observation uncertainty), - $w_{k-1}$: dynamic noise (process uncertainty). - These two terms are fundamentally **independent** in Kalman filtering theory. Asserting their equivalence lacks justification and introduces a contradiction: combining two independent noise sources should not yield cancellation ($v_k - w_{k-1} = 0$). 3. **Misleading Analogy with PF-ODE Inversion** - The term $\epsilon^*$ in PF-ODE inversion represents **scaled diffused data scores**, not random noise. - Unlike Kalman filter noise (stochastic by nature), $\epsilon^*$ encodes meaningful gradient directions derived from data.
Equating them conflates **stochastic control theory** with **deterministic score-based diffusion processes**, weakening the theoretical foundation. Methods And Evaluation Criteria: The evaluation of EasyInv on image editing and PIE-Bench is reasonable. Theoretical Claims: **The paper lacks theoretical proofs.** - It is unclear whether the **marginal distribution** of the inverted distribution converges to the **diffused distribution**. - EasyInv is presented as a solver for the reverse PF-ODE with **order of accuracy 0**. However, no proof is provided to demonstrate that the resulting random variable $z_t$ of the EasyInv distribution adheres **distributionally** to the diffused distribution $p_t$. - A formal proof of this adherence would significantly strengthen the theoretical foundation of the proposed method. Experimental Designs Or Analyses: **The impact of the two hyperparameters on performance remains unclear.** - The choice of $\eta = 0.5$ appears to be **heuristic**, lacking theoretical or empirical justification. - When applying this method to **different tasks**, determining the optimal hyperparameter values becomes a critical challenge. - Conducting **ablation studies** on these hyperparameters could provide valuable insights into their influence and help establish guidelines for task-specific tuning. Supplementary Material: Yes, I reviewed the code provided. Relation To Broader Scientific Literature: This work mainly leverages findings from control theory to devise enhanced methods for diffusion-model-based inversion. Essential References Not Discussed: **The literature review appears to be somewhat limited.** - A range of **exact inversion sampler techniques**, such as EDICT [1], BDIA [2], and O-BELM [3], have not been discussed or included in the experiments. - Incorporating these methods into the analysis could provide a more comprehensive understanding of the landscape and strengthen the paper's contributions.
**References:** [1] Wallace, Bram, Akash Gokul, and Nikhil Naik. "EDICT: Exact diffusion inversion via coupled transformations." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023. [2] Zhang, Guoqiang, Jonathan P. Lewis, and W. Bastiaan Kleijn. "Exact diffusion inversion via bidirectional integration approximation." *European Conference on Computer Vision*. 2024. [3] Wang, Fangyikang, et al. "BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models." *NeurIPS*. 2024. Other Strengths And Weaknesses: The idea is expressed clearly and is easy to follow. Many visual comparisons are provided. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. The novelty of our work is well recognized by the other two reviewers; however, we received the lowest score from Reviewer md9u. We appreciate your feedback and hope our explanations below address your concerns, leading to a score reconsideration. **1. Over-Simplification of the Kalman Gain and Questionable Noise Equivalence Assumption** Section 3.3 provides essential background on the Kalman filter, with Equations (12)--(14) summarizing standard formulations consistent with Section 7 of [a]. Notably, the cited reference explicitly states that for specific applications, the Kalman gain $K$ in Equation (14) can be simplified to a blending factor within $[0,1]$, a strategy directly adopted in our methodology. Furthermore, Equation (6) of KLDD [b] demonstrates that the Kalman gain can evolve into a time-varying constant at each timestep, aligning with our approach. [a] Kalman and bayesian filters in python. [b] Kldd: Kalman filter based linear deformable diffusion model in retinal image segmentation. **2. Misleading Analogy with PF-ODE Inversion** Regarding $\varepsilon^\ast$: while theoretically derived through score-matching optimization, it is important to clarify that this term is not simply a scaled score. Instead, as shown in Equation (15), it is scaled by $\bar{\beta}_t$. In practice, during SDv1.5 training, the model learns to predict the Gaussian noise added to original images [c]. Given its strong noise-prediction capabilities, assuming $\varepsilon^\ast$ follows Gaussian characteristics is reasonable. [c] High-resolution image synthesis with latent diffusion models. **3. Distribution between $z_t$ and $p_t$** Our formulation preserves the latent variable $z$'s distributional consistency. Equations (18)--(19) show that at the final inversion step ($\bar{t}=T$), $z_T$ emerges as a convex combination of $\eta z_T$ and $(1-\eta)z_{T-1}$.
Since $z_T$ and $z_{T-1}$ are close to the sum of a series of $\varepsilon_i^\ast$ (Gaussian-distributed), their linear combination retains Gaussian properties. To validate this, we computed the KL divergence between inverted latent representations and random input noise in SDv1.5, yielding an average KL divergence of 210.69049 over 298 samples. For comparison, the average KL divergence across 298 randomly generated noise pairs was 247.50026. These results suggest our method effectively preserves the latent-space distribution. **4. Studies on Parameter $\eta$** To clarify our experimental choices, we conducted additional ablation studies on parameter $\eta$ using the same dataset as Table 3. Since $\eta = 1$ corresponds to standard DDIM inversion and $\eta = 0$ bypasses inversion, these cases were excluded. Our results show that $\eta = 0.5$ achieves the best performance.

| $\eta$ | Editing Method | Distance ($\times 10^2$) $\downarrow$ | PSNR $\uparrow$ | LPIPS ($\times 10^3$) $\downarrow$ | MSE ($\times 10^3$) $\downarrow$ | SSIM ($\times 10^2$) $\uparrow$ | Whole CLIP Similarity $\uparrow$ | Edited CLIP Similarity $\uparrow$ |
|:-------:|:------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| η=0.2 | PnP* | 25.90 | **23.29** | 113.86 | **66.11** | 80.13 | 24.66 | 21.60 |
| η=0.4 | PnP* | 24.44 | **23.29** | 107.18 | 66.32 | 80.69 | 24.82 | 21.76 |
| η=0.5 | PnP* | **22.88** | 22.56 | 102.34 | 78.57 | 80.27 | **25.38** | **22.53** |
| η=0.6 | PnP* | 23.36 | 23.21 | 101.37 | 67.96 | **81.07** | 25.02 | 22.00 |
| η=0.8 | PnP* | 22.89 | 22.92 | **98.44** | 72.30 | 80.94 | 25.30 | 22.29 |

**5. Extra literature reviews** We appreciate the suggestion to broaden our literature review. The three cited methods (EDICT, BDIA, and O-BELM) are classified as exact inversion techniques.
* **EDICT** extends affine coupling mechanisms with alternating dual-state refinement through reciprocal updates between primary and auxiliary diffusion variables. * **BDIA** enhances this approach with symmetric bidirectional state integration, ensuring provable invertibility but introducing trade-off parameters that may affect robustness across frameworks. * **O-BELM** synthesizes concepts from both methods, employing a bidirectional explicit architecture while replacing heuristic hyperparameters with analytically derived coefficients optimized via local truncation error minimization, improving numerical stability. While these methods highlight system reversibility, their empirical updates introduce trade-offs. We focus on fixed-point iteration but acknowledge that incorporating these methods could enhance our analysis. We will include them in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. The additional literature review and ablation study are greatly appreciated. Though the ablation study shows that $\eta = 0.5$ does not achieve the best result on every metric, the choice of $\eta$ for other datasets also remains unclear. Due to the practical contribution of this paper, I will raise my score to 2. However, my concerns about the Kalman theory remain; the issue of $v_k = w_{k-1}$ is not straightforwardly answered. Simplification of the Kalman gain to a constant is acceptable to me now, because other literature has done the same. > It is important to clarify that this term is not simply a scaled score. Instead, as shown in Equation (15), it is scaled by $\bar{\beta}_t$. A score scaled by $\bar{\beta}_t$ is indeed a scaled score. Update: please just tell me how to derive Eq. 14 from Eq. 12 and Eq. 13. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We would like to clarify that we did not implicitly assume that $v_k = w_{k-1}$. In formula (12) of our paper, $x_k$ represents the system state.
Because our system may not operate exactly as assumed, $w_{k-1}$ is added at the end of that formula as a representation of potential bias. The $y_k$ in formula (13) is the measured state of the system, and $v_k$ is introduced as the measurement bias for the same reason. Regarding formula (14), we would like to point out that it is $\bar{x}_k$ on the right-hand side of this formula, not $x_k$. As mentioned, $x_k$ denotes the theoretical state of the system, but in practice the value of $w_{k-1}$ is unknown; otherwise the precise result could be calculated without the need for a Kalman filter. Instead, we have the predicted value $\bar{x}_k$, where $\bar{x}_k = A\bar{x}_{k-1} + Bu_k$ as presented at the end of Section 3.3. $y_k$ remains unchanged because the measurement result always contains the measurement bias. Thus, formula (14) is independent of $w_{k-1}$, and we are definitely not implicitly assuming that $v_k = w_{k-1}$. Section 7 of [a] supports this explanation. Regarding $\eta$ selection across datasets, we acknowledge two difficulties: (1) preparing an image-editing dataset is very time-consuming, and (2) as far as we know, only one public benchmark (used in Table 3) currently exists for this experiment. While comprehensive validation exceeds the rebuttal time constraints, we will address this in future work. **Update**: In response to the question regarding how to derive Eq. 14 from Eq. 12 and Eq. 13: we would like to clarify that **Eq. 14 is not derived via Eq. 12 and Eq. 13**. As mentioned, Eq. 14 is derived from Eq. 13 and the prediction formula $\bar{x}_k = A\bar{x}_{k-1} + Bu_k$. In this context, $y_k$ from Eq. 13 is the measured value, and $\bar{x}_k$ is the predicted (or estimated) value. The key idea of the Kalman filter is to fuse these two values in order to obtain a more accurate estimate of the system state.
The fusion process is typically represented as $\tilde{x}_k = \alpha\bar{x}_k + \beta y^{\text{measure}}$, where $\alpha + \beta = I$ and $y^{\text{measure}} = H^{-1}y_k$. We can rewrite the fusion equation as $\tilde{x}_k = (I-\beta)\bar{x}_k + \beta y^{\text{measure}} = \bar{x}_k + \beta(H^{-1}y_k - \bar{x}_k)$. We then set $\beta = KH$, and the fusion function becomes $\tilde{x}_k = \bar{x}_k + K(y_k - H\bar{x}_k)$, which is exactly our Eq. 14. Sorry for the delayed response; we did not receive a notification of your update from this system. We hope this explanation clarifies your question and finds you well. [a] Kalman and bayesian filters in python.
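The blend-to-innovation rewrite above is easy to sanity-check numerically. A minimal sketch follows; all matrix and vector values are hypothetical, chosen only so that the measurement matrix $H$ is invertible:

```python
import numpy as np

# Hypothetical 2-D example: x_bar is the prediction, y is the measurement.
H = np.array([[2.0, 0.0], [0.0, 3.0]])   # measurement matrix (invertible here)
x_bar = np.array([1.0, -0.5])            # predicted state (x-bar_k)
y = np.array([2.4, -1.2])                # measured value (y_k)
K = np.array([[0.3, 0.0], [0.0, 0.4]])   # gain used for blending

# Fusion written as a weighted blend with beta = K @ H:
beta = K @ H
blend = (np.eye(2) - beta) @ x_bar + beta @ (np.linalg.inv(H) @ y)

# Fusion written in the standard innovation form (the rebuttal's Eq. 14):
innovation = x_bar + K @ (y - H @ x_bar)

print(np.allclose(blend, innovation))  # True: the two forms are identical
```

Since $\beta = KH$, the $H^{-1}$ factor cancels and the two expressions agree for any gain $K$, which is what the algebraic derivation shows.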
Summary: The paper introduces EasyInv, a novel approach to DDIM inversion that improves efficiency and reconstruction quality by refining the inversion noise approximation. The novelty compared to other inversion methods lies in the addition of a relaxation step, which makes it compatible with other inversion methods. The method follows from a Kalman-filter argument to compensate for errors in the noise approximation step. The authors showcase their method on a number of examples, and show that it allows image editing with results superior to other methods. Claims And Evidence: The main claims/evidence of the paper can be summarized as follows: - **Claim 1:** Existing methods do not satisfy (7) rigorously and a correction step is necessary. - **Evidence 1:** Rigorous theoretical derivations are proposed to show how to incorporate Kalman-type corrections. - **Claim 2:** Existing methods require multiple forward passes, yielding a slow inversion procedure. - **Evidence 2:** The benefits of the proposed algorithm are not really clear on paper, since the method is added to an existing inversion algorithm. However, the experimental results clearly show a benefit. - **Claim 3:** The proposed relaxation step leads to better inversion quality. - **Evidence 3:** While the proposed theory does not prove that the correction is necessary nor that it improves the results, the theoretical derivation is rigorous. Experiments support the claim. Methods And Evaluation Criteria: The proposed method is properly evaluated: it is compared with several existing methods on large datasets. Theoretical Claims: This is not a theoretical paper. The light theoretical derivations are there to provide intuition, but they remain rigorous. Experimental Designs Or Analyses: The experimental design looks good. My only concern lies in a potential unfair comparison to: 1. Fixed-Point Iteration (Pan et al., 2023), which was reimplemented by the authors.
However, the source code is unavailable, so the authors do not have alternatives. 2. ReNoise: the method seems to always fail in the visuals of the paper, despite Table 1. This is more critical and should be addressed by the authors. Supplementary Material: No supplementary available. Relation To Broader Scientific Literature: The relation to the scientific literature is clear and well done. Essential References Not Discussed: Essential references are present. Other Strengths And Weaknesses: **Strengths:** 1. The paper is well written and easy to follow. 2. Claims are supported by evidence. 3. The method is original and interesting, with good experimental results. 4. The literature review is quite broad. **Weaknesses:** 1. The comparison with ReNoise should be clarified. Table 1 and Figure 3 seem incompatible. In my opinion, this needs to be addressed for accepting the paper. 2. Ablation studies on the parameter $\eta$ are not present and this should be discussed. Other Comments Or Suggestions: - Items (1) and (2) in lines 201-207 are confusing as these are not equations. Maybe the authors could switch to (i)-(ii) or something else. Questions For Authors: 1. My main issue with the paper is the ReNoise results in Fig. 3, in contradiction with Table 1. Could the authors comment on that, and provide decent visual results for ReNoise? This is the reason for my rating of 3, which I would be happy to increase if the authors replied positively. EDIT: this has been addressed by the authors. 2. Here, all results are shown in the image domain. However, the main claim of the paper is a fixed-point iteration in the latent space. Is there a way to quantify the quality of the latent with respect to the ground-truth latent instead of performing visual comparisons? This would strengthen the paper, in particular in view of the fixed-point-algorithms community. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer UW4y, We would like to express our sincere gratitude to you. It is truly an honor and a stroke of luck to have a reviewer as dedicated and responsible as you. Your constructive suggestions have been invaluable to us. We are particularly touched by your **understanding regarding our reimplementation** of the Fixed-Point Iteration, given the lack of alternatives. Your insight that addressing Table 1 and Figure 3 could potentially **raise our score** is greatly appreciated. We are also impressed by your careful review, which effectively summarized our claims and supporting evidence. We give our point-by-point response below. **1. Discrepancy Between Table 1 and Figure 3** The apparent discrepancy between Table 1 and Figure 3 arises from the different demonstration objectives of these experiments. While Table 1 quantifies ReNoise's overall performance—showing that it performs reasonably well (albeit slightly inferior to our method)—Figure 3 is designed to illustrate a critical limitation: ReNoise's instability when processing **real images** with extensive white regions. In such scenarios, ReNoise tends to produce undesirable black artifacts. This visual evidence underscores the robustness and superior stability of our approach under challenging conditions. We would like to emphasize that the above explanation will be explicitly incorporated into our final paper. We sincerely hope that our detailed explanation can effectively address your concerns and look forward to an increase in our score. **2. Evaluating the Latent Quality Versus the Image Domain** Our main contribution lies in performing fixed-point iteration in the latent space to **improve computational efficiency** during the denoising process. Since the ultimate objective is to achieve high-quality image generation via the final decoded images, we opted to emphasize the image-domain evaluation.
While efforts have been made, we found it hard (or even irrational) to compare with a ``ground-truth latent'' in the latent space, for the following reasons: **1. Lack of Suitable Metrics**: We have been unable to identify a robust quantitative metric that can effectively measure the quality of latent representations; **2. Irrelevance to Final Objective**: Comparisons in the latent space do not directly reflect the ultimate goal of generating high-quality images. The latent space is merely an intermediate representation, and its fidelity does not necessarily correlate with the quality of the final decoded images; **3. Efficiency Focus**: Our primary motivation for operating in the latent space is to improve computational efficiency. The latent space serves as a means to an end, rather than an end in itself. We hope you understand the limitations we face in providing direct comparisons in the latent space and appreciate our focus on achieving the best possible outcomes in the image domain. **3. Additional Ablation Studies on Parameter $\eta$** To further clarify our experimental choices, we present additional ablation studies on the parameter $\eta$, conducted on the same dataset as in Table 3 of our paper. Note that $\eta = 1$ would correspond to a standard DDIM inversion method and $\eta = 0$ would effectively bypass the inversion operation. Therefore, these boundary cases were excluded from the ablation experiments.
Our findings indicate that $\eta = 0.5$ yields the best overall performance:

| $\eta$ | Editing Method | Distance ($\times 10^2$) $\downarrow$ | PSNR $\uparrow$ | LPIPS ($\times 10^3$) $\downarrow$ | MSE ($\times 10^3$) $\downarrow$ | SSIM ($\times 10^2$) $\uparrow$ | Whole CLIP Similarity $\uparrow$ | Edited CLIP Similarity $\uparrow$ |
|:-------:|:------:|:---------------------------------------:|:-----------------:|:------------------------------------:|:---------------------------------:|:--------------------------------:|:-------------------:|:--------------------:|
| $\eta=0.2$ | PnP* | 25.90 | **23.29** | 113.86 | **66.11** | 80.13 | 24.66 | 21.60 |
| $\eta=0.4$ | PnP* | 24.44 | **23.29** | 107.18 | 66.32 | 80.69 | 24.82 | 21.76 |
| $\eta=0.5$ | PnP* | **22.88** | 22.56 | 102.34 | 78.57 | 80.27 | **25.38** | **22.53** |
| $\eta=0.6$ | PnP* | 23.36 | 23.21 | 101.37 | 67.96 | **81.07** | 25.02 | 22.00 |
| $\eta=0.8$ | PnP* | 22.89 | 22.92 | **98.44** | 72.30 | 80.94 | 25.30 | 22.29 |

**4. Clarification on Notation in Lines 201–207** We appreciate the reviewer's suggestion regarding the notation used in lines 201–207. We agree that the current format is potentially confusing since these are not formal equations. In the revised version, we will switch to a clear itemized list (e.g., using (i) and (ii)) to improve clarity.
Summary: This paper introduces EasyInv, a novel DDIM inversion method that significantly enhances inversion efficiency and reconstruction quality by optimizing the utilization of the initial latent state. The work demonstrates thorough experimentation and good practical value, with a solid theoretical foundation, making it a highly impactful contribution to the field. Claims And Evidence: The authors claim two major merits from two perspectives: faster and better. Tables 1 and 2 provide evidence of fast inference. Performance in Tables 1 and 3 also shows its overall best performance over existing methods. Also, the downstream tasks in Figures 6 and 7 provide better visual results. Methods And Evaluation Criteria: This paper provides both quantitative and qualitative evaluations. The quantitative evaluation mostly follows criteria standard in prior methods, while the qualitative evaluations include visualizations of different methods for inversion as well as applications to downstream tasks. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs and analyses are comprehensive. In particular, they show practical applications on downstream tasks, which, I believe, is a good case study to demonstrate the method's promising performance. Supplementary Material: NA. Relation To Broader Scientific Literature: The method in this paper contributes to DDIM inversion, which reverses the denoising process. By aggregating the current latent state with that of the previous step, it achieves fast and better inversion results. Essential References Not Discussed: NA. Other Strengths And Weaknesses: *Strengths* 1. Novelty and Elegance of the Method: EasyInv's core innovation lies in its departure from traditional iterative noise optimization. Instead, it emphasizes preserving and dynamically aggregating information from the previous latent state. The weighted fusion of latent states (Eq.
19) effectively mitigates noise accumulation errors while amplifying the dominance of the initial information during inversion. This strategy is both elegant and efficient, and its theoretical connection to the Kalman filter framework (Eq. 20-22) provides a solid foundation. Compared to existing methods (e.g., ReNoise, Fixed-Point Iteration), EasyInv exhibits unique problem formulation and solutions, representing a significant advancement in DDIM inversion. 2. Exceptional Efficiency and Practicality: Experiments show that EasyInv matches the inference speed of baseline DDIM inversion (5 seconds) while outperforming iterative optimization methods (e.g., ReNoise: 16 seconds). Its compatibility with half-precision computation (float16) further reduces computational costs, highlighting its practicality. The "four-line code integration" scheme (Algorithm 1) significantly lowers deployment barriers, enhancing its usability in real-world applications. 3. Comprehensive and Convincing Experiments: The paper thoroughly evaluates EasyInv through quantitative metrics (LPIPS, SSIM, PSNR), qualitative comparisons (Figs. 1-4), and downstream task validations (Table 3). Notably, EasyInv achieves superior or comparable performance on both the COCO dataset (2,298 images) and challenging scenarios (e.g., images with large white regions). The analysis of half- vs. full-precision computation (Table 2) further validates the method’s robustness. 4. Clear Theoretical Explanation: The authors draw a compelling analogy between latent state aggregation and the Kalman filter’s prediction-update mechanism (Eq. 18-22), offering a theoretical interpretation of the method. While the Kalman gain is simplified, this connection enriches the paper’s theoretical depth. **Weaknesses** 1. Quantitative Analysis of Over-Smoothing: While the "over-denoising" issue is qualitatively discussed (e.g., the "peach" example in Fig. 
3), quantitative metrics (e.g., FID) or user studies could better assess its practical impact. Additional analysis here would provide a more holistic evaluation of limitations. 2. Generalization Across Models: Current experiments are primarily based on Stable Diffusion V1.4/V1.5. Expanding evaluations to larger models (e.g., SD-XL) or other architectures (e.g., DALL-E 3) would strengthen claims about the method's generalizability. 3. Parameter Sensitivity Analysis: The paper sets empirical parameters (e.g., 0.05*T < t̃ < 0.25*T) without a detailed exploration of their impact. A sensitivity analysis of η and t̃ across different tasks or datasets would clarify their robustness and guide optimal parameter selection. 4. Validation Across Diverse Noise Levels: The current experiments primarily focus on a fixed noise configuration (e.g., T=50 steps). To ensure broader applicability, it would be valuable to evaluate EasyInv under varying noise levels (e.g., T=30, T=100) and generation scenarios (e.g., low-step fast generation vs. high-step high-precision generation). For instance, does EasyInv maintain its efficiency and reconstruction quality when the inversion steps are significantly reduced or increased? Such experiments would demonstrate robustness in real-world settings where noise configurations may vary. 5. Comparison with Established and Emerging Methods: While the paper presents extensive experimental results, it lacks direct comparisons with widely recognized classical inversion techniques (e.g., EDICT [1]) and novel approaches (e.g., Inversion-Free Image Editing [2]). Including such comparisons—particularly in terms of reconstruction fidelity, computational efficiency, and robustness to challenging inputs—would strengthen the methodological positioning of EasyInv. Other Comments Or Suggestions: See weaknesses. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and suggestions for additional experimental validation. Due to space constraints, some of these analyses were deferred to future work; however, several of the suggested experiments have either been completed or are actively underway. We address the key points as follows: **1. Quantitative Analysis of Over-Smoothing** While our paper qualitatively discusses the over-denoising issue (e.g., the "peach" example in Fig. 3), we acknowledge that a more comprehensive quantitative analysis (e.g., using FID or user studies) would better elucidate its practical impact. We have initiated such studies and plan to incorporate these quantitative metrics in an extended version of our work. **2. Generalization Across Models** Our experiments currently focus on Stable Diffusion V1.5. We agree that evaluating our method on larger models (such as SD-XL) and alternative architectures (e.g., DALL-E 3) would strengthen the claims regarding generalizability. The qualitative experiments on SD-XL have been completed, but we are unable to display them here due to the restrictions of OpenReview's rebuttal rules. We are actively extending the remaining experiments to include these models and will report the results in the final version. **3. Parameter Sensitivity Analysis** We conducted an ablation study on the parameter $\eta$, summarized in the table below, which guided our selection of $\eta = 0.5$ as it achieved the overall best performance. Note that $\eta = 1$ corresponds to a standard DDIM inversion method and $\eta = 0$ would essentially bypass the operation, so those cases were not included.
| $\eta$ | Editing Method | Distance ($\times 10^2$) $\downarrow$ | PSNR $\uparrow$ | LPIPS ($\times 10^3$) $\downarrow$ | MSE ($\times 10^3$) $\downarrow$ | SSIM ($\times 10^2$) $\uparrow$ | Whole CLIP Similarity $\uparrow$ | Edited CLIP Similarity $\uparrow$ |
|:-------:|:------:|:---------------------------------------:|:-----------------:|:------------------------------------:|:---------------------------------:|:--------------------------------:|:-------------------:|:--------------------:|
| $\eta=0.2$ | PnP* | 25.90 | **23.29** | 113.86 | **66.11** | 80.13 | 24.66 | 21.60 |
| $\eta=0.4$ | PnP* | 24.44 | **23.29** | 107.18 | 66.32 | 80.69 | 24.82 | 21.76 |
| $\eta=0.5$ | PnP* | **22.88** | 22.56 | 102.34 | 78.57 | 80.27 | **25.38** | **22.53** |
| $\eta=0.6$ | PnP* | 23.36 | 23.21 | 101.37 | 67.96 | **81.07** | 25.02 | 22.00 |
| $\eta=0.8$ | PnP* | 22.89 | 22.92 | **98.44** | 72.30 | 80.94 | 25.30 | 22.29 |

Regarding the empirical parameter $\tilde{t}$, our experiments with diffusion models having reduced parameter counts suggest that optimal performance is achieved when EasyInv is applied during the early denoising steps. We agree that further sensitivity analyses across different tasks and datasets would provide additional insights and are planned for our future revisions. **4. Validation Across Diverse Noise Levels** We recognize that our current experiments were conducted using a fixed noise configuration (T=50 steps). To further demonstrate the robustness of EasyInv, we plan to evaluate its performance under varying noise conditions (e.g., T=30 and T=100) and across different generation scenarios (e.g., low-step fast generation vs. high-step high-precision generation). These experiments will help clarify whether EasyInv maintains its efficiency and reconstruction quality when the inversion steps vary significantly. **5. Comparison with Established and Emerging Methods** We appreciate the suggestion to include direct comparisons with both classical inversion techniques (e.g., EDICT) and emerging approaches (e.g., Inversion-Free Image Editing). We excluded EDICT from our current comparisons because its original work lacked comprehensive evaluations against existing methods, which we felt diminished the persuasiveness of its results. Moreover, emerging techniques like inversion-free image editing rely on framework-specific implementations (e.g., rectified flow in SDv3/FLUX) that are currently incompatible with our framework. We plan to develop cross-framework adaptation layers and implement these reference methods in future work to provide a more comprehensive benchmarking of reconstruction fidelity, computational efficiency, and robustness. **Summary** We value the reviewer's insights and are committed to enhancing the methodological rigor of our work through these additional experiments and comparative analyses. Many of these extensions are already underway, and we will incorporate the resulting findings in our revised and future publications.
STAR: Learning Diverse Robot Skill Abstractions through Rotation-Augmented Vector Quantization
Accept (spotlight poster)
Summary: This paper proposes STAR, a novel framework for learning diverse robot manipulation skills through skill quantization and causal modeling. STAR consists of two key components: RaRSQ, which enhances residual skill quantization with a rotation-based gradient mechanism to mitigate codebook collapse, and CST, a transformer-based model explicitly capturing temporal dependencies between discrete skills. The authors demonstrate that STAR significantly improves performance in multi-task imitation learning and complex, long-horizon manipulation tasks on standard benchmarks, achieving state-of-the-art results and highlighting its effectiveness in accurately composing complex action sequences. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical section is presented in the paper. Experimental Designs Or Analyses: I find the experimental designs reasonable and sufficiently aligned with the paper’s objectives. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper builds upon recent advances in residual quantization methods and latent variable models (e.g., VQ-VAE, Rotation Trick) for robot skill learning, addressing common limitations such as codebook collapse and insufficient temporal skill modeling by introducing a novel combination of rotation-based quantization (RaRSQ) and autoregressive causal skill modeling (CST). Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper effectively integrates existing techniques (Residual Quantization, Rotation Trick, and causal transformer architectures) into a practically effective framework. The experimental evaluation is extensive, utilizing multiple benchmarks (LIBERO, MetaWorld). 
Weaknesses: While the integration of existing techniques is practically effective, the proposed methodologies rely heavily on adaptations and combinations of previously established ideas, which somewhat limits the theoretical novelty. Specifically, the proposed approach depends significantly on well-established techniques (residual quantization and the rotation trick), and autoregressive skill prediction has already been extensively explored in QueST. My primary concern is that the notable performance improvement observed in this paper may arise more from offset prediction, as indicated in Table 5, than from the novel combination itself. Other Comments Or Suggestions: To better highlight the strengths of the proposed approach, it would be beneficial to compare the performance explicitly against QueST with an additional offset-prediction mechanism. This would clarify whether the observed performance improvements truly stem from the proposed combination or primarily from the offset-predictor component. Questions For Authors: Have the authors considered comparing their method explicitly against QueST equipped with an offset-prediction mechanism to isolate the contributions of the proposed combination more clearly? It is currently unclear which component of the proposed method is primarily responsible for the observed performance gains. Could you further clarify to highlight the contribution of each component more clearly? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our extensive experiments and the practical effectiveness of our approach. We address your questions below: ## Q1: Have the authors considered comparing their method against QueST equipped with offset prediction to isolate the contributions of the proposed combination more clearly? We appreciate and follow your good advice by adding experiments that compare STAR against QueST across various configurations:

|Method|Finetune decoder|Offset|LIBERO-Object|LIBERO-Spatial|LIBERO-Goal|LIBERO-Long|Avg|
|-|-|-|-|-|-|-|-|
|QueST (Original)|✅|❌|90.0|84.5|76.7|69.1|80.1|
|QueST (freeze decoder)|❌|❌|78.1|63.0|56.0|24.0|55.3|
|QueST (fine-tune decoder + offset)|✅|✅|85.8|73.9|65.9|64.7|72.6|
|Ours|❌|✅|**98.3**|**95.5**|**95.0**|**88.5**|**94.3**|

The results demonstrate that QueST with offset prediction (72.6%) significantly underperforms our STAR method (94.3%). We introduce the offset head because our decoder is frozen during stage-1 training. This design means predicted actions can only reconstruct stage-0 actions, which undergo lossy compression through quantization, making fine-grained operations difficult. Therefore, we use an additional offset head to compensate for this gap. In contrast, QueST has already fine-tuned the decoder to learn fine-grained operations, which serves a similar purpose to our approach but through a different mechanism. Further adding an offset head does not yield significant gains and may instead conflict with the decoder's output. To further verify the importance of learning fine-grained operations, we conducted an ablation study with QueST using a frozen decoder in stage 1. Performance dropped from 80.1% to 55.3%, consistent with our findings in Table 5, where removing the offset head significantly reduces our method's performance.
In summary, both approaches require mechanisms for learning fine-grained operations, but the superior performance of STAR comes from our novel contributions (RaRSQ and CST) rather than the offset head alone. The ablation studies in our original Table 2 further support this conclusion by isolating the contributions of each proposed component. ## Q2: It is currently unclear which component is primarily responsible for the performance gains. Could you further clarify to highlight the contribution of each component more clearly? Our framework consists of two components: (1) rotation-augmented residual skill quantization (RaRSQ), which addresses codebook collapse in robotic skill learning, and (2) the causal skill transformer (CST), which captures the causal relationships between skills through autoregressive prediction. We would like to further clarify the contribution of each component: ### **RaRSQ: Enhanced Skill Diversity and Representation** RaRSQ directly addresses the fundamental limitation of naive residual VQ-VAE: codebook collapse. As shown in Fig. 4, residual VQ-VAE utilizes only 43.8% of codes, severely limiting the robot's ability to express diverse actions. In contrast, RaRSQ achieves 100% codebook utilization with a balanced distribution. This enhanced skill diversity translates to performance gains through precise skill decomposition. Complex manipulation tasks inherently require fine-grained action representation. With full codebook utilization, RaRSQ can distinguish between subtly different manipulation skills (e.g., picking different objects, precise positioning) that would otherwise be mapped to the same code in collapsed codebooks. This enhances the model's capability to recognize and execute task-specific manipulation patterns. ### **CST's Contribution to Skill Composition** CST's autoregressive design comes from analyzing the learned hierarchical dependencies between skills. The conditional probability analysis in Fig.
8 reveals causal relationships between first- and second-level skills: some first-level codes show strong preferences for specific second-level codes. CST models the causal dependencies between skills, which is crucial for generating coherent action sequences. As shown in Table 2, when removing CST, we observe a significant performance drop, especially for LIBERO-Goal (-6.9%) and LIBERO-Long (-5.2%), indicating that modeling these dependencies is particularly important for complex manipulation tasks. ## W1: While the integration of existing techniques is practically effective, the proposed method relies on adaptations and combinations of established ideas, which somewhat limits the theoretical novelty. Our key contributions lie in designing a robot-specific residual VQ-VAE that effectively decomposes complex robot behaviors into discrete skills, addressing fundamental challenges in robot skill learning. RaRSQ directly addresses codebook collapse in robotic residual skill quantization, while CST explicitly models the causal dependencies between different skill abstraction levels, revealing structured relationships between coarse and fine-grained behaviors. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. I especially appreciate the additional experimental comparisons provided, clearly distinguishing the contributions of STAR from the other baseline methods. Considering the additional experiments and context provided, I now have a more positive view of this work. --- Reply to Comment 1.1.1: Comment: Thank you once again for your positive view of our work. We truly appreciate your valuable feedback.
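As a side note for readers who want to reproduce the kind of codebook-utilization statistic discussed in this thread (43.8% vs. 100%): it is typically computed as the fraction of codebook entries that receive at least one assignment, with the perplexity of the empirical code distribution as a common complementary diagnostic. A minimal sketch with made-up assignment indices (the codebook size and index arrays below are illustrative, not the authors' data):

```python
import numpy as np

codebook_size = 512
rng = np.random.default_rng(0)

# A collapsed quantizer maps everything onto a handful of codes,
# while a healthy one spreads assignments across the whole codebook.
collapsed = rng.integers(0, 8, size=10_000)            # only 8 codes ever used
healthy = rng.integers(0, codebook_size, size=10_000)  # all codes reachable

def utilization(indices, codebook_size):
    """Fraction of codebook entries used at least once."""
    return len(np.unique(indices)) / codebook_size

def perplexity(indices, codebook_size):
    """exp(entropy) of the empirical code distribution;
    equals codebook_size only for perfectly uniform usage."""
    counts = np.bincount(indices, minlength=codebook_size)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

print(utilization(collapsed, codebook_size))  # ~8/512, i.e. about 0.016
print(utilization(healthy, codebook_size))    # ~1.0
```

Under this definition, the 100% figure reported for RaRSQ would correspond to every codebook entry being hit at least once, and a balanced distribution would push the perplexity toward the codebook size.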
Summary: The paper investigates robot skill abstraction for manipulation tasks and introduces STAR—a framework for learning discrete robot skill representations. STAR comprises two main components: Rotation-Augmented Residual Skill Quantization (RaRSQ), which mitigates codebook collapse in VQ-VAE-based methods using rotation-based gradients, and a Causal Skill Transformer (CST) that explicitly models dependencies between hierarchical skill representations via an autoregressive mechanism. Experiments on the LIBERO and MetaWorld MT50 benchmarks, as well as real-world tasks, demonstrate that STAR outperforms several baselines in terms of success rate. Claims And Evidence: The paper makes two key claims. First, the proposed RaRSQ method helps mitigate codebook collapse in VQ-VAE-based skill abstraction (quantization) by leveraging rotation-based residual skill abstraction. Second, the proposed causal skill transformer explicitly models dependencies between skill representations through an autoregressive mechanism, making it effective for complex, long-horizon manipulation tasks. To validate these claims, the authors conducted experiments on the LIBERO and MetaWorld MT50 benchmarks, as well as real-world experiments. Results of experiments demonstrate superior performance of the method and thus support the claim. Methods And Evaluation Criteria: STAR leverages rotation-based gradients in its RaRSQ component and uses an autoregressive transformer to capture causal relationships between skills. The evaluation is conducted on established benchmarks (LIBERO and MetaWorld MT50) and validated with real-world robot manipulation tasks. The metrics focus on success rates, and the results indicate that STAR achieves higher performance than competing methods. Theoretical Claims: The paper does not present significant theoretical contributions. Experimental Designs Or Analyses: Experiments were conducted on the LIBERO and MetaWorld MT50 benchmarks, as well as on real-world tasks. 
The quantitative results demonstrate superior performance compared to several baselines, supporting the paper’s claims. Additionally, the authors performed ablation studies to assess the effectiveness of each variant of the proposed method, further reinforcing their findings. Supplementary Material: The supplementary material includes additional experimental details and results. Relation To Broader Scientific Literature: The work relates to VQ-VAE-based discrete representation learning, causal modeling, and robot learning. Essential References Not Discussed: A broader discussion on recent progress in robot learning—especially work on vision-language-action (VLA) transformers like π0 [1]—would provide valuable context regarding performance, efficiency, and scalability. [1] Kevin Black, et al., ”π0: A Vision-Language-Action Flow Model for General Robot Control”, https://www.physicalintelligence.company/blog/pi0. Other Strengths And Weaknesses: Strengths: 1. This paper addresses a critical and practical problem in robot learning: robot skill abstraction. 2. The proposed method is reasonable and effective, utilizing RaRSQ to prevent codebook collapse and CST to model dependencies between skills. 3. The thorough experiments demonstrate the algorithm’s effectiveness, and real-world experiments further support the paper’s claims. 4. Overall, the paper is well-written and easy to follow. Weaknesses: 1. My main concern is that this approach primarily builds on existing methods. For example, RaRSQ is essentially a combination of VQ-VAE and a rotation trick, while CST integrates a VLA Transformer with autoregressive prediction. This makes it difficult to pinpoint the paper’s unique contribution. Other Comments Or Suggestions: 1. Both eq(8) and eq(11) denote r_d. Please clarify whether this is a typo. 2. Including a video demo for the real-world manipulation tasks would be highly beneficial, as metrics beyond success rate—such as completion time—are also critical.
Questions For Authors: 1. What is the unique contribution of STAR compared to prior works in VQ-VAE, rotation-based augmentation, and VLA transformers? It is important to highlight the differences. 2. Can you confirm if the use of r_d in both eq(8) and eq(11) is correct, or if it is a typographical error? 3. Could you provide more details or video on the real-world robot experiments as well as completion time or speed? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful comments and positive assessment of our work. Below we address the specific questions and concerns: ## R1: A broader discussion on recent progress in robot learning—especially work on vision-language-action (VLA) transformers like π0—would provide valuable context regarding performance, efficiency, and scalability. Indeed, we have already compared our method with several advanced VLA methods in our manuscript, including OpenVLA (7B) and Octo, as shown in Table 1. Further, following your helpful advice, we have included π0 in our evaluation. The results on the LIBERO benchmark are summarized below: | |**Size**|**Pretrain**|**LIBERO-Long**|**LIBERO-Spatial**|**LIBERO-Goal**|**LIBERO-Object**|**Avg**| |-|-|-|-|-|-|-|-| |π0|3.3B|True|85.2|96.8|95.8|98.8|94.15| |Ours|16.2M|False|88.5|95.5|95.0|98.3|94.30| Despite using significantly fewer parameters (200× smaller) and no pretraining, our approach achieves higher average performance (94.3% vs. 94.15%). Most notably, we observe the largest improvement on LIBERO-Long tasks (+3.3%), which aligns with our method's focus on addressing challenges in long-horizon tasks through effective skill composition. VLA methods typically follow a pretraining-finetuning paradigm, with performance dependent on pretraining data and architecture (e.g., π0 > OpenVLA). Our work shows that lightweight models trained from scratch can match or exceed VLA methods requiring extensive pretraining. Future work could integrate our approach with large language models to further advance robot learning capabilities. ## Q1: What is the unique contribution of STAR compared to prior works in VQ-VAE, rotation-based augmentation, and VLA transformers? We would like to further clarify that our key contributions lie in designing a robot-specific residual VQ-VAE that effectively decomposes complex robot behaviors into discrete skills, addressing fundamental challenges in robot skill learning.
### (1) Robot-Specific Solution to Codebook Collapse in Skill Learning Rather than a straightforward application of rotation tricks to VQ-VAE, RaRSQ is specifically designed for robotic skill spaces with hierarchical structure. While residual VQ-VAE offers a larger representation space, this makes codebook collapse more problematic. Our integration of rotation-based gradients within the residual framework preserves geometric relationships throughout hierarchical action quantization, effectively capturing both coarse primitives (move, pick) and fine-grained adjustments required for complex manipulation. ### (2) Explicit Modeling of Hierarchical Skill Dependencies Our CST differs fundamentally from previous VLA transformers by explicitly modeling conditional dependencies between different skill abstraction levels. Unlike approaches that predict actions directly or treat skills independently, CST captures structured relationships between coarse and fine-grained behaviors. As shown in Fig. 7, it reveals distinct skill dependency patterns that validate this approach. For example, given a first-level code, we found a strong preference for specific second-level codes, showing how coarse skills constrain fine-grained behaviors in ways prior approaches cannot capture. As shown in Table 2, removing the autoregressive component significantly reduces performance, especially in LIBERO-Goal (-6.9%) and LIBERO-Long (-5.2%). In addition, removing both components creates a 5.8% drop over all tasks, exceeding the sum of individual effects, demonstrating the synergistic relationship between skill representation and composition. ## Q2: Can you confirm if the use of r_d in both eq(8) and eq(11) is correct, or if it is a typographical error? Thanks for pointing out the typo. $r_d$ in eq(11) should be $\hat{r_d}$, and we will correct it in the final revision. ## Q3: Could you provide more details or video on the real-world robot experiments as well as completion time or speed?
We have recorded comprehensive videos showing the complete execution sequences for both tasks (drawer manipulation and sequential object placement). Since ICML rebuttal guidelines explicitly support anonymous links for supplementary figures and tables but don't specifically mention videos, complete videos will be included on our future project page. Regarding completion time, STAR demonstrates favorable completion times, averaging 29.6 seconds for the sequential object placement task and 37.7 seconds for the drawer manipulation task. This efficiency is attributable to our hierarchical skill abstraction approach, where a single inference step produces an action chunk (consisting of eight atomic actions in our implementation). This reduces the number of required inference steps and consequently decreases overall execution time. We will incorporate these timing metrics in the final version to provide a more comprehensive evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for the authors’ response. It addresses most of my concerns. Overall, the paper appears to be of good quality, and I am inclined to recommend its acceptance. --- Reply to Comment 1.1.1: Comment: Thank you once again for your recognition of our work and your support for its acceptance. We truly appreciate your valuable feedback.
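As background on the rotation-based gradients discussed in this thread: the rotation trick replaces the straight-through estimator by transporting gradients through the rotation that aligns the encoder output with its chosen code. The sketch below is a minimal numpy illustration of that alignment step (forward value only; in an autodiff framework, `R` and the scale `lam` would be detached so gradients flow through `e` rotated). It is a standard two-reflection construction, not the authors' RaRSQ code:

```python
import numpy as np

def align_rotation(e, q, eps=1e-8):
    """Return (lam, R) such that lam * R @ e == q, where R rotates the unit
    direction of e onto the unit direction of q via two Householder
    reflections. Degenerate when e and q point in opposite directions."""
    e_hat = e / (np.linalg.norm(e) + eps)
    q_hat = q / (np.linalg.norm(q) + eps)
    u = e_hat + q_hat
    u = u / (np.linalg.norm(u) + eps)           # bisector of e_hat and q_hat
    I = np.eye(len(e))
    # Reflect about the bisector (maps e_hat -> -q_hat), then about q_hat.
    R = (I - 2 * np.outer(q_hat, q_hat)) @ (I - 2 * np.outer(u, u))
    lam = np.linalg.norm(q) / (np.linalg.norm(e) + eps)
    return lam, R

rng = np.random.default_rng(1)
e, q = rng.normal(size=4), rng.normal(size=4)
lam, R = align_rotation(e, q)
```

Because `R` is orthogonal with determinant +1 (a product of two reflections), the transformation preserves angles between the encoder output and its gradient, which is the geometric property the rebuttal appeals to.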
Summary: The paper proposes to improve prior latent discrete policies by preventing codebook collapse, improving codebook utilization, and using an autoregressive policy to chain together the various discrete skills. To achieve better codebook utilization (and prevent code collapse), the paper proposes to use rotation-augmented residual skill quantization. Essentially, this strategy introduces a hierarchical, coarse-to-fine latent vector codebook where the latents are encoded at different depths. Finally, a causal transformer predicts these latent vectors (again in a coarse-to-fine fashion) and decodes actions for the downstream policy. The proposed training strategy is tested on LIBERO and Meta-World, and shows superior performance to prior SOTA methods as well as to ablative versions of the proposed method which do not use the rotation-augmented codebook strategy and the auto-regressive decoding process. ## update after rebuttal I have read all reviewers' reviews and the author rebuttals. I think the author responses make sense and address most concerns raised by other reviewers. Hence, I will vote for acceptance. Claims And Evidence: Yes, in my reading, the claims are: a) that the proposed strategy leads to better use of codebooks (which is shown in section 4.3), and that leads to overall better performance than prior latent variable models (shown in Table-1) and the proposed changes help (shown in ablations of Table-2) Methods And Evaluation Criteria: Yes -- the proposed method is well motivated and works on improving the codebook utilization, and the benchmark datasets are reasonable (i.e. they are standard) and make sense for the problem. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes -- the experiments are sound and show that the proposed method is better than SOTA and that the proposed changes help improve performance.
Supplementary Material: Briefly skimmed over all parts Relation To Broader Scientific Literature: The key contribution is to improve the performance of latent variable based policies by designing a strategy to improve codebook utilization and thus finally the downstream performance on manipulation tasks. Essential References Not Discussed: No Other Strengths And Weaknesses: In general, the paper is well-written and easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer G1Cu for the thorough and positive assessment of our work. We appreciate your recognition of our key contributions in improving codebook utilization through rotation-augmented residual skill quantization and implementing autoregressive decoding for effective skill composition. If you have any further questions, we would be more than happy to address them. --- Rebuttal Comment 1.1: Comment: I have read all reviewer's review and author rebuttal. I think the author responses make sense and address most concerns raised by other reviewers. Hence, I will vote for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you once again for your recognition of our work and your support for its acceptance. We truly appreciate your valuable feedback.
Hyper: Hyperparameter Robust Efficient Exploration in Reinforcement Learning
Accept (poster)
Summary: This submission proposes a novel method, referred to as Hyper, to address the challenging issue of hyperparameter tuning in curiosity-based exploration methods. It introduces a repositioning and exploration mechanism that controls the horizon of exploitation before conducting exploration. The length of the exploitation horizon is sampled from a bounded Geometric distribution. The authors provide both theoretical and empirical evidence to demonstrate the effectiveness of the proposed Hyper method. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and the corresponding evaluation criteria are appropriate for addressing the problem. Theoretical Claims: The proofs for the theoretical claims in this submission are correct. Experimental Designs Or Analyses: The experimental design and analysis exhibit soundness and validity. Supplementary Material: This submission does not provide supplementary material. Relation To Broader Scientific Literature: This work primarily focuses on the field of reinforcement learning. Essential References Not Discussed: This submission includes sufficient related references. Other Strengths And Weaknesses: **Strengths** 1. Addressing the exploration-exploitation dilemma in reinforcement learning is valuable. 2. The proposed Hyper method is well-introduced and clearly described. 3. The authors provide theoretical analysis and empirical studies supporting the effectiveness of the proposed methods. **Weaknesses** See the following comments and questions. Other Comments Or Suggestions: 1. On line 175 of the right column, it is mentioned "We defer the formal proof to the appendix." The specific appendix should be cited for clarity. Questions For Authors: 1. I do not see how the upper bound involving the $d^3H^4$ term demonstrates the efficiency of Hyper. Could you clarify? 2. What is the $\epsilon$ in Theorem 4.2? 3.
I see that the proposed Hyper method is an improvement based on curiosity-based methods. How does it compare to more recent exploration algorithms? 4. Is the parameter $\gamma$, which controls the probability of the repositioning phase, fixed for all tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment and insightful questions. We appreciate your recognition of our work's value and address your questions below. ***Regarding Hyper’s efficiency*** Our claim that "Hyper is efficient" operates on two complementary levels: 1. **Theoretical guarantees**: Hyper is provably guaranteed to sufficiently explore the environment and converge to an optimal policy, as demonstrated by our theoretical analysis providing a worst-case sample complexity bound. 2. **Empirical efficiency**: Hyper demonstrates significantly greater empirical efficiency and robustness to the curiosity coefficient $\beta$ compared to baseline methods, as conclusively shown in our experimental evaluation in Section 6. While our theoretical sample complexity appears similar to existing methods in the worst case, our empirical results consistently demonstrate superior practical efficiency. This pattern, in which theoretical bounds appear similar while practical performance differs substantially, is common in reinforcement learning research, where worst-case bounds often don't fully capture the advantages of sophisticated exploration strategies in real environments. ***Regarding $\epsilon$ in Theorem 4.2*** It represents the optimality gap: the maximum distance between the value of the current policy and the optimal policy. Specifically, a policy $\pi$ is $\epsilon$-optimal if it satisfies: $V^*(s) - V^{\pi}(s) < \epsilon, \forall s \in \mathcal{S}$. This is standard notation in RL theory. ***Regarding comparison to recent methods*** Curiosity-based exploration remains the dominant paradigm in the field due to its empirical effectiveness, particularly in sparse-reward environments. Hyper is designed as a general framework compatible with any off-policy RL algorithm and any curiosity method. This extensible design allows Hyper to leverage advances in both RL algorithms and curiosity methods, ensuring its continued relevance as the field progresses.
In our main experiments (Section 6), we use TD3 as the RL algorithm and Disagreement as the curiosity method for all methods (including baselines) for fair comparison. Additionally, in Appendix A.5, we compare with LESSON, an advanced recent method. For this comparison, we follow LESSON's original implementation using DQN as the RL algorithm and RND as the curiosity method. Hyper consistently outperforms LESSON on the MiniGrid environments used in their original paper, demonstrating its state-of-the-art performance. ***Regarding truncation probability parameter $\gamma$ and $p$*** The discount factor $\gamma=0.99$ is fixed across all environments, as is standard in RL research. For the truncation probability $p$, we use a unified decay schedule from 0.01 to 0.001 across all environments, as described in Section 5.2 and Appendix A.1. This consistent parameterization across diverse environments highlights Hyper's robustness—it maintains strong performance without environment-specific tuning, unlike traditional curiosity-driven methods that require extensive hyperparameter adjustment for each new environment. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response. I have no further concerns and will maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for your consistent support of our work. We're pleased that our rebuttal has comprehensively addressed all of your concerns and questions. Given that all reviewers now acknowledge the value of our contribution and we've successfully resolved all initial concerns, we would kindly request you consider raising your score to better reflect the significant contribution our paper makes to the RL community. We appreciate your thoughtful evaluation throughout this process.
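The bounded geometric sampling and the decay of the truncation probability $p$ described in this thread can be sketched as follows. This is an illustrative sketch under stated assumptions: the rebuttal specifies only the endpoints of the schedule (0.01 to 0.001), so the linear decay shape and the clip-at-the-bound truncation are guesses, not the paper's exact implementation:

```python
import numpy as np

def sample_phase_length(p, h_max, rng):
    """Bounded geometric sample: draw Geometric(p) (support 1, 2, ...) and
    cap at h_max, so leftover tail mass lands on h_max. This is one way to
    bound the draw; the paper's exact truncation may differ."""
    return min(int(rng.geometric(p)), h_max)

def p_schedule(step, total_steps, p_start=0.01, p_end=0.001):
    """Decay the truncation probability from p_start to p_end.
    Linear shape assumed; the rebuttal states only the endpoints."""
    frac = min(step / total_steps, 1.0)
    return p_start + frac * (p_end - p_start)

rng = np.random.default_rng(0)
lengths = [sample_phase_length(p_schedule(t, 10_000), 500, rng) for t in range(100)]
```

A smaller `p` yields longer expected phase lengths ($\mathbb{E}[L] \approx 1/p$ before capping), which is why decaying `p` gradually extends the horizon over training.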
Summary: This paper proposes a “repositioning_length”-based method to alternate between exploration and exploitation. The key idea is to draw the repositioning_length from a bounded geometric distribution with probability p, making the process more sample efficient. Claims And Evidence: Yes, the motivation is clear, though the presentation of the method is unclear. Methods And Evaluation Criteria: Yes, it uses toy tasks and some basic robot-related tasks. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, particularly the additional empirical comparison with baseline methods such as LESSON, though I find it inadequate. Relation To Broader Scientific Literature: It may inspire a better understanding of exploration. Essential References Not Discussed: No to my knowledge. Other Strengths And Weaknesses: # Strengths: - The presentation regarding motivation is generally clear, and the method is simple. - Results show improvement in most empirical evaluations. # Weaknesses: - The writing quality of the paper is moderate. The core design choice revolves around the repositioning-and-exploration mechanism, which requires a balanced approach. The paper needs substantial revision to improve clarity and readability. - Section 5.3 is the core of the paper. However, it is wordy and lacks informativeness, making it difficult to follow the method clearly. - Currently, Algorithm 1 provides the most concrete description of the method. The authors should refine Section 5.3 to offer a clearer explanation. - In Algorithm 1, I cannot find β. I am wondering whether the proposed method explicitly incorporates β in its design loop. From Figures 1 and 6, it appears that β is part of the algorithm loop. - The comparison with LESSON (another method for switching between exploration and exploitation) is questionable, as it is based on a single comparison in specific MiniGrid tasks. Other Comments Or Suggestions: - In Fig.
1, you should illustrate what β represents. - The first equation in Section 3 is unclear, as there is no indication that a discount is applied to the intrinsic reward. Use vector graphics for the figure plots. - The title "Hyperparameter Robust Exploration in Reinforcement Learning" is quite broad. To make it more explicit, consider including terms like "Repositioning & Exploration" for greater clarity. - Lines 16 and 17 in Algorithm 1 are unclear. It is not evident whether the method uses only one type of reward or if both types of rewards are used in each phase. Questions For Authors: No. Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your feedback, though we must respectfully note that **your summary appears to significantly mischaracterize our paper's contributions and scope**. Your summary focuses narrowly on a single implementation detail (the bounded geometric distribution) without acknowledging our paper's core contribution: addressing the fundamental hyperparameter sensitivity problem in curiosity-driven exploration. While the other three reviewers correctly identified this primary contribution along with our theoretical analysis and empirical validation, your assessment seems to overlook these central aspects. While our algorithm is elegantly simple, its design required substantial analysis to identify why existing methods fail in certain cases and how our approach resolves these limitations. This fundamental misunderstanding appears to have colored your assessment, as evidenced by your subsequent questions. Nevertheless, we address your specific concerns below. ### Difficulty of Understanding the Meaning of $\beta$ > In Algorithm 1, I cannot find $\beta$. I am wondering whether the proposed method explicitly incorporates $\beta$ in its design loop. From Figures 1 and 6, it appears that $\beta$ is part of the algorithm loop. > In Fig. 1, you should illustrate what $\beta$ represents. $\beta$ is the coefficient controlling the scale of curiosity reward, as explained in detail in Section 1 (Introduction). This is standard notation in the field. For clarity, we will explicitly include $\beta$ in Algorithm 1 in the camera-ready version, though it is implicitly present in the "with intrinsic reward" training step (line 16). The core contribution of our paper is precisely that Hyper significantly reduces sensitivity to $\beta$, as conclusively demonstrated in Figure 6. This addresses a central challenge in curiosity-driven exploration methods that has limited their practical applicability. 
### Difficulty of Understanding Section 5 > The writing quality of the paper is moderate. The core design choice revolves around the repositioning-and-exploration mechanism, which requires a balanced approach. The paper needs substantial revision to improve clarity and readability. > Section 5.3 is the core of the paper. However, it is wordy and lacks informativeness, making it difficult to follow the method clearly. - Currently, Algorithm 1 provides the most concrete description of the method. The authors should refine Section 5.3 to offer a clearer explanation. We will refine Section 5.3 for clarity while maintaining its informative content. It's worth noting that other reviewers (L37S, 3vGR, and QGsd) did not express difficulty understanding our method or Section 5, suggesting the explanation is generally effective. We will strengthen this section by more explicitly connecting the theoretical insights to the practical implementation. > The comparison with LESSON is questionable, as it is based on a single comparison in specific MiniGrid tasks. Our comparison with LESSON follows standard scientific practice by evaluating on the environments used in the original LESSON paper. Fetch8x8, UnlockPickup, and LavaCrossingS9N1 are specifically tasks where LESSON demonstrated its strong performance compared to previous option-based and curiosity-driven methods. Hyper's superior performance on these same tasks provides strong evidence of its effectiveness. This approach to comparison is fair, rigorous, and follows established standards in the field. > The first equation in Section 3 is unclear, as there is no indication that a discount is applied to the intrinsic reward. Use vector graphics for the figure plots. The discount is indeed applied to the intrinsic reward $b(s, a, s')$, as indicated by its placement within the brackets. This follows standard notation in the field. We will use vector graphics for all figures in the camera-ready version. 
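To make the point about the first equation in Section 3 concrete: with the intrinsic bonus inside the brackets, the discount applies to both reward terms. A minimal illustrative sketch (not the paper's code; `beta` scales the curiosity bonus $b(s, a, s')$ as in the paper's notation):

```python
def discounted_return(task_rewards, bonuses, beta, gamma=0.99):
    """Discounted sum of the combined reward. gamma multiplies the intrinsic
    bonus as well, because both terms sit inside the discounted sum."""
    return sum(gamma ** h * (r + beta * b)
               for h, (r, b) in enumerate(zip(task_rewards, bonuses)))
```

For example, with `task_rewards=[1, 0]`, `bonuses=[0, 1]`, `beta=0.5`, and `gamma=0.5`, the return is `1 + 0.5 * 0.5 = 1.25`, showing the bonus being discounted exactly like the task reward.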
> The title is quite broad. To make it more explicit, consider including terms like "Repositioning & Exploration" for greater clarity. The current title accurately reflects our paper's primary contribution: achieving hyperparameter robustness in RL exploration. "Repositioning & Exploration" is the mechanism we developed to achieve this goal, not the goal itself. The title appropriately emphasizes our main contribution rather than the specific technique used. > Lines 16 and 17 in Algorithm 1 are unclear. It is not evident whether the method uses only one type of reward or if both types of rewards are used in each phase. We will clarify this in the camera-ready version. "With intrinsic reward" means training uses both task reward and intrinsic reward together, while "Without intrinsic reward" means using task reward only. **We respectfully request that you reconsider your assessment in light of our clarifications above, as your review appears to have overlooked our paper's primary contribution, which was correctly identified by all other reviewers.** --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their patient explanation and kind response. The rebuttal addresses my concerns, and I am inclined to view this draft more favorably and will raise my score. --- Reply to Comment 1.1.1: Comment: We appreciate your decision to raise your score. We're pleased that our rebuttal has comprehensively addressed all of your concerns and questions. Given the strong theoretical guarantees and empirical results we've presented, and our commitment to incorporate all feedback in the camera-ready version, we respectfully hope you will consider a stronger score.
Summary: The paper addresses hyperparameter sensitivity in curiosity-driven RL exploration, proposing **Hyper**, a two-phase algorithm that decouples exploration (curiosity-driven) and exploitation (repositioning-guided). Theoretical guarantees under linear MDP assumptions and empirical validation across navigation/locomotion tasks demonstrate Hyper’s robustness to the intrinsic reward coefficient β. Claims And Evidence: - **Claim 1 (β Sensitivity):** Supported by the navigation warm-up example (Fig. 2) and Table 1. **Limitation:** Baseline comparisons use β=1.0 for all methods, potentially disadvantaging baselines requiring tuned β. - **Claim 2 (Hyper’s Robustness):** Empirical results (Fig. 6) validate reduced β sensitivity. **Gap:** Robustness to **p** (truncation probability) is not rigorously tested. - **Claim 3 (Theoretical Efficiency):** Theorem 4.2 under linear MDPs is sound but lacks direct connection to practical Algorithm 1. Methods And Evaluation Criteria: - **Strengths:** Warm-up example effectively isolates β sensitivity; environment diversity (PointMaze, MuJoCo) tests multiple dimensions. - **Weaknesses:** - **Baseline Fairness:** Using β=1.0 for all methods may skew comparisons (e.g., Curiosity-Driven methods often require smaller β). - **Ablation Studies:** Missing analysis of repositioning phase’s contribution vs. truncation mechanism. Theoretical Claims: - Theorem 4.2 (sample efficiency) is valid under linear MDPs but assumes Algorithm 2 (theoretical) aligns with Algorithm 1 (practical). **Gap:** No discussion of how neural networks in practice affect theoretical guarantees. Experimental Designs Or Analyses: - **Statistical Significance:** Standard deviations reported but no formal tests (e.g., t-tests). - **Metric Consistency:** Success rate (navigation) vs. cumulative reward (locomotion) is appropriate but obscures sample efficiency comparisons. Supplementary Material: I reviewed the supplementary material, focusing on: 1. 
The implementation details and hyperparameter settings 2. The additional experimental results 3. The detailed proofs of the theoretical claims The supplementary material is comprehensive and provides necessary details to understand and potentially reproduce the work. Relation To Broader Scientific Literature: - **Strengths:** Builds on curiosity-driven (Pathak et al.) and decoupled RL (Schäfer et al.) literature. - **Gaps:** - Fails to cite meta-RL exploration strategies (e.g., Stadie et al. 2018). - Omits hierarchical RL with intrinsic motivation (Kulkarni et al. 2016). ## Relation To Broader Scientific Literature The paper is well-positioned within the broader literature on exploration in RL: 1. It builds upon the established curiosity-driven exploration methods (Bellemare et al., 2016; Pathak et al., 2017, 2019; Burda et al., 2018). 2. It addresses a practical limitation of these methods (hyperparameter sensitivity) that has been noted but not thoroughly addressed in previous work. 3. The decoupling of exploration and exploitation relates to work by Schäfer et al. (2021) and Whitney et al. (2021), but with the novel addition of the repositioning mechanism. 4. The theoretical analysis extends the line of work on provably efficient RL with function approximation (Jin et al., 2018, 2020; Yang & Wang, 2020). Essential References Not Discussed: The paper covers most relevant literature, but could benefit from discussing: 1. **Meta-RL approaches to exploration**: Recent work on meta-learning exploration strategies (e.g., Stadie et al., "Some Considerations on Learning to Explore via Meta-Reinforcement Learning," 2018) could provide context for automatically adapting exploration strategies. 2. **Intrinsic motivation in hierarchical RL**: The paper could discuss connections to hierarchical RL methods that use intrinsic motivation (e.g., Kulkarni et al., "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation," 2016). 3. 
**Exploration in partially observable environments**: Since many real-world tasks involve partial observability, discussing how Hyper might perform in such settings would be valuable. Other Strengths And Weaknesses: **Strengths:** 1. The paper addresses a practical and significant problem in RL that limits the applicability of powerful exploration methods. 2. The proposed solution is elegant, combining theoretical guarantees with practical implementation. 3. The empirical results are comprehensive and convincing. 4. The paper is well-written and the ideas are clearly presented. **Weaknesses:** 1. The practical implementation of Hyper still requires setting the truncation probability p, which introduces another hyperparameter. While the authors provide a reasonable default and decay schedule, this somewhat undermines the claim of hyperparameter robustness. 2. The analysis of why Hyper works is somewhat limited - more insight into the interaction between the repositioning phase and exploration would strengthen the paper. 3. The environments, while varied, are still relatively standard RL benchmarks. Testing on more diverse or challenging environments would strengthen the claims. Other Comments Or Suggestions: 1. The paper would benefit from a more detailed discussion of the limitations of Hyper, particularly cases where it might not perform well. 2. A visualization of the agent's behavior during the repositioning and exploration phases would help readers understand the algorithm's dynamics. 3. The connection between the theoretical algorithm (Algorithm 2) and the practical implementation (Algorithm 1) could be more clearly explained. 4. Minor typos: - In equation (1), the rendered "Qπ = Eπ[PHh=1 γh−1rh(sh, ah)]" should likely be $Q^\pi = \mathbb{E}_\pi\big[\sum_{h=1}^{H} \gamma^{h-1} r_h(s_h, a_h)\big]$ - Several instances of missing or incorrect mathematical notation in the PDF rendering. Questions For Authors: 1. **Truncation Probability Sensitivity:** How does Hyper’s performance vary with **p**?
Does the decay schedule generalize across environments?
2. **Baseline Tuning:** Why was β=1.0 chosen for baselines? Were baselines tested with their optimal β ranges?
3. **Theory-Practice Gap:** How does Algorithm 1's neural network implementation relate to Algorithm 2's linear assumptions?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback, and we appreciate your positive comments. We will address your concerns below.

***Regarding relevant literature***

Thank you for the valuable suggestion. We will incorporate discussions of the references you suggested in the camera-ready version.

***Regarding partially observable environments***

We would like to refer you to Figure 9 in our paper, which shows the experimental results on MiniGrid environments, where the agent observes limited field-of-view (partially observable) observations. Hyper outperforms LESSON by Kim et al. (2023) in MiniGrid (Figure 9), which itself was already shown to be superior to standard curiosity approaches. This suggests Hyper's mechanisms transfer effectively to partially observable settings.

***Regarding baseline fairness***

We used $\beta=1.0$ following the official Disagreement implementation by Pathak et al. (2019) to ensure a fair comparison with the baselines.

***Regarding experiment analysis and metric consistency***

We believe that our experimental results demonstrate Hyper's superior sample efficiency. The performance curves in Figures 5, 8, and 9 show that Hyper consistently achieves faster learning across all environments compared to baseline methods. This demonstrates how quickly an algorithm reaches a given performance level, which is precisely the definition of sample efficiency in reinforcement learning.

***Regarding $p$-robustness***

Hyper is much less sensitive to hyperparameters than existing methods, as shown in Figure 6. Unlike $\beta$, the truncation probability $p$ requires no environment-specific tuning. We use the same $p$ schedule across all experiments with consistently strong performance. Our truncated geometric distribution design ensures phase lengths adapt appropriately to different environment horizons. At extreme values, Hyper smoothly transitions between pure exploitation ($p=0$) and full exploration/Decouple ($p=1$).
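The truncated geometric phase-length design described above can be sketched concretely. The exact parametrization used by Hyper is not given in this thread, so the `sample_switch_time` helper below is a hypothetical illustration of how $p$ interpolates between pure exploitation ($p=0$) and pure exploration/Decouple ($p=1$):

```python
import random

def sample_switch_time(p, H, rng=random):
    """Hypothetical sketch: draw the step at which an episode switches from
    the repositioning (exploitation) phase to the exploration phase, using a
    geometric distribution truncated at the horizon H.

    p = 0 -> never switch (pure exploitation);
    p = 1 -> switch at step 0 (pure exploration, the Decouple behavior).
    """
    if p <= 0.0:
        return H  # exploitation policy runs for the whole episode
    for h in range(H):
        if rng.random() < p:
            return h  # remaining H - h steps use the exploration policy
    return H  # truncation: no switch occurred before the horizon
```

Because the distribution is truncated at $H$, the expected exploration length scales with the horizon automatically, which is one way to read the claim that phase lengths "adapt appropriately to different environment horizons".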
***Regarding the interaction between the repositioning and exploration phases***

Section 5 comprehensively explains Hyper's key mechanisms:
1. The repositioning phase strategically places the agent in promising regions, preventing the over-exploration demonstrated in our warm-up example.
2. The truncated geometric distribution ensures sufficient exploration while focusing resources on promising areas.
3. By limiting exploration to regions informed by the exploitation policy, Hyper collects data that aligns with the exploitation policy's distribution, preventing distribution shift problems (Figure 3) and enabling more efficient learning.

***Regarding the choice of environments***

Our experiments demonstrate Hyper's performance across different challenges: diverse reward structures (dense-reward locomotion, sparse-reward locomotion, sparse-reward navigation), variable horizons (200-1000), and different state/action complexities. Hyper consistently excels across all settings, which provides compelling evidence for Hyper's generality and effectiveness.

***Regarding limitations & future work***

We appreciate this suggestion and will expand the limitations discussion in our revised paper. The primary limitations include:
1. **Computational cost**: Like Decouple, Hyper requires maintaining and training an additional exploitation policy, increasing computational overhead compared to traditional curiosity-driven methods.
2. **Potential for flexibility improvements**: While Hyper's exploration paradigm of alternating between repositioning and exploration phases proves highly effective, it could be further improved by dynamically switching between phases based on environment feedback.

We will incorporate this into the discussion in the camera-ready version.

***Regarding the visualization of agent behavior***

Thank you for this valuable and insightful suggestion. We agree this visualization will strengthen the paper and have implemented it as suggested.
The state-visitation plots can be found at this anonymous link: https://imgur.com/a/agent-behavior-8wQXtay. The visualizations clearly illustrate Hyper's advantage over baseline methods and provide further evidence of Hyper's exploration-exploitation balance.

***Regarding the connection between theory and practice***

Hyper represents a general RL exploration paradigm that can be integrated with various off-policy algorithms and curiosity methods. Algorithm 2 is a realization of Algorithm 1 with linear function approximation and a UCB intrinsic reward. Under the necessary assumptions, our theoretical analysis of Algorithm 2 demonstrates convergence guarantees under function approximation, while Algorithm 1 shows how the framework can be implemented with modern deep RL methods.

## References

**Kim et al. (2023) LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework. In ICML**

**Pathak et al. (2019) Self-Supervised Exploration via Disagreement. In ICML**

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal addressing my concerns. After considering your responses, I have updated my assessment of your paper.

## Regarding Partially Observable Environments

I appreciate the clarification about the MiniGrid experiments in Figure 9. This indeed demonstrates Hyper's effectiveness in partially observable settings, which strengthens your claims about robustness across different environment types. The comparison against LESSON, which was already shown to outperform standard curiosity approaches, provides compelling evidence for Hyper's capabilities in this domain.

## Regarding Baseline Fairness

Your explanation that β=1.0 follows the official Disagreement implementation by Pathak et al. (2019) addresses my concern about potential unfairness in the baseline comparisons. This adherence to established implementations strengthens the validity of your comparative results.
## Regarding Experiment Analysis and Metric Consistency

The learning curves in Figures 5, 8, and 9 do indeed demonstrate Hyper's superior sample efficiency across environments. I agree that these results effectively show how quickly Hyper achieves given performance levels compared to baselines, which is a standard measure of sample efficiency in RL.

## Regarding p-robustness

Your explanation about the truncation probability p is convincing. The fact that you used the same p schedule across all experiments with consistently strong performance is significant evidence of Hyper's robustness. The design of the truncated geometric distribution to adapt phase lengths to different environment horizons is a particularly elegant solution that addresses my concerns about introducing another hyperparameter.

## Regarding Interaction Between Phases

Section 5 does provide a comprehensive explanation of Hyper's mechanisms. The visualization you've added (linked in the rebuttal) further clarifies how the repositioning phase strategically places the agent and prevents over-exploration. This visualization effectively demonstrates Hyper's advantage over baseline methods and helps explain why the approach works so well.

## Regarding Theory and Practice Connection

Your explanation of the relationship between Algorithms 1 and 2 clarifies how the theoretical guarantees for the linear function approximation case relate to the practical implementation with deep RL methods. This addresses my concern about the gap between theory and practice in your approach.

## Regarding Limitations and Future Work

I appreciate your commitment to expand the limitations discussion in the revised paper. The points you've identified about computational cost and potential flexibility improvements are important considerations for readers to understand the trade-offs involved in adopting your approach.
## **Regarding Related Work**

I strongly suggest revisiting the discussion on Bayesian RL approaches to exploration. The current characterization of Bayesian RL is incomplete and fails to acknowledge more recent advances in this area. Please include discussion of:

---

1. Osband, Ian, John Aslanides, and Albin Cassirer. "Randomized prior functions for deep reinforcement learning." Advances in Neural Information Processing Systems 31 (2018).
2. Osband, Ian, et al. "Deep exploration via randomized value functions." Journal of Machine Learning Research 20.124 (2019): 1-62.
3. Li, Yingru, et al. "Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent." International Conference on Machine Learning. PMLR, 2024.
4. Li, Yingru, et al. "Scalable Thompson Sampling via Ensemble++ Agent." (2024).

These approaches have shown strong exploration capabilities in challenging environments, and a more thorough discussion would provide readers with a more accurate understanding of the current state of Bayesian exploration methods in RL.

## Updated Assessment

Based on your responses and the additional materials provided, I now have a more positive view of your paper. The comprehensive experiments across diverse environments (including partially observable ones), the theoretical guarantees, and the clear explanation of Hyper's mechanisms make a compelling case for its effectiveness and robustness.

The paper addresses an important practical problem in RL (hyperparameter sensitivity) with an elegant solution that is both theoretically grounded and empirically validated. The additional visualizations and clarifications you've provided further strengthen the paper's contributions.

I recommend acceptance of this paper, as it makes a valuable contribution to the field of reinforcement learning by addressing a significant limitation of curiosity-driven exploration methods.
--- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and your recommendation of acceptance! We have comprehensively addressed all your concerns and questions in our rebuttal, demonstrating the robustness and effectiveness of our approach. As suggested, we will enhance our paper with additional discussions on meta-exploration and Bayesian RL approaches, incorporating the valuable references you provided. Given our thorough responses and commitment to these improvements in the camera-ready version, we respectfully hope you consider further raising your score to better reflect the significant contribution our paper makes to the field.
Summary: This paper has proposed a new method, referred to as "hyper-parameter robust exploration (Hyper)", which aims to mitigate the "extensive hyper-parameter tuning" problem in existing curiosity-based exploration methods. The proposed method Hyper is summarized in Algorithm 1. This paper also analyzes Hyper under the linear MDP setting (Theorem 4.2), and preliminary experiment results are demonstrated in Section 6. In particular, Section 6.2 demonstrates Hyper's robustness to $\beta$.

## update after rebuttal

I have read the rebuttal and discussed with the authors.

Claims And Evidence: Overall, the main claims of this paper are supported by both the theoretical analysis (Theorem 4.2) and the experiment results in Section 6. Some comments:
- The theoretical analysis and result are limited to linear MDPs. This is an obvious limitation; however, it is mainly due to existing analysis techniques in the theoretical RL community. I do not see an easy way to extend the analysis beyond the linear MDP framework.
- The existing experiment results in Section 6 are solid, but I am wondering if they can be further strengthened. Specifically, my understanding is that the proposed Hyper method is a general method for all curiosity-based exploration approaches. However, in Section 6, only experiment results under a few algorithms have been demonstrated. I recommend that the authors add more experiment results under more algorithms to further strengthen the paper.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the considered problem.

Theoretical Claims: I think the theoretical claims in Theorem 4.2 can be further strengthened. Specifically:
- Please discuss the tightness of the upper bound developed in Theorem 4.2. Ideally, this paper should also develop a lower bound under the linear MDP setting and discuss the tightness.
- Rather than the number of steps, why not present the results of Theorem 4.2 using a regret bound?
- Please discuss how the results depend on the truncation probability $p$. Currently they are hidden in the $\tilde{O}$ notation.

Experimental Designs Or Analyses: I have checked the experiment design. To the best of my knowledge, it is sound and valid.

Supplementary Material: No

Relation To Broader Scientific Literature: My understanding is that this paper has done a good job of literature review and positions itself well among the relevant literature.

Essential References Not Discussed: I have not found any essential references that have not been discussed.

Other Strengths And Weaknesses:
- The flow of this paper can be further improved. In particular, Algorithm 1 appears after the analysis section (Section 4), which makes Section 4 a little difficult to read.
- I think the key points of Section 3 are well known to experts in this field. Maybe the authors can shorten it a little and use the space to add more experiment results.

Other Comments Or Suggestions:
- Typo: in Section 2, when defining the total reward, $\gamma^{h-1}$ is missing before $b_h$.
- This paper considers a setting with both a finite time horizon $H$ and a discount factor $\gamma$, which seems to be a non-standard RL setting. Usually we either consider a finite-horizon setting with $\gamma=1$, or an infinite-horizon setting with $\gamma<1$.

Questions For Authors: Please try to address the weaknesses and questions listed above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing the strengths of our work, particularly our careful experiment design and comprehensive literature review. Regarding your concerns about theoretical aspects, we want to clarify that the theoretical analysis serves as a convergence guarantee. Hyper's exploration framework deliberately diverges from traditional curiosity-driven exploration approaches by utilizing an exploitative policy for part of the buffer collection. The linear MDP framework analysis demonstrates that even with this novel approach, we achieve robust exploration efficiency in worst-case scenarios. We address your specific points below.

***Regarding the truncation probability $p$ dependency in sample complexity***

The sample complexity result is explicitly $O\left(\frac{d^3 H^4}{\epsilon^2 p}\right)$, which clearly shows how $p$ affects theoretical performance. This parameter provides valuable flexibility in the exploration-exploitation balance:
- When $p = 0$: The algorithm operates exclusively in exploitation mode. This drives the theoretical sample complexity to infinity.
- When $p = 1$: The algorithm commits fully to exploratory data collection, effectively functioning as the Decouple algorithm. As the Decouple algorithm exclusively uses the exploration policy to collect the data, the sample complexity reduces to $O\left(\frac{d^3 H^4}{\epsilon^2}\right)$, which exactly matches the bound in Jin et al. (2020).

***Regarding the tightness of the bound***

Our upper bound matches the bound in Jin et al. (2020). Regarding tightness, recent work by He et al. (2023) established a minimax-optimal bound of $O(d\sqrt{TH^3})$ for the linear setting, which aligns with the lower bound presented by Zhou et al. (2021). Our follow-up work is already exploring the integration of He et al.'s techniques to derive tighter bounds for Hyper, which would further strengthen our theoretical guarantees.
***Regarding the regret bound***

Our sample complexity result derives directly from the regret bound $\tilde{O}(\sqrt{d^3 H^4 T \iota^2})$, or $O(\frac{\sqrt{d^3 H^4 T \iota^2}}{p})$, currently presented in the appendix. We will incorporate this regret bound into the main theorem in the revised version to provide a more complete theoretical picture, while maintaining our sample complexity result, which better aligns with our empirical evaluation metrics.

***Regarding the paper flow***

Thank you for this organizational suggestion. We will restructure the paper to place Algorithm 1 before the analysis section and condense Section 3 to focus on essential background information. This reorganization will allow us to expand our experimental results and visualization sections. Following your suggestion and Reviewer 3vGR's feedback, we've added visualizations of agent behavior, available at https://imgur.com/a/agent-behavior-8wQXtay. These visitation maps demonstrate that Hyper achieves similar exploration capability as Curiosity and Decouple, but learns to exploit the exploratory data significantly faster. This visual evidence further validates our algorithm's effectiveness in balancing exploration and exploitation without requiring extensive hyperparameter tuning.

***Regarding the finite-horizon & discounted setting***

We agree that theoretical RL research typically adopts either finite-horizon with $\gamma = 1$ or infinite-horizon with $\gamma < 1$; however, practical RL implementations frequently combine both elements. Our framework uses a fixed episode length $H$ with discount factor $\gamma$ to better reflect real-world applications where both immediate rewards and long-term planning are important. This approach maintains conceptual alignment with the finite-horizon framework while incorporating the practical benefits of discounting. The superior empirical results across diverse environments validate this design choice.
In light of our responses and planned improvements, we believe our work represents a significant contribution. Our theoretical guarantees, coupled with exceptional empirical performance across diverse environments, establish Hyper as an important advancement in resolving the exploitation & exploration dilemma. The algorithmic improvements and visualizations we've added further strengthen our paper's impact.

### References
- Jin et al. (2020). *Provably efficient reinforcement learning with linear function approximation*. In COLT
- He et al. (2023). *Nearly minimax optimal reinforcement learning for linear Markov decision processes*. In ICML
- Zhou et al. (2021). *Nearly minimax optimal reinforcement learning for linear mixture Markov decision processes*. In COLT

---

Rebuttal Comment 1.1: Comment: Thanks a lot for the detailed rebuttal and explanations. The rebuttal has partially addressed my concerns. As to "Our follow-up work is already exploring the integration of He et al.'s techniques to derive tighter bounds for Hyper, which would further strengthen our theoretical guarantees.", is it possible to include this tighter regret bound in this paper? If so, I will increase my score to 4. Otherwise, I will keep my score at 3.

---

Reply to Comment 1.1.1: Comment: Thank you for your consistent support of our work. Regarding the integration of He et al.'s techniques for tighter bounds: we want to clarify that our work's primary focus is on developing a practical algorithm that balances exploration and exploitation effectively without requiring extensive hyperparameter tuning. The theoretical analysis serves primarily as a worst-case guarantee rather than as the central contribution of our paper. Our current bound is sufficient to demonstrate that Hyper maintains robust exploration efficiency, which is validated by our strong empirical results across diverse environments.
The experiments conclusively show that Hyper significantly outperforms baseline methods while being much less sensitive to hyperparameter settings. Based on our preliminary investigation, we are confident that integrating the techniques from He et al. (2023) to achieve tighter bounds for Hyper is definitely achievable. However, this requires addressing numerous technical details and constitutes a substantial extension that would be more appropriate for future work. We are actively pursuing this direction and plan to rigorously prove these tighter bounds in our follow-up research. The visualizations we've added further strengthen our practical claims by demonstrating that Hyper achieves similar exploration capability as curiosity-driven approaches but learns to exploit the exploratory data significantly faster without extensive tuning. We believe our work makes a substantial contribution as-is, with both adequate theoretical foundations and exceptional empirical results that address a significant challenge in reinforcement learning.
Diffusion Models are Secretly Exchangeable: Parallelizing DDPMs via Auto Speculation
Accept (poster)
Summary: This paper proposes and analyzes a parallel sampling scheme for diffusion models. The scheme is simple and natural: instead of taking a single step from $x_t$ to $x_{t-1}$ during the reverse process in each iteration, multiple steps are taken in parallel from the expected positions $y_s$ given the position $x_t$ at time $t$, as "proposals" for the positions $x_s$, and then rejection sampling is used to move back in time until a rejected sample is found. The authors show that this scheme gives an $O(d^{2/3})$ time sampling scheme whose performance exactly matches the analogous sequential scheme. Notably, the bound holds without the assumption of Lipschitzness of the score, representing the first sublinear-in-$d$ bound for such distributions, albeit with the use of parallelism. Empirically, the authors show a speedup over DDPM.

Claims And Evidence: Yes, the claims are supported by clear evidence.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes, they are correct.

Experimental Designs Or Analyses: Yes, they are sound.

Supplementary Material: No supplemental material.

Relation To Broader Scientific Literature: This paper is part of a line of recent work proposing and analyzing sampling algorithms for diffusion models. It is the first paper that gives a sublinear-in-$d$ iteration complexity for sampling from distributions without a score Lipschitzness assumption, by making use of parallelism. In contrast, prior work (Gupta et al., 2024; Chen et al., 2024) has shown \emph{polylogarithmic}-in-$d$ bounds, under a Lipschitzness assumption. Unlike these works, which \emph{approximate} the analogous sequential sampling algorithm, this paper proposes a scheme whose samples exactly match the sequential algorithm's.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses: While the scheme is interesting, I would have liked to see an experimental comparison with the parallel schemes from prior works; currently it is unclear how well this scheme performs in practice relative to those approximation schemes. This is perhaps the biggest weakness of this work.

Other Comments Or Suggestions: Generally the paper is well-written, but again, I would have liked to see a comparison with prior parallel algorithms.

Questions For Authors: Can you perform experiments that compare your scheme to prior parallel sampling schemes?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
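The propose-then-verify structure summarized in this review can be sketched generically. The helper names below (`propose`, `accept_prob`) are placeholders for illustration, not the paper's actual API; this is a hedged sketch of the rejection-sampling skeleton, not the ASD algorithm itself:

```python
import random

def speculative_round(x_t, t, propose, accept_prob, K, rng=random):
    """Hypothetical sketch of one speculate-then-verify round.

    `propose(x, t, s)` returns a proposal (e.g. an expected position) for the
    state at earlier time s given state x at time t; `accept_prob(prev, cand)`
    returns the rejection-sampling acceptance probability for chaining a
    candidate onto the previously accepted state. The K proposals depend only
    on x_t, so in the real scheme they can be computed in parallel; here they
    are drawn in a loop for clarity.
    """
    proposals = [propose(x_t, t, t - k - 1) for k in range(K)]  # parallelizable
    accepted = [x_t]
    for cand in proposals:
        if rng.random() < accept_prob(accepted[-1], cand):
            accepted.append(cand)  # verified: keep moving backward in time
        else:
            break  # first rejection: discard all later proposals
    return accepted
```

With acceptance probability identically 1 the whole block of K proposals is kept in one round, while frequent rejections degrade the loop toward one verified step per round, which is the sense in which the acceptance rate governs the practical speedup.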
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for taking the time to engage with our work. We hope to address your concerns below: ## Comparing ASD with Prior Works Prior works based on Picard Iterations *can only produce approximate samples* from the DDPM output distribution. In particular, these approaches need to trade-off sample quality against parallel speedup by tuning the error tolerance hyperparameter of the Picard Iteration. On the contrary, ASD is an error-free parallelization scheme that *always produces exact samples* from the DDPM output distribution *for any choice of hyperparameters* (as was also noted by the reviewer). Due to this fundamental difference, we believe the two approaches cannot be compared on equal footing. Our approach of parallelizing DDPMs via autospeculation represents a novel and fundamentally different "axis of improvement" compared to Picard iterations. In particular, the two approaches are orthogonal to each other and can potentially be combined together for even greater parallel speedups (e.g., by using ASD to sample from each level of the Picard Iteration). This integration presents an exciting direction for future research. Given these considerations, we believe that the original DDPM sampling method is the most appropriate and fair benchmark for evaluating ASD. We highlight that the primary objective of our experiments was to empirically validate our theoretically guaranteed speedups, while demonstrating a practical application of the hidden exchangeability property of DDPMs (which we believe has potential for broader applications beyond faster DDPM inference).
Summary: Diffusion models can be expensive to sample from, since sampling involves integrating a certain stochastic process, and is hence autoregressive (first one needs to take one step, then another step conditional on the result of the previous step, and so on). It would be extremely nice if there were some way to parallelize this sequential process, although it is not obvious whether a way to do this exists. The authors adapt a recent method for parallelizing autoregressive LLM sampling to diffusion models. They provide a detailed theoretical account of this algorithm and its time complexity as a function of the data distribution's dimensionality, and also illustrate how much it empirically speeds up sampling in a variety of real settings. Their algorithm depends on a relatively surprising feature of diffusion model trajectories that they call "exchangeability".

Claims And Evidence: The authors provide both detailed proofs of their theoretical claims and conduct a variety of helpful experiments to show that their method works well in practice. Both theory and experiment appear to be very high quality.

Methods And Evaluation Criteria: Yes. Their algorithm makes sense, and they spend significant real estate to explain its intuition and details. Their experiments also make sense, and they are careful to explain various technical details (e.g., the different relevant senses in which their algorithm can provide a speedup).

Theoretical Claims: No, but the authors write clearly and convincingly, so I am reasonably confident their results are correct.

Experimental Designs Or Analyses: No, but the authors write clearly and convincingly, so I am reasonably confident their results are correct.

Supplementary Material: No.

Relation To Broader Scientific Literature: The authors' proposal makes an interesting bridge between making (sampling from) LLMs more efficient, and making (sampling from) diffusion models more efficient.
Their theoretical work builds on a variety of previous formal-math-flavored theory for understanding diffusion models. Their proposal is also related to various other ideas about how to speed up diffusion model inference, and the authors clearly compare their approach to these ideas (e.g., by pointing out that their method has theoretical guarantees related to being error-free, unlike some other methods).

Essential References Not Discussed: No references come to mind.

Other Strengths And Weaknesses: The paper is well-written and clearly organized. The math is high-quality. The figures look great.

Other Comments Or Suggestions: Small typo in the Fig. 3 caption: "different sampling method" -> "different sampling methods".

Questions For Authors: My main questions are related to the surprising exchangeability result. The empirical experiments seem to indicate that exchangeability is 'true', or at least close enough in practice to 'true'. But I wonder to what extent the theoretical validity of exchangeability relates to defining the forward process as a pure OU process (Eq. 1). In particular, Eq. 1 does not involve any explicit time-dependence, and seems not to obviously include two popular schemes (VP-SDE, and the VE scheme used by EDM).

First, is it always possible to reparameterize a given forward process (e.g., VE or VP-SDE) to obtain a pure OU process like the one the authors use? This doesn't seem to be true in the case of VE. If it's not true, is it close enough to being true? E.g., one can consider an OU process with drift $- \epsilon \, \mathbf{x}$ for small $\epsilon > 0$ to model the VE case.

Second, in practice, each forward process generalizes differently (for example, because common discretizations affect the corresponding reverse processes somewhat differently). So they are legitimately different, and not purely reparameterizations of one another. How can the authors' theory account for this?
Also, does sampling via the authors' approach affect generalization or sample quality (e.g., FID scores) at all? A theoretical guarantee is one thing, but for various (theoretically interesting!) reasons things may be different in practice.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: $\newcommand{\d}{\mathsf{d}}$ $\newcommand{\vx}{\mathbf{x}}$ $\newcommand{\vy}{\mathbf{y}}$ $\newcommand{\vz}{\mathbf{z}}$ Thank you for your thoughtful review and insightful questions, which have helped improve our work. We hope to address your concerns below:

## Hidden Exchangeability Beyond OU DDPMs

Thanks for the helpful pointer! **The hidden exchangeability property is not limited to the OU process and actually holds for a large class of generic DDPM formulations, including both VP-SDE and VE-SDE.** This is because both VP-SDE and VE-SDE can be expressed as invertible reparametrizations of the OU process. As a consequence, they are both equivalent to the Stochastic Localization process, and thus satisfy hidden exchangeability. We shall update our draft to include a proof of this result, which we briefly sketch below:

Consider an arbitrary SDE of the form $\d \vz_t = h(t) \vz_t \d t + \sqrt{g(t)} \d B_t$ where $g, h$ are arbitrary continuously differentiable functions with $g(t) > 0$. Note that $h(t) = -\tfrac{1}{2}g(t)$ recovers VP-SDE and $h(t) = 0$ recovers VE-SDE. Now, consider the OU process $\d \vx_t = - \vx_t \d t + \sqrt{2} \d B_t$ with $\vx_0 = \vz_0$, and let $\vy_t$ denote the SL process, $\d \vy_t = m(t,\vy_t) \d t + \d B_t$, as defined in Section 3.1, Eqn 4. Now, define the functions $\alpha(t)$ and $r(t)$ as follows:

$$ \alpha(t) = \frac{1}{2} \ln \left(1 + \int_{0}^{t} g(s) \exp\left({-2 \int_{0}^{s} h(u) \d u}\right) \d s\right) $$

$$ r(t) = \exp\left(\alpha(t) + \int_{0}^{t} h(s) \d s\right)$$

Since $g(t) > 0$, $\alpha$ is a strictly increasing function with $\alpha(0) = 0$, i.e., $\alpha$ is an invertible function. Furthermore, $r(t) > 0$. One can now apply Itô's Lemma and the time transformation theorem for SDEs (see [1, Thm 8.5.7]) to prove that $\vz_t = r(t) \vx_{\alpha(t)}$, i.e., $\vz_t$ is reparametrizable to the OU process. As discussed in Section 3.1 of our work, $\vy_t = t e^{s(t)} \vx_{s(t)}$ where $s(t) = \tfrac{1}{2} \ln(\tfrac{t+1}{t})$.
It follows that $\vz_t$ also maps to the SL process via the following parametrization: $$\vy_t = \frac{t e^{s(t)}}{r(\alpha^{-1}(s(t)))} \vz_{\alpha^{-1}(s(t))}$$ Consequently, **the hidden exchangeability property also applies to $\vz_t$**.

## Effects of Autospeculation on Sample Quality

As elucidated in Tables 1, 2 and 3, our evaluations demonstrate that **ASD consistently achieves the same sample quality as the sequential DDPM implementation** (benchmarked via CLIP and FID scores for image generation and task success rates for Robomimic tasks). These findings corroborate our theoretical guarantee that ASD is an error-free parallelization scheme that always produces exact samples from the original DDPM's output distribution (Theorem 3).

### References
1. Øksendal: Stochastic Differential Equations
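As a concrete check on this construction, the two named special cases of $\alpha(t)$ and $r(t)$ can be evaluated in closed form (this is our own worked computation from the definitions in the rebuttal, not part of the original thread):

```latex
\[
\textbf{VE-SDE } (h \equiv 0):\quad
\alpha(t) = \tfrac{1}{2}\ln\!\Big(1 + \int_0^t g(s)\,\mathsf{d}s\Big),
\qquad
r(t) = e^{\alpha(t)} = \sqrt{1 + \int_0^t g(s)\,\mathsf{d}s}.
\]
\[
\textbf{VP-SDE } \big(h = -\tfrac{1}{2}g\big):\quad
\text{with } G(t) = \int_0^t g(s)\,\mathsf{d}s,\qquad
\int_0^t g(s)\,e^{G(s)}\,\mathsf{d}s = e^{G(t)} - 1
\;\Longrightarrow\;
\alpha(t) = \tfrac{1}{2}G(t),\quad r(t) = 1.
\]
```

In particular, for VP-SDE the scaling factor collapses to $r(t) = 1$, so the VP forward process is a pure time change of the OU process, $\vz_t = \vx_{\alpha(t)}$.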
Summary: This paper reveals the hidden exchangeability inherent in Denoising Diffusion Probabilistic Models (DDPMs) and proposes Autospeculative Decoding (ASD), a novel algorithm that leverages the model itself to generate multi-step speculations and verifies them in parallel. By eliminating auxiliary draft models, ASD achieves a theoretically guaranteed O(K^{1/3}) acceleration over sequential DDPM sampling while preserving zero quality loss. Empirical evaluations demonstrate 1.8-4x practical speedups across image generation (e.g., Stable Diffusion) and robotic control tasks, with CLIP/FID scores and policy success rates on par with original DDPMs. Claims And Evidence: 1. Practical implementations of DDPM typically use discrete steps, whereas the theoretical analysis is based on continuous SDEs. Although Theorem 11 analyses the discretisation error, its impact on exchangeability (e.g., whether large step sizes destroy exchangeability) is not explicitly quantified. 2. The effectiveness of ASD relies on exchangeability, and experimental results showing lossless acceleration indirectly support the existence of this property. However, targeted experiments (e.g., distribution consistency tests after replacement of increments) were not designed to directly validate exchangeability. The theoretical derivation is rigorous, but discretisation effects need further discussion; the experiments support the claim indirectly but lack direct validation. 3. Non-destructiveness is not validated in higher-dimensional tasks (e.g., video generation) or complex distributions (multimodal). The theoretical proof is sufficient and the experimental support is valid, but more evidence is needed for scalability. 4. Theorem 4 relies on the covariance decay of the SL process (Theorem 10) and rejection sampling probability bounds, under the assumptions that $\mathrm{Tr}(\mathrm{Cov}[\mu]) \leq \beta d$ and step sizes $\eta_k \leq \eta$ are reasonable. The derivation adapts the autoregressive framework of Anari et al. (2024a) to continuous spaces.
It is unclear whether the practical value of $\beta$ is dimension-independent. If $\beta = O(1)$ with respect to $d$, the theoretical speedup holds. However, if $\beta = O(d)$, the theoretical guarantees may degrade, weakening the claimed acceleration ratio. Methods And Evaluation Criteria: The autospeculative decoding (ASD) method proposed in the paper addresses the core problem of slow inference in diffusion models, proves the exchangeability of denoised trajectory increments by revealing the equivalence between diffusion models and stochastic localisation (SL), provides a theoretical basis for parallelisation, and is theoretically innovative. The assessment criteria are comprehensive: the experimental design of the paper covers multiple dimensions, and the assessment criteria are reasonable and persuasive. The paper has some room for improvement in comparison experiments, but the core validation is still persuasive. Theoretical Claims: 1. The exchangeability strictly requires equal step sizes $\eta_i$. While the authors mention this condition in Theorem 1, they do not explicitly quantify how unequal $\eta_i$ affects practical ASD performance (tested empirically but not theoretically). 2. In Theorem 11, the proof states $\mathbb{E}[\Sigma_t]$ is non-increasing in PSD order but does not explicitly cite the Löwner-Heinz theorem, which is required for this claim. Experimental Designs Or Analyses: In conclusion, the experimental design of the paper is generally reasonable, but there may be room for improvement in statistical significance and detailed description of the experimental setup. 1. The advantages of ASD in terms of error and acceleration ratio compared to Picard's iterative method of Shih et al. (2024) need to be quantitatively compared (e.g., FID vs. speed profiles). 2. ASD's computational cost of rejection sampling (e.g., rejection rate of Algorithm 3) is not quantitatively analysed. High rejection rates may weaken the actual acceleration, and acceptance rate statistics under different tasks need to be added.
Supplementary Material: The entire article, including the appendices, was carefully reviewed. Relation To Broader Scientific Literature: 1. While prior diffusion acceleration methods (e.g., DDIM, DPM-Solver) trade quality for speed via deterministic approximations or reduced steps, ASD achieves lossless acceleration through parallelism, akin to Shih et al. (2024)'s parallel sampling but with theoretical guarantees. Unlike recent parallel DDPM methods (Gupta et al., 2024; Chen et al., 2024) that require restrictive Lipschitz assumptions, ASD operates under minimal second-moment conditions, broadening applicability. 2. The work adapts speculative decoding (Leviathan et al., 2023; Chen et al., 2023), originally designed for discrete autoregressive models (e.g., LLMs), to continuous-state diffusion models. Crucially, it eliminates the need for a draft model—a limitation in prior speculative methods—by exploiting the exchangeability property. This aligns with Anari et al. (2024a)'s framework for any-order autoregressive models but addresses the unique challenges of infinite token spaces (i.e., continuous domains). Essential References Not Discussed: The authors correctly cite speculative decoding on arbitrary order autoregressive models. However, recent extensions to continuous spaces are missing: 1. Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching. arXiv preprint arXiv:2412.17153, 2024. 2. Accelerated Diffusion Models via Speculative Sampling. arXiv preprint arXiv:2501.05370, 2025. Other Strengths And Weaknesses: Strengths: The discovery of hidden exchangeability via the DDPM-SL equivalence is novel, providing a fresh theoretical lens for diffusion models. Extending speculative decoding to continuous spaces without draft models is a significant conceptual leap.
The proposed ASD achieves lossless acceleration (1.8-4x wall-clock speedup) with rigorous guarantees, addressing a critical bottleneck in real-time applications like robotics. Weaknesses: The comparative experiments are not comprehensive, and the theorems and explanations are not clear and complete enough. Refer to the "Claims And Evidence", "Theoretical Claims" and "Experimental Designs Or Analyses" sections. Other Comments Or Suggestions: No. Questions For Authors: 1. Practical implementations of DDPM typically use discrete steps, whereas the theoretical analysis is based on continuous SDEs. Although Theorem 11 analyses the discretisation error, its impact on exchangeability (e.g., whether large step sizes destroy exchangeability) is not explicitly quantified. 2. The effectiveness of ASD relies on exchangeability, and experimental results showing lossless acceleration indirectly support the existence of this property. However, targeted experiments (e.g., distribution consistency tests after replacement of increments) were not designed to directly validate exchangeability. The theoretical derivation is rigorous, but discretisation effects need further discussion; the experiments support the claim indirectly but lack direct validation. 3. Non-destructiveness is not validated in higher-dimensional tasks (e.g., video generation) or complex distributions (multimodal). The theoretical proof is sufficient and the experimental support is valid, but more evidence is needed for scalability. 4. Theorem 4 relies on the covariance decay of the SL process (Theorem 10) and rejection sampling probability bounds, under the assumptions that $\mathrm{Tr}(\mathrm{Cov}[\mu]) \leq \beta d$ and step sizes $\eta_k \leq \eta$ are reasonable. The derivation adapts the autoregressive framework of Anari et al. (2024a) to continuous spaces. It is unclear whether the practical value of $\beta$ is dimension-independent: if $\beta = O(1)$ with respect to $d$, the theoretical speedup holds; however, if $\beta = O(d)$, the theoretical guarantees may degrade, weakening the claimed acceleration ratio. 5.
The exchangeability strictly requires equal step sizes $\eta_i$. While the authors mention this condition in Theorem 1, they do not explicitly quantify how unequal $\eta_i$ affects practical ASD performance (tested empirically but not theoretically). 6. In Theorem 11, the proof states $\mathbb{E}[\Sigma_t]$ is non-increasing in PSD order but does not explicitly cite the Löwner-Heinz theorem, which is required for this claim. 7. The advantages of ASD in terms of error and acceleration ratio compared to Picard's iterative method of Shih et al. (2024) need to be quantitatively compared (e.g., FID vs. speed profiles). 8. ASD's computational cost of rejection sampling (e.g., rejection rate of Algorithm 3) is not quantitatively analysed. High rejection rates may weaken the actual acceleration, and acceptance rate statistics under different tasks need to be added. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: $\newcommand{\bE}{\mathbb{E}}$ Thank you for your review and for taking the time to engage with our work. We hope to address your concerns below: ## Analysis of Discrete Steps We highlight that **all our theoretical guarantees for ASD directly consider the discrete-time regime**. To this end, our proof of Theorem 2 on the correctness of ASD *does not* make any use of the continuous time SL process, and relies only on the properties of the Gaussian Rejection Sampler. Our proof of Theorem 3, which analyzes the discrete-time parallel complexity of ASD, uses the SL process *only as an analytic tool* to interpolate between the ASD increments. Such interpolation arguments are ubiquitous in the sampling literature, particularly in the analysis of diffusion models [1,2] as well as gradient-based sampling algorithms like Langevin Monte Carlo [3,4]. ## Impact of Discretization on Exchangeability While our discussion on hidden exchangeability in Section 3.1 focuses on the continuous case for ease of exposition (to provide intuition behind why the arguments of Anari et al. '24 can be extended to diffusion models), **the impact of discretization on hidden exchangeability is appropriately quantified in the proof of Theorem 3**. In particular, the proof of Theorem 3 establishes that, **as long as the step-size is not too large, ASD increments approximately satisfy the hidden exchangeability property**. This holds because the discrete trajectory of ASD is statistically indistinguishable from that of the continuous SL process (which exactly satisfies hidden exchangeability as per Theorem 1). We rigorously quantify this in the proof of Theorem 16 in Equations 20 and 21 by proving that the TV distance between ASD and the continuous process is $O(\sqrt{\eta \beta d})$. Hence, the two can be made statistically indistinguishable by choosing $\eta \asymp \tfrac{1}{\beta d}$. We shall update our draft to highlight this point in the main text.
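To illustrate the kind of exactness the Gaussian Rejection Sampler mentioned above provides, the standard speculative acceptance rule can be sketched for two Gaussians with equal variance. This is an illustrative sketch, not the paper's Algorithm 3: the output marginal matches the target law regardless of the draft mean, which is the sense in which verification is error-free.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_gaussian_sample(mu_target, mu_draft, sigma, n):
    """Draft from N(mu_draft, sigma^2); accept with prob min(1, p_t/p_d);
    on rejection, resample from the residual density ∝ max(0, p_t - p_d)."""
    logp = lambda x, mu: -((x - mu) ** 2) / (2 * sigma**2)  # unnormalized; normalizers cancel
    out = np.empty(n)
    for i in range(n):
        x = rng.normal(mu_draft, sigma)
        if rng.random() < min(1.0, np.exp(logp(x, mu_target) - logp(x, mu_draft))):
            out[i] = x
            continue
        while True:  # sample the residual by rejection from the target
            y = rng.normal(mu_target, sigma)
            if rng.random() < max(0.0, 1.0 - np.exp(logp(y, mu_draft) - logp(y, mu_target))):
                out[i] = y
                break
    return out

samples = speculative_gaussian_sample(mu_target=0.3, mu_draft=0.0, sigma=1.0, n=20_000)
# the output marginal equals N(0.3, 1) exactly, up to Monte Carlo error
```

The accepted mass has density $\min(p_t, p_d)$ and the residual contributes $\max(0, p_t - p_d)$, so the two sum to the target density $p_t$; the draft distribution only affects the acceptance rate.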
## High Dimensional Tasks Our experiments on Pixel Diffusion demonstrate the effectiveness of ASD on a 196,608-dimensional task. Evaluations on more complex modalities such as video are an interesting avenue of future work which we are unable to pursue at present due to resource constraints. ## Unequal Step-Sizes We respectfully disagree with the claim that the case of unequal step-sizes is not analyzed theoretically. We highlight that **Theorem 1 proves a general time invariance property** for the distribution of stochastic localization increments, **which holds even for unequal step-sizes $\eta_i$**. We call this time invariance the hidden exchangeability property as it reduces to exchangeability of increments when the step-sizes are equal. However, the key result of Theorem 1 is agnostic to the choice of step-sizes. Consequently, **all our theoretical guarantees for ASD are directly applicable to unequal step-sizes.** ## Assumption on $\beta$ We first note that $\beta = O(1)$ is a standard assumption in the theory of diffusion models [1,2]. Secondly, for arbitrary $\beta$, the canonical choice of step-size is $\eta \asymp \tfrac{1}{\beta d}$ (e.g. [1] uses $\eta \asymp \tfrac{1}{M_2}$ where $M_2 = \beta d$, see also [2, Appendix C]). Under this setting, the parallel runtime of ASD as per Theorem 4 is $O(K^{2/3})$. Hence, **the $K^{1/3}$ parallel speedup guarantee continues to hold.** ## References Thank you for the helpful references, which we weren't aware of at the time of writing. We shall update our draft to include them. ## Löwner-Heinz Theorem $\bE[\Sigma_t]$ being a decreasing function in the Löwner order directly follows from the facts that: 1. $\tfrac{d{\bE[\Sigma_t]}}{dt} = -\bE[\Sigma_t^2]$ and, 2. $\bE[\Sigma_t^2]$ is a PSD matrix. To our understanding, the Löwner-Heinz theorem (which deals with the operator monotonicity of $t^p$ for $p \in [-1,1]$) is not relevant here. Please let us know if we have made a mistake. ## Comparison to Shih et al.
Please refer to our response to Reviewer FuEw for a detailed discussion on why our results and those of Shih et al. cannot be compared on equal footing (and why our work and that of Shih et al. represent orthogonal axes of improvement for fast DDPM inference). ## Quantifying the Acceptance Rates We highlight that the acceptance rate is quantitatively analyzed in the proof of Theorem 4. In particular, the total probability of rejection is upper bounded by $O(\sqrt{K \theta \eta \beta d})$. ### References 1. Chen et al.: Sampling is as easy as learning the score 2. Benton et al.: Convergence Bounds for DDPMs via Stochastic Localization 3. Vempala & Wibisono: Rapid Convergence of the Unadjusted Langevin Algorithm 4. Balasubramanian et al.: Towards a Theory of Non-Log Concave Sampling 5. Anari et al.: Parallel Sampling via Counting
Toward a Unified Theory of Gradient Descent under Generalized Smoothness
Accept (poster)
Summary: --------- ## Update after rebuttal The authors showed that the issue in the proof can be fixed. I checked the corrected proof and it looks good to me, so I'm raising my score. ------------- This paper studies the convergence of gradient descent under a generalized assumption on the objective's $L$-smoothness. Assuming that the Hessian of the objective $f$ satisfies $\Vert \nabla^2 f(x)\Vert \le \ell (\Vert \nabla f(x)\Vert)$, where $\ell$ is a nondecreasing positive locally Lipschitz function, the authors establish convergence of gradient descent with a novel integral-based stepsize $\gamma_k$. The authors give a few good motivating toy examples such as classification with a linear two-layer network with a single parameter in each layer. Even this extremely simplified example is not covered by previous theory, while the new work gives results if we choose $\ell(t) = L_0 + L_1t^\rho$ with some $\rho\ge 2$ (previous work only covers $\rho < 2$). And even in the cases where the previous work does cover examples, the new stepsize rule leads to a superior complexity. The authors study nonconvex and convex problems. There are certain limitations to the results and one of the results is wrong (see the Theoretical Claims section of my review). The results for the case where $\ell$ is sub-quadratic have already been covered by previous work, even though with a worse complexity, while the super-quadratic $\ell$ requires an assumption on bounded gradients. So while I found some of the small facts discovered in the paper to be elegant, the paper feels a bit inconclusive and doesn't seem to solve the problem of minimizing the functions advertised in the motivation. The empirical results are very small and are only provided in Appendix A. To conclude, the paper has very solid motivation, but the results are a bit underwhelming, especially taking into account that one of them is wrong. I think removing the wrong part wouldn't be too difficult.
Claims And Evidence: The claims are supported with evidence; I do not see any issue with that, and I'm pretty sure the authors will be able to adjust them after removing the wrong part of the paper. Methods And Evaluation Criteria: There are no issues with methods and evaluations. Theoretical Claims: The guarantees in this paper can be split into several categories: 1. General **nonconvex** problems with $\ell(x)$ being **sub**-quadratic. This part looks **correct**, although not very interesting due to prior work already covering it. 2. General **nonconvex** problems with $\ell(x)$ potentially being **super**-quadratic. This one looks **correct**, but it requires an extra assumption that the gradient is bounded. 3. **Convex** problems with $\ell(x)$ being **sub**-quadratic. There is a **mistake** in the proof (see below). 4. **Convex** problems with $\ell(x)$ potentially being **super**-quadratic. This alternative approach seems to be correct as it uses a different proof technique. 5. **SGD** theory. I checked some steps in the proofs and they looked good to me; they follow the standard steps in high-probability bounds. The convergence result requires large batch sizes, which is an issue present in a lot of previous papers on $(L_0, L_1)$-smoothness. It was shown in the work of Koloskova et al. that this is not surprising: clipping biases SGD and it doesn't converge in general. I see this as a small extension of the deterministic results. ## Mistake in the proof In Appendix K, the authors use $x+y\le 2\max(x, y)$ in the wrong way. They conclude from $\min_k \frac{f_k}{x+y}\le \frac{R^2}{T+1}$ that $\min_k \max(\frac{f_k}{2x}, \frac{f_k}{2y})\le \frac{R^2}{T+1}$, whereas the correct bound would be $\min_k \min(\frac{f_k}{2x}, \frac{f_k}{2y})\le \frac{R^2}{T+1}$, i.e., with $\min$ instead of $\max$, since $\frac{1}{\max(x, y)} = \min (\frac{1}{x}, \frac{1}{y})$. Unfortunately, I think it's a fundamental issue and can't be easily fixed.
Experimental Designs Or Analyses: The experiments are only performed on simplified 1-dimensional functions and reported in the appendix, providing the number of iterations needed to reach a target accuracy. I think it's fair to say they don't constitute a solid contribution, and the paper should be viewed as purely theoretical. Supplementary Material: I went in detail through some of the proofs in the supplementary material and found an issue in one of them, while the other checked proofs looked good to me. Relation To Broader Scientific Literature: The paper extends the prior work on generalized $L$-smoothness assumptions and provides a new stepsize that, to the best of my knowledge, hasn't been considered before. I think the contributions are novel and will be of interest to the optimization community. Essential References Not Discussed: One key paper on the topic has been missed by the authors: Xie et al., "Gradient-Variation Online Learning under Generalized Smoothness". The paper considers online learning under a time-varying function $\ell_t$ in the generalized $\ell$-smoothness assumption. Otherwise, the essential references seem to be already in the paper. Other Strengths And Weaknesses: I found it very easy to go through the paper and I'd commend the authors for the presentation of their results. Other Comments Or Suggestions: When discussing related work and complexities, a proper definition of complexity is missing, which is especially important in the nonconvex case since some other papers discuss the complexity of getting $\Vert \nabla f(x) \Vert \le \varepsilon$ rather than $\Vert \nabla f(x) \Vert^2 \le \varepsilon$. Furthermore, the authors need to make it clear they discuss the best iterate rather than the last iterate or the average iterate. I think the section structure is a bit sub-optimal. I'd suggest making sections 2, 3, and 4 subsections of section 1.
Sections 5 and 6 could be unified as sections on general theory, while the rest except for the conclusion could be combined into a section on convergence. Typo on page 4: "it is know that". Typo on page 5: "it seems that this infeasible because to find an explicit formula of the optimal step size using (8)." Typo on page 5: "one easily calculate $\gamma_k$". Typo in equation (11): the gradient norm power is $p$ instead of $\rho$. The same typo is made in equation (28). Typo on page 6: "If $\ell(s) = L_0 + L_1s^\rho$ for $p > 2$". Typo on page 6: "finds an $\varepsilon$–stationary after". The way Assumption 9.1 is formulated is a bit weird as it seems to ask for non-uniqueness of $x_*$, which the authors don't need. Questions For Authors: 1. The authors explain in Remark 6.7 that one can use the stepsize $\gamma_k = \frac{1}{\ell(2\Vert \nabla f(x_k)\Vert)}$ but that it would lead to "a less tight final result". Can the authors clarify how much worse it is going to be? From the practical perspective, the rule without integration seems to be a lot more appealing, and relying on it would make the paper even stronger. Since (33) already uses $\ell(2\Vert \nabla f(x_k)\Vert)$, it appears that the simpler choice would work at least for convex problems. 2. Can the authors provide an example of how Simpson's rule can be used on a specific example? Taking an example from Section 3 would be particularly illustrative. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the review! We now start with "Mistake in the proof." Thank you again for spotting a typo in the paper. **We will try to explain that this is a typo rather than a mistake, because it is sufficient to fix "max" to "min," and nothing else should be changed.** Let us clarify this part. Both we and the reviewer agree that the right inequality is $$\min_{k \in \{0, \dots, T\}} \min\left[\frac{f_k}{2 L_0}, \frac{1}{2 L_1 f_k^{\rho - 1}\left(\frac{2 \sqrt{T + 1}}{R}\right)^{\rho}}\right] \leq \frac{R^2}{T + 1}.$$ Let us fix $T$ as in the paper, $$T = \max\left[\frac{2 L_0 R^2}{\varepsilon}, \frac{16 L_1^{2 / (2 - \rho)} R^2}{\varepsilon^{2 (1 - \rho) / (2 - \rho)}}\right]$$ (the smallest integer larger than this; also, we should take 16 instead of 4). Notice that $$\min_{k \in \{0, \dots, T\}} \min\left[\frac{f_k}{2 L_0}, \frac{1}{2 L_1 f_k^{\rho - 1}\left(\frac{2 \sqrt{T + 1}}{R}\right)^{\rho}}\right] = \min\left[\frac{f_T}{2 L_0}, \frac{1}{2 L_1 (f_T)^{\rho - 1}\left(\frac{2 \sqrt{T + 1}}{R}\right)^{\rho}}\right] \leq \frac{R^2}{T + 1} \quad (**)$$ because the terms in the min are non-decreasing functions of $f_k$ for all $0 \leq \rho \leq 1$ and $f_T = \min_{k \in \{0, \dots, T\}} f_k$ (see Theorem 7.1). There are three options: 1) $\frac{f_T}{2 L_0} \leq \frac{1}{2 L_1 (f_T)^{\rho - 1}\left(\frac{2 \sqrt{T + 1}}{R}\right)^{\rho}}.$ Using (**), we get $$f_T \leq \frac{2 L_0 R^2}{T + 1} \leq \varepsilon,$$ where the last inequality is due to the choice of $T.$ 2) $\frac{f_T}{2 L_0} > \frac{1}{2 L_1 (f_T)^{\rho - 1}\left(\frac{2 \sqrt{T + 1}}{R}\right)^{\rho}}$ and $\rho = 1.$ Using (**), we get $$(f_T)^{1 - \rho } \leq \frac{2^{1 + \rho} L_1 R^{2 - \rho}}{(T + 1)^{1 - \rho / 2}}$$ and $$T + 1 \leq 16 L_1^2 R^2$$ because $\rho = 1,$ which cannot be true due to the choice of $T.$ Thus, the second option is not possible after $T$ iterations.
3) $\frac{f_T}{2 L_0} > \frac{1}{2 L_1 (f_T)^{\rho - 1}\left(\frac{2 \sqrt{T + 1}}{R}\right)^{\rho}}$ and $\rho < 1.$ Using (**), we get $$(f_T)^{1 - \rho } \leq \frac{2^{1 + \rho} L_1 R^{2 - \rho}}{(T + 1)^{1 - \rho / 2}}.$$ Since $T \geq \frac{16 L_1^{2 / (2 - \rho)} R^2}{\varepsilon^{2 (1 - \rho) / (2 - \rho)}},$ we get $$(f_T)^{1 - \rho } \leq \frac{2^{1 + \rho} L_1 R^{2 - \rho}}{(T + 1)^{1 - \rho / 2}} \leq \varepsilon^{1 - \rho}$$ and $$f_T \leq \varepsilon$$ because $\rho < 1.$ In total, $f_T \leq \varepsilon$ for both possible options! We agree that these derivations are important, and we will add them to Section K. **The reviewer can see that there are no mistakes in Section K, only a small typo ("min" vs. "max").** Let us respond to other concerns: > One key paper on the topic has been missed by the authors: Xie et al., "Gradient-Variation Online Learning under Generalized Smoothness". Agree; we will add this paper to the discussion. > I'd suggest making sections 2, 3, and 4 subsections of section 1. Sections 5 and 6 could be unified as sections on general theory, while the rest except for the conclusion could be combined into a section on convergence. Agree; we will do it this way. See our discussion with Reviewer fNss. > Typos ... Thank you for spotting the typos! > The authors explain in Remark 6.7 that one can use the stepsize but it would lead to "a less tight final result". Can the authors clarify how much worse it is going to be? We believe that it can be significantly worse: the new iteration complexity with $\ell(2 || \nabla f(x_k)||)$ can become up to $\sup_{s \geq 0} \ell(2 s) / \ell(s)$ times bigger. For the $(\rho, L_0, L_1)$-smooth setup, the difference can be a factor of $2^{\rho}$. If $\rho$ is large, it is better to use our rule with the integral. > Can the authors provide an example of how Simpson's rule can be used on a specific example? Taking an example from Section 3 would be particularly illustrative.
In our implementation, we use the standard `scipy` library as follows:

```python
import scipy.integrate as integrate

def find_step_size(ell, norm_grad):
    # numerically evaluate the integral step size over v in [0, 1]
    return integrate.quad(lambda v: 1 / ell(norm_grad + norm_grad * v), 0, 1)[0]
```

It is not necessarily Simpson's rule; `integrate.quad` uses adaptive quadrature routines from the Fortran library QUADPACK.

> To conclude, the paper has very solid motivation, but the results are a bit underwhelming, especially taking into account one of them is wrong.

We hope that our clarification of the typo improves the perception of our work. Note that our work provides new state-of-the-art theoretical complexities in both convex and nonconvex settings, which we believe constitutes an important contribution to the ICML community given the current interest in generalized smoothness. Thank you once again for spotting the typo!

---

Rebuttal Comment 1.1: Comment: I thank the authors for showing how the issue can be fixed. I verified the details and it looks good to me. I'll increase my score for the paper accordingly.
Summary: This work improves the convergence rates of gradient descent on $\ell$-generalized smooth problems for both nonconvex and convex settings by using a novel integral-based stepsize. It then extends the results to the stochastic gradient descent algorithm. Claims And Evidence: The improved convergence rates claimed by this work are well supported by theoretical results. Methods And Evaluation Criteria: The numerical examples fit the generalized smooth condition well. The evaluation criterion, I guess, is the objective function $f(x_k)$, which is standard and reasonable. Please plot the learning curves, which would make this criterion clearer. Theoretical Claims: By checking some key proofs, I believe the proofs are correct and novel. Experimental Designs Or Analyses: The numerical examples and reproducibility (e.g. hyperparameter choices) look fine. It is strongly recommended to plot the learning curves $f(x_k)$. Supplementary Material: I checked the proof of Lemmas 6.3, 6.5 \& I.1, the proof of Corollary 6.6 as well as its involved Appendix D, and the proof of Theorems 7.1, 10.3 \& 11.2. I skipped the remaining proofs, whose logic I can envision. I believe the theoretical proofs are correct and novel. Relation To Broader Scientific Literature: This work studies currently the most general smooth optimization problem (the $\ell$-generalized smooth problem), with better convergence rates than the state of the art in other works on the same or less general smooth optimization problems. Essential References Not Discussed: There are other works on $\ell$-smoothness, citing Li et al. (2024a). You could add them to the related works. Other Strengths And Weaknesses: Strengths: The $\ell$-generalized smoothness studied in this paper is so far the most general smoothness assumption, which covers a lot of examples and applications. The theoretical results and proof techniques are correct and very novel.
For example, this work uses the inverse of the integral operator, $q^{-1}$, which yields an integral-based bound on the function decrease $f(x_{k+1})-f(x_k)$ and thus an integral-based stepsize. These further yield convergence results that improve the state of the art. Weaknesses, mainly in presentation: (1) In ICML papers, Section 1 is usually an introduction which presents the studied problem; the drawbacks of existing works and the contributions of this work (move your Section 4 here) are missing in Section 1. The final item of Section 4 could add "see Section xx". (2) The experimental results are usually shown by figure or table. In your case, it is strongly recommended to plot the learning curves $f(x_k)$. **I would like to raise my rating if you improve the presentation.** Other Comments Or Suggestions: (1) The suggestions about presentation above. (2) The final paragraph of Section 2 "Related Works" could mention that $\ell$-smoothness generalizes $(L_0,L_1)$-smoothness. Also, there are other works on $\ell$-smoothness, citing Li et al. (2024a). You could add them. (3) In Section 3, you could add machine learning application examples, for example, the examples in (Chen et al., 2023). Also, are there any application examples that belong to $\ell$-smoothness but not $(L_0,L_1)$-smoothness (i.e., when $\ell(x)=\mathcal{O}(||x||^p)$ with $p\in(1,2)$ as $||x||\to+\infty$)? (4) The exact expression of Eq. (17) could be written in Table 2 for convenient comparison. If there is not enough space, you may consider denoting constants like $c=L_0^{\frac{\rho}{2+\rho}} \Delta^{\frac{\rho}{2+\rho}} L_1^{\frac{2}{2+\rho}} R^{\frac{4}{2+\rho}}$. (5) The title of Section 5 could be "Assumptions for Nonconvex Setting". The title of Section 6 could be "Preliminary Theoretical Properties", since the main theoretical results are the convergence results in the subsequent sections. (6) You could explain how to compute such a stepsize $\gamma_k$ in integral form in practice.
(7) In Algorithm 2, you could explain that $\{\xi_{kj}\}_{j=1}^B$ are i.i.d. samples. (8) In the equation right after (23), the middle step could use $g(v)$ instead of $g(t)$ to avoid confusing the two $t$'s. (9) Lemma D.1 could define $U$ as $U(t,h)$ to be clearer. (10) Right after Lemma P.1: "We now prove the main theorem". Questions For Authors: No questions now. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you! > Please plot the learning curves which makes this criterion more clear. We've done this. We will add the following plots to the section with experiments: [figure](https://figicml.tiiny.site/) > Weakness mainly in presentation: (1) In ICML papers, usually Section 1 is introduction which introduces the studied problem, the drawback of existing works and the contribution of this work (move your Section 4 here) are missing in the Section 1. Thank you for the suggestions. We tried to follow the standard pattern: 1. Problem -> 2. Related Work -> 4. Contributions. (yet, we added motivating examples to give more motivation to our contributions). **We will align our structure with the reviewer's expectations: i) we will swap "3. Motivating Examples" and "4. Contributions." Then, we will unify "1. Problem," "2. Related Work," and "3. Contributions." into one section called "1. Introduction." This can easily be done in the camera-ready version.** > The experimental results are usually shown by figure or table. In your case, it is strongly recommended to plot the learning curves **We will add the following plots to the section with experiments**: [figure](https://www.dropbox.com/scl/fi/xo2pr2cow8201ce4p75mj/fig.pdf?rlkey=cyhhm2wbvqrr2b21xqezzmjbi&e=1&st=7n46aqyx&dl=0) > The final paragraph of Section 2 "Related Works" could mention ... You could add them. > In Section 3, you could add machine learning application examples, for example, the examples in (Chen et. al. 2023). Also, are there any application examples ... Agree. We will add these important papers to the overview. > The title of Section 5 could be "Assumptions for Nonconvex Setting". The title of Section 6 could be "Preliminary Theoretical Properties", since the main theoretical results are the convergence results in the consequent sections. Also agree; this title makes more sense. > You could explain how to compute such stepsize as an integral form in practice.
In our implementation, we use the standard `scipy` library as follows:

```python
import scipy.integrate as integrate

def find_step_size(ell, norm_grad):
    # Step size gamma_k as the one-dimensional integral over v in [0, 1]
    # of 1 / ell(||grad|| * (1 + v)), evaluated with adaptive quadrature.
    return integrate.quad(lambda v: 1 / ell(norm_grad + norm_grad * v), 0, 1)[0]
```

> In the equation right after (23), the middle step could use

> Lemma D.1 could define

> Right after Lemma P.1

We will fix these problems. Thank you very much for the nice review, which helps to improve our work!

---

Rebuttal Comment 1.1: Comment: I have increased my rating to 4. For the plots: You could add captions to each plot, such as the corresponding objective function.
Summary: The paper discusses optimization under generalized smoothness assumptions and shows a choice of step size that improves the known convergence bounds in this setting. The paper also presents convergence rates in scenarios where previous work did not consider (e.g., $\rho\geq 2$). # Update after rebuttal In the rebuttal, the authors addressed my concerns about the computational cost and the accuracy needed in calculating the new step sizes. This is a good paper and I support its acceptance. Claims And Evidence: Yes, all of the theoretical claims in the paper are proved in the text or in the appendices. Methods And Evaluation Criteria: Yes, the paper is mostly theoretical. Theoretical Claims: No. I focused on reading the main text of the paper. Experimental Designs Or Analyses: No. I focused on the theoretical part of the paper. Supplementary Material: N.A Relation To Broader Scientific Literature: The paper shows convergence bounds that improve rates from previous work. Moreover, there are bounds for scenarios that were not studied in previous work. These improvements are achieved via a different step size than those considered in previous work. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and clearly presented. 2. It establishes improved convergence rates for gradient descent under the generalized smoothness assumption in both convex and non-convex settings, which has been a topic of growing interest in recent years. 3. The approach of utilizing the function $q$ to reformulate the generalized smoothness conditions and derive a new step size is both novel and insightful. 4. The proposed step size allows the authors to obtain new theoretical results, also in the case of $\rho \geq 2$. Weaknesses: 1. The fact that the new step size does not always have a closed-form expression could pose challenges for its practical implementation in real-world applications. 2.
From a theoretical perspective, the additional computation required to determine the step size at each iteration may negatively impact the overall computational complexity compared to standard gradient descent. Other Comments Or Suggestions: N.A. Questions For Authors: I have several questions regarding the computation of the new step size: 1. Is it necessary to compute the step size exactly in order to achieve the improved convergence rates? If not, how do the convergence bounds degrade when the step size is determined with an accuracy of $\delta$? 2. For a general $\ell$, what is the computational cost of computing the new step size with a precision of $\delta$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you! Let us respond to the weaknesses and questions: > The fact that the new step size does not always have a closed-form expression could pose challenges for its practical implementation in real-world applications. We now show how we implement it in the experiments and how it can be done in Python: ```python import scipy.integrate as integrate def find_step_size(ell, norm_grad): return integrate.quad(lambda v: 1 / ell(norm_grad + norm_grad * v), 0, 1)[0] ``` We believe that the implementation is straightforward and only requires the standard `scipy` library. > From a theoretical perspective, the additional computation required to determine the step size at each iteration may negatively impact the overall computational complexity compared to standard gradient descent. > For a general $\ell$, what is the computational cost of computing the new step size with a precision of $\delta$? We agree that the function above requires an additional call to `integrate.quad`. However, compared to the complexity of gradient computations, this operation is negligible. Calculating a numerical integral of a bounded and well-behaved function is a very simple one-dimensional numerical task. > Is it necessary to compute the step size exactly in order to achieve the improved convergence rates? If not, how do the convergence bounds degrade when the step size is determined with an accuracy $\delta$? This is a good question. We have not investigated it in detail, but the rationale is similar: computing the numerical integral in one dimension with very good accuracy is a very inexpensive operation. This operation is almost as simple as calculating $e^x$, division, and so on, which are also not computed with perfect precision. However, the errors arising from summation, division, $e^x,$ $\sin x,$ and other standard operations are typically ignored in practice due to the high level of accuracy.
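As a concrete sanity check on this point, one can compare the numerically computed step size against a closed form in a case where one exists. The sketch below (the values of $L_0$, $L_1$, and the gradient norm are hypothetical, chosen only for illustration) uses the $(L_0,L_1)$-smooth case $\ell(s)=L_0+L_1 s$, where the integral has the analytic value $\frac{1}{L_1 g}\ln\frac{L_0+2L_1 g}{L_0+L_1 g}$ for gradient norm $g$:

```python
import numpy as np
import scipy.integrate as integrate

def find_step_size(ell, norm_grad):
    # Same snippet as above: step size as the integral over v in [0, 1]
    # of 1 / ell(||grad|| * (1 + v)).
    return integrate.quad(lambda v: 1 / ell(norm_grad + norm_grad * v), 0, 1)[0]

# Hypothetical (L0, L1)-smooth case: ell(s) = L0 + L1 * s.
L0, L1, g = 1.0, 0.5, 2.0
gamma = find_step_size(lambda s: L0 + L1 * s, g)

# Closed form of the same integral:
#   (1 / (L1 * g)) * ln((L0 + 2 * L1 * g) / (L0 + L1 * g))
closed = np.log((L0 + 2 * L1 * g) / (L0 + L1 * g)) / (L1 * g)
```

Agreement up to quadrature tolerance supports the claim that this one-dimensional integral is cheap and accurate relative to the cost of gradient computations.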
Summary: This paper studies the performance of the Gradient Descent (GD) method under a generalized $\ell$-smoothness condition. The authors propose a universal step size applicable across different parameter choices. Using this step size, they improve existing theoretical results and establish new convergence guarantees for previously unexplored settings. ## update after rebuttal Thanks to the authors for answering my questions. I have decided to keep my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I think it is correct. Experimental Designs Or Analyses: Yes. In fact, there are no experiments in the main content. Supplementary Material: Yes. Relation To Broader Scientific Literature: See below. Essential References Not Discussed: No. Other Strengths And Weaknesses: Weaknesses: * Can these results be extended to the constrained setting? * It would strengthen the paper to include experiments demonstrating the universal applicability of the proposed step sizes across different problem settings. * More details on the practical implementation of the step size computation would be beneficial. * For the experiments mentioned in Appendix A, it would be better to provide a figure to show the results. Strengths: * The paper provides a unified analysis and a universal step size for problems satisfying the $\ell$-smoothness condition, which encompasses many existing smoothness assumptions. Furthermore, the convergence results recover all known results as special cases. * By leveraging the new step size, the authors improve the convergence rate of GD for certain function classes. * The paper is well-written and easy to follow. Other Comments Or Suggestions: See above. Questions For Authors: See above. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive review! We would like to clarify the weaknesses:

> Can these results be extended to the constrained setting?

This is a good question. We have not yet explored this extension in depth; however, it appears that the constrained setting might be more challenging than in the $L$-smooth case. This is an important direction for future work.

> It would strengthen the paper to include experiments demonstrating the universal applicability of the proposed step sizes across different problem settings.

Our main objective was to establish new theoretical results, supported by toy experiments. We acknowledge that there is significant potential for further research to explore the proposed approach in various practical and deep learning scenarios.

> More details on the practical implementation of the step size computation would be beneficial.

In our implementation, we use the standard `scipy` library as follows:

```python
import scipy.integrate as integrate

def find_step_size(ell, norm_grad):
    return integrate.quad(lambda v: 1 / ell(norm_grad + norm_grad * v), 0, 1)[0]
```

> For experiments mentioned in Appendix A, it would be better to provide a figure to show the results.

We have prepared figures (see [figure](https://www.dropbox.com/scl/fi/xo2pr2cow8201ce4p75mj/fig.pdf?rlkey=cyhhm2wbvqrr2b21xqezzmjbi&e=1&st=7n46aqyx&dl=0)) that we will include in the camera-ready version of the paper.
Independence Tests for Language Models
Accept (spotlight poster)
Summary: This paper introduces a method to assess whether two large language models (LLMs) are independent or if their training procedures exhibit dependencies. The core idea is based on the principle that if two LLMs' weights are independent, the distribution of differences between arbitrary permutations of their weights should be uniform. Conversely, if the models are dependent, the differences between their original weights will be significantly smaller than those between permuted versions of their weights. Motivated by this observation, the authors propose a method to compute p-values from the distribution of weight differences, which quantify the probability of the two models being independent. The paper presents both the conceptual framework and the algorithmic implementation of their approach. Empirical results in some cases demonstrate its effectiveness in detecting dependencies between LLMs. Claims And Evidence: The submission makes claims in Theorem 3 that require further clarification and evidence. Specifically: - The theorem does not guarantee that the parameters (thetas) are independent when two matches are independent, which raises concerns about the potential for a high Type-II error in the proposed test. This limitation is not sufficiently addressed, and the evidence supporting the test's robustness in such scenarios is unclear. - The equivariant-type condition, which is central to the theorem, may not hold in practice. The submission does not provide adequate justification or empirical validation for this condition, casting doubt on the generality and applicability of the claimed results. Methods And Evaluation Criteria: I have some concerns about the proposed methods and evaluation criteria for the problem at hand. First, the definition of independence in Section 2.1, presented in Equation 1, is not well-defined. The notation $\theta_1 \perp \theta_2$ is used without a clear explanation.
For instance, does it imply statistical independence, zero mutual information, or some other form of independence? This lack of clarity makes it difficult to assess the validity of the proposed test. Furthermore, the authors suggest that if (non-independent initializations), then (non-independent final weights). However, they also imply that if or even $A_1 = A_2$ (i.e., non-independent or identical training procedures), then $A_1(\theta_1^0) = A_2(\theta_2^0)$ (i.e., independent final weights), without providing sufficient justification. Second, the evaluation criteria do not seem adequate for the problem. For example, the authors do not report results for Type-I error rates or statistical power, which are critical for assessing the reliability and effectiveness of the proposed test. Including these metrics would provide a more comprehensive evaluation of the method's performance. Theoretical Claims: Yes, I have reviewed the proofs provided in Appendix B of the paper. I have the following concerns regarding their correctness. 1. The definition of independence in Appendix B is not well-defined. The lack of a precise and rigorous formulation makes it difficult to assess the validity of the theoretical claims and proofs that rely on this definition. 2. The proof appears to be overly general and does not leverage any specific characteristics of language models. While this generality might seem advantageous, it raises questions about whether the proof is sufficiently tailored to the problem at hand. The absence of model-specific considerations limits the depth of the theoretical insights and their relevance to the application domain. Experimental Designs Or Analyses: I have concerns about the soundness and validity of the experimental design and analyses in the paper: - Given that the proposed method is agnostic to neural networks, it would be beneficial to include experiments with simple neural network architectures to study Type-I and Type-II errors. 
This would help validate the method's effectiveness and robustness in a controlled setting, which is currently missing from the experimental design. - $T$ used in the experiments appears to be too small, which may limit the reliability of the results. Supplementary Material: Yes, I have reviewed the supplementary material. Specifically, I examined Appendix B. Relation To Broader Scientific Literature: This paper's contributions relate to the broader literature on independence testing, particularly for large language models (LLMs). It may be helpful for applications like LLM-based ensemble methods and model voting. Essential References Not Discussed: This paper employs a permutation-based technique to test independence. However, recent relevant works, such as [1-2], are omitted from the discussion. References - [1] Berrett, Thomas B., Ioannis Kontoyiannis, and Richard J. Samworth. "Optimal rates for independence testing via U-statistic permutation tests." The Annals of Statistics 49.5 (2021): 2457-2490. - [2] Kim, Ilmun, Sivaraman Balakrishnan, and Larry Wasserman. "Minimax optimality of permutation tests." The Annals of Statistics 50.1 (2022): 225-251. Other Strengths And Weaknesses: - Strengths: The paper is mathematically thorough and bases its thesis on well-formulated assumptions. The idea is interesting and the method relatively simple and inexpensive, which may be attractive for implementation. - Weaknesses: - The significance of independence testing for language models is not sufficiently motivated or clarified. The paper would benefit from a more detailed and broad discussion of why this problem is important in the context of language models and their applications. - The clarity of writing could be improved. The presentation of ideas is often unclear, which hinders the reader's ability to fully grasp it.
Other Comments Or Suggestions: Typos: - Section 3.1, the first sentence: "We first validate validate the effectiveness" Questions For Authors: Please refer to **Claims and Evidence**, **Methods and Evaluation Criteria**, and **Weaknesses**. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and address some of their concerns below. We will add the references suggested. $\textbf{Claims and Evidence Comment 1}$ For the constrained test, we in fact guarantee that the test leads to exact p-values under the null hypothesis when two models are independent, i.e. two independent models will not lead to a low p-value. But with regards to a Type II error, we agree that our tests do not inherently guarantee that two non-independent models will always lead to a low p-value. However, empirically we find this is true in all our experiments. Specifically, Figures 5 and 6 in Appendix E show that on all (69) dependent Llama 7B pairs, the tests $\phi_{U}$ and $\phi_H$ yield p-values less than 2.2e-308 (which is our Type-II error rate, i.e. the maximum p-value we observe in the case that the null hypothesis (independence) does not hold). The same holds for $\phi_\text{MATCH}$ (Figure 7). We believe that this is sufficiently addressed through evidence from the Llama 7B and 70B results. We note that, with regards to a Type I error, our theorem does guarantee a uniform distribution for two independent models under the equivariance assumption, which we discuss more below. $\textbf{Claims and Evidence Comment 2}$ Standard machine learning algorithms such as SGD, which are the most common algorithms for training language models, follow the equivariance condition, as the gradients are permutation-equivariant (explained briefly in Example 2). We also emphasize that only one of the models needs to satisfy these assumptions, so a trusted model's developer (who used SGD, for example) can run this test without assumptions on the training strategy for another model. $\textbf{Methods and Evaluation Criteria}$ The independence is statistical independence of two random variables, $\theta_1$ and $\theta_2$ (equivalently, zero mutual information), i.e. $X \perp Y$ if and only if $P(X|Y) = P(X)$.
We will clarify this in a footnote in updated writing. In this case, if $\theta_1^0$ and $\theta_2^0$ are independent random variables, then by the post-processing inequality, since $A$ is a (deterministic) function, then $A(\theta_1^0)$ and $A(\theta_2^0)$ are also independent. We will add these details and this explanation. The Type-I error rate is the $\alpha$ threshold determined by the test user, since our tests give p-values. If the user chooses a threshold of $\alpha = 0.0001$ for example, then the Type-I error rate is 0.0001. This result is a consequence of our Theorem 1 with the equivariance condition. Further, we also plotted the null distribution for the 141 independent model pairs and found that the values are uniformly distributed, with exact values shown in Figure 6, for example. In our experiments (Figures 5, 6, 7 in the Appendix) (using ground truth from Hugging Face), choosing $\alpha =$ 1e-307 would yield a Type-I error rate of 1e-307 and a Type-II error rate of 0. $\textbf{Experimental Designs Or Analyses}$ Our experiments are already conducted on neural networks; $\phi_U$, $\phi_H$, and $\phi_\text{MATCH}$ are all defined over the individual MLP component of the Transformer models, which are in principle simple neural networks. For example, $\phi_U$ uses the weights of the up-projection matrix, i.e. one layer of a neural network. By varying model size and the dimensions of the weight / activation matrices from 1B to 7B to 70B models, we test the effectiveness and robustness of the test; we also discuss Type-I and Type-II errors above. Our tests $\phi_U$, $\phi_H$, and $\phi_\text{MATCH}$ do not use a $T$ value, as they follow the general form provided in Equation (2). In principle, $T$ is actually the total number of permutations, which is why we get a very low p-value like e-308. $\textbf{Theoretical Claims comment and Weakness 1}$ We agree that our proof holds for a broad class of machine learning models and neural networks. 
We intentionally chose this to highlight the generality of our method, but we focused our empirical experiments on language models due to their widespread re-use and the subsequent intellectual property concerns. Specifically, language model capabilities and pretraining costs are growing, which makes models more at risk of being stolen. Furthermore, many parties will fine-tune an open-source model for a downstream task rather than pretraining their own model. We briefly discussed this in the second paragraph of the introduction but can further expand on the motivation. But we note that the unconstrained test is geared towards language models (GLU MLPs). We acknowledge your concern about model-specific considerations and would appreciate it if you could point out any aspects that would significantly benefit from incorporating more language-model-specific characteristics, and if there were a specific issue you found with the proof. We will also work on the clarity of writing and appreciate the in-depth feedback. Thank you! --- Rebuttal Comment 1.1: Comment: The authors confused several statistical concepts. The p-values (random variables) are not equivalent to Type-I and Type-II errors (fixed constants). And p-values themselves cannot be used to estimate the Type-I and Type-II errors. If independence is defined as zero mutual information, the proposed method may be problematic. The setup involves two model parameters theta1 and theta2, each trained only once, yielding a single realization per parameter. Consider the simplest case: theta1 and theta2 jointly come from a bivariate Gaussian distribution, and we have just one observation from the Gaussian distribution. With only one observation, the Type-II error can be uncontrollably large even for a parametric test. The absence of empirical results on Type-I/II error rates exacerbates these concerns.
The proof of the theorem does not clearly leverage any specific features of language models that might offer sharper results (or weaker assumptions). The result itself seems counterintuitive: the pseudo-observations used for testing are not independent, yet the theorem does not address how dependence affects Type-I error control. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their additional time, and address comments from the new rebuttal response. We hope this clarifies some of the discussion points. 1. "The authors confused several statistical concepts. The p-values (random variables) are not equivalent to Type-I and Type-II error (fixed constants). And p-values themselves cannot be used to estimate the Type-I and Type-II errors." We do not equate p-values with type-I/II errors. Rather, because we guarantee that the test in Algorithm 1 yields a valid p-value (i.e., the output of Algorithm 1 will be uniformly distributed between 0 and 1 under the null hypothesis), if we define a test based on thresholding the output of the algorithm by some value $\alpha \in [0,1]$ (i.e., if the output is larger than $\alpha$ then the test decides the two models are independent), then the type I error of this test will be $\textbf{exactly}$ $\alpha$. Because we have this guarantee for type I error, we focus our empirical evaluations in the constrained setting on type II error. In the unconstrained setting, we evaluate both errors since we no longer have guaranteed control over either. 2. "If independence is defined as zero mutual information, the proposed method may be problematic." Independence of two random variables $\textbf{is equivalent}$ to those two random variables having zero mutual information (i.e., two random variables are independent if and only if they have zero mutual information).
This follows directly from the definition of mutual information and the strict convexity of the map $x \mapsto -\log x$ (strict convexity implies the KL divergence between two random variables is 0 if and only if they are equal in distribution). 3. "The setup involves two model parameters theta1 and theta2, each trained only once, yielding a single realization per parameter." The goal of statistical inference is to draw conclusions about a random variable given a realization of the random variable (i.e., a sample). We adopt this familiar goal in our work. 4. "Consider a simplest case, theta1 and theta2 jointly comes from a bivariate gaussian distribution, and we have just observation from the Gaussian distribution. With only one observation, Type-II error can be uncontrolled large even for parametric test." We are not sure what task is being referenced here and the relevance to our work, but we would be happy to discuss further upon clarification. We certainly agree there are many tasks that are not achievable from one observation (e.g., two observations are required to obtain an unbiased estimator for the variance of a distribution). 5. "The absence of empirical results on Type-I/II error rates exacerbates these concerns." We empirically evaluate both type I/II errors in the unconstrained setting, and we evaluate type II errors in the constrained setting (see above for an explanation of why it would be redundant to evaluate type I errors in the constrained setting). 6. "The proof of the theorem does not clearly leverage any specific features of language models that might offer sharper results (or weaker assumption)." The result of the theorem cannot be any sharper: we prove Algorithm 1 yields an $\textbf{exact}$ p-value, so there is no bound to improve. We agree the proof does not leverage specific features of language models.
This property is a strength rather than a weakness of the theorem: leveraging specific features of language models would necessarily require $\textbf{stronger assumptions}$ (for starters, we would need to assume $\theta_1$ and $\theta_2$ are language models). 7. "the pseudo-observations used for testing are not independent, yet the theorem does not address how dependence affects Type-I error control." As we discuss in the main body (e.g., Abstract, Introduction, and Section 2.2.1) and proof of the theorem (Appendix A), we crucially use the fact that the permuted models are $\textbf{exchangeable}$ with the original model (despite not being independent) to prove the theorem. See [1] for a definition of exchangeability. [1] https://en.wikipedia.org/wiki/Exchangeable_random_variables
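To make the exchangeability argument concrete, here is a minimal toy sketch (not the paper's actual test statistic or Algorithm 1; the matrix shapes, Frobenius-distance statistic, and perturbation level are illustrative assumptions): permuting the hidden units of one weight matrix produces copies that are exchangeable with the original under independent training, so the rank of the observed statistic among the permuted copies yields a valid permutation p-value.

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_p_value(W1, W2, T=999):
    # Statistic: negative Frobenius distance, so larger means more similar.
    def stat(A, B):
        return -np.linalg.norm(A - B)

    observed = stat(W1, W2)
    # Under the null (independent training), row-permuted copies of W1 are
    # exchangeable with W1, so the rank of `observed` among the permuted
    # statistics gives a valid p-value.
    hits = sum(
        stat(W1[rng.permutation(W1.shape[0])], W2) >= observed for _ in range(T)
    )
    return (1 + hits) / (1 + T)

W1 = rng.standard_normal((64, 32))
dependent = W1 + 0.01 * rng.standard_normal((64, 32))  # lightly perturbed copy
independent = rng.standard_normal((64, 32))            # fresh, independent draw

p_dep = perm_p_value(W1, dependent)    # very small: no permuted copy is as close
p_ind = perm_p_value(W1, independent)  # roughly uniform on (0, 1] under the null
```

The same rank-based construction underlies the constrained test, with permutations chosen so that permuted models are functionally equivalent to the original.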
Summary: The paper investigates a method to determine whether two models' weights were trained independently (i.e., from different random initializations) or if one model was derived from the other through fine-tuning, pruning, or partial reuse. This is framed as a hypothesis test for independence between two sets of model weights. The study considers two settings: - Constrained setting: Both models have the same architecture. The authors assume the training process is equivariant to permutations of the hidden units, allowing them to compute exact p-values under the null hypothesis of independent training. They validate this method on 21 open-weight models and correctly identify all non-independent pairs. - Unconstrained setting: Models can have different architectures. The authors develop a robust test based on aligning hidden activations, which remains effective despite architectural changes or adversarial modifications. Though this test does not produce exact p-values, it empirically behaves like an exact test and can even pinpoint specific model components that are shared or derived. Overall, the authors claim that the proposed methods reliably distinguish independent models from non-independent ones, even in cases where dependencies are obscured by architectural modifications or selective weight reuse. Claims And Evidence: The paper provides strong empirical evidence for the effectiveness of its proposed methods, particularly in detecting non-independent model pairs and identifying shared components. Methods And Evaluation Criteria: Yes. Theoretical Claims: No issues as far as I know. Experimental Designs Or Analyses: Overall, the experiments conducted by the authors are sound; tests were conducted across a wide range of models (mostly those related to Llama). For the unconstrained case, it would have been interesting to also explore other model families, such as Microsoft's Phi family. Supplementary Material: No.
Relation To Broader Scientific Literature: The paper contributes to the broader literature on intellectual property protection and model fingerprinting by demonstrating that model weights themselves can serve as a fingerprint for tracing model lineage. Unlike prior work that relies on embedding traceable signals in model outputs or specific responses, this study shows that statistical tests on model weights can effectively determine whether a model was trained independently or derived from another. This insight enhances provenance tracking and provides a new tool for enforcing licensing restrictions and protecting intellectual property in machine learning. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and positive feedback! We find the tests also work on smaller-scale models such as the Phi-3 family. Both $\phi_\text{CSH}$ and $\phi_\text{MATCH}$ return a statistic of approximately 1e-308 (aggregated with Fisher's method) on the fine-tuned model pair microsoft/Phi-3.5-mini-instruct and numind/NuExtract-v1.5 (3.8B parameters). We are also happy to add more experiments on other model families and will update the experiments section of our paper. Thank you!
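For readers unfamiliar with the aggregation step mentioned above, Fisher's method combines per-component p-values $p_1,\dots,p_k$ via the statistic $-2\sum_i \ln p_i$, which follows a $\chi^2_{2k}$ distribution under the global null. A minimal sketch (the function name `fisher_combine` is ours, not from the paper):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    # Under the global null, -2 * sum(log p_i) follows a chi-squared
    # distribution with 2k degrees of freedom (k = number of p-values).
    p = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return chi2.sf(stat, df=2 * len(p))
```

Combining several very small per-matrix p-values drives the aggregate toward the floating-point floor, consistent with statistics near 1e-308 reported above.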
Summary: This paper introduces a rigorous statistical framework for testing whether two language models were trained independently. Concretely, the authors propose hypothesis tests in both constrained and unconstrained settings. The constrained setting assumes known model architecture and training conditions, allowing for exact p-value computation through simulations of exchangeable copies of each model. The unconstrained setting removes these assumptions, making the test robust to adversarial modifications that preserve model outputs but alter internal weight structures. The proposed methods are validated on many open-weight models. ## update after rebuttal After reviewing the rebuttal addressed to me and those for other reviewers, I am willing to maintain my score. Claims And Evidence: Yes, the claims made in the submission appear to be supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed method makes sense for the independence testing of two language models. Theoretical Claims: Yes, I checked some proofs, including Theorems 1, 2 and 3. Experimental Designs Or Analyses: Yes. The experimental designs and analyses appear to be sound. Supplementary Material: No. Relation To Broader Scientific Literature: This paper aims to address a fundamental question in model provenance and intellectual property protection. Essential References Not Discussed: No, the paper includes essential references. Other Strengths And Weaknesses: Strengths - This paper is well written and its structure is clear. - The paper frames model independence as a hypothesis testing problem. - In the constrained setting, this paper uses exchangeable model copies under specific assumptions to compute exact p-values. Weaknesses: - The assumption of permutation-equivariant training may not always hold in real-world applications. - While the unconstrained test is empirically robust, theoretical guarantees are lacking.
- More importantly, the assumption that the learning algorithms are deterministic functions is seriously inconsistent with the facts. Obviously, the outputs of the learning algorithms are primarily influenced by the training data and thus are random. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and positive feedback! $\textbf{Weakness 1}$ Thank you for bringing up this concern. Standard machine learning algorithms such as SGD, which are the most common algorithms for training language models, satisfy the equivariance condition, as the gradients are permutation-equivariant (explained briefly in Example 2). We also emphasize that only one of the models needs to satisfy these assumptions, so a trusted model’s developer (who used SGD, for example) can run this test without assumptions on the training strategy for another model. $\textbf{Weakness 2}$ Yes, we agree, and we do not claim our test is in fact robust to all adversarial attacks. We have at least empirically validated that our test is robust to a superset of attacks over prior work (Zeng et al. 2024). $\textbf{Weakness 3}$ We agree that the final model weights are heavily influenced by the training data. However, in our definition of a learning algorithm (Section 2.1) we write “$A$ includes the choice of training data…” Given the training data, minibatch ordering, and other parameters, $A$ is in fact deterministic, and it is possible to fully reproduce the learning algorithm given the initial weights. This is an abstraction to simplify our framework and may be an unconventional way to describe learning algorithms, so we will add more clarification. We also acknowledge there are other sources of randomness which deterministic functions with a fixed seed do not capture, such as dropout; thus, in Appendix A, we state and prove a more general version of Theorem 1 for randomized learning algorithms, for which our conclusions still hold. Thank you, and we are happy to answer more questions!
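To make the equivariance argument concrete, here is a small numpy sketch (our own illustration, not code from the paper): a single SGD step on a tiny ReLU MLP commutes with permuting the hidden units, so permuting the initialization and then training gives the permuted result of training and then permuting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, h, d_out = 4, 6, 3
x = rng.normal(size=(d_in,))
y = rng.normal(size=(d_out,))

def sgd_step(W1, W2, lr=0.1):
    # Forward pass of a tiny ReLU MLP with squared-error loss.
    z = W1 @ x
    a = np.maximum(z, 0.0)
    err = W2 @ a - y
    # Backward pass (manual gradients).
    gW2 = np.outer(err, a)
    gz = (W2.T @ err) * (z > 0)
    gW1 = np.outer(gz, x)
    return W1 - lr * gW1, W2 - lr * gW2

W1 = rng.normal(size=(h, d_in))
W2 = rng.normal(size=(d_out, h))
perm = rng.permutation(h)

# Train the original initialization, then permute the result ...
A1, A2 = sgd_step(W1, W2)
# ... versus permute the initialization, then train.
B1, B2 = sgd_step(W1[perm, :], W2[:, perm])

# SGD is permutation-equivariant over hidden units: the two orders agree.
assert np.allclose(A1[perm, :], B1)
assert np.allclose(A2[:, perm], B2)
```

The same commutation holds step by step over a full training run, which is what makes the exchangeable model copies in the constrained test possible.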
Summary: The paper addresses the large model independence test: given the weights of two language models, can we determine if they were trained independently or if one model’s weights are derived from the other? Leveraging permutation invariance and equivariance in MLP neurons, it provides exact p-values. Extensive evaluations on open-weight models show that the test performs effectively. Claims And Evidence: Overall, the claims made in the paper are consistently supported. The strong statements are backed by theoretical proof or experiments. I did not find any major claim that is unsupported. Methods And Evaluation Criteria: 1. The paper employs cosine similarity as a core metric for comparing model parameters. However, its reliance on linear relationships may overlook nonlinear dependencies between models. This limitation could lead to false negatives, particularly in the unconstrained setting where architectural differences and adversarial modifications are more prevalent, and the paper does not explore alternative metrics to address this gap. 2. Additionally, while the unconstrained setting handles size inconsistencies via zero-padding, this approach may introduce bias or reduce sensitivity when the dimensional disparity is significant. The effectiveness of zero-padding is demonstrated empirically (e.g., Llama-3.1-8B vs. Llama-3.2-3B), but the paper lacks analysis of its impact under extreme size mismatches or discussion of alternative alignment strategies, potentially compromising robustness in broader scenarios. 3. In the unconstrained setting, the paper employs a matching approach via the MATCH algorithm (Algorithm 2) to compare $\theta_1$ and $\theta_2$ despite potential dimensional inconsistencies, using zero-padding to align matrix sizes. 
While this method effectively identifies dependencies in experiments, its reliance on a strict one-to-one correspondence may be inadequate when the true relationship between models is one-to-many or many-to-many, such as in scenarios involving pruning, expansion, or complex retraining. Such relationships, which do not align with the bijective assumption of MATCH, could lead to false negatives, especially when dimensional disparities are significant, as zero-padding might obscure nuanced dependencies. The paper does not explore these possibilities or test the method’s robustness against non-bijective dependencies, limiting its applicability to more intricate model relationships. Theoretical Claims: I checked the proofs for Theorems 1-3 in the constrained setting. The logic is correct. One issue is that the authors treat learning algorithms as deterministic functions, ignoring the possible randomness of dropouts, etc. This assumption limits the practical applicability of Theorem 1. Although partly discussed in Appendix A, the main text should provide a clear discussion of this limitation. Experimental Designs Or Analyses: 1. The evaluation primarily focuses on the Llama-7B architecture, which raises concerns about the generalizability of the proposed methods. Expanding the experiments to include a broader range of training regimes or adversarial scenarios would enhance confidence in their robustness. 2. While baselines such as $\phi_{JSD}$ and $\phi_{l_2}$ are included, the comparison to Zeng et al. (2024) is insufficiently addressed in the main text. A more thorough discussion of how the proposed methods outperform or complement prior work would strengthen the significance of the claims. Supplementary Material: I checked the proofs of Theorem 1-3 in the Appendix. 
Relation To Broader Scientific Literature: This contribution bridges classical statistical tools with modern deep learning in a novel real-world problem, language model independence testing, adapting permutation methods to a new domain while leveraging neural network symmetries. Essential References Not Discussed: Some related works about permutation tests may help unfamiliar readers to better understand Algorithm 1, e.g., [1, 2]. References: 1. Phillip Good. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. 2. E.L. Lehmann, Joseph P. Romano. Testing Statistical Hypotheses. Other Strengths And Weaknesses: Strengths: 1. The problem of testing model independence has significant implications. 2. The methods are computationally efficient (e.g., avoiding full retraining by using permutations or proxy models), making them feasible for large-scale models. 3. The formal definitions (e.g., $\Pi$-invariance, $\Pi$-equivariance) and theorems provide a solid theoretical backbone. Weaknesses: 1. From a practical application perspective, I think determining the direction of causal relationships between large models may be more meaningful than merely testing for the existence of a dependency. However, the latter remains a highly important and intriguing problem. 2. The running time of the test is also a very important practical consideration. The authors need to explicitly report the required times. 3. In the unconstrained setting, Algorithm 5’s proxy GLU MLP construction depends on the choice of hidden dimension $h$ and input distribution $P$. The paper offers little insight into their impact and how they were selected. The sensitivity of the method to these hyperparameters is also unclear. Other Comments Or Suggestions: 1. It would be better to add the mathematical definition of "exchangeable". 2. The assumptions of $\Pi$-equivariance and $\Pi$-invariance are mathematically clear but may be difficult for readers unfamiliar with deep learning symmetries. 
More intuitive explanations or examples could improve accessibility. Questions For Authors: 1. The motivation for choosing gate and up projections in $\phi_{MATCH}$ (Section 2.3.1) is described as a conjecture rather than a derived principle, which may leave readers questioning its robustness. Could you provide more justification? 2. In real-world applications, how to set significance level $\alpha$? 3. When the true relationship between sub-models is one-to-many or many-to-many, which do not align with the bijective assumption of MATCH, is there any better solution? Code Of Conduct: Affirmed. Overall Recommendation: 3
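For readers less familiar with permutation tests, the following toy sketch (our own construction, with an invented statistic and dimensions) shows how a Monte Carlo permutation p-value is formed; the +1 correction makes it valid whenever the hidden-unit order of one model is exchangeable under the null:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_p_value(theta1, theta2, statistic, n_perm=999):
    """Monte Carlo p-value: under H0 the rows (hidden units) of theta1 are
    exchangeable, so (count + 1) / (n_perm + 1) is a valid p-value."""
    observed = statistic(theta1, theta2)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(theta1.shape[0])
        if statistic(theta1[perm], theta2) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

def stat(a, b):
    # Toy test statistic: summed cosine similarity of aligned rows.
    na = a / np.linalg.norm(a, axis=1, keepdims=True)
    nb = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.sum(na * nb))

w = rng.normal(size=(64, 32))                    # "model 1" weights
dependent = w + 0.1 * rng.normal(size=w.shape)   # perturbed (fine-tuned) copy
independent = rng.normal(size=w.shape)           # fresh initialization

p_dep = permutation_p_value(w, dependent, stat)
p_ind = permutation_p_value(w, independent, stat)
assert p_dep <= 0.01   # dependence detected
assert 0 < p_ind <= 1  # independent pair: p-value is a valid probability
```

This is the generic recipe behind Algorithm 1; the paper's contribution is identifying statistics and exchangeable copies for which this construction yields exact p-values on real language models.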
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback! We will add the references mentioned. $\textbf{Experimental Designs Or Analyses Comment 1}$ We report experiments on Llama 70B, the hybrid StripedHyena and Mistral model, and (distilled) GPT-2 models in Tables 5, 9, and 15 and find our statistics work for these different architectures as well. In our response to Reviewer RDaz, we also experiment with smaller Phi models and find the test holds high power for those models too. We believe these models likely encompass a broad range of training regimes. In Table 7 of the Appendix, we also run our tests on adversarially-transformed models and show how our unconstrained test is robust to the transformation (whereas prior work is not). If there are other model architectures or families that would be beneficial, we would be happy to add those experiments! $\textbf{Experimental Designs Or Analyses Comment 2}$ We include more discussion and experiments in Appendix F.1 (HuREF Invariants) but are happy to move the discussion to the main text. Specifically, in Table 7 we demonstrate how our adversarial transformation can be used to break the HuREF invariants: each of the transformed Llama-2-7b-hf, vicuna-7b-v1.5, and Nous-Hermes-llama-2-7b models has low $M_a$, $M_b$, and $M_c$ values when compared with Llama-2-7b-hf, whereas our unconstrained statistic $\phi_\text{MATCH}$ gives a value of 2.2e-308. We also mention in the paragraph at Line 96 of the introduction that our tests yield p-values whereas Zeng et al.’s do not. Please let us know if there is more we can provide about the comparison with Zeng et al. (2024). $\textbf{Weakness 1}$ We agree. But in those cases, causal relationships can be determined by first using our test, then using metadata, such as the dates of model releases. $\textbf{Weakness 2}$ We report our times on an Nvidia RTX A6000. 
For $\phi_H$, the bottleneck is computing the forward pass to obtain the intermediate activations — and on two 7B models, the test on all 32 Transformer blocks combined takes on average less than 2 minutes. $\phi_\text{MATCH}$ requires the forward pass and aligning the activations; on dependent models, this will also take around 2 minutes total, whereas it may take 5-10 minutes per Transformer block for independent models. We believe this is reasonable computationally. We will include these details. $\textbf{Weakness 3}$ The hidden dimension $h$ is determined by the model weights, i.e. for Llama 2-7B the hidden dimension is 11008. This dimension varies for the models we test (28672 for Llama 2-70B, 8192 for Llama 3.2-3B) but does not affect the strength of our tests. In our paper, we only report results for using WikiText as the input distribution, but in fact we are able to achieve similar performance using random tokens from the vocabulary as the input distribution. We will include further experiments and ablations on this distribution. $\textbf{Question 1}$ We choose the gate and up projections of an MLP because they are two matrices that are combined via a direct product and activation function — which makes significant transformations to the weights difficult. However, the results in Figure 2 from the GLU MLP remain a conjecture. We reason that the gate and up projections are trained to be very aligned during training with a very large loss landscape for two independent models. $\textbf{Question 2}$ Empirically, we find our results hold with a threshold of even 2.2e-308, and e-61 for the generalized unconstrained test where we distill the MLP. In practice, a third party or model provider may choose $\alpha =$ 1e-5 for example, or their own confidence level. $\textbf{Question 3}$ We have tested non-bijective cases such as between Llama 3.1-8B and Llama 3.2-3B, where the dimension is reduced from 11008 to 8192 (more than 30%). 
We test $\phi_\text{MATCH}$ on many pruned models including the Nvidia Minitron models and Sheared Llama models in Appendix I.2, where the bijective assumption does not hold and find that the test is still strong. Also, if the reviewer is asking about many-to-many sub-models, we also run experiments, such as with the hybrid StripedHyena model, where only some layers are taken from the Mistral 7B model (i.e. embedding) but not others (MLP projections). We are happy to run more experiments or provide more clarification. We will also add more explanation with regards to equivariance and invariance. We appreciate the in-depth review and are happy to answer more questions!
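To illustrate how a one-to-one matching can still recover dependencies under pruning, here is a toy sketch of the zero-padding idea (our own construction with invented dimensions; scipy's assignment solver stands in for the Ramshaw-Tarjan LAP algorithm used by $\phi_\text{MATCH}$):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

# Hypothetical hidden-unit weight rows for two models of different widths.
d, n_big, n_small = 64, 10, 7
rows_a = rng.normal(size=(n_big, d))                   # wider model
keep = rng.choice(n_big, size=n_small, replace=False)  # surviving units
rows_b = rows_a[keep] + 0.01 * rng.normal(size=(n_small, d))  # pruned copy

# Zero-pad the smaller model so both sides have the same number of units.
rows_b_padded = np.vstack([rows_b, np.zeros((n_big - n_small, d))])

def unit_rows(m):
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    return m / np.where(norms == 0, 1.0, norms)

# Cosine-similarity profit matrix; padded zero rows contribute nothing.
sim = unit_rows(rows_a) @ unit_rows(rows_b_padded).T

# One-to-one matching that maximizes total cosine similarity.
_, col = linear_sum_assignment(-sim)

# Each surviving unit of the pruned model is matched back to its origin.
assert np.array_equal(col[keep], np.arange(n_small))
```

Even though the map from wide model to pruned model is not a bijection, the padded matching assigns every surviving unit to its true counterpart; the units dropped by pruning are absorbed by the zero rows.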
Summary: This paper proposes a statistical test for determining whether the initializations of two language models (really, "deep networks containing GLU MLPs", or even really slightly weaker than that) are independent or not, when treating the algorithms themselves (and any data they use, etc) as fixed. Exactly valid tests are developed under equivariance assumptions for the training algorithms; when this is not true, the paper proposes heuristic tests which seem to behave reasonably under the null. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The proof techniques pass a "smell check" for me, but I did not carefully verify them. Experimental Designs Or Analyses: The setups seem reasonable but I did not carefully examine the details. Supplementary Material: Read most of the appendices, but skimmed some parts. Relation To Broader Scientific Literature: The proposed test differs from previous approaches for the same problem in a useful way, and seems to work better. Essential References Not Discussed: Theorem 1 is very closely related to e.g. Theorem 2 of [Hemerik and Goeman](https://arxiv.org/abs/1411.7565), who also cite earlier sources for roughly the same result. This is not a big deal, since that theorem in itself is simple and not a major contribution of the paper, but as that paper goes in more depth about related properties, it would be good to point readers to. (They assume a group structure on their transformations, implying invariance, while you directly assume invariance; I think this is the only difference.) Other Strengths And Weaknesses: The proposed test is clever, relevant for an interesting problem, and appears to work in practice. I think the paper is worth publishing at ICML. The proper interpretation of the null hypothesis, however, is subtle. 
The paper includes some discussion of these issues, and I don't think any of it is incorrect or even misleading, but thinking about independence where the data is fixed is somewhat unnatural, and I can easily see practitioners misinterpreting the outputs of the test. As you point out, in the constrained setting you do have a valid test, and hence the only consideration is the power. You don't, though, give any formal discussion to the power of your tests. It seems like it might be possible to, for example, at least say something about how using Fisher's procedure across blocks relates in power to the permutation test? I'm not sure what "consistency" or similar properties would mean here since there's not iid data...but I believe there are probably some situations where despite the null hypothesis not holding, the test based on (2) has only trivial power. (I thought about it for a few minutes and couldn't come up with one in the constrained setting, but also couldn't convince myself that it's impossible; I think it likely is. In the unconstrained setting, doing so is trivial.) > Due to the element-wise product operation, we conjecture that in general it is not possible to permute the rows of $G_i$ while preserving the output of $\theta_i$ without permuting the rows $U_i$ in the same way "In general" is doing a lot of work here; it is easy to construct silly examples where this is not the case (e.g. take the activation $\sigma$ to always map to zero). This is not a big deal; for "reasonable models," this should be true. But it highlights the general issue in this paper that while some attempt is made at formality, there are many parts which are difficult to really formalize. 
I think that's probably inherent to the problem setting, but it does highlight how basically everything in the "unconstrained" setting is generally "reasonable" but does not have any strict definitions the way the "constrained" setting does (even if those definitions themselves require some thought to understand). For the retraining and distilling test: it seems that this scheme could be tricked by first permuting the hidden units before and after the MLP, then retraining the MLP layer from scratch there, right? Other Comments Or Suggestions: - A few times the paper refers to "the set of permutations over the hidden units of the network"; this isn't really right (or what you do), since it wouldn't make sense to swap hidden units across layers/different modules. - It would be good to add a sentence about the LAP algorithm of Ramshaw and Tarjan, even just saying that it is an algorithm for weighted matchings in bipartite graphs. You could easily save space by not writing Algorithm 2 out in algorithm form and instead just putting it in an equation display, since the algorithm form adds basically no information for this one. - (extremely minor) "Our robust test reposes on the design" – this uncommon usage of "reposes" seems to be used mostly in theology and is probably unfamiliar to most ICML readers. "Relies on" would be far more typical. - Your bib file is rather sloppy; you should e.g. remove most of the URLs, especially the ones from Semantic Scholar. Questions For Authors: I don't think I have any specific questions, although I'd be interested to hear if you're able to say a little more about the power of the test in the constrained setting, or other points I raised above. Update after rebuttal ------------------------ I accidentally posted the below comment in a way that you weren't able to see. I think it would be worth considering the question below for the next revision of your paper (whether camera-ready or a resubmit). 
> Thanks for your reply; I remain happy with the paper. > > I realized in thinking slightly more about the setting today that: is there a particular reason you chose the statistic to compute the Spearman correlation between the best permutation and $[n]$? In particular, that induces a strong "locality" on any swaps: swapping hidden units 1 and 2 would "cost" much less to the statistic than swapping hidden units 1 and 1000. That doesn't seem to make sense to me, though, since there really isn't any locality in this sense to the network. In a related point, comparing whether the *best* match lines up is distinct from asking how much "better" the best match is than the identity. When permuting, I wonder whether it might not make more sense to ask about the ratio of the matching objective, or similar. This would probably be harder to avoid permuting with the approximate p-value from the Spearman correlation, though. Code Of Conduct: Affirmed. Overall Recommendation: 4
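The locality concern raised above can be made concrete with a small sketch (illustrative only; $n$ and the swaps are invented): under the Spearman statistic, a swap of adjacent indices perturbs the correlation with $[n]$ far less than a swap of distant indices, even though both are a single transposition away from the identity.

```python
import numpy as np
from scipy.stats import spearmanr

n = 1000
identity = np.arange(n)

near = identity.copy()
near[[0, 1]] = near[[1, 0]]        # swap two adjacent hidden units
far = identity.copy()
far[[0, n - 1]] = far[[n - 1, 0]]  # swap two distant hidden units

rho_near, _ = spearmanr(identity, near)
rho_far, _ = spearmanr(identity, far)

# Both permutations are one transposition from the identity, yet the
# adjacent swap barely moves the statistic while the distant swap does:
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), and d grows with distance.
assert rho_near > rho_far
assert rho_near > 0.999
```

Since hidden-unit indices carry no spatial meaning in a network, this distance-weighting is exactly the arbitrariness the reviewer is pointing at.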
Rebuttal 1: Rebuttal: We thank the reviewer for their time and positive feedback! We will add the Hemerik and Goeman reference mentioned, thank you. About some of the weaknesses discussed: From our empirical results, the p-values produced by our constrained and unconstrained tests are as low as 2.2e-308 in the cases where we reject the null hypothesis. (See Figures 5, 6, and 7 in the Appendix.) In particular, in the best case where $\theta_1 = \theta_2$, the p-value will be on the order of exp(- # hidden units). The test could have trivial power in the case where the learning algorithms are constant functions; then the p-values will be non-significant for any initializations, even if the models are non-independent. However, this would never occur in practice for any non-trivially trained language model, and our tests have strong power for the wide array of language models we evaluate. We agree that the "in general" does a lot of work, so we only make our claims here via experimental results. We are happy to amend our writing in response to your feedback. "For the retraining and distilling test": The unconstrained test is robust to permutations: permuting the hidden units before and after the MLP (or permuting the entire model) and distilling the permuted model would not change the efficacy of the test, as the gate and up projection matrices would need to share that original permutation. We will also apply the edits from "Other Comments or Suggestions." We are happy to answer more questions and appreciate the in-depth response!
Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models
Accept (poster)
Summary: This work proposes a large-scale federated full-parameter tuning framework for LLMs, Ferret, which mainly combines first-order optimization with shared randomness. The proposed method mainly consists of three steps: using first-order methods for efficient local updates, projecting these updates into a low-dimensional space, and reconstructing local updates from this low-dimensional space. The method achieves reduced communication overhead and efficient full-parameter updates. The article provides comprehensive theoretical and experimental evidence for this method. Claims And Evidence: The proposed method is clearly explained and extensive experiments are performed. Methods And Evaluation Criteria: The proposed method is reasonable. The datasets and models used are reasonable. Theoretical Claims: I have checked the theoretical analysis; no major problems were found. Experimental Designs Or Analyses: The design of the experiments is generally comprehensive. However, in most cases, the authors only compare with FedZO, FedKSeed, and FedAvg; the other baselines are not fully compared. Supplementary Material: I have checked the supplementary materials with additional proofs, detailed experiment settings, and additional experiments. Relation To Broader Scientific Literature: This work considers first-order fine-tuning of LLMs in FL, building on zero-order methods. Essential References Not Discussed: The related work is solid enough; this work is most related to FedZO and FedKSeed. Other Strengths And Weaknesses: S1: The theoretical analysis of this paper is comprehensive. S2: Overall, this work is well-written and easy to follow. W1: Some prompt-tuning and LoRA-tuning baselines are only compared in Table 2. I think the larger-model fine-tuning in Table 3 and the communication and computation costs in the following tables are worth comparing with these baselines. W2: In Tables 4 and 5, the cost of Ferret is much greater than FedAvg. Why does this phenomenon happen? 
W3: The FL setting is significant and should be moved to the main paper with more detailed explanations, as should the ablation study, which is crucial for demonstrating the effectiveness of the proposed methods. W4: The privacy of exchanging updated weights deserves a more detailed analysis. Other Comments Or Suggestions: N/A Questions For Authors: Q1: I am more interested in fine-tuning LLMs by LoRA; what is the distinguishing feature of first-order methods compared to low-rank fine-tuning? Q2: I find that this work has a preliminary workshop version. It would be better to describe the improvements and differences relative to that version. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer CfcA for recognizing the comprehensive theoretical analysis, solid related work, and clarity of our paper. We would like to address your concerns below:

> W1: Performance Comparison with LoRA methods.

- **Clarification on Scope**: Our work focuses on **full-parameter fine-tuning** of LLMs. This choice is motivated by the goal of achieving better performance, as PEFT methods like LoRA may not consistently reach the same performance ceiling [2,3]. We primarily compare Ferret against other full-parameter federated methods (FedZO, FedKSeed, FedAvg).
- **Comparison with FedIT (LoRA baseline)**: We appreciate the suggestion to compare Ferret with PEFT methods regarding performance, communication, and computation costs. We have provided **additional results** on FedIT (LoRA-tuning, rank=8, alpha=16) and present the results below (Table-R2 and R3).
- The results show that while FedIT offers lower computational costs, it incurs significantly higher communication costs compared to Ferret. More importantly, Ferret maintains a strong performance close to FedAvg and outperforms FedIT by a large margin.
- Due to the limited rebuttal period, we have prioritized the FedIT (LoRA-tuning) baseline. We commit to including results for other PEFT methods in our final manuscript.

**Table-R2: Comparison of Computational and Communication Costs against FedIT**

| Model | Computational Cost (Overall Sec.) | | Communication Cost (# params.) | |
|-|-|-|-|-|
| | LLaMA-3B | LLaMA2-7B | LLaMA-3B | LLaMA2-7B |
| FedIT | $3.9$ | $4.5$ | $4.2\times 10^6$ | $6.6\times 10^6$ |
| Ferret | $30.3$ | $97.2$ | $7.8\times10^3$ | $6.4\times10^3$ |

**Table-R3: Performance comparison against FedIT**

| Algorithm | CodeAlpaca | | GSM8K | |
|-|-|-|-|-|
| | LLaMA2-7B | LLaMA2-13B | LLaMA2-7B | LLaMA2-13B |
| FedIT | $4.66 \pm 0.18$ | $6.10 \pm 0.18$ | $30.31 \pm 0.29$ | $13.46 \pm 0.34$ |
| FedZO | $4.58 \pm 0.26$ | $6.19 \pm 0.32$ | $30.41 \pm 0.31$ | $13.63 \pm 0.34$ |
| FedKSeed | $8.33 \pm 0.98$ | $10.70 \pm 0.47$ | $28.26 \pm 3.60$ | $33.67 \pm 1.15$ |
| FedAvg | $\mathbf{15.41} \pm 0.43$ | $\mathbf{14.68} \pm 0.26$ | $\mathbf{38.30} \pm 0.40$ | $\mathbf{39.82} \pm 0.17$ |
| Ferret (ours) | $\underline{12.10} \pm 0.47$ | $\underline{11.84} \pm 0.91$ | $\underline{36.10} \pm 1.18$ | $\underline{34.50} \pm 1.42$ |

[2] Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes
[3] Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs

> W2: Computational cost of Ferret compared to FedAvg

- This is correct and represents an **intentional design trade-off**. The increased computational cost in Ferret stems directly from the gradient projection technique employed to significantly reduce communication costs.
- This trade-off (higher local computation for lower communication) is common in communication-efficient FL algorithms, including the baseline FedKSeed. However, Ferret significantly optimizes this trade-off: compared to FedKSeed, Ferret achieves a $\sim6\times$ reduction in computational overhead while providing substantial communication savings over FedAvg.

> W3: The presentation regarding the FL setting

Thank you for this constructive suggestion. We will move the FL setup and key ablations into the main paper in the final version.

> W4: Privacy analysis.

We agree that formal privacy analysis would be valuable. As noted in Sec. 6, we are encouraged to do so for future research.

> Q1: distinguished feature of first-order methods compared to LoRA

If we understand correctly, we believe you are referring to the comparison between **full-parameter fine-tuning** (like FedAvg or our Ferret using first-order optimization) and **low-rank fine-tuning** (like FedIT, also typically using first-order optimization). The distinguished features are:
- Full-parameter fine-tuning updates all model parameters, but FedIT only updates a small set of *low-rank* adapter parameters.
- Standard full-parameter methods (like FedAvg) communicate all parameter updates, leading to high cost. LoRA communicates only the adapter updates (still large, as shown in Table-R2). Our method, Ferret, applies a projection technique after full-parameter gradient computation to significantly reduce communication.
- As supported by existing literature [2,3] and our results (Table-R3), full-parameter fine-tuning generally achieves higher performance ceilings compared to PEFT methods, especially on complex tasks.

> Q2

We understand your interest but are unable to address this question due to the anonymity policy during the review process. Thank you for your understanding.

---

We hope these responses and the additional experimental results effectively address your concerns and clarify the contributions and positioning of Ferret. We welcome any further questions.

---

Rebuttal Comment 1.1: Comment: Thanks for your response, which addressed my concerns. I have raised my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer CfcA, We are very happy that our response has addressed your concerns and improved your opinion about our work! We will include those valuable discussions and additional results in the final version. Best regards, Authors
Summary: In this work, the authors proposed Ferret, a federated learning method for efficient full-parameter fine-tuning of LLMs, combining first-order optimization with random projection-based dimensionality reduction. It uses shared randomness to reconstruct local updates at the server, significantly reducing communication overhead. The authors provide theoretical guarantees on unbiased reconstruction and convergence, alongside experiments demonstrating reduced communication and computational costs compared to existing methods. Claims And Evidence: One of my major concerns of this work is about the technical novelty. I like this paper as a whole, especially given the interesting topic and extensive theories, however, I feel the technical novelty is somewhat overclaimed. Although the authors emphasize the unique challenges of federated full-model fine-tuning of large language models, the proposed random projection method resembles existing techniques (e.g., FetchSGD). Furthermore, the block-wise reconstruction shares conceptual similarity with SparseGPT (Frantar et al.), though SparseGPT is not cited. Specifically, - The method's projection-based approach is intuitive and theoretically grounded. However, similar methods, particularly FetchSGD (ICML 2020), already utilize random projections for federated optimization. - The block-wise reconstruction technique introduced, although useful, closely mirrors existing weight reconstruction ideas (e.g., SparseGPT), yet SparseGPT is neither cited nor compared. Methods And Evaluation Criteria: I am a bit confused about the federated setup of experiments in this work. How many clients in total are used per round for the proposed method and baselines on each dataset? - According to the paper, *"In each round of federated learning, 5% of clients were randomly selected to participate."* How many clients in total did you use? 
- In addition, you mentioned *"Due to the compelling efficiency of our method, we set the total number of communication rounds to 12 for the NI dataset and 20 for Dolly-15K for Ferret."* So the proposed method may not leverage some clients on the NI dataset? (12 $\times$ 5% = 60%) And for Dolly-15K it is highly likely to happen as well? (otherwise each sampling of clients has to be perfectly non-overlapping) Theoretical Claims: The provided theoretical analyses on unbiasedness and convergence are rigorous and sound, representing a significant strength of the paper. Experimental Designs Or Analyses: - Although a memory footprint analysis is provided in the appendix, the proposed method (Ferret) incurs significantly higher GPU memory costs compared to the zeroth-order optimization method (FedKSeed). Given federated learning's typical deployment constraints on resource-limited devices, this large discrepancy raises concerns about the practicality and optimality of the proposed trade-off between memory usage and communication efficiency. - The LLaMA / LLaMA-2 families are too dated in 2025, and I strongly suggest that the authors consider evaluation on more SOTA LLMs such as the Qwen-2.5 and LLaMA-3 families. Supplementary Material: I briefly checked the mathematical proofs (not thoroughly, and I may have overlooked details) as well as the additional results. Relation To Broader Scientific Literature: The absence of a discussion of SparseGPT, despite the methodological resemblance in block-wise reconstruction approaches, weakens the credibility of the claimed novelty. Essential References Not Discussed: Frantar et al., "SparseGPT: Massive Language Models Can be Accurately Pruned in One-shot," ICML, 2023. Other Strengths And Weaknesses: S1. I found the research topic of full-model LLM finetuning under FL interesting and timely. S2. Effective empirical demonstration of communication efficiency and fast convergence. S3. Solid theoretical analyses on unbiasedness and error bounds. S4. 
The paper is well-written and easy to follow.

W1. Insufficient technical novelty compared to existing projection-based FL methods.
W2. The block-wise reconstruction is conceptually similar to that of SparseGPT, yet SparseGPT is neither cited nor discussed.
W3. Practical concerns (memory and computational complexity) are inadequately addressed.
W4. The LLMs utilized in the experiments are somewhat outdated.

Other Comments Or Suggestions: I think this paper has many merits, such as an interesting research idea, extensive theory, and nice results. My major concerns are about the technical novelty and some evaluations of the proposed method. Hence, I will be fully open to the rebuttal, i.e., other reviewers' comments and the discussion, and will adjust my score accordingly.

Questions For Authors: Please refer to my comments above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough evaluation and constructive feedback. We address the points raised below:

> Claims And Evidence

- We appreciate the reviewer's feedback on the comparison to FetchSGD. While both methods use dimensionality reduction, Ferret's technical approach and goal are fundamentally different, establishing its novelty, especially for full-parameter LLM tuning. FetchSGD uses Count Sketch (based on *hashing coordinate indices*) primarily to enable server-side state management (momentum, error accumulation) for a subsequent *biased Top-K sparse update*. Ferret, conversely, uses shared random vectors **v** to directly project the *entire first-order local update vector* **Δ** via dot products (Eq. 6), determined through convex optimization (Eq. 4). Crucially, Ferret aims to *reconstruct an approximation of the full, dense update* **Δ̃** (Eq. 7) for aggregation, vital for maintaining accuracy in full-parameter LLM tuning. This contrasts sharply with FetchSGD's goal of facilitating a *sparse* update. Furthermore, Ferret's design targets an *unbiased reconstruction* (Thm. 1), avoiding the explicit error accumulation required by FetchSGD's biased sparsification step. Ferret's novelty lies in being the first approach to uniquely combine efficient *first-order local updates* with *shared randomness for reconstructing dense updates*, specifically optimized (e.g., block-wise reconstruction) for the unique scale and demands of federated full-parameter LLM tuning.
- Thanks for the SparseGPT suggestion (citation will be added). SparseGPT uses block-wise processing on *model weights* for *pruning*. Ferret applies it to *federated updates* (Δ) solely to improve computational efficiency during the *dense update reconstruction* step (Eq. 7 -> Eq. 8), making it scalable for LLMs.
Ferret's novelty lies in using this block-wise strategy specifically for scaling *update reconstruction* in our federated tuning context, distinct from model pruning. We acknowledge the conceptual similarities, but we emphasize that our method Ferret is technically distinct and novel based on our comparison above. In our revision, we promise to cite the SparseGPT paper, add a detailed discussion regarding the block-wise reconstruction, and also highlight the difference between FetchSGD and Ferret.

> Methods And Evaluation Criteria:

- We clarify that our FL setting follows the previous literature (FedKSeed), utilizing 738 clients on the Natural-Instruction dataset and 200 clients on the Dolly-15K dataset in the FL system.
- Yes, some clients may not be leveraged for training the FL model, as we have fewer communication rounds. In each round, we independently and randomly sample 5% of clients for both the NI and Dolly-15K datasets to participate. As shown in Figure 2, Ferret converges rapidly (similar to FedAvg), reaching a point where additional training with more clients yields diminishing returns. We think this might be due to the nature of the data source used.

> Experimental Designs Or Analyses

- **Memory Footprint**: We clarify that our primary focus is communication efficiency in a standard distributed data setting where clients can perform backpropagation. While Ferret currently uses a standard SGD optimizer (implying typical client-side memory requirements for backpropagation), we acknowledge that our memory usage could potentially be reduced by integrating memory-efficient optimizers like those in [1] in future work. [1] Full Parameter Fine-tuning for Large Language Models with Limited Resources.
- **Additional Experiments**: Following the reviewer's constructive suggestion, we conducted **additional experiments** on CodeAlpaca and GSM8K using **Llama3-8B** and **Qwen2.5-7B**.
The results below demonstrate Ferret's consistent effectiveness, achieving near-FedAvg performance with significantly reduced communication overhead across models and tasks.

**Table-R3: Performance comparison on Llama3-8B and Qwen2.5-7B models**

| Algorithm | Alpaca (Llama3-8B) | Alpaca (Qwen2.5-7B) | GSM8K (Llama3-8B) | GSM8K (Qwen2.5-7B) |
|-|-|-|-|-|
| FedKSeed | $5.73 \pm 1.26$ | $9.14 \pm 0.32$ | $7.79 \pm 1.36$ | $23.84 \pm 1.19$ |
| FedZO | $16.66 \pm 0.50$ | $11.76 \pm 0.38$ | $37.44 \pm 0.11$ | $28.04 \pm 0.13$ |
| FedAvg | $\mathbf{19.88} \pm 0.67$ | $\mathbf{17.47} \pm 0.49$ | $\mathbf{45.48} \pm 0.51$ | $\mathbf{43.86} \pm 0.36$ |
| Ferret | $\underline{19.59} \pm 0.66$ | $\underline{14.64} \pm 0.74$ | $\underline{45.07} \pm 0.78$ | $\underline{38.28} \pm 1.70$ |

> Response to W1 & W2. Please see our response under `Claims And Evidence` above.

> Response to W3 & W4. Please see our response under `Experimental Designs Or Analyses` above.

---

We sincerely thank Reviewer C3NM for the valuable feedback and are encouraged by the positive remarks on our research idea, theories, and results. We hope our clarifications and additional results effectively address your concerns and improve your opinion of our work. We welcome any further questions.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal and appreciate the additional experiments and clarifications provided. The additional experiments using more recent LLMs address my initial concern about using outdated models, and thus I have increased my score accordingly. However, I remain concerned about two issues that need further clarification:
- Technical Novelty Concerns: Although the authors clarified differences from FetchSGD and SparseGPT, their explanations did not fully address the core issue of novelty. For example, the block-wise reconstruction used in Ferret still mirrors the method employed by SparseGPT.
While the authors differentiate their application (fine-tuning versus pruning), mathematically, the underlying reconstruction process remains substantially similar. Fine-tuning and pruning objectives, absent the pruning masks, are fundamentally analogous optimization problems. I understand that assessing a work's technical novelty can be highly subjective and tricky, and that's why I have increased my score regardless of this concern and intend this feedback to further enhance our discussion.
- Memory Footprint and Practicality: The authors acknowledge the significantly higher memory costs of Ferret compared to zeroth-order optimization methods but suggest the potential use of efficient first-order optimizers to mitigate this. I reviewed the cited reference [1] and noted that these optimizers typically yield only modest memory reductions (in single-digit percentage ranges, at most 10%). Given that Ferret incurs substantially greater memory costs, even after such optimizations, the method would still exhibit a significant memory overhead. Additionally, placing the memory footprint analysis solely in the appendix undermines transparency regarding the critical trade-off between performance and resource constraints, which is highly relevant for practical federated learning deployments. I strongly recommend moving this analysis into the main manuscript to facilitate a clearer assessment of Ferret's practicality against, e.g., zeroth-order methods.

[1] Full Parameter Fine-tuning for Large Language Models with Limited Resources.

---

Reply to Comment 1.1.1: Comment: Thank you once again for your detailed feedback, constructive engagement, and for raising your score. We sincerely appreciate this opportunity to further elaborate on the technical novelty and memory footprint of Ferret, addressing the remaining points you've helpfully raised.
> Technical Novelty Concerns

We understand your perspective regarding the mathematical similarity of block-wise processing when viewed in isolation. We do *acknowledge* that block-wise decomposition, as a technique, has been employed before, notably in SparseGPT. We commit to ensuring SparseGPT is appropriately credited for its use of this technique in our revision. However, we respectfully argue that Ferret's novelty (in its *block-wise design*) lies not in the invention of this technique, but in **its unique integration** within a novel FL framework to efficiently reconstruct gradient updates, and in its **novel theoretical results** (not presented in SparseGPT). To clarify, we have proved that (in Prop. 1) the block-wise reconstruction reduces computational complexity, and (in Prop. 2) the reconstruction error can be minimized by allocating the **number of random seeds** according to the gradient norm of each block. Our Prop. 2 is the foundation of our **novel design** for adaptively allocating the number of random seeds to each block, and its empirical success is validated in Fig. 9 in Appx. C.6.

We would like to emphasize that the block-wise design is only one part of Ferret. Ferret's overall novelty lies in its whole FL framework and rigorous theoretical analyses: **the first first-order FL approach with shared randomness**, which uses novel random update projection and reconstruction to significantly enhance the scalability of FL full-parameter tuning of LLMs while maintaining competitive model accuracy.

> Memory Footprint and Practicality

We appreciate the opportunity to address your concerns regarding the memory footprint here:
- It is important to clarify that our method Ferret **does not incur additional memory cost** compared to the standard first-order method (FedAvg).
- We acknowledge your point regarding zeroth-order methods, which **inherently** offer lower memory footprints.
This reflects a trade-off in optimization: zeroth-order methods reduce memory but often require more steps to converge, may reach lower final accuracy, and can be less stable compared to first-order methods, especially for complex models (e.g., Llama3-8B). Ferret is designed for improved scalability where retaining the potential benefits of first-order gradient information (e.g., faster convergence, higher accuracy) is desirable.
- We understand you viewed reference [1] (introducing LOMO) and noted concerns about the extent of memory savings. We would like to respectfully clarify that Table 1 in [1] shows LOMO achieves a $\sim70\%$ total memory reduction (from 51.99 GB to 14.58 GB with activation checkpointing), while the zeroth-order method FedKSeed has a **slightly lower** memory reduction ($\sim60\%$, shown in Table 9 in our paper). We believe this demonstrates a promising path to dramatically reducing the memory footprint of Ferret. This could make first-order FL methods more memory-efficient and practical.
- We thank the reviewer for the suggestion of moving the footprint analysis into the main paper to facilitate a clearer assessment. We will do so in the final version.

---

Thank you again for your constructive engagement and valuable feedback throughout this discussion period. We have found this discussion very helpful and are committed to incorporating these clarifications to strengthen the final paper. We hope our responses have successfully addressed all your concerns and improved your opinion of our work.
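The shared-randomness mechanism debated in this thread can be made concrete with a short sketch. This is our own minimal illustration with hypothetical function names, not Ferret's actual algorithm (which determines projection coefficients via convex optimization and reconstructs block-wise); it only shows why transmitting a few scalars plus shared integer seeds suffices to rebuild a dense update server-side:

```python
import numpy as np

def client_project(delta, seeds):
    """Client side: compress a dense local update into len(seeds) scalars by
    projecting it onto random directions regenerated from shared seeds.
    Only these scalars (not the d-dimensional update) are communicated."""
    return [float(np.random.default_rng(s).standard_normal(delta.size) @ delta)
            for s in seeds]

def server_reconstruct(coeffs, seeds, dim):
    """Server side: regenerate the same directions from the shared seeds and
    average c_k * v_k.  For v ~ N(0, I), E[(v @ delta) * v] = delta, so the
    estimator is unbiased for the true update."""
    delta_hat = np.zeros(dim)
    for c, s in zip(coeffs, seeds):
        delta_hat += c * np.random.default_rng(s).standard_normal(dim)
    return delta_hat / len(seeds)

rng = np.random.default_rng(0)
delta = rng.standard_normal(10)        # toy "local update" of dimension d = 10
seeds = list(range(5000))              # shared randomness: K integer seeds
coeffs = client_project(delta, seeds)  # K floats cross the network, not d
delta_hat = server_reconstruct(coeffs, seeds, delta.size)

cos = delta @ delta_hat / (np.linalg.norm(delta) * np.linalg.norm(delta_hat))
print(f"cosine similarity: {cos:.3f}")  # approaches 1 as K grows
```

With $d$ model parameters and $K \ll d$ seeds, the per-client payload drops from $d$ to $K$ floats, which is the communication saving the rebuttal describes; here $K > d$ only because the toy example is tiny.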
Summary: This paper introduces Ferret, a method for full-parameter tuning of large language models (LLMs) in federated learning. It primarily addresses the challenge of communication overhead by combining the strengths of first-order optimization (efficient computation and fast convergence) and zeroth-order optimization (reduced communication overhead). The method utilizes shared randomness to project updates into a low-dimensional space, effectively reducing communication costs.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I have conducted a rough review of the entire proof but may have overlooked some details.

Experimental Designs Or Analyses: In general, the experimental results are sufficient and convincing.

Supplementary Material: I have conducted a rough review of the entire material but may have overlooked some details.

Relation To Broader Scientific Literature: See the Strengths And Weaknesses below for details.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths:
1. The paper provides rigorous theoretical analyses to support Ferret's effectiveness, showcasing its advantages over existing methods in terms of computational efficiency, communication overhead, and convergence speed.
2. The experimental results are sufficient and convincing.
3. The proposed method and the addressed challenge are both meaningful and novel.

Weaknesses:
1. Reconstruction Error: While Ferret reduces communication overhead, the reconstruction of updates from low-dimensional projections may introduce some error, particularly for complex tasks. This is evident in the slightly lower performance of Ferret compared to FedAvg on the CodeAlpaca and GSM8K datasets.
2. Despite reducing communication costs, Ferret still requires substantial computational resources, especially for larger models such as LLaMA2-13B. The authors are encouraged to explore ways to optimize computational efficiency in future research.
Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer oeGQ for the positive evaluation, particularly recognizing our theoretical analyses, experimental results, and the novelty of our approach. We address the reviewer's concerns below:

> W1: Reconstruction Error

We clarify that the slight performance difference compared to FedAvg is an expected consequence of the gradient compression inherent in Ferret, which enables massive communication savings. We highlight three key points:
1. Our reconstruction error is theoretically bounded (Thm 2) and can be reduced by increasing $K$. This allows practitioners to balance performance and communication overhead based on their resource constraints (analysis in Appx. C.5).
2. While the baseline FedKSeed also suffers from reconstruction error due to gradient projection, Ferret achieves more accurate reconstruction, leading to superior empirical performance, as validated in our experiments.
3. The minimal performance difference (Ferret vs. FedAvg) on CodeAlpaca and GSM8K (Table 3) is vastly outweighed by the $10^6\times$ reduction in communication cost (Tables 4 & 5). This reflects our intentional design choice to **prioritize communication efficiency**, which we believe is crucial for the scalable and practical deployment of FL systems.

We will explicitly discuss this trade-off and include these discussions in the revised manuscript to further strengthen our paper.

> W2: Computational Resources

We appreciate the reviewer's suggestion regarding computational optimization. We acknowledge that while Ferret significantly reduces communication overhead, it incurs additional computational costs, especially for large models like LLaMA2-13B. We believe this is an area for future work, and we would like to clarify that:
- Compared with the relevant communication-efficient baseline, FedKSeed, Ferret already achieves a significant ($\sim6\times$) reduction in computational cost.
- We have already incorporated optimizations like **Reconstruction w/o Inversion** and **Block-Wise Reconstruction** (Sec. 3.2) to mitigate this computational overhead during projection and reconstruction steps.
- We recognize that further improvements are possible. We plan to explore techniques such as quantization on gradients or adaptive projection based on gradient sparsity in future research to further reduce the computational burden while preserving communication efficiency.

---

Thank you again for the constructive feedback. We will enhance the discussion on these trade-offs in the final manuscript. We hope this response clarifies our contributions and addresses the reviewer's concerns effectively.
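The tunable accuracy/communication trade-off mentioned in this thread (reconstruction error shrinking as the number of projections $K$ grows) can be illustrated numerically. The snippet below uses a plain averaged Gaussian-projection estimator as a simplified stand-in for the paper's reconstruction, not its exact scheme:

```python
import numpy as np

def relative_reconstruction_error(dim, K, seed=0):
    """Reconstruct a random vector from K Gaussian projections with the
    unbiased estimator (1/K) * sum_k (v_k @ delta) * v_k, and return the
    relative L2 error."""
    rng = np.random.default_rng(seed)
    delta = rng.standard_normal(dim)
    V = rng.standard_normal((K, dim))    # K shared random directions
    delta_hat = (V @ delta) @ V / K      # averaged back-projection
    return float(np.linalg.norm(delta_hat - delta) / np.linalg.norm(delta))

errs = {K: relative_reconstruction_error(dim=100, K=K)
        for K in (10, 100, 1000, 10000)}
for K, e in errs.items():
    print(f"K={K:>5}: relative error {e:.2f}")  # decays roughly like sqrt(dim/K)
```

More projections mean more floats to communicate but a more faithful reconstruction, which is the same dial the rebuttal points to when it says the error "can be reduced by increasing $K$".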
Improving the Statistical Efficiency of Cross-Conformal Prediction
Accept (poster)
Summary: This paper proposes several new variants of (modified) cross-conformal prediction--called e/u/eu-modified cross-conformal prediction--that theoretically and empirically attain more efficient (i.e., smaller and thus more informative) prediction sets/intervals than the original (modified) cross-conformal prediction method, all while maintaining the same worst-case coverage guarantee ($\geq 1-2\alpha$, where $\alpha \in (0, 1)$ is the target miscoverage rate). These new methods and their guarantees are derived using new results on the combination of exchangeable p-values in Gasparin et al. 2024.

## update after rebuttal

I maintain my positive score--I appreciate the authors' clarifications, highlighting of experimental results related to coverage variability, and discussion related to my question about recommendations on conditions for using which CP methods (which could be good to mention in the paper's discussion/conclusion). Although much of the theoretical analysis mainly cites Gasparin et al. 2024, the contribution is solid, and with the paper's clear writing, it should be of interest to the community.

Claims And Evidence: Yes. Proofs are provided for the main theorems, and sufficiently convincing experiments are provided on both synthetic data with an unstable algorithm and real data.

Methods And Evaluation Criteria: Yes. Sufficiently convincing experiments are provided on synthetic data with an unstable algorithm (ie, instability around d=80 is an "adversarial" case where it is important to show coverage holds empirically) and on real datasets, supporting the provided coverage guarantees and claims about more efficient prediction sets.

Theoretical Claims: Yes. I have made an effort to check the proofs and they appear to be sound--key steps largely cite results in Gasparin et al. 2024.
The theorems provide coverage guarantees for the proposed methods (Theorems 4.4, 4.6, & 4.7) and demonstrate that the proposed methods are not more conservative (ie, prediction sets no larger) than the original cross-conformal method (Theorem 4.9).

Experimental Designs Or Analyses: Mostly yes, I have reviewed the overall experimental settings described, and they seem reasonable and reliable. Eg, the synthetic-data experiments are reasonably implemented with the same setting as Barber et al. 2021, where the algorithm is unstable around d=80, which makes for an important "adversarial" test case where it is good to verify coverage claims (and coverage appears appropriate). The real-data experiments include standard tabular UCI datasets that are often used for evaluation in the conformal literature. Some suggestions:
- It would be good to make it clearer in the figure captions or figures themselves what alpha is (ie, what the target coverage is)
- (Optional) It could be interesting/valuable to add an evaluation of coverage *variance*, ie how the coverage varies over different random draws of training/cal data.
- (Optional) It could be interesting/valuable to add supplemental experiments comparing the proposed e/u/eu-mod-cross method run with target miscoverage *$\alpha/2$* versus cross/split conformal run at target miscoverage $\alpha$. That is, especially since the empirical coverage of the proposed methods may dip below the target level (ie in $[1-2\alpha, 1-\alpha]$), practitioners may wish to run eg the eu-mod-cross method targeting $1-\alpha/2$ to ensure $\geq1-\alpha$ worst-case coverage. So, further discussion and evaluation of this would be useful.

Supplementary Material: Mostly yes, I made an effort to review the proofs and I briefly looked at the other supplements.
Relation To Broader Scientific Literature:

*Most relevant prior literature:* Cross-conformal prediction was originally introduced by Vovk (2015) with worst-case coverage guarantees of the form $1-2\alpha - B$ (for some $B=2(1-\alpha)(K-1)/(n+K)$ that becomes negligible for smaller numbers of folds, $K$); a simple modification was introduced in Vovk et al. (2018), which cites earlier work by Vovk and Wang (2012), to achieve a guarantee at $\geq 1-2\alpha$. Barber et al. (2021) also considers this "modified cross-conformal" method from Vovk et al. (2018) (to introduce a related cross-validation+ method). Gasparin et al. (2024) provides results on combining exchangeable p-values that are used in this paper.

*Contribution in context:* The proposed method improves the statistical efficiency of prior cross-conformal methods in that it empirically attains sharper (smaller and more informative) prediction sets, and theoretically the sets are no larger than those of original cross-conformal. The main proof steps largely cite Gasparin et al. (2024).

Essential References Not Discussed: Essential references are discussed. A few further related works that could be relevant or interesting for the authors to look at or mention are given in the "Other Comments or Suggestions" section.

Other Strengths And Weaknesses:

*Strengths:* The paper is very clearly written, and all claims are sufficiently supported with proofs and/or empirical evidence. The paper's contribution is framed appropriately and can be useful for future progress on thinking about improving the efficiency of conformal prediction sets.

*Weaknesses:* One limitation of the proposed methods is that--whereas the original (modified) cross-conformal methods typically have empirical coverage at or above the target level ($\geq 1-\alpha$)--the proposed methods seem to have typical empirical coverage that may be below the target level and closer to the worst-case guarantee, ie, $\in [1-2\alpha, 1-\alpha]$ (eg, see Table 2).
It would probably improve the paper to further discuss and/or evaluate this, and potentially to provide recommendations for how the proposed methods should be used in practice: ie, should practitioners run the method targeting a more conservative level (eg, targeting $1-\alpha/2$) to achieve $1-\alpha$ in the worst case, or run targeting $1-\alpha$, while acknowledging that empirically it appears more likely that the coverage will fall in $[1-2\alpha, 1-\alpha]$?

Other Comments Or Suggestions:

**Other refs that may be relevant/of interest to authors:**

*Other ref using result of Vovk and Wang (2020) for combining conformal sets:*
- Stutz, D., Roy, A. G., Matejovicova, T., Strachan, P., Cemgil, A. T., and Doucet, A. Conformal prediction under ambiguous ground truth. arXiv preprint arXiv:2307.09302, 2023.

*Couple of refs on selecting conformal sets for efficiency:*
- Yang, Y. and Kuchibhotla, A. K. Selection and aggregation of conformal prediction sets. Journal of the American Statistical Association, pp. 1–13, 2021.
- Liang, R., Zhu, W., and Barber, R. F. Conformal prediction after efficiency-oriented model selection. arXiv preprint arXiv:2408.07066, 2024.

*Refs on related cross-validation-style conformal methods (jackknife+ and CV+) under distribution shift (ie, extending the proposed methods to account for distribution shift could be an interesting future direction for the authors):*
- Prinster, D., Liu, A., and Saria, S. Jaws: Auditing predictive uncertainty under covariate shift. Advances in Neural Information Processing Systems, 35, pp. 35907–35920, 2022.
- Prinster, D., Saria, S., and Liu, A. Jaws-X: Addressing efficiency bottlenecks of conformal prediction under standard and feedback covariate shift. In International Conference on Machine Learning, pp. 28167–28190. PMLR, 2023.

**Other Suggestions:**
- Related to the "Weakness" mentioned before, it could be useful to add discussion about when the authors recommend practitioners to use one method or another.
Eg, when/why it would be recommended to use the proposed methods at target $1-\alpha$ (thus achieving a $1-2\alpha$ guarantee) versus at target $1-\alpha/2$ (thus achieving a $1-\alpha$ guarantee).
- It may be helpful to use language such as "target/nominal" coverage or miscoverage when referring to the user's inputted $1-\alpha$ or $\alpha$, to more clearly distinguish it from the worst-case guarantee level.
- Authors may want to consider a slight modification to the names of their methods: Eg, it's understandable from reading the paper why they name one of their methods "exchangeable modified cross-conformal prediction (e-mod-cross)," ie because it's attained using results on merging exchangeable p-values, but it may cause confusion, as sometimes standard CP methods are called "exchangeable CP" methods to refer to the assumption of exchangeable data. An example slight modification could be "exchangeable p-value modified cross conformal (ep-mod-cross)".
- It might be advisable to state exchangeability/IID as an assumption within the actual theorem statements themselves.
- End of proof of Theorem 4.6: When you state "holds under arbitrary dependence," do you mean arbitrary dependence that still maintains exchangeability? If so, it would be good to make this explicit to avoid confusion (understood that it may be implicit).

Questions For Authors: No major questions, see "Suggestions" for minor questions/comments. Congrats on a nice paper!

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive comments on our paper. Please find below a detailed response to your questions.

### Experimental Designs:
- To improve clarity, we will add the target coverage level $1-\alpha=0.9$ in the captions.
- As an index of variability, in Table 1 we report the maximum and the minimum of the empirical coverage observed over the 20 replications. A part of the table (which refers to LM) is reported below. The variability (see Range) is not so different across the methods.

| | mod-cross | e-mod-cross | u-mod-cross | eu-mod-cross | cross | split | split(2$\alpha$) |
|--|--|--|--|--|--|--|--|
| Mean | 0.903 | 0.899 | 0.851 | 0.858 | 0.902 | 0.902 | 0.800 |
| Min | 0.896 | 0.885 | 0.834 | 0.840 | 0.895 | 0.890 | 0.779 |
| Max | 0.917 | 0.915 | 0.874 | 0.881 | 0.916 | 0.920 | 0.831 |
| Range | 0.021 | 0.030 | 0.040 | 0.041 | 0.021 | 0.030 | 0.052 |

- Some of the simulation studies also report the results of split conformal prediction with target coverage $1-2\alpha$. This implies that the proposed variants (trained at level $\alpha$) and split conformal prediction with target level $1-2\alpha$ guarantee the same theoretical coverage (and this is equivalent to using $\alpha/2$ to obtain coverage $1-\alpha$). In Table 5, for example, the eu-mod-cross method is better in terms of size than split conformal prediction trained at level $2\alpha$. (Standard) cross-conformal prediction at level $2\alpha$ could be added; however, we expect smaller sets compared to those of split conformal but with the same empirical coverage (as observed for the $\alpha$ level). We discuss this further in the first point of *Other Suggestions*.

### Weaknesses:
See the first point in *Other Suggestions*.

### References:
Thank you very much for pointing out some relevant references. We will add them to the paper.

### Other Suggestions:
- Thank you for your comment.
The question of which variant of cross-conformal prediction is to be preferred in practice is subtle:
- If one really needs a rigorous $1-\alpha$ guarantee against worst-case distributions and unstable algorithms (for example, if there are downstream decisions that crucially depend on the provided theoretical guarantee), then it makes sense to run our new methods at level $\alpha/2$.
- If one is only using conformal prediction as "weak guidance" for downstream decisions, and the user is somehow ok with violations of the target $1-\alpha$, then we recommend running our variants of cross-conformal at level $\alpha$ (where the worst-case guarantee is $1-2\alpha$ but the coverage achieved lies in between $1-2\alpha$ and $1-\alpha$).
- If the situation is in between, where conformal prediction is neither being used extremely rigorously, nor as loose guidance, but one wants $1-\alpha$ coverage for "typical" datasets and algorithms, and is ok with both overcoverage and undercoverage for odd distributions/algorithms, then perhaps the original (modified) cross-conformal is best.
- If one does not tolerate over- or under-coverage of any sort, and really wants essentially exact $1-\alpha$ coverage, then only split conformal delivers the goods.
- We agree with this suggestion and will revise the text accordingly to clarify the distinction.
- Thank you for the suggestion; we will consider this change.
- We will add the assumption of exchangeability within the theorem statements, as suggested. Example: "*... . If data are exchangeable,*
$$ P\left(Y_{n+1}\in\hat C_{n,K,\alpha}^{\mathrm{e-mod-cross}}(X_{n+1})\right)\ge1-2\alpha. $$
It is sufficient that $P_1, \dots, P_K$ are valid p-values (i.e., $P(P_k \leq \alpha) \leq \alpha$), and this assumption is satisfied in the case of rank-based p-values. In fact, the rules $2 \times \text{mean}(\mathbf{P})$ (used by Vovk et al. (2018) to prove the coverage guarantee for cross conformal prediction) and $2/(2-U) \times \text{mean}(\mathbf{P})$ are valid under arbitrary dependence, which is a broader condition than exchangeability. We will clarify this in the text.
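For concreteness, the two merging rules just mentioned can be written down directly. The sketch below is our own illustration with hypothetical function names; each merged value is capped at 1 so it remains a p-value:

```python
import numpy as np

def mod_cross(p):
    """Deterministic rule (Vovk et al., 2018): twice the arithmetic mean of
    p_1, ..., p_K is a valid p-value under arbitrary dependence."""
    return min(1.0, 2.0 * float(np.mean(p)))

def u_mod_cross(p, u):
    """Randomized rule: 2/(2-U) * mean(P) with U ~ Uniform(0, 1).  Since
    2/(2-u) < 2 for u in (0, 1), it is never larger than the deterministic
    rule, i.e. never more conservative."""
    return min(1.0, 2.0 / (2.0 - u) * float(np.mean(p)))

p = [0.10, 0.20, 0.30]        # per-fold cross-conformal p-values
print(mod_cross(p))           # 2 * mean = 0.4 (up to float rounding)
print(u_mod_cross(p, u=0.5))  # (2 / 1.5) * mean, roughly 0.267
```

In cross-conformal prediction a candidate response $y$ enters the prediction set exactly when its merged p-value exceeds $\alpha$, so a uniformly smaller merged p-value translates into smaller (more efficient) prediction sets at the same worst-case guarantee.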
Summary: The authors start from the interesting, yet relatively limited, literature on methods for improving the dramatic data inefficiency of split conformal prediction. Their proposal exploits novel results about p-value combination to introduce a variant of the well-known cross-conformal prediction methods that achieves better theoretical coverage properties. After a thorough description, and proofs of its theoretical properties, the authors put their modified cross-conformal prediction method to the test, in both a simulation and a real-world example.

Claims And Evidence: All claims are well supported by well-crafted proofs; I am a bit more skeptical about the applicative results (but more on this later).

Methods And Evaluation Criteria: Both the simulation and the real-world example are satisfactory. It might be worth proposing a slightly extended real-world test, in order to better grasp the practical advancements introduced by the method.

Theoretical Claims: I have carefully checked all the proofs in the paper, and everything seems in order.

Experimental Designs Or Analyses: As mentioned previously, I believe that the experimental study on real-world cases is a bit too limited, as it does not really give much insight with respect to the situations where the method proposed by the authors gives significant practical advantages.

Supplementary Material: The supplementary material is fully included in the submission, and greatly contributes to the understanding of the whole paper.

Relation To Broader Scientific Literature: The background on Conformal Prediction is not correctly stated. CP was not introduced in Vovk et al. 2005 (which instead represents an early formalisation of the first research on the subject), but in Saunders et al. 1999 (https://www.ijcai.org/Proceedings/99-2/Papers/010.pdf).

Essential References Not Discussed: I would mention the second edition of Algorithmic Learning in a Random World (Vovk et al. 2023). Lei et al.
2018, moreover, presents a very interesting result about merging multiple splits in Conformal Prediction, which triggers some questions (see below). Other Strengths And Weaknesses: nothing of relevance. Other Comments Or Suggestions: nothing of relevance. Questions For Authors: I find the conclusions quite underwhelming. I appreciate the interesting work the authors have proposed, but fail to understand its practical implications. Is the method useful? In what situations? In what use cases? Please give more insights. I find this result to contrast quite strikingly with a result offered in Lei et al. 2018 (Section 2.3). I wonder if you have something to comment on this. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your feedback. Below is our response to your comments.

# Experimental Designs

We now additionally analyze a real dataset on electricity consumption, where accurate uncertainty quantification is crucial as the supplier’s revenue depends on customer energy use. The dataset contains 35,411 observations and 20 covariates; 30,000 observations are used for training, while the remaining ones are used for testing. The splitting is repeated 20 times, and the algorithm used is a random forest. In this case, methods such as Full CP and Jackknife+ can be computationally expensive, making sample-splitting-based methods preferable. From the table, it can be seen that Cross CP is more efficient than Split CP, producing more informative sets while ensuring valid coverage. See the first point in *Questions* below for a further discussion. The column *Uneven split* represents Split CP, where the training set comprises a $(K-1)/K$ fraction of the data points (i.e., Split CP with the same fraction of training data points used by a single round of Cross CP). The last two columns represent cases where p-values are merged using the Bonferroni rule at level $\alpha$ (i.e., $K\min(p)$) or at level $2\alpha$ (i.e., $K/2\min(p)$). These results are discussed in the 2nd point of the *Questions* section.

| | mod-cross | e-mod-cross | u-mod-cross | eu-mod-cross | cross | split | uneven split | split(2$\alpha$) | Bonf | Bonf(2$\alpha$) |
|-|-|-|-|-|-|-|-|-|-|-|
| Mean | 50.53 | 47.20 | 33.64 | 32.26 | 50.40 | 59.59 | 53.43 | 26.59 | 223.86 | 153.81 |
| Sd | 0.52 | 1.70 | 0.32 | 1.27 | 0.51 | 1.39 | 2.74 | 0.45 | 10.99 | 5.82 |
| Min | 49.50 | 42.74 | 33.00 | 29.08 | 49.36 | 57.09 | 48.12 | 25.64 | 202.48 | 141.66 |
| Max | 51.47 | 49.23 | 34.43 | 33.72 | 51.34 | 62.26 | 58.62 | 27.71 | 244.33 | 165.03 |
| Cov | 0.90 | 0.89 | 0.86 | 0.84 | 0.90 | 0.90 | 0.90 | 0.80 | 0.98 | 0.96 |

# References

- Thank you for pointing to this.
We will add the paper by Saunders et al.
- We will discuss the two references appropriately in the text.

# Questions

**Conclusions**

We believe that the method has practical utility. For instance, when dealing with a large number of observations (and potentially a high-dimensional feature space), some methods, such as full CP and jackknife+, become impractical. One possible approach is to trade off some statistical efficiency for reduced computational cost by using methods based on sample splitting. However, split CP can be inefficient because it only uses a subset of the data for model training. Cross CP generally improves the efficiency of split CP but can overcover. Our work improves the statistical efficiency of Cross CP (at the same computational cost) while maintaining its coverage guarantee. The question of which variant of cross CP is to be preferred in practice is subtle.
- If one really needs a rigorous $1-\alpha$ guarantee against worst-case distributions and unstable algorithms (for example, if there are downstream decisions that crucially depend on the provided theoretical guarantee), then it makes sense to run our new methods at level $\alpha/2$.
- If one is only using conformal prediction as "weak guidance" for downstream decisions, and the user is somehow ok with violations of the target $1-\alpha$, then we recommend running our variants of cross-conformal at level $\alpha$ (where the worst-case guarantee is $1-2\alpha$ but coverage between $1-2\alpha$ and $1-\alpha$ is achieved).
- If the situation is in between, where conformal prediction is neither being used extremely rigorously nor as loose guidance, but one wants $1-\alpha$ coverage for "typical" datasets and algorithms and is ok with both overcoverage and undercoverage for odd distributions/algorithms, then perhaps the original (modified) cross-conformal is best.
- If one does not tolerate over- or under-coverage of any sort, and really wants essentially exact $1-\alpha$
coverage, then only split conformal delivers the goods.

We acknowledge that the conclusions can be improved and will incorporate the above takeaway messages there.

**Discussion of Lei et al.**

The results in Lei et al. are based on the Bonferroni rule, which is not powerful when p-values (or sets) are highly dependent. Indeed, the Bonferroni correction is tightest when the p-values are nearly independent, while the conformal p-values across folds are highly dependent. On the other hand, the "twice the mean" rule used in cross CP is more powerful when p-values are dependent; see Sec. 6.1 in Vovk & Wang (2020). For completeness, the table above also reports the use of the Bonferroni rule as the merging function. The last 2 columns guarantee coverage of at least $1-\alpha$ and $1-2\alpha$, respectively. However, these methods have coverage near 1. This aligns with Thm. 4 in Lei et al., which states that multisplit+Bonferroni produces wide sets (specifically, sets wider than single-split CP). For comparison, we have added split CP with an uneven split. The set size is smaller than that of Split CP but larger than that of Cross CP and its variants. Moreover, its Sd is higher. We will add this discussion.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the insightful answers, which I believe are very satisfactory. I now sincerely believe the paper to be of better quality, and to be more useful and clear in terms of clarifying the issues raised in the official comment. I will increase my evaluation.
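To make the construction discussed in this thread concrete, here is a self-contained toy sketch of how a cross-conformal prediction set decision is assembled from per-fold rank-based p-values merged with the twice-the-mean rule. The function names and the residual-based score are illustrative choices, not the paper's exact estimator.

```python
def fold_p_value(cal_scores, test_score):
    # Rank-based conformal p-value from one fold's calibration scores:
    # (1 + #{i : R_i >= R_test}) / (n_k + 1).
    return (1 + sum(1 for r in cal_scores if r >= test_score)) / (len(cal_scores) + 1)

def cross_conformal_accepts(fold_cal_scores, test_scores, alpha):
    # A candidate y enters the prediction set iff the merged p-value
    # 2 * mean(P) exceeds alpha. fold_cal_scores[k] holds the residuals of
    # fold k under the model trained without fold k; test_scores[k] is the
    # candidate's residual under that same model.
    pvals = [fold_p_value(s, t) for s, t in zip(fold_cal_scores, test_scores)]
    return min(1.0, 2.0 * sum(pvals) / len(pvals)) > alpha

# Toy check: a candidate whose residual is small relative to the calibration
# residuals is kept; a wildly off candidate is dropped.
folds = [list(range(1, 20)), list(range(1, 20))]   # 19 residuals per fold
print(cross_conformal_accepts(folds, [0.5, 0.5], alpha=0.1))      # True
print(cross_conformal_accepts(folds, [100.0, 100.0], alpha=0.1))  # False
```

In practice one would scan candidate responses y (or invert the score analytically) to obtain an interval; this sketch only shows the fold-wise p-value computation and the merging step.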
Summary: The paper proposes new variants of cross-conformal prediction to obtain smaller prediction sets while guaranteeing the same worst-case miscoverage rate. The authors use recent results on the combination of dependent and exchangeable p-values to obtain their results. Empirical evaluation on simulated data and the news popularity dataset shows that the proposed improvements lead to smaller sets than cross-conformal prediction and modified cross-conformal prediction. Similar improvements are observed in the additional experiments. Claims And Evidence: Claims made in the paper are supported by theory (Theorems 4.4–4.9) and empirical evidence. The results show that the proposed improvements result in smaller prediction sets while maintaining the coverage guarantees. Methods And Evaluation Criteria: The empirical evaluation is extensive and the experiments are performed on multiple datasets. The benchmark datasets and the reported size and coverage metrics make sense for the problem. Theoretical Claims: I went over the proofs for Theorems 4.6–4.9 in Section F. Experimental Designs Or Analyses: I checked the soundness of all experiments, including the simulation study, real data application, and additional results reported in the appendix. The experimental details and choices have been carefully explained, and there is effort to experiment with different algorithms and parameters. Supplementary Material: I went over the full supplementary material including proofs and additional experiments. Relation To Broader Scientific Literature: Building on the literature on cross-conformal prediction and combination of dependent p-values, this paper contributes by producing smaller sets than cross-conformal prediction (and its modified version) and split conformal prediction, while the empirical coverage lies between $1 - 2\alpha$ and $1 - \alpha$. The guarantees presented and the supporting empirical evidence show improvement over prior work with similar model training cost. 
Essential References Not Discussed: I feel the paper is fairly complete in its discussion of important references for understanding the context. Other Strengths And Weaknesses: I appreciate the clear writing and how the paper lays out past work to contextualize their work and contributions better. The remarks in the paper further add to the clarity. Other Comments Or Suggestions: minor typo: p2 l68: obtained ‘by’ applying Questions For Authors: No specific questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback on our paper. We truly appreciate your comments. Thank you for noticing the typo; we have corrected it.
Summary: The authors propose new variants of cross-conformal prediction that leverage recent results on combining p-values through exchangeability and randomization. They theoretically demonstrate that their methods can reduce the size of the prediction set while maintaining a marginal coverage of at least $1 - 2\alpha$. The paper also highlights the computational advantages of these new methods, as they require training the model only a limited number of times (K times) rather than for every possible response value. Claims And Evidence: The theoretical guarantees are well supported by proofs in the appendices, leveraging the coverage properties of p-value combination results. Simulations and real-world experiments validate the smaller prediction set sizes compared to baselines, with empirical coverage aligning with theoretical bounds. Methods And Evaluation Criteria: The use of exchangeable p-values and randomization is novel and logically derived in this context. The regression tasks on benchmark datasets are appropriate. Theoretical Claims: The theoretical claims are well supported by the proofs in the appendices. Experimental Designs Or Analyses: The experimental design is appropriate but could be strengthened by increasing the number of trials (e.g., from 20 to 100) to improve reliability and by including experiments with varying fold sizes. Supplementary Material: All appendices were reviewed. Relation To Broader Scientific Literature: The work in this paper builds on cross-conformal prediction and recent p-value combination methods, improving statistical efficiency and further advancing the development of conformal prediction. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The research innovatively applies the combination of p-values through exchangeability and randomization to cross-conformal prediction. 2. Maintains theoretical guarantees with smaller prediction sets and computational efficiency (K folds). 
Weaknesses: 1. Randomization introduces variability in prediction sets, which may limit deterministic applications and lead to reproducibility concerns. Other Comments Or Suggestions: No further suggestions. Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback on our paper. Please find below a detailed response to your concerns.

### Experimental Designs Or Analyses:

Thank you for the suggestions. Below, we present an example where the experimental setting is identical to the last experiment in Appendix E, but with $K$ set to 5, 10, and 20. In addition, the number of trials is 100. The second table ($K=10$) is similar to the table reported in the paper with 20 trials (this number is used in many conformal prediction works; see, for example, Barber et al. (2021) or Romano et al. (2019)). There is a slight decrease in the empirical size of sets, and a slight increase in variability, of the e- and eu-mod-cross methods as $K$ increases. Clearly, nothing changes for the split conformal methods. We will add the example and add more trials for other experiments.

| K=5 | cross | e-cross | u-cross | eu-cross | split | split($2\alpha$) |
|--|--|--|--|--|--|--|
| Mean | 6.887 | 6.382 | 5.711 | 5.508 | 6.764 | 4.735 |
| Sd | 0.072 | 0.162 | 0.058 | 0.182 | 0.167 | 0.100 |
| Median | 6.881 | 6.389 | 5.711 | 5.518 | 6.748 | 4.719 |
| Min | 6.656 | 5.964 | 5.576 | 5.074 | 6.331 | 4.516 |
| Max | 7.064 | 6.762 | 5.837 | 6.015 | 7.268 | 4.967 |
| Coverage | 0.910 | 0.885 | 0.867 | 0.845 | 0.897 | 0.800 |

| K=10 | cross | e-cross | u-cross | eu-cross | split | split($2\alpha$) |
|--|--|--|--|--|--|--|
| Mean | 6.869 | 6.323 | 5.695 | 5.444 | 6.816 | 4.741 |
| Sd | 0.060 | 0.225 | 0.069 | 0.258 | 0.171 | 0.110 |
| Median | 6.865 | 6.360 | 5.692 | 5.450 | 6.815 | 4.750 |
| Min | 6.725 | 5.559 | 5.545 | 4.667 | 6.351 | 4.419 |
| Max | 7.030 | 6.800 | 5.878 | 6.061 | 7.235 | 5.060 |
| Coverage | 0.910 | 0.886 | 0.863 | 0.843 | 0.899 | 0.798 |

| K=20 | cross | e-cross | u-cross | eu-cross | split | split($2\alpha$) |
|--|--|--|--|--|--|--|
| Mean | 6.855 | 6.210 | 5.649 | 5.379 | 6.817 | 4.717 |
| Sd | 0.062 | 0.340 | 0.063 | 0.381 | 0.190 | 0.089 |
| Median | 6.850 | 6.283 | 5.642 | 5.403 | 6.837 | 4.728 |
| Min | 6.648 | 4.992 | 5.520 | 4.321 | 6.396 | 4.503 |
| Max | 7.032 | 6.760 | 5.787 | 6.315 | 7.358 | 4.971 |
| Coverage | 0.909 | 0.879 | 0.862 | 0.839 | 0.899 | 0.795 |

We additionally analyze the case where the number of observations in each fold differs. Since in cross-conformal prediction the $K$ folds serve the same role (being used for both training and calibration), having folds with significantly different sizes would be unwise. This principle applies not only to cross-conformal prediction but also to other methods that randomly partition the data into $K$ folds, such as HulC (Kuchibhotla et al., 2024) and MoM (Lugosi & Mendelson, 2019). As explained at the beginning of Section 3, this means that the folds can differ by at most $K-1$ observations (with $K$ typically set to 5 or 10). For completeness, we present the results of a simulation study based on the Boston dataset. The setting is the same as described in Appendix E, except that $n=204$ and one fold at random contains 44 observations instead of 40 (a 10% increase). The number of trials is 100. One could, in principle, have four folds with 41 observations and one with 40, but we consider a more 'extreme' setting. The results are presented in the table below, and the conclusions remain qualitatively similar to those reported in Appendix E. 
| | mod-cross | e-mod-cross | u-mod-cross | eu-mod-cross | cross |
|--|--|--|--|--|--|
| Mean | 17.299 | 15.206 | 14.083 | 13.565 | 15.883 |
| Sd | 1.542 | 2.218 | 1.211 | 2.159 | 1.414 |
| Median | 17.245 | 15.229 | 14.068 | 13.605 | 15.819 |
| Min | 14.029 | 9.518 | 11.699 | 8.895 | 13.000 |
| Max | 21.196 | 20.869 | 16.885 | 20.199 | 19.243 |
| Coverage | 0.926 | 0.892 | 0.877 | 0.859 | 0.909 |

### Weaknesses:

We agree that randomization can introduce variability in the prediction sets; however, employing randomized and asymmetric combination rules is the only way to enhance the efficiency of cross-conformal prediction sets while preserving the same (worst-case) coverage guarantee. Furthermore, as noted in Remark 4.8, randomization plays a role at various stages of the data pipeline in both cross and split conformal prediction. As discussed in the paper, in some applications of large-scale deployment involving thousands of daily predictions, improved statistical efficiency may be preferred. In light of the responses provided, we hope we have addressed your concerns and that you may consider raising the score.

Kuchibhotla, A. K., Balakrishnan, S., & Wasserman, L. (2024). The HulC: confidence regions from convex hulls. Journal of the Royal Statistical Society Series B: Statistical Methodology, 86(3), 586-622.

Lugosi, G., & Mendelson, S. (2019). Mean estimation and regression under heavy-tailed distributions: A survey. Foundations of Computational Mathematics, 19(5), 1145-1190.
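The variability that randomization introduces (the weakness raised above) is easy to illustrate: for a fixed set of hypothetical fold p-values, the merged value of the randomized rule moves between mean(P) and 2·mean(P) with the draw of U, so repeated runs on the same data can yield different prediction sets.

```python
import random

random.seed(0)
p = [0.03, 0.06, 0.05, 0.04]        # hypothetical fold-wise p-values
mean_p = sum(p) / len(p)            # 0.045

# The randomized rule 2 / (2 - U) * mean(P) interpolates between mean(P)
# (at U = 0) and 2 * mean(P) (as U approaches 1), so the merged p-value,
# and hence the set, varies purely with the draw of U.
merged = [2.0 / (2.0 - random.random()) * mean_p for _ in range(10_000)]

assert all(mean_p <= m <= 2 * mean_p for m in merged)
print(min(merged), max(merged))     # spread induced only by randomization
```

The data here are made up; the sketch only visualizes why a user who re-runs the randomized variant on identical inputs may see a different set.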
Towards Cost-Effective Reward Guided Text Generation
Accept (poster)
Summary: In earlier works on reward-guided text generation (RGTG), the reward model is usually operationalized with a regression head on top of an LM. However, this comes at the cost of having to evaluate each of the V possible next tokens separately. This paper proposes a simple change that turns this into a V-channel head, just like a normal language-modeling head, to save cost. Claims And Evidence: Yes, they are supported by theoretical and experimental results. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I went through the proofs. Experimental Designs Or Analyses: Yes. I read the experimental results and they made sense. Most experimental set-ups follow from previous works to ensure comparability. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: Yeah, after reading other reviewers' comments, I think a key point that's missing here is the scaling relationship between the size of the RM and the proposed method's performance. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
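A toy sketch of the head change the summary describes, with made-up shapes and random weights: a scalar regression head returns one reward per forward pass (so scoring V candidate tokens needs V calls), while a V-channel head, shaped like an LM head, scores every candidate token in a single call.

```python
import random

random.seed(0)
HIDDEN, VOCAB = 8, 50
h = [random.gauss(0, 1) for _ in range(HIDDEN)]   # prefix hidden state

# Scalar regression head (earlier RGTG works): w maps a hidden state to ONE
# reward, so scoring V candidates means V separate reward-model calls, one
# per (prefix + candidate token) input.
w = [random.gauss(0, 1) for _ in range(HIDDEN)]
def scalar_reward(state):
    return sum(a * b for a, b in zip(state, w))

# V-channel head (the paper's change): a HIDDEN x VOCAB matrix scores every
# candidate token from the prefix state in a single matrix multiply.
W_head = [[random.gauss(0, 1) for _ in range(VOCAB)] for _ in range(HIDDEN)]
all_rewards = [sum(h[i] * W_head[i][v] for i in range(HIDDEN))
               for v in range(VOCAB)]

assert len(all_rewards) == VOCAB
best_token = max(range(VOCAB), key=all_rewards.__getitem__)
```

Everything here (dimensions, weights, the linear heads) is illustrative; the real models are transformers, but the call-count contrast is the same.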
Rebuttal 1: Rebuttal: We want to thank you for reviewing our paper and your strong endorsement of our work.
Summary: This paper proposes an improved reward model for reward-guided text generation (RGTG), an alternative to offline RLHF for aligning language models with human preferences. Traditional RGTG incurs high inference costs as reward models score tokens individually and are optimized for full sequences, leading to suboptimal choices. To address this, the authors introduce a Bradley-Terry loss-based model that predicts optimal token expansions in a single step, reducing inference overhead. Theoretical analysis and empirical results show that the proposed method improves efficiency while maintaining competitive performance compared to existing RGTG and RLHF approaches. Claims And Evidence: S2: FaRMA addresses the limitations of the existing RGTG method, which requires multiple calls to the reward model and suffers from unreasonable scoring of partial sequences. The method significantly enhances efficiency while ensuring effective performance; S3: Theorems 1–3 provide a clear analysis of suboptimality in prior methods (PARGS, CD) and prove FaRMA's guarantees under infinite training. This strengthens the paper's credibility; W3: While Theorems 1–3 highlight theoretical advantages, the paper does not analyze scenarios where FaRMA's greedy token-wise optimization might diverge from global sequence optimality (e.g., compounding errors in long generations); W5: In Section 4.2, the authors propose a novel approach that utilizes the maximization of $V_\theta(y_{1:i+1} \mid x)$ to determine $V_\theta(y_{1:i} \mid x)$, thereby ensuring that $V_\theta(y_{1:i} \mid x)$ can represent the overall sentence-level reward. However, does this approach significantly amplify the computational cost of training the reward model, given that it requires iterating over all tokens in the sentence for the proposed constraint-based training? Have the authors considered experimenting with $V_\theta(y_{1:i+k} \mid x)$ for $k \geq 1$? 
Would such an approach offer a more efficient alternative while preserving the effectiveness of the method? Methods And Evaluation Criteria: S4: The authors tackle the word ambiguity and complex network problems for the CNER task, which benefits researchers who may encounter similar problems. The word ambiguity problem of external knowledge is not easy to handle; this work points out a direction for it. Theoretical Claims: W3: While Theorems 1–3 highlight theoretical advantages, the paper does not analyze scenarios where FaRMA's greedy token-wise optimization might diverge from global sequence optimality (e.g., compounding errors in long generations); Experimental Designs Or Analyses: W1: The primary experiments in this paper utilize base models with 1B or 2.8B parameters. Additional Fine-Grained Text Generation experiments in the appendix, conducted with a 7B model, indicate that the performance advantage of FaRMA decreases as the base model size increases. Further empirical validation of this trend would strengthen the evaluation of FaRMA's robustness. W2: The experiments primarily train reward models of the same size as the base model. If the reward model is always required to match the base model in size, FaRMA's advantage over DPO (which directly optimizes the base model) becomes marginal. Notably, in the HH Dialogue experiment, using a smaller reward model resulted in significantly worse performance. Exploring the influence of reward model size would clarify the limitations of the proposed approach; W4: Narrow Baseline Comparison: CARDS is the only baseline compared under varying reward thresholds (Appendix D). A similar ablation for FaRMA's hyperparameters (e.g., $\beta$ in Eq. (14)) is missing, making the robustness claims less substantiated. Supplementary Material: Yes. 
(1) trainers (2) HH (3) TLDR (4) UF Relation To Broader Scientific Literature: The authors tackle the word ambiguity and complex network problems for the CNER task, which benefits researchers who may encounter similar problems. The word ambiguity problem of external knowledge is not easy to handle; this work points out a direction for it. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Strengths: S1: Overall, this paper is well-written and easy to read. S5: Results across diverse tasks (summarization, dialogue, UltraFeedback) show FaRMA outperforms RGTG baselines in reward scores while matching or exceeding RLHF methods (DPO/PPO) without LLM fine-tuning. The analysis of diversity (ROUGE-L) and GPT-4 preference rankings adds depth. Other Comments Or Suggestions: W6: In the introduction, the authors should ensure consistency in the use of punctuation between parentheses containing abbreviations and citation references. W7: In line 76, a period is missing before "On the TLDR." Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and numbering your comments.

# W1: Performance Advantage on Larger Models

We would like to point out that the result on the 7-billion-parameter model (Table 5, Appendix) is on a different dataset, UltraFeedback, and absolute reward values on different datasets cannot be compared to each other. Additionally, we observe that FaRMA on average performs better than the baselines, including DPO. Therefore, based on this result we cannot conclude that the advantage of FaRMA is diminished on larger models. Additionally, we evaluate this setting using GPT-4 and provide the win-tie rate of FaRMA against DPO and ARGS for this experiment below:

| Method A | Method B | Win-Tie % |
| -------- | -------- | -------- |
| FaRMA | DPO | 52 |
| FaRMA | ARGS | 58 |

We can observe that FaRMA has a higher winning rate and does well in this setting.

# W2: Smaller Reward Model

Please note that even at the same reward model size, FaRMA has a computational advantage over DPO and PPO. Please refer to point 1 of the rebuttal for Reviewer wvoK. We note that FaRMA at the same model size trains $3\times$ faster. Therefore, a smaller reward model would trade off performance against even more training efficiency. We also note that the smaller reward model is significantly better than $\pi_{ref}$ and comparable to RGTG baselines (at half the reward model size).

# W3: Tokenwise Optimization and Optimality

The best way to understand FaRMA is from a reinforcement learning (RL) perspective. In RL, we train a policy to select actions that greedily maximize cumulative future rewards captured by the state-action value function. Similarly, FaRMA chooses the next token greedily to maximize the value function. The fact that FaRMA does not do a look-ahead search during decoding is consistent with RL, since RL training ensures that the resulting policy should not need a look-ahead search during execution. 
The look-ahead search is implicitly done during training (Eq. 17) by the estimation of the value function that captures cumulative rewards for all future steps. Furthermore, the loss function in Eq. 17 naturally accounts for compounding of errors, since the value of a partial sequence is trained with respect to the value of its continuations, including any error.

# W4: $\beta$ Ablation for FaRMA

| $\beta$ | r $\pm$ SE |
| -------- | -------- |
| 0.5 | 1.33 $\pm$ 0.18 |
| 1.0 | 1.77 $\pm$ 0.17 |
| 1.5 | 2.11 $\pm$ 0.16 |
| 2.0 | 2.1 $\pm$ 0.14 |

We present an ablation on changing the value of $\beta$ for FaRMA on the TLDR dataset. A $\beta$ value of 1.5 is optimal in this case. We want to point out that the inference time is independent of $\beta$ for FaRMA. However, for CARDS the hyperparameter controls the trade-off between higher reward and higher inference time. Therefore, we presented the result in the appendix to justify the threshold that we chose for CARDS.

# W5: Computational Cost

Our approach does not increase the training time since, in practice, instead of training over the entire sequence we sample some steps from the sequence. Note that this is standard in many ML algorithms, e.g., diffusion models [1]. The sampling is done to keep the training set comparable to the preference dataset for full-sequence reward learning. Note that in Table 9 of the Appendix, on the same GPUs, the training time for ARGS (full-sequence reward model) and ours is comparable. Furthermore, your suggestion of matching the reward of a partial sequence to a longer trajectory instead of just the next token is an interesting variation. However, an exponential search in terms of vocabulary size is required to find such a trajectory with maximum reward, and this would lead to significantly higher training cost.

1. Bishop, Christopher M., and Hugh Bishop. Deep learning: Foundations and concepts. Springer Nature, 2023. 
Algorithm 20.1.

# W6 and W7: Typos and Corrections

Thanks for pointing these out. We will correct these in our paper.
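The training-time look-ahead described in the W3/W5 responses can be illustrated with a stand-in computation. Eq. 17's exact loss is not reproduced in this thread, so the squared error below is only a hypothetical placeholder; the point is the direction of the backup: the prefix value is trained toward the best continuation, as in a temporal-difference update.

```python
# Hypothetical values V_theta(y_{1:i+1} | x) for four candidate next tokens.
next_token_values = [0.1, 0.7, 0.4, 0.2]

# Per the rebuttal, training pushes the prefix value V_theta(y_{1:i} | x)
# toward the max over continuations, so the look-ahead happens at training
# time and greedy token selection suffices at decoding time.
prefix_target = max(next_token_values)                 # 0.7

predicted_prefix_value = 0.5                           # made-up model output
loss = (predicted_prefix_value - prefix_target) ** 2   # squared-error stand-in
```

All numbers here are invented; in the actual method both quantities come from the learned value head and the loss form is given by Eq. 17 of the paper.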
Summary: The paper proposes a new method to do reward-guided text generation (RGTG) that prefers the optimal expansion of a sequence and needs only one call to score all candidate tokens simultaneously. The experiments show that the proposed reward model leads to faster inference than other RGTG methods and performs on par with previous RGTG and offline RLHF methods. Claims And Evidence: Regarding the claim in the 2nd paragraph of the Introduction, "Reward models are cheaper to train compared to offline RLHF updates even if both the reward model and LLM have the same number of parameters": even given the footnote associated with the statement, I'm still confused about why reward models must be cheaper than DPO. If the reward model has the same number of parameters and the same output size (as in this paper's proposed method), one also needs to load and make calls to one additional model (the reward model), which has the same size as $\pi_{ref}$. Do you actually assume that training the reward model requires fewer iterations than training the language model here? I would suggest revising the statement "In RLHF, $\pi_{ref}$ is the quantity that we seek to improve so it does not make sense to improve $\pi_{ref}$ with a value function that depends on $\pi_{ref}$ itself" to something like "In RGTG, …..so it can introduce bias to train a value function depends on $\pi_{ref}$", since in more general RLHF $\pi_{ref}$ may not mean the generation part, and using a term like "introduce bias" could be more specific about prior work's cons than "does not make sense". Methods And Evaluation Criteria: The method is reasonable for improving inference efficiency and the reward model performance, especially if the goal is to reward a prefix that can potentially grow into the optimal sequence, rather than the average possible sequences a prefix can potentially achieve. 
However, for LLMs, this method may need more evidence that most inputs can find an optimal sequence and that there is no need to consider output diversity. Theoretical Claims: The three theoretical results, proved by counterexamples, seem correct to me. Experimental Designs Or Analyses: For the experimental setup, how are responses paired for GPT-4 to score? Within one pair, are the two responses generated from different models? Or is one of them always the human-written response? If both responses are model generations, why can Figure 2 compare them all together? How many pairs are evaluated? The 2nd paragraph in Section 6.2 mentions again that "DPO and PPO based RLHF that is expensive to fine-tune"; however, there is no comparison of the total training time of the proposed FaRMA method (of the reward model) and the total training time of DPO (of the language model). The training equations of FaRMA itself, when using a reward model of the same size as the language model, seem likely to require a similar fine-tuning time. If not, it would be great to show the empirical evidence. Line 416-419 on Page 8 mentioned that "we can further reduce the cost for both training and inference by reducing the reward model size while still improving over $\pi_{ref}$". However, the experiment only demonstrates the same-size and half-size results on HH Dialogue, and the performance drops a lot from 1.80 to 1.41 average reward. If making this statement, I would expect to see the multiple sizes results (e.g., also include 1/8 and 1/8). For the diversity experiment in Table 4, I'm interested in (1) the PARGS and CD results, as they are taken as the key baselines and discussed in Section 3, and (2) the temperature of sampling here, as temperature can greatly impact diversity. Supplementary Material: I checked what information was provided in the supplementary materials and evaluated if they would change my review. 
Relation To Broader Scientific Literature: A new, more efficient method for reward-guided text generation (RGTG) for LLMs. Essential References Not Discussed: Beyond Section 5, paragraph 3, there is much work discussing fine-grained rewards or Q-value functions when using RL to train language models. For example, "Adversarial Learning for Neural Dialogue Generation, 2017" proposed a method to learn the RL rewards (cast as a discriminator) by sampling prefixes and updating the model with policy gradient. "Improving Conditional Sequence Generative Adversarial Networks by Stepwise Evaluation, 2018" and "Proximal Policy Optimization and its Dynamic Version for Sequence Generation, 2018" together approximate the Q-value for each prefix when learning the full-episode RL returns (cast as a discriminator) and update the model with PPO. Other Strengths And Weaknesses: No, I have listed the strengths and weaknesses above. Other Comments Or Suggestions: * On page 7, line 354, "which is" is a typo that should be revised to clarify the whole sentence. I guess the authors mean all the baselines and the proposed method are sampled using top-k. * On page 8, line 417, "can further reduces" => "can further reduce". Questions For Authors: Among the above comments, I will change my evaluation if 1) the training time comparison of DPO and FaRMA is provided and FaRMA can be trained much faster than DPO as the paper claims, 2) the GPT-4 evaluation details (asked above) are provided and the setup is reasonable, 3) the diversity experiments are expanded with PARGS and CD, and 4) more prior work on models that score partial sequences is discussed (as the whole paper, including the Abstract and Introduction, continues to talk about "the reward model is usually only trained to score full sequences" but many such prior contributions are not discussed). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review and questions. We hope that our response will satisfy your concerns.

# 1. FaRMA Efficiency vs DPO

DPO is more expensive to train because it loads and makes calls to an additional reference model, $\pi_{ref}$, along with the model being trained, $\pi_\theta$. FaRMA does not require access to $\pi_{ref}$. This also increases the memory footprint of DPO.

| Method | Training time (mins) |
| -------- | -------- |
| FaRMA | 82 |
| DPO | 254 |
| PPO | 238 |

We compared the training times of PPO, DPO, and FaRMA on the TLDR dataset, on the llama3.2-1B model, on a single A100 GPU (to ensure a fair comparison). As shown in the table above, FaRMA trains three times faster than DPO.

| Method | Peak Memory (GB) |
| -------- | -------- |
| FaRMA | 8 |
| DPO | 28 |
| PPO | 30 |

We also looked at the peak memory usage per training batch for each of these methods. Again, FaRMA has a much lower memory footprint.

# 2. GPT-4 Evaluation

The evaluation shows the win rate of baselines vs. FaRMA, i.e., DPO vs. FaRMA, CARDS vs. FaRMA, etc. This comparison is standard in the literature [1,2,3]. We average this result over a hundred prompts and use the same hundred prompts for all comparisons. These prompts are randomly sampled, initially, and then fixed for all evaluations. All the comparisons on the graph, including win rate and inference time, are with respect to FaRMA. The blue point for FaRMA is at the $50\%$ mark on the y-axis for reference. We will add a horizontal line on the plot for an easier comparison.

1. Khanov, Maxim, Jirayu Burapacheep, and Yixuan Li. "ARGS: Alignment as Reward-Guided Search." ICLR 2024.
2. Li, Bolian, et al. "Cascade Reward Sampling for Efficient Decoding-Time Alignment." ICML 2024 Next Generation of AI Safety Workshop.
3. Rashid, Ahmad, et al. "A Critical Look At Tokenwise Reward-Guided Text Generation." ICML 2024 Workshop on Foundation Models in the Wild.

# 3.
Diversity Results for CD and PARGS

**HH Dialogue Dataset**

| Method | Rouge-L |
| -------- | -------- |
| FaRMA | 0.21 $\pm$ 0.02 |
| PARGS | 0.22 $\pm$ 0.02 |
| CD | 0.24 $\pm$ 0.01 |

**TLDR Dataset**

| Method | Rouge-L |
| -------- | -------- |
| FaRMA | 0.24 $\pm$ 0.01 |
| PARGS | 0.33 $\pm$ 0.01 |
| CD | 0.32 $\pm$ 0.01 |

We present additional diversity experiments on CD and PARGS and observe that FaRMA still produces the most diverse text. The temperature used for all diversity tests is 1.

# 4. Additional References

We thank the reviewer for pointing us to these works. We will add them to our paper. The three suggested papers [1,2,3] train generative adversarial networks for dialogue generation. They employ either the policy gradient method [1,2] or PPO [3] to train the generator, and train the discriminator to provide rewards. To mitigate the problem of sparse rewards, they employ methods of training step-wise Q-functions. However, we would like to clarify that the discussion in the abstract and introduction of our paper is in the context of RLHF, specifically reward guided text generation, which aligns LLMs to preference data at inference. Whereas the aforementioned works explicitly apply RL techniques to train text generators, RGTG methods avoid the use of off-line RL and instead employ reward guided decoding.

1. Tuan, Yi-Lin, and Hung-Yi Lee. "Improving conditional sequence generative adversarial networks by stepwise evaluation." 2019 IEEE Transactions
2. Li, Jiwei, et al. "Adversarial Learning for Neural Dialogue Generation." EMNLP 2017.
3. Tuan, Yi-Lin, et al. "Proximal policy optimization and its dynamic version for sequence generation." arXiv preprint (2018).

# 5. Optimal Sequence

In FaRMA the search for an optimal sequence is done during training of the value function V. While we could do a search for an optimal sequence during decoding, this is not desirable since this would increase decoding time.
Note that the search for an optimal sequence is implicitly achieved by the loss function in Eq. 17. This loss function ensures that by the end of training the value of a partial sequence corresponds to the max of the values of all continuations. This loss function is similar to the temporal difference loss function in traditional RL to estimate the value of the best plan going forward. Hence at decoding time, the LLM does not need to search for an optimal continuation and does not worry about average continuations since the resulting policy is trained to select tokens that are greedy with respect to the value function, which already accounts for the best continuation. # 6. Typos and Corrections Thank you for pointing these out. We will correct them in the final version. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed response. While most of my concerns are addressed, I would like to confirm the following 2 questions before adjusting my evaluation. 1. Regarding the FaRMA efficiency vs DPO, I am curious about the reward model training time for FaRMA. Can you provide the time? I suppose your listed 82min is only the time using FaRMA with a trained reward model? Please correct me if the 82min already includes the reward model training time. 2. I haven't seen a response to the following question in my review > Line 416-419 on Page 8 mentioned that “we can further reduce the cost for both training and inference by reducing the reward model size while still improving over”. However, the experiment only demonstrates the same size and half size results on HH Dialogue, and the performance drops a lot from 1.80 to 1.41 average reward. If making this statement, I would expect to see the multiple sizes results (e.g., also include 1/8 and 1/8). > To make it clear and correct my typo, I meant that could you provide the results of other reward model sizes, such as 1/4 and 1/8, to make the statement? --- Reply to Comment 1.1.1: Comment: Thank you. 
We are glad that we addressed your other concerns. Regarding these questions:

## Q1: FaRMA Training Time

Please note that this table lists the FaRMA reward model training time. We are comparing the DPO training time with the reward model training time. Both are initialized from the llama 3.2 1 billion model.

## Q2: FaRMA with Smaller Reward Models

We present FaRMA results on the HH dataset with smaller reward models. These experiments were done on the Pythia language model, and these are the available smaller models from this series.

| Method | r $\pm$ SE |
| -------- | -------- |
| $\pi_{ref}$ - 2.8b | 1.18 $\pm$ 0.12 |
| FaRMA - 400m | 1.49 $\pm$ 0.12 |
| FaRMA - 1b | 1.56 $\pm$ 0.18 |
| FaRMA - 1.4b | 1.41 $\pm$ 0.16 |
| FaRMA - 2.8b | 1.80 $\pm$ 0.18 |

All the FaRMA results are presented with the same $\beta=1.2$ which was used in the paper. We claimed in the paper that smaller FaRMA models still improve over $\pi_{ref}$. We note that even at $\frac{1}{7}^{th}$ of the original size, the FaRMA results are better than $\pi_{ref}$. We also note that even though the 1.4 billion FaRMA result is lower than the 1b and the 400m, these results are still within the standard error. Please let us know if you have any other questions.
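To illustrate the decoding-time picture discussed in this thread, here is a generic reward-guided decoding sketch: candidate next tokens are rescored by combining base-model log-probs with a learned prefix value, then chosen greedily, so no search over continuations is needed at inference. All components are toy stand-ins (the `lm_logprobs`, `value`, and `beta` below are illustrative choices, not FaRMA's actual models or hyperparameters):

```python
import numpy as np

# Minimal reward-guided decoding sketch with toy stand-in models: at each
# step, the top-k candidate tokens are rescored by LM log-prob plus
# beta * V(prefix + token), and the best-scoring token is picked greedily.
VOCAB = 8

def lm_logprobs(prefix):
    """Stand-in base model: deterministic pseudo-random log-probs per prefix."""
    r = np.random.default_rng(hash(tuple(prefix)) % (2**32))
    logits = r.normal(size=VOCAB)
    return logits - np.log(np.exp(logits).sum())

def value(prefix):
    """Stand-in prefix value function: prefers small token ids."""
    return -0.1 * sum(prefix)

def guided_step(prefix, beta=1.2, k=4):
    lp = lm_logprobs(prefix)
    topk = np.argsort(lp)[-k:]  # k most likely candidate tokens
    scores = [lp[t] + beta * value(prefix + [int(t)]) for t in topk]
    return int(topk[int(np.argmax(scores))])

prefix = [3]
for _ in range(4):
    prefix.append(guided_step(prefix))
print(prefix)
```

Because the value function is trained so that `V(prefix)` already reflects the best continuation, a single greedy rescoring per step suffices, which is the efficiency argument made in the rebuttal.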
Learning Dynamics under Environmental Constraints via Measurement-Induced Bundle Structures
Accept (spotlight poster)
Summary: This paper presents a novel geometric framework for learning unknown dynamics under environmental constraints when constraint information is only locally available and uncertain. The authors introduce a fiber bundle structure over the state space that unifies measurements, constraints, and dynamics learning. This geometric approach enables measurement-aware Control Barrier Functions (mCBFs) that adapt to local sensing conditions. By integrating Neural ODEs, the framework learns continuous-time dynamics while preserving geometric constraints, with theoretical guarantees of learning convergence and constraint satisfaction dependent on sensing quality. The authors demonstrate through simulations that their approach significantly improves both learning efficiency and constraint satisfaction compared to traditional methods, especially under limited and uncertain sensing conditions. Claims And Evidence: The claims made in the submission are supported by both theoretical analysis and experimental evidence. The authors claim that their geometric framework provides a unified approach to handling measurement uncertainty, system dynamics, and constraints, which is substantiated by the detailed mathematical formulation of the fiber bundle structure and its properties. The claim that their approach leads to improved learning efficiency and constraint satisfaction is supported by experimental results, though the paper would benefit from more detailed quantitative comparisons with baseline methods. The theoretical guarantees of learning convergence and constraint satisfaction are rigorously derived from the geometric properties of the bundle structure, providing a solid foundation for the practical implementation of the framework. Methods And Evaluation Criteria: The proposed methods are mathematically sound and well-suited to the problem of learning dynamics under environmental constraints with uncertain measurements. 
The authors develop a comprehensive geometric framework based on fiber bundle theory, which provides a natural setting for handling measurement uncertainty. The evaluation criteria include both theoretical analysis of convergence and safety guarantees, as well as practical demonstrations of the framework's effectiveness in simulation environments. The paper would benefit from more explicit descriptions of the baseline methods used for comparison and clearer metrics for quantifying improvements in learning efficiency and constraint satisfaction. Theoretical Claims: The paper makes several significant theoretical claims, including the formulation of measurement-adapted Control Barrier Functions, the convergence of the learning dynamics, and probabilistic safety guarantees. I have examined the theoretical development in Sections 3 and 4, including the key Theorem 3.1 which provides probabilistic safety guarantees. The proof sketch outlines a three-step approach that appears sound, though the full proof is referenced to be in Appendix A which was not available for review. The theoretical framework is well-grounded in differential geometry and control theory, with clear connections to established concepts such as fiber bundles, connections, and barrier functions. Experimental Designs Or Analyses: The paper mentions "extensive simulations" that demonstrate significant improvements in learning efficiency and constraint satisfaction, but the details of these experiments are somewhat limited in the sections I was able to review. The experimental design appears to involve learning dynamics models under various measurement uncertainty conditions, with comparisons to traditional methods. However, without more specific information about the simulation environments, performance metrics, and statistical analyses, it is difficult to fully assess the validity of the experimental claims. 
The paper would benefit from more detailed descriptions of the experimental setup and comprehensive results. Supplementary Material: I took a look at (but did not fully review) the proof. Relation To Broader Scientific Literature: The paper effectively situates its contributions within the broader scientific literature on safety-critical control, geometric learning, and measurement-aware control. The authors provide a comprehensive review of related work in Section 2, acknowledging foundational contributions in differential geometry (Ehresmann, 1950; Kobayashi & Nomizu, 1996), control barrier functions (Ames et al., 2019), and geometric learning (Chen et al., 2018). They clearly articulate how their approach addresses limitations in existing methods, particularly in handling measurement uncertainty as an intrinsic geometric property rather than an external disturbance. The paper builds upon and extends several important lines of research in a coherent and well-motivated manner. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper presents a novel and mathematically rigorous geometric framework that unifies measurements, constraints, and dynamics learning. - The theoretical guarantees of learning convergence and constraint satisfaction provide a solid foundation for the practical implementation of the framework. - The approach naturally adapts to local sensing conditions, making it particularly suitable for real-world applications with uncertain measurements. - The integration with Neural ODEs enables learning continuous-time dynamics while preserving geometric constraints. Weaknesses: - The paper is highly theoretical and may be challenging for readers without a strong background in differential geometry and control theory. - The experimental validation could be more comprehensive, with clearer comparisons to baseline methods and more detailed quantitative results.
- The practical implementation details of the framework, particularly for complex systems, are somewhat limited. - The paper does not extensively discuss the computational complexity of the approach or potential scalability issues. Other Comments Or Suggestions: - The paper would benefit from more illustrative examples or case studies to demonstrate the practical application of the theoretical framework. - A more detailed discussion of the computational aspects of implementing the framework, particularly for high-dimensional systems, would strengthen the paper. - The authors could consider providing more intuitive explanations of the key geometric concepts to make the paper more accessible to a broader audience. - Additional visualizations of the fiber bundle structure and its relationship to measurement uncertainty would help readers better understand the geometric framework. Questions For Authors: - How does the computational complexity of your approach scale with the dimensionality of the state space and the complexity of the constraints? Are there practical limitations to applying this framework to high-dimensional systems? - The paper focuses on theoretical guarantees and simulation results. Have you considered or tested the application of this framework on physical systems with real sensor measurements? What additional challenges might arise in such settings? - The measurement-adapted Control Barrier Functions (mCBFs) adapt to local sensing conditions. Could you elaborate on how this adaptation mechanism performs in environments with highly variable sensing quality, such as those with occlusions or sensor failures? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback. ## Experimental Details & Baselines Our Genesis physics engine uses task-specific configurations: Semi-implicit Euler (5e-4s timestep) with neo-Hookean material (μ=2kPa, λ=10kPa) for the soft worm; RK4 (1e-2s timestep) with friction coefficients (0.15 static, 0.09 dynamic) for the Franka; and RK4 (2e-3s timestep) with aerodynamic effects for the quadrotor. Simulations employ continuous collision detection (3mm threshold) and sequential impulse solving (200 iterations). Statistical validation includes 95% CIs from 10 trials, with variation coefficients of 3.2-6.7%. We use Poisson disk obstacle sampling (20cm separation), bi-directional RRT for feasibility checks, and standardized evaluation protocols. While Table 1 provides comprehensive metrics across all baselines, the revised appendix will include detailed statistical analysis and simulation parameters to enhance reproducibility. ## Theoretical Framework While the fiber bundle formalism necessitates rigorous mathematics, we'll add this intuitive analogy with a supporting diagram: a fiber bundle is like a multi-story building, where the base space (state space) is the floor plan, the fibers are vertical spaces representing possible measurements, safety certificates are structural supports, and connection forms are elevators maintaining coherence between the horizontal (state) and vertical (measurement) dimensions. The key insight is that traditional approaches treat measurement uncertainty as an external disturbance, whereas our approach incorporates it as an intrinsic geometric property of the system structure. We will add a visualization of the fiber bundle structure in the revised version, showing how trajectories in state space are associated with measurement uncertainties in the total space.
## Computational Complexity & High-Dimensionality Our algorithm demonstrates excellent scalability, with complexity divided into: fiber bundle connection calculation (O(n²), n = state dimension), mCBF evaluation (O(n+m), m = measurement dimension), and safety-constrained optimization (O(nd), d = policy parameters). The revised version will include a new section on computational complexity analysis with runtime measurements across system dimensions (n=6: 1.2ms, n=12: 3.8ms, n=24: 9.5ms, n=48: 22.7ms), plus IEEE 14-bus power grid validation tests using PYPOWER in the appendix, where our method achieved 97.2% constraint satisfaction under line thermal limits of 85% and voltage stability constraints of ±0.05 p.u., demonstrating effective application to practical higher-dimensional systems. ## Physical Validation on a Real Franka Arm For the revision, we conducted validation on a 7-DOF Franka arm in the real world with joint velocity limits (±1.0 rad/s), acceleration constraints (±2.0 rad/s²), end-effector pose stability (±5° deviation), force limits (5N), and power consumption constraints (≤60W). We applied Gaussian noise, dropouts, and delays to IMU/force sensors. Results show 93.5% constraint satisfaction versus 81.2% for baseline methods, and trajectory error of 2.3cm compared to the baseline's 4.8cm. The revised paper will include detailed experimental setup photos, hardware specifications, and constraint visualization plots. We identified challenges, including joint friction, inconsistent sensor frequencies, and constraint priority conflicts. Our geometric framework naturally addresses these via dynamic constraint-priority adjustment based on measurement quality, performing better than other methods. ## Measurement Quality Adaptation We created controlled testing scenarios with noise levels (0-8%) and data dropout rates (0-25%) for the simulation Franka task.
Results show safety boundary adaptations: high-quality regions (noise <2%, dropout <5%) at 3.5cm, medium uncertainty (noise 4-6%) at 5.7cm, high uncertainty (noise >7%, dropout >20%) at 8.2cm, with fallback to conservative behavior during sensor failure. This geometric adaptation (from Equation 13) reduced path length by 25.3% versus fixed-boundary methods while maintaining safety, confirming the fiber bundle framework's effectiveness for varying measurement quality. The revised version will add an adaptive safety boundary analysis figure showing boundary values as a continuous function of measurement quality with corresponding robot trajectories under different sensing conditions. ## Appendix Accessibility The complete PDF contains all appendices, including the proof of Theorem 3.1 through six steps: preliminary assumptions, perfect measurement invariance, uncertainty propagation, local certification, temporal correlation analysis, and global safety guarantees. We can provide a separate file if needed. ## Conclusion We believe these clarifications and additional visualizations in the revised version will improve readability while confirming the effectiveness and practicality of our method through real system validation.
Summary: This paper considers the problem of learning unknown dynamics models in the presence of model constraints. The paper points out that classical treatments of this problem ignore the system's inherent geometry while taking measurements into account and, in doing so, ignore important information that could be useful during learning. However, measurement uncertainty induces a fiber bundle structure which naturally lends itself to neural ODE models. Positive results are shown in an extensive empirical study. Claims And Evidence: *Claims* 1. Proposes a novel geometric framework that unifies measurement uncertainty, system dynamics, and constraints within fiber bundle structures 2. Introduces adaptive measurement aware safety certificates that adjust conservative margins based on local measurement quality. 3. Demonstrates enhanced generalization capabilities across different scenarios without requiring global information through experimental validation. 4. The proposed method is theoretically sound; learning converges with a certificate of safety. *Evidence* 1. This is supported with the formalism in section 3. Further support comes from the context provided by related work. 2. This is also supported with the formalism in section 3. 3. Experiments with three simulated environments provide the data supporting this claim. The proposed method consistently performs well when compared with a variety of metrics and among relevant state-of-the-art baseline methods. 4. Evidence comes from Theorems 3.1 and 4.1. Methods And Evaluation Criteria: Experiments evaluate the proposed method in three simulation environments. 1. Worm robot 2. Manipulator arm 3. Quadrotor drone Uniformly positive results among these suggest an algorithm's ability to perform well across a variety of settings. Performance is evaluated with - Success rate - Path efficiency measures - Safety margins - Control quality Within these settings, experiments consider four kinds of evaluations. 
*Learning-based Safety Certification:* Methods with fixed barrier functions achieve reasonable success but come second to the proposed method. *Physics-informed and Geometric Methods:* The proposed method outperforms these baselines in this environment, but the paper does not explain why. The baseline was introduced as a check of "physical consistency." What that means precisely is left unsaid. Further explanation of this result is needed to avoid the baseline being viewed as a strawman. *Robust and Adaptive Control:* Baselines achieve high rates of constraint satisfaction while producing overly conservative trajectories. The proposed method avoids this pathology. *Uncertainty-Aware Predictive Control:* The proposed method ranks highest under this evaluation due to its integration of relevant measurement uncertainty with constraints. Finally, the paper performs an ablation of several aspects of their method to demonstrate the relative performance gain of each feature. *Assessment:* Overall, the experiments were thorough, comprehensive, well-documented, of sound methodology, and demonstrate consistently positive results. Theoretical Claims: *Claims* The paper makes two main theoretical claims. 1. Theorem 3.1 (paraphrased): Whenever the system starts under safe conditions, the probability that the state remains safe is bounded below. 2. Theorem 4.1 (paraphrased): The proposed learning dynamics converge in a safe manner. *Evidence* 1. The proof of this involved several intermediate results. Each was presented clearly and appears sound. 2. The proof of this was easy to follow and appears sound. The proof makes an assumption about real-valued eigenvalues. The conditions under which this assumption holds should qualify the theorem statement. *Minor comment about the formalism* - The dependence of $u$ is missing from (9). Experimental Designs Or Analyses: I thoroughly reviewed the experiments, their methodology, results, and analysis.
More comments can be found in my response to methods. Supplementary Material: I reviewed all of the supplementary material, including the appendix and videos, and briefly skimmed the attached code. *Videos* - Annotating the video or adding descriptive audio would help viewers interpret these demonstrations. Relation To Broader Scientific Literature: This paper is well-positioned with respect to related work. The paper does a good job of clearly covering the broader areas of research which it intersects, and providing descriptions of more closely-related works. Essential References Not Discussed: I did not identify any essential references that were omitted. Below are some pointers to non-essential related work---references the authors may find interesting. *Learning sensor geometry* 1. [Map learning with uninterpreted sensors and effectors](https://www.sciencedirect.com/science/article/pii/S0004370296000513) 2. [Discovering sensor space: Constructing spatial embeddings that explain sensor correlations](https://ieeexplore.ieee.org/document/5578854) 3. [Adapting the Function Approximation Architecture in Online Reinforcement Learning](https://arxiv.org/pdf/2106.09776) Other Strengths And Weaknesses: *Strengths* - This paper checks about every box - Originality: The paper appears meaningfully novel, both theoretically and from a practical perspective. - Clarity: The paper is exceptionally clear. I expect that even non-experts from this area will be able to grasp its main contributions. - Significance: Based on the theoretical results and the extensive empirical validation, I think it's fair to say this paper makes a significant contribution toward algorithms that learn how systems move and behave, while ensuring they stay safe, even when they only have limited information about their surroundings. *Weaknesses* - The material is quite technical and, as a result, unnecessarily restricts its potential audience.
It may be possible to appeal to more readers by elaborating on key concepts, such as the three mCBF conditions. Other Comments Or Suggestions: I have no additional comments to add here. Questions For Authors: I have no questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate your evaluation and suggestions. Your positive assessment is encouraging, and we accept the improvements you suggested. Below are the enhancements we plan to implement: ## Correction of Theoretical Statements We will add the dependency on $\alpha$ in equation (9), correcting it to $\inf_{u\in U}[L_fb + L_gbu + \alpha(b)] \geq 0$ to ensure mathematical rigor. In the statement of Theorem 4.1, we will specify the conditions for the real-valued eigenvalue assumption. We will add: "Assume that $\mathcal{L}_1$ is a symmetric positive definite operator" (so it only has positive eigenvalues) and discuss in which practical systems this assumption naturally holds, such as in mechanical systems with physical dissipation processes. ## Enhancing Accessibility of Technical Content We will make the technical content accessible by enhancing the intuitive explanation of mCBF conditions. Condition 1 ($b(x,y) \geq 0 \Rightarrow x \in S_0$) ensures that positive values of the safety certificate directly correspond to physically safe system states; Condition 2 ($\inf_{u\in U}[L_fb + L_gbu + \alpha(b)] \geq 0$) guarantees that system dynamics automatically "avoid" unsafe region boundaries, with stronger "avoidance force" as boundaries are approached; Condition 3 ($|b(x,y_1) - b(x,y_2)| \leq L_bd_Y(y_1,y_2)$) ensures that measurement noise does not cause drastic changes in safety certificates, providing robustness against measurement uncertainty. We will add a key visualization showing how trajectories in the state space M intersect with measurement uncertainty fibers in the total space E, and how the connection form transfers tangent vectors from the base space to the fibers, demonstrating the difference between our method and traditional approaches in handling measurement uncertainty. 
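The three conditions above can also be probed numerically. The sketch below uses a hypothetical quadratic barrier (an illustrative choice for this note, not one of the paper's certificates) and checks the measurement-Lipschitz Condition 3 on a bounded region; the constant $L_b = 4\lambda$ follows from bounding $|\partial b/\partial y| = 2\lambda|y - h(x)| \leq 4\lambda$ for $x, y \in [-1, 1]$:

```python
import numpy as np

# Numeric check of mCBF Condition 3 for a toy 1D barrier (hypothetical,
# not from the paper):  |b(x, y1) - b(x, y2)| <= L_b |y1 - y2|.
lam = 0.5
h = lambda x: x                           # stand-in measurement map
b = lambda x, y: 1.0 - x**2 - lam * (y - h(x))**2

rng = np.random.default_rng(2)
L_b = 4.0 * lam                           # valid Lipschitz bound on [-1, 1]^2
for _ in range(1000):
    x, y1, y2 = rng.uniform(-1, 1, size=3)
    assert abs(b(x, y1) - b(x, y2)) <= L_b * abs(y1, y2) if False else True
    assert abs(b(x, y1) - b(x, y2)) <= L_b * abs(y1 - y2) + 1e-12
print("Condition 3 holds with L_b =", L_b)
```

Here the barrier shrinks quadratically as the measurement $y$ departs from the predicted measurement $h(x)$, which is exactly the "measurement noise cannot cause drastic changes in the certificate" behavior that Condition 3 formalizes.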
## Key Insights on Fiber Bundle Structure for Measurement Uncertainty Traditional methods and our approach differ fundamentally: traditional methods treat measurement uncertainty as an external disturbance, requiring states to be estimated first and then controlled (introducing compound errors). In contrast, our method treats measurement uncertainty as an intrinsic geometric property of the system structure, directly mapping from measurement space to control actions through the fiber bundle structure. We will add a visualization of the fiber bundle structure, showing how state trajectories in the base space M generate "measurement uncertainty tubes" in the total space E, and how the connection operator $K(x)(y-h(x))$ automatically adjusts safety boundaries. We will explain how the terms $K(x)(y-h(x))$ and $\alpha\|y-h(x)\|^2\nabla_y\Phi$ in equation (12) act as natural "regulators" of measurement quality, automatically increasing conservatism in uncertain regions. These terms transform the geometric structure of measurement uncertainty into adaptive safety margins, enabling our method to enhance safety guarantees as measurement uncertainty increases. ## Improving Fairness and Relevance of Experimental Analysis Regarding the comparison with physical consistency baseline methods, we will articulate the fairness of comparisons. The PNDS method focuses on maintaining physical consistency but lacks explicit handling of measurement uncertainty; GeoPath and GEM excel at maintaining geometric invariance, but they were not designed to handle uncertainty induced by measurements. We will add detailed quantitative comparative analysis, including performance of different methods under various noise levels, changes in physical constraint violation rates as measurement noise increases, and the relationship between dynamic learning errors and measurement uncertainty.
We will explain how we ensure all methods use the same neural network architecture, computational budget, initial and target state distributions, noise distributions, and system parameter uncertainties, providing a fairer basis for comparison. ## Integration of Recommended References We will create a new subsection discussing connections with sensor geometry learning research. Pierce & Kuipers' work on constructing spatial embeddings from sensor correlations provides interesting parallels to our fiber bundle structure, though our focus extends to safety guarantees under uncertainty. Their work on learning hierarchical models from uninterpreted sensorimotor signals also resonates with our approach to learning dynamics under environmental constraints. We will explore how these perspectives complement our measurement-adaptive learning methods and discuss how our framework might benefit from their insights on sensor space embedding and abstraction of continuous environments to more manageable representations. We thank you for the valuable suggestions, which help improve our work and provide guidance for our research direction. We believe that with these improvements, the paper will be clearer, more rigorous, and more accessible while maintaining its technical depth and theoretical contributions.
Summary: This study proposes a geometric approach to learning dynamics with safety guarantees, leveraging the bundle structure to account for uncertainties. After presenting the geometric approach for controlled dynamical systems, the study introduces measurement-adapted control barrier functions, which enable the development of safety guarantees. Furthermore, the paper presents a learning framework for policy design with safety guarantees based on the bundle framework. Claims And Evidence: The safety guarantee is given in Theorem 3.1. The numerical experiments show superior performance compared to existing methods. Methods And Evaluation Criteria: The proposed method is based on a mathematical geometric framework, specifically leveraging bundle structures. The theoretical foundation is sound. I do not see any issues with the evaluation criteria for the numerical experiments. Theoretical Claims: As far as I have checked, the proofs in the appendix are sound. However, the statement of Theorem 3.1 may need to be revised. Since the control $u$ is considered in the controlled dynamical system, the theorem’s statement should read as follows: Theorem 3.1. Given an mCBF $b$ satisfying the preceding conditions, if $b(x(0), y(0)) \geq 0$, then for any admissible noise sequences $w(\cdot), v(\cdot)$, **there exists $u_t$ ($t \in [0, \infty)$)**: $$ \mathbb{P}\left(x(t) \in \mathcal{S}_0 \text{ for all } t \geq 0\right) \geq 1-\exp\left(-c / \delta_v^2\right) $$ where $c>0$ is a constant depending on system parameters. The solution $x(t)$ depends on the control $u$. Therefore, the current form of the statement of Theorem 3.1 does not clearly specify which control $u$ is used to obtain the probabilistic bound. Experimental Designs Or Analyses: There are no issues with the experimental design or analyses. Supplementary Material: I have primarily checked the proofs in the supplementary materials.
Relation To Broader Scientific Literature: This study is motivated by the geometric approaches in analytical mechanics and control theory. The bundle plays an important role in these fields. In this sense, a key contribution of this study—introducing the bundle-based approach—is its role in bridging the existing results in control theory and learning. Essential References Not Discussed: None Other Strengths And Weaknesses: This paper has its strength in proposing a geometric approach based on the bundle structure. This is interesting, but it may be necessary to improve the presentation to ensure that readers clearly understand the contribution. Specifically, it is somewhat unclear whether all the content in Section 3, such as the modeling in 3.1 and the fiber bundle framework in 3.2, is proposed in this study or has already been presented in existing papers. Other Comments Or Suggestions: In Section 4.4, the cost $J(\Theta)$ is given in a discrete-time setting. However, the system in (1) is formulated as a continuous-time system. Please clarify this inconsistency. Questions For Authors: 1. The problem setting is somewhat unclear. Given the system in (1), which includes the measurement equation, a typical setting would be a partially observed control setting. However, the policy in Section 4.4 is given in the form of $\pi_{\Theta}: \mathcal{M} \to \mathcal{U}$, which has access to the value of $x$. It is unclear why the measurement y is considered when obtaining the safety policy. 2. I was wondering if it is possible to provide a concrete example of $\mathcal{L}_1$ and $\mathcal{L}_2$ in (12). I could not fully follow the discussion in Section 4.1. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your thoughtful feedback and have prepared appropriate revisions. ## On Theorem 3.1 Formulation We appreciate that you correctly noted that the theorem should explicitly specify the control law. We will revise Theorem 3.1 to clearly indicate that there exists a control strategy $\pi(x,y) \in \Pi_{\text{safe}}$ for which the safety probability bound holds. Specifically, $\Pi_{\text{safe}} = \{\pi : M \times Y \to U \mid L_e b(x,y,\pi(x,y)) + \alpha(b(x,y)) \geq 0,\ \forall (x,y) \in E\}$, where $L_e b$ is the extended Lie derivative along the fiber bundle connection, defined as $L_e b(x,y,u) = \nabla_x b(x,y)^T f(x,u) + \nabla_y b(x,y)^T \frac{\partial h}{\partial x}f(x,u)$ in Section 3.6. This clarifies that safety guarantees apply to any control strategy respecting the mCBF condition. ## Clarification of Contributions in Section 3 The system model in Section 3.1 builds on standard stochastic control formulations, but our innovation lies in integrating it with the fiber bundle perspective specifically for measurement uncertainty. Section 3.2's fiber bundle framework represents our original contribution, particularly the connection form in Equation (3) that couples measurement uncertainty with state dynamics through $K(x)$. To our knowledge, applying fiber bundles to measurement uncertainty in safety-critical learning is novel. ## Time Setting Consistency Issue Regarding the apparent inconsistency between continuous-time dynamics and discrete-time cost, we can formalize this transition as follows: Define continuous-time cost $J_c(\Theta) = \int_0^\infty e^{-\rho t} c(x(t),\Theta(x(t)))dt$ and discrete-time cost $J_d(\Theta) = \sum_{k=0}^\infty \gamma^k c(x(k\Delta t),\Theta(x(k\Delta t)))$, where $\gamma = e^{-\rho \Delta t}$. For small $\Delta t$, we have $|J_c(\Theta) - J_d(\Theta)| \leq C \cdot \Delta t$. 
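The $O(\Delta t)$ gap between the two costs can be sanity-checked numerically; below is a minimal sketch under illustrative assumptions (a toy trajectory $x(t)=e^{-t}$, quadratic cost $c(x)=x^2$, $\rho=0.1$), with the discrete sum carrying the standard Riemann weight $\Delta t$ so that both costs live on the same scale. None of the names or constants come from the paper.

```python
import math

RHO, HORIZON = 0.1, 200.0  # discount rate and truncation horizon (illustrative)

def J_continuous():
    # Closed form of \int_0^inf e^{-rho t} x(t)^2 dt with x(t) = e^{-t}.
    return 1.0 / (RHO + 2.0)

def J_discrete(dt):
    # Riemann-weighted discrete cost with gamma = e^{-rho * dt}.
    gamma = math.exp(-RHO * dt)
    return dt * sum(gamma**k * math.exp(-2.0 * dt * k)
                    for k in range(int(HORIZON / dt)))

gap_coarse = abs(J_discrete(0.1) - J_continuous())
gap_fine = abs(J_discrete(0.01) - J_continuous())
# The gap shrinks roughly linearly in dt, consistent with |Jc - Jd| <= C*dt.
```

Refining the step by 10x shrinks the gap by roughly 10x, matching the claimed linear rate.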
Similarly, if continuous-time systems satisfy the mCBF condition $L_e b(x,y,\pi(x,y)) + \alpha(b(x,y)) \geq 0$, discrete systems maintain safety under $(b(x_{k+1},y_{k+1}) - b(x_k,y_k))/\Delta t + \alpha(b(x_k,y_k)) \geq \beta \cdot \Delta t$, where $\beta$ depends on second-order derivatives of system dynamics. ## Clarification of Problem Setting Traditional methods use the separation principle: estimate state $\hat{x} = E[x|y_{1:t}]$ then apply controller $u = \pi(\hat{x})$. Under high uncertainty, estimation error $e = x - \hat{x}$ may violate safety constraints. Our approach constructs a safety framework directly on fiber bundle $E = M \times Y$ with control strategy $\pi : M \times Y \to U$, incorporating measurement uncertainty into the safety condition: $\nabla_x b(x,y) \cdot f(x,\pi(x,y)) + \nabla_y b(x,y) \cdot \dot{h}(x) + \alpha(b(x,y)) \geq 0$. This avoids intermediate state estimation and automatically adjusts safety margins with measurement quality—as $\|y - h(x)\|$ increases, the control becomes more conservative. ## Concrete Examples of Learning Operators For dynamics learning operator $L_1$ in (12), we implement it on a 2D system where $x = [p, \xi]^T$, $f(x,u) = [\xi, u]^T$, and observation $y = h(x) + w$ with $y \in \mathbb{R}^m$. Here, $h: \mathbb{R}^2 \rightarrow \mathbb{R}^m$ is the observation function. The bundle structure comes from the relationship between states and measurements: $ L_1(f_{\theta} - f)(x,y) = \nabla_x(f_{\theta} - f) \cdot f(x,u) + K(x)(y - h(x)) $ where $K(x) \in \mathbb{R}^{2 \times m}$ is the gain matrix. This operator works along two directions: horizontally through state dynamics (first term) and vertically through measurement fibers (second term). 
In implementation, the parameter update is: $\frac{d\theta}{dt} = -\Gamma \sum_{i} [(f_{\theta}(x_i) - \dot{x}_i)^T \Sigma_i^{-1} \frac{\partial f_{\theta}}{\partial \theta} + \lambda K(x_i)(y_i - h(x_i))^T \frac{\partial f_{\theta}}{\partial \theta}]$ Similarly, safety certificate operator $L_2$ in (12) for constraint $h_s(x) = x_{safe} - p \geq 0$ becomes: $L_2(\Phi - \Phi^*)(x,y) = \nabla_x (\Phi - \Phi^*) \cdot f(x,u) + \alpha \|y - h(x)\|^2 \nabla_y(\Phi - \Phi^*)$ where $h_s: \mathbb{R}^2 \rightarrow \mathbb{R}$ is a scalar safety constraint on position. This approach automatically adapts safety margins with measurement uncertainty. Consider position control with noisy sensors: traditional methods first estimate position and then control (introducing compounding errors), while our operators $L_1, L_2$ directly map measurements to controls, becoming more conservative as uncertainty increases through terms like $\alpha\|y-h(x)\|^2$. Our 2D navigation system benchmark in the revision confirms this advantage: with noise $\sigma_y=0.1$, dynamics learning error improves by 47% (0.08 vs 0.15); at $\sigma_y=0.5$, improvement reaches 60% (0.17 vs 0.42). For safety near an obstacle at $x_{obs}=2$ with $\sigma_y=0.5$, our method maintains a 0.47 margin while standard CBFs drop to 0.22 with violations, showing how Equation 12 handles measurement uncertainty without separate estimation steps.
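The effect of the $\alpha\|y-h(x)\|^2$ term can be seen in a one-dimensional toy; the following is a minimal sketch, not the paper's implementation — the function name, the gain, and all numeric values are illustrative.

```python
def adaptive_margin(p, y, h_p, x_safe=2.0, alpha=1.0):
    """Barrier value that shrinks with the measurement residual.

    p   : current position
    y   : raw (possibly noisy) measurement
    h_p : predicted measurement h(x)

    The nominal CBF margin x_safe - p is reduced by alpha * ||y - h(x)||^2,
    so a controller enforcing margin >= 0 backs off as sensing degrades.
    """
    return (x_safe - p) - alpha * (y - h_p) ** 2

# A clean measurement leaves the full margin; a noisy one shrinks it.
clean = adaptive_margin(1.0, 1.0, 1.0)   # residual 0.0 -> margin 1.0
noisy = adaptive_margin(1.0, 1.5, 1.0)   # residual 0.5 -> margin 0.75
```

The quadratic penalty is what makes the controller increasingly conservative as the measurement residual grows, without any intermediate state estimation step.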
Pareto-Optimal Fronts for Benchmarking Symbolic Regression Algorithms
Accept (poster)
Summary: The authors aim to generate a set of "absolute pareto optimal" model results for 34 of the datasets in SRBench, as a way of having an upper bound on the performance of symbolic models on those datasets up to a given equation length. They exhaustively search for equations for those datasets up to a given length and use different numerical optimizers to tune the constants. They then compare to SRBench results and argue for a number of analysis conventions. Claims And Evidence: > Convention #1: For SR benchmarking, axes in performance trade-offs plots should be in terms of the actual quantity and not in terms of ranking. > Raw performance comparisons and performance rankings achieve different goals and answer different questions. Raw performance metrics answer the question “how much different is the performance of alg A from alg B on dataset X?” and work well when comparing methods on a single dataset like in Figure 2. (Even there, though, the collapse of equation lengths in 2c) hides the fact that SBGP, ITEA, FFX, MRGP, and AI-Feynman produce models of different complexity, which is made apparent when comparing rankings in 2a).) However, they make the comparison of algorithms across datasets difficult because the distribution of values varies among problems. In contrast, performance rankings are the more robust way to answer “how often does alg A outperform alg B over (many) datasets?” and that is the primary question multi-dataset benchmarks like SRBench try to answer. It is also why rankings are the accepted standard for statistical comparisons of algorithms over multiple datasets. (e.g., Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. *Journal of Machine Learning Research* **2006**, *7* (Jan), 1–30.) The problems the authors point out are that 1) rankings change when the set of algorithms changes; and 2) rankings lose the magnitude of the changes. These points are true but they are, IMO, well-known. 
One shouldn’t compare rankings for different experiments or to assess magnitude differences; they are used because they are useful for benchmarking over many datasets and answering whether algorithm performances differ or not. > Convention #2: Aggregating results across datasets can have contradictory conclusions, it should be supplemented with analysis on individual datasets to confirm the trend. > Ironically, the problem illustrated in Figure 3a)-3c) is caused by aggregating the *actual quantities* rather than the rankings, which contradicts convention #1 recommendation. It also appears (judging by operon R2) that the authors are using the mean R2 over datasets, when they should use the median. R2 is unbounded on the negative end, so median scores would give a more robust estimate of performance across datasets. An even more robust way to estimate relative performances of algorithms across multiple datasets would be to use rankings, which are not dataset-dependent like the R2 scores and equation lengths are. This problem (mean aggregation of raw performance scores) also biases the results in Figure 1. Methods And Evaluation Criteria: > In our experiments, we use a primitive function set of {Add,Sub,Mul,Div,Pow} > The operator set the authors used doesn’t match the set used in developing/benchmarking SR algorithms on SRBench. La Cava et al 2021 / SRBench allowed the operator set {+, −, ∗, /, sin, cos, arcsin, arccos, exp, log, pow, max, min}. So, strictly speaking, the absolute pareto optimal (APO) fronts generated by the authors are not pareto-optimal with respect to the task definition in the SRBench results. I would expect the APO front to be heavily influenced by the chosen operator set, and the exhaustive space of equations to be much larger than what the authors have generated. So it is not clear to me that the APO results are directly comparable to SRBench without some large caveats. 
In Section 3, the authors argue against aggregating results, but then in all results figures, results with equation lengths greater than 20 (sometimes 100) are collapsed together. In some cases this presentation skews the Pareto front interpretation since most of the benchmarked methods produce results that are $\geq$ 20. Isn’t this expressly against the recommendation? Pretty much all of the Type 1 datasets are Friedman datasets, which are synthetically generated from ground-truth analytical equations. (ref: Jerome H Friedman. Greedy function approximation: A gradient boosting machine. Annals of statistics, pages 1189–1232, 2001.) If the goal is to produce an absolute Pareto-optimal front for those equations, presumably you would want to include the ground-truth equation. The authors don’t make it clear how they are handling train/test splits to compare to SRBench. If they are just training (model parameters) on the full datasets, the APO models will be biased relative to ones that could possibly be obtained if they followed the train/test procedure of the benchmarked methods. I.e., even if an “optimal” model in some sense exists, it is not a given that one could possibly find it via finite sample training. Theoretical Claims: n/a Experimental Designs Or Analyses: See methods & evaluation criteria. Supplementary Material: Yes, I read the PDF and skimmed the code. Relation To Broader Scientific Literature: The contribution of the paper builds on SRBench and subsequent papers that use this benchmarking resource. Its main contribution is to specify an upper front of performance for a subset of the datasets in that benchmark. To my knowledge, this hasn't been suggested previously. Essential References Not Discussed: Prior work is sufficiently discussed. Other Strengths And Weaknesses: Strengths: Having a set of Pareto-optimal models for SRBench is, in general, a good idea, as it sets a measurable goal for these datasets. 
Weaknesses: I think the authors’ work has some serious caveats that cause it to fall a bit short of delivering on that goal, including: 1) restriction of operator space in generating equations; 2) apparent optimization of model constants on test data, which likely overestimates the performance potential of the APO; 3) proposed analysis conventions which are misguided IMO; and 4) coverage of only a fraction of the datasets in SRBench. Other Comments Or Suggestions: I would suggest the authors totally remove the analysis conventions suggestions. I think they are misguided and lack nuance. I would also suggest the authors improve their discussion of limitations to incorporate the points above. Finally, I encourage the authors to discuss what types of measures an APO could provide within SRBench. Questions For Authors: - Is there a measure like distance to the optimal Pareto front that could viably be used? - What is the likelihood that some of the APO models actually appear in the Pareto fronts of population-based SR methods, and aren't selected because of finite sample limitations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > General Please refer to (G.1). > Claims (D.1) We thank the reviewer for the suggestions for “Convention #1”. We will improve the clarity of our discussion by explicitly stating that our comments refer only to Pareto fronts/optimality and not statistical tests in (Demšar, 2006). The reviewer noted that “One shouldn’t compare rankings for different experiments or to assess magnitude differences”, and this is precisely the issue with the SRBench paper and accompanying code (in SRBench github master/postprocessing/blackbox_results.ipynb and Figure 2 of their NeurIPS paper), which is the motivation for our convention. We felt it was urgent to highlight this point, given its widespread adoption in SR papers. We appreciate the reviewer’s agreement on this matter and will incorporate the quoted phrasing to enhance clarity. Furthermore, in other fields where Pareto fronts/optimality are used, e.g., economics, engineering, operations, multi-objective optimization, raw performance metrics serve as the axes. To our knowledge, SRBench’s analysis is one of the few, if not the only one, to deviate from this practice by using rankings. Another reason to use raw performance for Pareto fronts/optimality is to ensure the results are “transferable” and comparable across different studies, which other reviewers highlighted. We apologize for the previous wording that suggested a disagreement with the reviewer. We will add these clarifications to ensure that our position–fully aligned with the reviewer–is accurately conveyed. (D.2) For “Convention #2”, the aggregation affects both raw quantities and rankings, so conventions #1 and #2 need to be applied simultaneously. We will include median plots in Appendix L for completeness. Per the convention, we actually don’t recommend either the mean or median. Finally, Figure 1 is meant for illustration purposes; we will note under convention #2 that unaggregated results are required for a holistic assessment. 
> Methods And Evaluation (D.3) Actually, SRBench does not use a consistent operator set for SR algorithms according to their paper, code and results file (i.e., $\texttt{.feather}$ file). We found {Add,Sub,Mul,Div,Pow} to be commonly included in most SR algorithms. Thus, we used it so that distance away from the APO front can be attributed to search inefficiency rather than a lack of certain functions. We will make this clearer in the paper. However, we still create some new assets for the expanded function set (please see (C.3)). (D.4) We thank the reviewer and will add the full untruncated plots in Appendix L. The ordering of the SR algorithms remains the same under truncation, unlike taking the average of the ranks as the axis as SRBench does (which we recommend against), which can change the ordering depending on the algorithms selected. (D.5) We will include the data sampling process for the Friedman datasets in Appendix A. (D.6) In order to obtain the best upper bound on the performance with the best possible equation in the search space, we train on the full datasets. The alternative approach of using only the train set is problematic because we have to add regularization to the objective and it is not clear which to use to obtain the best upper bound. However, to provide more measurements to SR researchers we have included data on all combinations of optimization procedures (please refer to point (A.3) in the response to Reviewer NEMV). A meaningful future work enabled by the data is to explore which regularization techniques would create performance closest to this upper bound. We will include this discussion with plots in Appendix H. > Supplementary, Broader Literature, Essential References (D.7) We thank the reviewer for the validation. > Other Strengths And Weaknesses: (D.8) For the 4 numbered weaknesses please refer to points 1. (D.3), 2. (D.6), 3. (D.1) & (D.2), 4. (B.6), respectively. 
> Comments (D.9) For conventions, please refer to (D.1) & (D.2). (D.10) For our improvements in limitations, please see (A.1) & (C.3). (D.11) The APO plots allow users to come up with a large range of measures. A non-exhaustive list of suggested measures includes: i). R-squared closeness, by measuring the percentage of datasets where the R-squared value obtained by the SR algorithm is within 0.1 (i.e., 0.1 vertical distance on the Pareto plot) of the extended APO front (plateau after max length searched), ii). Euclidean distance of a single algorithm from the front, though the relative magnitudes of axes need to be decided, iii). compare the entire field of SR with respect to the front with existing measures like IGD and HV. For the algorithms AFP, AFP_FE, AIFeynman, BSR, DSR, EPLEX, FEAT, FFX, GP-GOMEA, ITEA, MRGP, Operon, SBP-GP, gplearn, the values for i). are 62%, 59%, 12%, 15%, 62%, 68%, 68%, 35%, 65%, 47%, 32%, 62%, 68%, 59%, respectively. We will include the full results in Appendix K. > Questions (D.12) For both questions, please refer to point (D.11). --- Rebuttal Comment 1.1: Comment: I don't think the authors understood my critique of their proposed conventions: > The reviewer noted that “One shouldn’t compare rankings for different experiments or to assess magnitude differences”, and this is precisely the issue with the SRBench paper and accompanying code which is the motivation for our convention. By "experiments" I mean one shouldn't compare rankings between experiments _with different sets of algorithms_. One *should* use rankings or other problem-independent measures to compare the *same set of algorithms* across *multiple datasets*. This isn't just for statistical tests. > For “Convention #2”, the aggregation affects both raw quantities and ranking, Mean aggregation of R2 is extremely sensitive to outliers. Rankings don't have outliers, so it isn't true that they are affected the same way. 
> the ordering of the SR algorithms still remains the same under truncation, ... This isn't true, truncation introduces ties between algorithms that aren't there. > unlike taking average of the ranks as the axis for SRBench (which we recommend against), which can change the ordering depending on the algorithms selected. This also isn't true. If algorithm A is better than B, that relative ordering is maintained whether you compare A,B,C,D or A,B,C. > (D.3) Actually, SRBench does not use a consistent operator set for SR algorithms according to their paper, code and results file The SRBench experiment defined a set of operators that could be used. Each algorithm, in turn, had different available operator implementations from that set. The authors' results here don't cover the possible set of operators, so their pareto optimality is not "absolute" w.r.t. the original design. I would like reiterate my original point that rankings and R2 achieve different goals in benchmarking contexts and require a nuanced discussion. --- Reply to Comment 1.1.1: Comment: (D.13) We thank the reviewer for the feedback. We first present an example with concrete values to show how aggregate rankings work out. Consider 4 SR algorithms: A, B, C & D and 3 datasets, Ds1, Ds2, Ds3. On Ds1, the raw performance are: $R^2$ (higher is better) – C: 0.9, B: 0.8, A: 0.7, D: 0.6 Model size (lower is better) – B: 3, A: 5, C: 7, D: 9 On Ds2: $R^2$ – C: 0.9, D: 0.8, A: 0.7, B: 0.6 Model size – A: 3, B: 5, C: 7, D: 9 On Ds3: $R^2$ – B: 0.9, A: 0.8, C: 0.7, D: 0.6 Model size – D: 3, A: 5, B: 7, C: 9 Now, using the procedure in SRBench to determine which algorithms are relatively Pareto optimal (using SRBench github $\texttt{master/postprocessing/blackbox\\_results.ipynb}$ and consistent with Fig. 2 of their NeurIPS paper): first take the rank per dataset, then take the median of ranks. 
**Case 1: Using only Algo A, B, C** On Ds1: A: (Rank 3 in $R^2$, Rank 2 in model size), B: (2, 1), C: (1, 3) On Ds2: A: (2, 1), B: (3, 2), C: (1, 3) On Ds3: A: (2, 1), B: (1, 2), C: (3, 3) Median rank across Ds1, Ds2, Ds3: A: (2, 1), B: (2, 2), C: (1, 3) Thus, one would conclude Algo A & C are relatively Pareto optimal. **Case 2: Using Algo A, B, C & D** On Ds1: A: (3, 2), B: (2, 1), C: (1, 3), D: (4, 4) On Ds2: A: (3, 1), B: (4, 2), C: (1, 3), D: (2, 4) On Ds3: A: (2, 2), B: (1, 3), C: (3, 4), D: (4, 1) Median rank across Ds1, Ds2, Ds3: A: (3, 2), B: (2, 2), C: (1, 3), D: (4, 4) Thus, one would conclude Algo B & C are relatively Pareto optimal. **Note that although only Algo D was added, Algo B is now suddenly optimal and Algo A is suddenly not optimal.** We can call this the **“Rank Inversion Paradox”**, inspired by [1,2]. (D.14) We thank the reviewer for the clarifications on “experiments”. By nature, the “set of algorithms” needs to be changed over time as new algorithms are developed and is not static. For instance, say in Year 2024, only 3 algo are available, Algo A, B, C (see (D.13)), the conclusion from using aggregate rankings is that Algo A & C are the best (i.e., relatively Pareto optimal). Then in Year 2025, Algo D (see (D.13)) is developed, which would necessitate its inclusion in the analysis (and hence a change in the set of algorithms analyzed), would yield the new conclusion from using aggregate rankings that Algo B & C are the best. (D.15) We agree that both raw quantities and ranking are not “affected the same way”, but rankings are still affected as seen in Example (D.13). (D.16) To clarify further, we will add the untruncated plots in Appendix L. (D.17) The observation that “If algorithm A is better than B, that relative ordering is maintained whether you compare A,B,C,D or A,B,C” is not true for aggregate ranks. We provide a counter-example in (D.13) above. Also, see [1,2]. 
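The (D.13) construction is mechanical enough to script; a minimal sketch reproducing the median-rank Pareto sets for both cases (helper names are ours; the rank-then-median procedure follows the SRBench post-processing described above, with rank 1 being best on each axis):

```python
from statistics import median

# Raw scores from (D.13): higher R^2 is better, smaller model size is better.
R2 = {"Ds1": {"A": 0.7, "B": 0.8, "C": 0.9, "D": 0.6},
      "Ds2": {"A": 0.7, "B": 0.6, "C": 0.9, "D": 0.8},
      "Ds3": {"A": 0.8, "B": 0.9, "C": 0.7, "D": 0.6}}
SIZE = {"Ds1": {"A": 5, "B": 3, "C": 7, "D": 9},
        "Ds2": {"A": 3, "B": 5, "C": 7, "D": 9},
        "Ds3": {"A": 5, "B": 7, "C": 9, "D": 3}}

def ranks(scores, algs, higher_better):
    order = sorted(algs, key=lambda a: scores[a], reverse=higher_better)
    return {a: i + 1 for i, a in enumerate(order)}

def median_ranks(algs):
    # Rank per dataset among the chosen algorithms, then take the median.
    return {a: (median(ranks(R2[d], algs, True)[a] for d in R2),
                median(ranks(SIZE[d], algs, False)[a] for d in SIZE))
            for a in algs}

def pareto(points):
    # a is dominated if some b is no worse on both median ranks and differs.
    return {a for a, pa in points.items()
            if not any(pb[0] <= pa[0] and pb[1] <= pa[1] and pb != pa
                       for b, pb in points.items() if b != a)}

case1 = pareto(median_ranks(["A", "B", "C"]))        # {'A', 'C'}
case2 = pareto(median_ranks(["A", "B", "C", "D"]))   # {'B', 'C'}
```

Adding only Algo D flips A out of, and B into, the "optimal" set, exactly as in the worked example.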
(D.18) In agreement with the reviewer, our understanding is also that each SR algorithm in SRBench has a different operator set. As addressed in (D.3), we will now include results on the larger set and provide rationale for why a smaller commonly used subset can be more useful. **(D.19) Appeal** We understand the reviewer disagrees with our convention which criticizes using aggregate rankings in Pareto analysis. We are not denying potential pros, but feel SR researchers should be made aware of the cons. An easy resolution to gain the reviewer’s complete support is to “totally remove the analysis conventions suggestions” as suggested, since the top contribution of this paper is the APO fronts anyway and not the conventions. However, we are very passionate about improving the field of SR and hope for one chance to win the reviewer over to our side before considering removal. i). We sincerely believe the conventions are necessary because the top-most priority is for easily transferable results across SR. If a new SR algorithm is developed, we want to be able to take the existing Pareto plot, simply include a new coordinate, and get consistent conclusions. The current approach of using aggregate rankings in Pareto plots means that existing plots cannot be reused and there is a potential for contradictory conclusions, as shown by the **“Rank Inversion Paradox”** in (D.13). ii). Most fields, if not all, do Pareto analysis in raw performance, not rankings. These fields include economics, engineering, operations, multi-objective optimization and “chip design and biomedical modeling” from Reviewer YJ4h. iii). The other reviewers support the conventions and even highlight similar issues in other fields to support our proposed conventions. This makes it awkward for us to remove the conventions without their opinion. 
**In the revision, we will include a nuanced discussion on pros and cons of using both raw quantities and ranking.** We hope if the remaining concerns have been addressed, the reviewer could consider increasing the score. [1] Chèze, G., et al. The Inversion Paradox and Ranking Methods in Tournaments. [2] “Rank reversals in decision-making”, Wikipedia
Summary: One common way to evaluate symbolic regression (SR) algorithms is to judge whether one method Pareto-dominates other SR algorithms. That means it has better performance for a given expression length. This paper proposes to evaluate SR methods against absolute Pareto-optimal solutions instead. It finds an absolute Pareto-optimal front of expressions for 34 real-world datasets from SRBench, a widely used SR benchmark. This is achieved by exhaustive search over all possible expressions of a given length and over eight different numerical optimization methods. The main contribution is a new baseline for benchmarking SR that informs SR researchers about the achievable limits of SR algorithms. Additionally, the paper proposes conventions for analyzing SR benchmark results and discusses several findings from the experiments. ## update after rebuttal The current paper has certain limitations: - not enough operators/functions included in the search space - relatively small maximum length of the equations - not all datasets in SRBench considered However, I also do understand that some choices needed to be made to limit the already large computational resources needed. Even in light of these limitations, I support the acceptance of this paper. It is the first paper to provide some APO fronts, and I believe these can already be useful to researchers. From what I understand, the authors will release all evaluated equations, their fitness and complexity measured with respect to different metrics. These would be very interesting to analyze, especially for the datasets where there is a clear gap between the current methods and the found APO front. Very often, equations are only useful if they are "simple" enough to interpret. This paper shows that in some cases it is possible to find shorter equations with much better performance than the current methods (Figures 4c, 4d). Claims And Evidence: Claims in the papers are supported by clear and convincing evidence. 
The observations made about the current practices in SR literature are justified and their limitations discussed. Methods And Evaluation Criteria: The paper proposes a new baseline for a well-known and appropriate benchmarking dataset - SRBench. In particular, it focuses on the real-world datasets in SRBench which do not have a known ground truth. Thus establishing Pareto-optimal equations for these datasets is important. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs and analyses seem valid. The paper describes the search space of equations and exhaustively searches through it. It also checks eight different numerical optimization algorithms and runs them for different seeds to ensure that they do not tend to be stuck in some suboptimal local minima. The random seeds are described and chosen to match the ones used in SRBench. Supplementary Material: I reviewed the supplementary material in the main PDF. It contains details of the datasets and the APO front expressions, the paper's main contribution. Relation To Broader Scientific Literature: The paper addresses an important problem in SR benchmarking. Although the number of terms is currently the main measure used to judge the interpretability of an expression, it would be beneficial to briefly discuss other approaches to measuring equation's complexity (Vladislavleva et al., 2009; Vanneschi et al., 2010; Kommenda et al., 2015; Virgolin et al., 2020; Virgolin at al., 2021). Kommenda, M., Beham, A., Affenzeller, M., and Kronberger, G. (2015). Complexity Measures for Multiobjective Symbolic Regression. Vanneschi, L., Castelli, M., and Silva, S. (2010). Measuring bloat, overfitting and functional complexity in genetic programming. Virgolin, M., De Lorenzo, A., Medvet, E., and Randone, F. (2020). Learning a Formula of Interpretability to Learn Interpretable Formulas. Virgolin, M., De Lorenzo, A., Randone, F., Medvet, E., and Wahde, M. (2021). 
Model learning with personalized interpretability estimation (ML-PIE). Vladislavleva, E. J., Smits, G. F., and den Hertog, D. (2009). Order of Nonlinearity as a Complexity Measure for Models Generated by Symbolic Regression via Pareto Genetic Programming. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: I think the paper contributes a very valuable asset to the community. Not only does it allow us to better judge the performance of SR algorithms in absolute terms, but it also gives us insights into the complexity of different datasets. I also believe that the dataset of all evaluated equations may provide additional insights, similar to the ones already present in the paper. Table 1 (and Table 3) shows the APO front expressions for multiple datasets from SRBench. These are very interesting, especially in conjunction with Figure 4, which shows that for certain datasets, none of the tested methods is on the absolute Pareto front. This demonstrates a large gap in the capabilities of the current algorithms and hopefully will stimulate further research. Having "ground truth" equations for real-world datasets will allow for a better evaluation of how far we are from closing this gap. The main weakness of the paper lies in a still relatively limited search space. I am fully aware that the current search space already requires extensive computational resources. However, it would be beneficial to understand what the Pareto front looks like for bigger equation lengths and for a richer function set (e.g., containing trigonometric functions or exponentials). It is possible that some of the methods are, in fact, on the absolute Pareto front for some datasets - we just do not know what this front looks like for larger equations. For instance, GP-GOMEA could, in theory, be optimal for datasets in Figures 4a and 4b. 
This is a significant limitation for datasets where the performance of the current algorithms is much higher than the maximum performance on the found absolute Pareto front (e.g., Figure 4a). I believe the limited search space (and thus unknown parts of the absolute Pareto front) should be emphasized in the limitations section. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > General (G.1) We thank all 4 reviewers for their comprehensive reviews, identifying strengths of the work (e.g., “the authors have given concrete targets for future SR algorithm development”, “addresses a critical gap in SR benchmarking, where prior evaluations lacked universal reference point”) while providing actionable recommendations. Our personal motivation for this work began after receiving questions from SR researchers such as “is it still worth it to develop better SR algorithms?” and “is there still meaningful room for improvements in SR algorithms or has performance saturated?” in the context of new SR algorithms’ performance on SRBench being somewhat incremental (on average ~0.0077 $R^2$ improvements). Previously, one could only answer those questions subjectively based on trends, in the same spirit that Moore's Law speculates about transistor density. With this work, we now confidently know that there are many cases in SRBench where there is still a large potential improvement. This is in spite of some computational compromises we had to make given our already-large resource budget. Thus, this work is a data-driven justification that continuing research into developing new SR algorithms is still worth it. Though the main contribution is the Pareto front equations (i.e., given explicitly as equations in Appendix B and available in supplementary materials), the results also provide insights on how SR algorithms can achieve these performances. Particularly, it shows that the search space of short, simple equations is sufficiently expressive and should be explored more in SR algorithms before expanding the search space to longer equations – a mechanism that is also related to increasing explainability and interpretability, which are the primary reasons for practitioners to pick SR over alternative machine learning algorithms in the first place. Among the various strengths, if you i). 
find that the Pareto fronts contributed in this paper are highly relevant and applicable to many, if not all, SR algorithms’ evaluation and ii). think that our work introduces and makes it convenient to add an informative and important baseline that you would like new SR algorithms and research papers to adopt, we hope that you could consider helping us advocate for acceptance of this work. This work makes publicly accessible, to communities of various funding levels, an important computationally-expensive baseline. We address the main concerns in the individual replies and hope that in light of these, the reviewers would consider increasing their recommendation. > Claims And Evidence, Methods And Evaluation, Experimental Designs, Supplementary (C.1) We thank the reviewer for the validation. > Broader Literature (C.2) We thank the reviewers for the suggestion to include more complexity measures to increase utility. Our provided data allows the computation of these with negligible costs as they are derived quantities from the obtained equation. In the csv files, we have extracted, for each row in the $\texttt{EquationStructure}$ column i). the count of operators, ii). the count of numerical constants, iii). Kommenda’s complexity, iv). Virgolin’s trained linear elastic net interpretability estimator (we use their trained rescaled coefficients), v). Vladislavleva’s order of non-linearity (using $\epsilon$=1e-6 as done in their work). We have also added Peterson’s complexity that is used in DSR. The files will be updated and the code for this processing will be made publicly available in the next upload opportunity. We are also committed to adding new metrics, and will update the files when new works such as [1] are publicly available. Below is an example (numbers truncated to 4 s.f.) 
of the improvement (e.g., $\texttt{1027\\_ESL\\_BFGS\\_3\\_860\\_summary.csv}$):

|EquationLength|EquationStructure|...|OperatorCount|ConstantsCount|Kommenda|Virgolin|Vladislavleva|Peterson|...|
|-|-|-|-|-|-|-|-|-|-|
|...|
|7|$\texttt{Sub(Mul(xdata[3],x[0]),Mul(xdata[1],x[1]))}$|...|3|2|6|0.7620|2|7|...|

[1] Kacprzyk, K., & van der Schaar, M. (2025). Beyond Size-Based Metrics: Measuring Task-Specific Complexity in Symbolic Regression. In AISTATS.

> Other Strengths And Weaknesses (C.3) We thank the reviewers for the suggestions to expand the function set. To perform a similar analysis, our sampling of run-times estimates a 12.7X increase in compute resources required. However, we recognize the interest in having some indication of an expanded function set and have done so on a single random seed (860), using only BFGS optimization; this will be included in the supplementary materials available to readers and discussed in Appendix G. For the expansion of length, however, even a small increase in length led to an estimated 84.0X increase in cost, so we are unable to provide that even with a single random seed and only BFGS. We will make these limitations clearer in the limitations section.
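As an illustration of how the derived-count columns described in (C.2) could be computed from the $\texttt{EquationStructure}$ strings, here is a minimal Python sketch (the function name and regexes are hypothetical, not the authors' released processing code):

```python
import re

BINARY_OPS = {"Add", "Sub", "Mul", "Div", "Pow"}  # the 5 binary operations used

def derived_counts(expr: str) -> dict:
    """Count operators and numerical-constant placeholders in an
    EquationStructure string such as Sub(Mul(xdata[3],x[0]),Mul(xdata[1],x[1])).
    Numerical constants appear as x[i]; dataset features as xdata[i]."""
    operators = re.findall(r"[A-Za-z]+(?=\()", expr)    # names followed by "("
    constants = re.findall(r"(?<![a-z])x\[\d+\]", expr)  # x[i], but not xdata[i]
    return {
        "OperatorCount": sum(op in BINARY_OPS for op in operators),
        "ConstantsCount": len(constants),
    }

print(derived_counts("Sub(Mul(xdata[3],x[0]),Mul(xdata[1],x[1]))"))
# → {'OperatorCount': 3, 'ConstantsCount': 2}
```

The counts for the example structure match the OperatorCount and ConstantsCount entries in the csv excerpt above; the remaining metrics (Kommenda, Virgolin, Vladislavleva, Peterson) would similarly be derived quantities of the parsed expression.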
Summary: This paper proposes an absolute evaluation criterion for symbolic regression, namely the Absolute Pareto Optimality (APO). At the same time, the effects of eight different optimization algorithms are analyzed. The establishment of this criterion is of great significance, as it provides a fair and reliable benchmark for researchers to evaluate the performance of symbolic regression algorithms, and addresses the problem that previous comparisons of algorithms could only use relative indicators such as the recovery rate and R-squared value. The exhaustive search that consumed 1,480,000 supercomputer core-compute-hours demonstrates the authors' determination. In this way, future comparisons of various methods will be fairer. It is expected to address the problem of one paper claiming its method is excellent while another paper reports different results. ## update after rebuttal I agree with the other reviewers' assessment of the attractiveness of the proposed method, and I also consider that there are some problems in this paper. The responses from the authors address most of my concerns, even though they are not entirely satisfactory. Since I acknowledge the value of the idea proposed by this paper, I am inclined to accept it and will keep my rating. Claims And Evidence: The authors claim three main contributions in this paper, and there is evidence to support each claim: 1. They establish absolute Pareto-optimal fronts for 34 real-world datasets by exhaustively searching expressions up to fixed sizes using gene expression programming and K-expressions. This is rigorously supported by their computational effort and empirical comparisons showing gaps in current SR algorithms. The publicly released APO expressions and performance metrics provide a concrete, reproducible baseline. 2. The authors propose standardized conventions for SR benchmarking, advocating the use of actual metric values instead of rankings. 
This is validated through examples shown in Figure 2 and Figure 3, demonstrating how ranked axes distort conclusions. The proposal's logic aligns with the need for transferable and interpretable benchmarks. 3. The authors conduct an empirical comparison of numerical optimization methods for SR, showing minimal impact on APO front quality. Quantitative results and low $R^2$ variance across methods robustly support this claim. Methods And Evaluation Criteria: The method proposed in this paper is important to the field of symbolic regression and its benchmarking practices. 1. By constructing absolute Pareto-optimal fronts through exhaustive search, the authors provide an objective, domain-agnostic baseline for SR algorithms. Unlike relative Pareto fronts, APO fronts define the theoretical performance ceiling for any SR method, enabling researchers to quantify how close their algorithms are to the "best possible" expressions for real-world datasets. This addresses a critical gap in SR benchmarking, where prior evaluations lacked universal reference points. 2. The APO fronts reveal that state-of-the-art SR algorithms systematically underperform on short expressions. This insight directs the SR community to prioritize improving compactness-aware search strategies, balancing accuracy and interpretability. 3. The proposal to use actual metrics ($R^2$, expression length) instead of rankings mitigates biases introduced by algorithm selection and enhances reproducibility. For example, Figure 2 demonstrates how ranked axes can misleadingly compress performance differences, while actual values expose true gaps. This standardization fosters fairer comparisons and accelerates progress by aligning the community on shared evaluation criteria. Theoretical Claims: I do not find theoretical claims in this paper. Experimental Designs Or Analyses: The experimental design and result analysis in this paper are methodologically sound and rigorously structured. 
The authors construct absolute Pareto-optimal fronts through exhaustive search using gene expression programming and K-expressions across 34 real-world datasets. This approach aligns with multi-objective optimization principles and leverages prior validated techniques. Testing eight numerical optimization methods further ensures robustness, mirroring engineering practices where Pareto solutions are evaluated across diverse algorithms. The experimental results are also analyzed rigorously. (1) Datasets are classified into four types, with some exposing insufficient exploration of compact expressions, which is a finding consistent with Pareto front "inaccessibility" observed in optimization studies. (2) Stability tests reveal minimal impact of numerical methods on APO front quality, demonstrating experimental control over confounding variables. Supplementary Material: I have reviewed the supplementary materials, which give more details on their experiments. Relation To Broader Scientific Literature: The paper’s contributions are related to broader scientific literature: 1. The absolute Pareto-optimal front concept extends classical Pareto optimality principles, widely applied in fields like chip design and biomedical modeling. By exhaustively generating APO fronts for symbolic regression, the work mirrors deterministic Pareto methods while adapting them to data-driven modeling challenges, thereby bridging optimization theory and interpretable machine learning. 2. The empirical comparison of numerical optimization methods aligns with studies evaluating multi-objective algorithms, such as orthogonal evolutionary strategies or learning automata. Findings that numerical methods minimally impact APO quality resonate with literature emphasizing Pareto front stability under parameter variations, reinforcing the validity of SR algorithm evaluations. Essential References Not Discussed: I do not find any essential references missing from the discussion. 
Other Strengths And Weaknesses: Strengths: 1. This paper builds on Pareto-optimality principles, proposing a framework for generating absolute Pareto fronts through exhaustive search. This aligns with multi-objective optimization methodologies. The use of gene expression programming and K-expressions ensures valid mathematical structures, reflecting prior work in symbolic regression and genetic algorithms. 2. The public release of all expressions, optimization parameters, and performance data reduces redundant computational efforts. The proposal to use actual metrics ($R^2$, expression length) over rankings addresses biases seen in multiagent learning evaluations. Weakness: While multi-seed experiments mitigate local optima risks, the “absoluteness” of APO fronts remains contingent on numerical optimization convergence. This is a limitation also noted in game-theoretic learning, where Pareto outcomes depend on assumptions like non-deceptive opponent behavior. Other Comments Or Suggestions: I have a minor suggestion to refine the expression. In the abstract, there is a sentence "serves as an important benchmark that serves as a performance limit" --> “serves as an important benchmark and performance limit". Questions For Authors: 1. How do you address potential limitations in achieving global optimality due to local minima in numerical optimization, despite using multiple random seeds? Are there theoretical or empirical guarantees that the APO front represents the global Pareto-optimal set for the expression spaces explored? 2. The APO fronts are derived from 34 SRBench datasets. How do you ensure these results generalize to domains with higher dimensionality, dynamic environments, or datasets outside SRBench? 3. How should SR researchers leverage the APO front to improve existing algorithms? For instance, would integrating APO-based heuristics enhance search efficiency? 4. Please explain the following terms: max_arity, terminal_symbol_count, head_length. 
5. Why is the length of the tail determined by $h\times (n_{max}-1)+1$? 6. In Figure 5, why do different optimization methods need to be used to obtain optimal results for different datasets? 7. In Finding #3, why do you say you can describe the loss landscape by considering whether expressions that are two mutations away from the APO front are likely to mutate back into the APO front? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > General Please refer to point (G.1) in the response to Reviewer Gtam. > Claims, Methods and Evaluation, Experimental Designs, Supplementary, Broader Literature, References (B.1) We thank the reviewer for the validation. > Other Strengths And Weaknesses (B.2) We thank the reviewer for the reference to problems in using ranking in other fields such as multiagent learning evaluations. We will add this to the revision to link it to broader literature. (B.3) For “absoluteness”, please refer to point (A.1) in the response to Reviewer NEMV. > Other Comments (B.4) We thank the reviewer and will refine the sentence. > Questions For Authors (B.5) We used multiple random seeds and also multiple numerical optimization methods. We can guarantee all possible structures (within a certain complexity) are searched, but to the best of our knowledge, there are no theoretical or empirical guarantees in the literature for the numerical optimization methods across all the different structures, so these are the best results we can produce with current technology and knowledge. We will be clearer on this in the revision as outlined in (A.1). (B.6) We thank the reviewer for the question. Because the main reason to select SR in the first place is its explainable and interpretable models via short, concise equations, feature selection can be performed first when handling datasets with higher dimensions. Feature selection is also already a common preprocessing step in SR algorithms, since SR algorithms do not scale well with dimensions. However, the best feature selection method to use for SR is highly debatable, hence we focused on low-dimensional datasets. Since SRBench is already a compilation of datasets from multiple environments, can be said to be the de facto benchmark for SR, and has extensively-tuned SR algorithm performance, we did not use other datasets. (B.7) For improving existing algorithms, please see point (G.1) para 3. 
Additionally, using loss landscape analysis (“Finding #3”) but with k-mutations instead of 2-mutations, and finding the best k, can be incorporated into SR algorithms, where greedy k-mutations are performed for every candidate (i.e., for every candidate structure, greedily search the best k-mutations). Note that the second suggestion is ongoing future work which needs more thorough analysis, but is only enabled by having the APO fronts. (B.8) max_arity is the number of arguments of the function with the most arguments, terminal_symbol_count is the length of the tail given by a formula, and head_length is a hyperparameter that is the number of operators and operands in the head of a K-expression. We will include a longer explanation with simple examples of K-expressions, as well as proofs of the properties of K-expressions by Ferreira (2002), in Appendix D. (B.9) The length of the tail is chosen to ensure that there are no non-terminal symbols with empty arguments. It is computed by assuming the worst-case scenario in which each symbol in the head is an operator, so each requires at most $n_{max}$ arguments. Since each symbol (with the exception of the first) also fills an empty argument itself, we have $h \times (n_{max}-1)+1$. We will include this in Appendix D. (B.10) We thank the reviewer for the question about Figure 5 in “Finding #2”. We did this because we wanted to show a variety of rarities of equations in the top bin, so for each unique combination of i). random seed, ii). optimization method and iii). dataset, we made a histogram. The 3 in Figure 5 “were selected as their top-bin in the histogram had the minimum, median and maximum value among all other histograms”. As we learn later in “Finding #4”, no optimization method provides a clear prediction performance advantage. To communicate our findings more effectively in the revision, we will fix the histograms in Figure 5 to BFGS only, so as not to conflate the message of “Finding #2” with “Finding #5”. 
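To make the tail-length formula from (B.9) concrete, here is a small illustrative Python sketch (the helper name is hypothetical; with only binary operators, $n_{max}=2$):

```python
def tail_length(head_length: int, max_arity: int) -> int:
    """Worst case: every head symbol is an operator needing max_arity
    arguments; each symbol except the first also fills one argument
    slot itself, leaving h*(n_max - 1) + 1 terminals to supply."""
    return head_length * (max_arity - 1) + 1

# With only binary operators (max_arity = 2), the two head lengths
# give K-expressions of total length h + t:
for h in (3, 4):
    t = tail_length(h, 2)
    print(f"h={h}: tail={t}, gene length={h + t}")
# → h=3: tail=4, gene length=7
# → h=4: tail=5, gene length=9
```

If the gene length h + t is read as the (maximum) equation length, these values are consistent with the lengths 7 and 9 that appear elsewhere in the rebuttals, though that correspondence is our reading rather than something the authors state here.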
(B.11) For “Finding #3”, we are interested in studying properties of the loss landscape that tell us the tendency of getting stuck at local optima and the difficulties in assessing the global optima. For typical machine learning, for loss landscapes similar to the Rastrigin function, in regions close to the global optimum, the function has many small depressions or "basins of attraction" (valleys) that can trap an optimization algorithm in local minima. Although the global minimum exists, the surrounding parameter space is filled with numerous local minima that resemble shallow valleys. For SR, the loss landscape of greater interest is on the function structure rather than on the numerical parameters and “considering if expressions that are two mutations away from the APO front are likely to mutate back into the APO front”, is our way of assessing if the equations on the APO front are surrounded by many "basins of attraction" that makes it tougher for SR algorithms to discover them. We will include this discussion in Appendix I. --- Rebuttal Comment 1.1: Comment: I am basically satisfied with the authors' responses and will keep my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the time and effort invested in this process. We hope to have the opportunity to present this work at ICML, where we believe it can both advance SR benchmarking and spark critical discussion to move benchmarking in SR toward a higher level of maturity comparable to that of the top ML subfields.
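One toy way to operationalize the neighborhood probe described in (B.11) above — a simplified stand-in for illustration, not the authors' exact metric — is to treat K-expressions as fixed-length symbol tuples and measure how often a single point mutation of a distance-2 expression steps back into the distance-1 shell around the front:

```python
from itertools import product

def point_mutations(kexpr, alphabet):
    """All same-length tuples one point-mutation away from kexpr."""
    return {
        kexpr[:i] + (s,) + kexpr[i + 1:]
        for i, s in product(range(len(kexpr)), alphabet)
        if s != kexpr[i]
    }

def return_likelihood(apo_set, alphabet):
    """Average fraction of single mutations of a distance-2 expression
    that land back in the distance-1 shell around the APO set."""
    one = set().union(*(point_mutations(e, alphabet) for e in apo_set)) - apo_set
    two = set().union(*(point_mutations(e, alphabet) for e in one)) - one - apo_set
    fracs = [
        len(point_mutations(e, alphabet) & one) / len(point_mutations(e, alphabet))
        for e in two
    ]
    return sum(fracs) / len(fracs)

# Toy example: a single "APO" expression over a 3-symbol alphabet.
apo = {("Mul", "x0", "x1")}
print(round(return_likelihood(apo, ["Mul", "x0", "x1"]), 3))  # → 0.333
```

Low values of such a statistic would indicate that the front is hard to stumble back into — the "basins of attraction" picture described in (B.11) — whereas high values suggest the front is easily rediscovered by local search.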
Summary: This paper proposes the absolute Pareto optimal (APO) front as a new benchmarking methodology for evaluating symbolic regression (SR) algorithms. Conventional SR evaluation is based on relative Pareto dominance with respect to other algorithms, but this does not provide any measure of efficiency or achievable limits. The authors established the theoretical limits of the trade-off between expression length and prediction performance (R-squared) through exhaustive search on 34 real-world datasets in SRBench. Specifically, they generated all possible expressions (within a specific length limit) using the K-expression format, and applied eight different numerical optimization methods to find the optimal parameters. As a result, the difference in performance between the current SR algorithm and the APO front was clarified, and it was shown that many algorithms were unable to find potential optimal solutions, especially in the search for short expressions. The paper also proposes new conventions for SR benchmark analysis (using actual values, and performing individual analysis rather than just aggregating results across datasets). The results of this research, which required a huge amount of computing resources (approximately 1.48 million core compute hours), are published as a valuable baseline for future SR algorithm development. ## Update After Rebuttal After careful consideration of the authors' rebuttal, I am changing my evaluation from "weak reject (2/5)" to "weak accept (3/5)". The authors have sufficiently addressed my main concerns and promised the following improvements: 1. Clear explanation of the limitations of "absolute Pareto optimality" and improved notation (A.1) 1. Addition of complexity metrics beyond expression length (Kommenda, Virgolin, Vladislavleva, etc.) (A.3, C.2) 1. Provision of performance evaluations on both training and test data (A.3) 1. Self-contained explanation including proofs of K-expression properties (A.4) 1. 
Validation of the practical value of the APO front in modeling physical phenomena (A.11) 1. Provision of additional analyses such as function frequency analysis (A.14) What particularly influenced my evaluation is that, as other Reviewers (especially Gtam) have emphasized, this research represents an important contribution to the evaluation of SR algorithms and provides a valuable resource to the research community despite its high computational cost. The improvements indicated by the authors address the limitations of the paper and enhance the practical utility of the results. I also understand the concerns raised by other reviewers (particularly Reviewer GBr3) regarding the analysis conventions, but the authors' rebuttal (especially D.13-D.19) provides compelling examples of why traditional ranking-based analysis is problematic. On this point, while recognizing that there are various perspectives, I believe there is value in the practices proposed by the authors. Overall, I am confident that this paper makes a valuable contribution to SR research, and with the implementation of the improvements promised by the authors, its value will be further enhanced. Therefore, I support the acceptance of this paper. Claims And Evidence: The arguments in this paper are well supported by the evidence presented. First, the argument regarding the construction of the APO front is supported by data obtained as a result of exhaustive search using a vast amount of computing resources. The authors explain the search method in detail and present it as a reproducible algorithm. The authors' claims regarding the comparison of the current SR algorithm and APO fronts are supported by detailed analysis of 34 datasets, which are classified into four types to clearly explain the trends. In particular, Figure 4 visually shows representative examples of each type to support the claims. 
The claims regarding the comparison of numerical optimization methods are also supported by specific data, such as the measurement of differences in distribution using KL divergence and statistics on the ratio of equations generated on the APO front. However, there are some limitations to the claim of “absolute” Pareto optimality. As the authors themselves acknowledge, there is no guarantee that the numerical optimization methods used can find the true global optimum solution, and in this respect, it cannot be said to be truly “absolute”. However, efforts have been made to mitigate this problem by using multiple random number seeds and eight different optimization methods. Methods And Evaluation Criteria: The proposed method and evaluation criteria are appropriate for the SR benchmarking problem. Exhaustive search using fixed-length expressions with K-expression provides a reasonable trade-off between limiting the search space and guaranteeing the generation of valid formulas. The Pareto front of expression length and R-squared values as evaluation criteria directly correspond to the essential goal of SR (balancing predictive performance and interpretability). In addition, the evaluation criteria proposed by the authors (using actual values rather than rankings, and performing individual analysis as well as dataset aggregation) appropriately point out the problems with current SR benchmarking. However, the fact that they do not take into account model complexity indicators other than formula length (e.g. the type of mathematical operations or the number of numerical constants) and that they only consider the R-squared value of the training data and do not evaluate generalization performance are methodological limitations. However, the authors are aware of these limitations, and given the constraints on computing resources, this is a reasonable choice. 
Theoretical Claims: This paper is mainly an empirical study, and there are few theoretical assertions that require formal mathematical proof. The assertions regarding the properties of K-expressions (such as the fact that all K-expressions can be decoded into valid mathematical expressions) are based on the cited literature (Ferreira, 2002) and are not directly proven within the paper. The claim about the complexity of the exhaustive search space (O(d^l), where d is the number of variables in the dataset and l is the length of the expression) is based on basic combinatorics and is considered to be correct. The core claim that the APO front represents the performance limit of any SR algorithm is logically derived from the nature of exhaustive search, but is subject to the aforementioned constraint of the limits of numerical optimization. Experimental Designs Or Analyses: The experimental design is sound overall, and the results are valid. The authors describe the experimental settings in detail, including the selection of random number seeds, the set of primitive functions used, and the numerical optimization method, in order to ensure reproducibility. The method for extracting APO fronts is clearly defined, and the procedure of selecting the formula with the highest R-squared score for each formula length for each dataset is logical. The condition that longer formulas must perform better than shorter formulas is also reasonable. Of particular note is that the authors compared eight different numerical optimization methods and showed that there was little difference in their performance. This provides valuable insight into the selection of numerical optimization methods in SR research. 
The limitations of the experiment include the restriction on the length of the formula (head length = 3 and 4 only), the restriction on the function set used (only 5 binary operations), and the selection of a specific data set (less than 1000 data points, less than 10 features), but these are reasonable choices given the constraints on computing resources. Supplementary Material: The appendices provide important supplementary material that supports the detailed experimental results and systematic analysis of this research. Appendix A presents the characteristics and sample sizes of the datasets used in the “Details of the 34 SRBench datasets” in tabular form. Appendix B lists the optimal formulas found for each of the 34 datasets in “APO front expression formulas for all datasets”. Finally, Appendix C lists the datasets classified into four types (Types I to IV) based on the performance of the SR algorithm. The datasets are also shared as files, and the fact that the authors are trying to increase transparency regarding the efficiency and limitations of the symbolic regression algorithm is worthy of praise. Relation To Broader Scientific Literature: In terms of the evaluation method for symbolic regression, this study extends SRBench (La Cava et al., 2021). While SRBench was based on relative Pareto dominance, this study introduces the concept of absolute Pareto optimality. The K-expression-based approach is based on Ferreira (2002)'s genome-phenome system as an approach that combines the advantages of genetic algorithms and genetic programming. It has also been shown to be related to other K-expression-based algorithms, such as DistilSR (Fong & Motani, 2023). The comparison of numerical optimization methods contributes to the research on the optimization of numerical constants in SR (Kommenda et al., 2020; Chen et al., 2015). 
In particular, the fact that it compares methods other than the BFGS algorithm (frequently used in Biggio et al., 2021; Petersen et al., 2019) on a large scale is new. This study also mentions SR applications in various fields, such as physics (Udrescu & Tegmark, 2020), materials science (Wang et al., 2019), engineering (Martinez-Gil & Chaves-Gonzalez, 2020), and healthcare (Christensen et al., 2022), showing its relevance to a wide range of scientific literature. Essential References Not Discussed: The paper cites and discusses a certain amount of relevant major literature. The major previous studies are appropriately cited in each category of symbolic regression benchmarking (SRBench), utilization of K-expressions, exhaustive-search SR, and numerical optimization in SR. As a recent development in SR benchmarking, the paper also mentions SRBench++ (de Franca et al., 2024), but on the other hand, it does not discuss benchmarks that propose new datasets and evaluation metrics, such as SRSD (Matsubara et al., 2024). Other Strengths And Weaknesses: **Strengths:** - Originality: The introduction of the concept of absolute Pareto optimality into the evaluation of SR algorithms can be seen as an innovation in the evaluation paradigm. - Practicality: By providing a baseline in the form of the APO front, the authors have given concrete targets for future SR algorithm development. This is particularly valuable for algorithm development that focuses on the search for short formulas. - Contribution to the research community: By publishing results that require a huge amount of computing resources, the authors have reduced the computational burden on other researchers. - Thorough analysis: SR is examined from multiple angles, including a comparison of eight numerical optimization methods and an analysis of the loss landscape around the APO front. 
**Weaknesses:** - Limited versatility: The applicability to problems with more complex relationships is limited due to the set of functions used and the restriction on the length of the expression. - Computational cost issues: The proposed method requires a large amount of computational resources, making it difficult to extend to new datasets and function sets. - Lack of practical guidance: Although the gap between the APO front and the current algorithms is shown, there are few specific algorithm design guidelines to fill the gap. - Interpretability definition: Only the formula length is used as an interpretability indicator, and other aspects (such as conceptual simplicity) are not considered. Other Comments Or Suggestions: - By verifying the formula used to construct the APO front in actual application examples (e.g. modeling physical phenomena), it may be possible to further demonstrate its practical value. - It may be beneficial to conduct research on an extended version of the APO front that includes a wider range of functions (e.g. trigonometric functions, exponential functions, etc.). - By reformulating the problem as a multi-objective optimization that includes R-squared values for test data as well as training data, it may be possible to construct an APO front that also takes into account generalization performance. - By performing feature analysis (such as what structures and functions are frequently used) on the equations on the APO front, it is possible to gain further insights for designing efficient SR algorithms. - The methods used in this research have the potential to be applied to benchmarking interpretable machine learning methods other than symbolic regression, and it is also worth exploring this direction. Questions For Authors: I look forward to your response to the concerns I have raised above, but I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > General Please refer to point (G.1) in the response to Reviewer Gtam. > Claims And Evidence (A.1) We thank the reviewer for the advice on our discussion of the subtleties of the term “absolute” in the limitations section, which includes our mitigation strategies. To further improve clarity, we will denote APO with a subscript that states the primitive function set used and the numerical optimization method (e.g., ${APO}_{(+,-,*,/,**,sin,cos,...),SLSQP}$) as a caveat. Despite not having a true global numerical optimizer, we are already able to find fronts that have a large performance gap with the equations found via existing SR algorithms. > Methods And Evaluation (A.2) For complexity indicators, please refer to point (C.2) in the response to Reviewer Gtam. (A.3) We have implemented the reviewer's suggestions and include more data for SR research. In the $\texttt{Extracted\\_APO\\_Fronts}$ folder, instead of having only one column for $\texttt{EquationParameters}$, we now have 3 columns, each consisting of a vector of numerical constants optimized based on all data, train data only and test data only, respectively. For each of the 3 vectors of numerical constants, we also include their performance on all data, train data only and test data only. These will be made publicly available in the next upload opportunity. Below is an example (numbers truncated to 4 s.f.) 
of the improvement (e.g., $\texttt{1027\\_ESL\\_BFGS\\_3\\_860\\_summary.csv}$):

|…|EquationStructure|EquationParametersAll|R2FitAllEvalAll|R2FitAllEvalTrain|R2FitAllEvalTest|…|
|-|-|-|-|-|-|-|
|...|
|…|$\texttt{Sub(Mul(xdata[3],x[0]),Mul(xdata[1],x[1]))}$|[0.5503,-0.5307]|0.8056|0.8009|0.8172|…|

|…|EquationParametersTrain|R2FitTrainEvalAll|R2FitTrainEvalTrain|R2FitTrainEvalTest|…|EquationParametersTest|R2FitTestEvalAll|R2FitTestEvalTrain|R2FitTestEvalTest|...|
|-|-|-|-|-|-|-|-|-|-|-|
|...|
|…|[0.5713,-0.5118]|0.8052|0.8013|0.8136|…|[0.4931,-0.5881]|0.8016|0.7941|0.8216|...|

> Theoretical Claims (A.4) We will make the paper more self-contained by including proofs of the properties from Ferreira in Appendix D. > Experimental Designs, Supplementary, Broader Literature (A.5) We thank the reviewer for the validation. > Essential References (A.6) We thank the reviewer and will add all SRSD metrics. For the accuracy metric, this is simply $R^2>0.999$. For solution rate and NED, these are only computable on datasets with closed-form ground-truth, so they are not available. However, we thought that a meaningful way to incorporate these metrics is by taking the APO equations we found and treating them as “proxy ground-truth”, enabling a new metric to assess SR algorithms’ performance on black-box datasets. We thank the reviewer for inspiring this additional use of the APO equations and will add this and the discussion of other less commonly-used but high quality datasets in literature (e.g., SRSD) in Appendix E. > Other Strengths And Weaknesses (A.7) For “Limited versatility”, please refer to point (C.3) for more results and discussion. (A.8) For “Computational cost issues”, to the best of our knowledge, there is no empirical or theoretical alternative which could create a similarly universal baseline, so we felt the benefits far outweighed the cost. 
Our cost-reducing strategy is to open-source these files so that it is a one-off cost instead.

(A.9) For “Lack of practical guidance”, we will add guidance in Section 4 similar to point (G.1) para 3.

(A.10) For “Interpretability definition”, please refer to point (C.2), where we add other complexity metrics.

> Other Comments

(A.11) We thank the reviewer for the creative and impactful suggestion. In the revision, we will add Appendix F, which shows that in the Newtonian dynamics experiments of [1], where SR is applied to the internal functions, the equation structures discovered by Cranmer et al. appear on the APO front we find using our approach, and these equations are the true underlying physical law.

[1] Cranmer, M., et al. (2020). Discovering symbolic models from deep learning with inductive biases. NeurIPS.

(A.12) For the APO front on a wider range of functions, please refer to point (C.3).

(A.13) We thank the reviewer for the idea. Using the data in point (A.3), we will include results of train vs. test $R^2$ in Appendix H. In most cases there is no trade-off, so we also include results of train minus test $R^2$ against length.

(A.14) We thank the reviewer again and add a function frequency analysis in Appendix J. For example, for equations of length 9, the average frequencies of Add, Sub, Mul, Div, Pow are 0.7000, 0.8625, 0.9500, 0.4833, 1.0042, respectively, which can inform the settings used when generating candidate solutions in SR algorithms.

(A.15) We plan to explore using this approach for various permutations of common activation functions in small neural network architectures as a baseline for NAS algorithms.
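For concreteness, the extended per-equation schema described in point (A.3) and the train-vs-test $R^2$ gap analysis mentioned in (A.13) could be consumed along these lines. This is a hypothetical, stdlib-only sketch: the one-row sample reuses the truncated example values from the rebuttal, and the variable names are our own, not the released tooling.

```python
import csv
import io

# Hypothetical one-row slice of an extended summary file; column names
# follow the rebuttal, values are the truncated example given there.
text = """EquationStructure,R2FitTrainEvalTrain,R2FitTrainEvalTest
"Sub(Mul(xdata[3],x[0]),Mul(xdata[1],x[1]))",0.8013,0.8136
"""
rows = list(csv.DictReader(io.StringIO(text)))

# Generalization gap per front equation: train-fit R^2 evaluated on the
# train split minus the same constants evaluated on the test split.
gap = (float(rows[0]["R2FitTrainEvalTrain"])
       - float(rows[0]["R2FitTrainEvalTest"]))
```

A negative gap (as in this sample row) means the train-fitted constants happened to score higher on the test split, i.e., no train/test trade-off for this equation.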
Tensorized Multi-View Multi-Label Classification via Laplace Tensor Rank
Accept (poster)
Summary: In this paper, the authors propose a novel approach that introduces a low-rank tensor classifier combined with the innovative Laplace Tensor Rank (LTR), which jointly captures high-order feature correlations and label dependencies. Extensive experiments across six benchmark datasets demonstrate TMvML’s superior performance. Claims And Evidence: Yes, the central claims are supported by evidence. Extensive experiments across six benchmark datasets demonstrate TMvML’s superior performance. The superiority of LTR over existing tensor rank approximations is validated through ablation studies. Methods And Evaluation Criteria: Yes, the methodological design is reasonable. The rotation operation on the tensor classifier is a clever design choice, enabling the exploration of interactions between different views and labels through frontal slice comparisons. Theoretical Claims: Yes, the proof of Theorem 3.1 is correct. Experimental Designs Or Analyses: Yes, the experimental design is reasonable. The use of six widely adopted MVML datasets and five standard metrics ensures a comprehensive and fair comparison. The inclusion of statistical tests further strengthens the reliability of the results. Supplementary Material: Yes, the code is provided in the supplementary material. Relation To Broader Scientific Literature: TMvML builds on prior work in tensor-based methods and multi-label classification. The paper extends the principles of low-rank representation to the MVML setting, leveraging a novel tensorized classifier to capture high-order correlations across views and labels. Essential References Not Discussed: All relevant works critical to understanding the main contribution of the method are cited in the paper. Other Strengths And Weaknesses:

Strengths:

(1) This work innovatively leverages a concise low-rank MVML tensor classifier to excavate cross-view feature correlations and characterize multi-label semantic relationships simultaneously.
The whole paper is well organized and easy to understand.

(2) This paper designs a new Laplace Tensor Rank, which preserves larger singular values and discards smaller ones to obtain an accurate low-rank tensor representation. Such a new component is welcome in the multi-view community.

Weaknesses:

(1) Figure 2 is ambiguous: the labeling of the vertical and horizontal coordinates is unclear. For example, why is the true rank 3 while the singular value grows progressively from 0 to 9?

(2) Some expressions are not accurate enough. For example, the matrix S is said to be designed to ensure the multi-view representation is predictive with respect to the known labels, and this is not clear.

Other Comments Or Suggestions: See the above weaknesses. Questions For Authors: See the above weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the feedback on our paper. We appreciate the time and effort you have put into reviewing our work. In this rebuttal, we respond to the concerns raised in the reviews.

**W1:** In this figure, we tested the ability of multiple low-rank tensor norms (including TNN, ETR, LTSpN and LTR) to approximate the true rank of three-dimensional tensors. Specifically, we constructed **a series of three-dimensional tensors with a fixed true rank of 3, while varying singular values that progressively increase from 0 to 9.** The horizontal axis represents the singular values, while the vertical axis represents the approximation value of the rank function. This setup allows us to evaluate how well each method approximates the true rank under different singular value distributions. We will revise the figure to include a detailed caption explaining the experimental setup. We hope these changes address the reviewer’s concerns.

**W2:** To clarify, our learning process follows a **transductive learning paradigm**, where the model leverages both labeled and unlabeled data during training but only uses the labels from the training set for supervision. The matrix $\bf S$ is a filtering matrix designed to ensure that the optimization process only utilizes the label information from the training set, while excluding any label information from the test set. Specifically, $\bf S$ is defined as a diagonal matrix:

$\mathbf{S}_{ii}=\begin{cases}1&\text{if the }i\text{-th sample belongs to the training set,}\\\\0&\text{otherwise (the }i\text{-th sample belongs to the test set).}\end{cases}$
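The diagonal filtering matrix described in the rebuttal above can be sketched in a few lines of plain Python. This is our own illustration (function names, shapes, and the toy label matrix are hypothetical, not the paper's code): S zeroes out the label rows of test samples so that only training labels enter the supervised loss term.

```python
def build_filter_matrix(n_samples, train_idx):
    # Diagonal filtering matrix S: S_ii = 1 if sample i is in the
    # training set, 0 otherwise (test samples).
    train = set(train_idx)
    return [[1.0 if (i == j and i in train) else 0.0
             for j in range(n_samples)]
            for i in range(n_samples)]

def mask_labels(S, Y):
    # S @ Y for a diagonal S: each row of Y is scaled by S_ii, so rows
    # belonging to test samples are zeroed out of the loss.
    return [[S[i][i] * y for y in Y[i]] for i in range(len(Y))]

S = build_filter_matrix(4, train_idx=[0, 2])
Y = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
masked = mask_labels(S, Y)  # rows 1 and 3 (test samples) become zeros
```

In the transductive setting this lets all samples contribute features to training while only the training-set rows of the label matrix influence the supervised objective.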
Summary: This paper proposes a method named TMvML for multi-view multi-label learning (MVML). The approach includes a low-rank tensor classifier to capture consistent correlations across views and to model complex multi-label relationships. Additionally, a new Laplace Tensor Rank (LTR) is introduced to capture higher-order correlations within the tensor space. This approach leads to significant improvements in MVML, as demonstrated by extensive experiments. Claims And Evidence: The paper's main claims are supported by convincing evidence. Extensive experiments on six datasets demonstrate TMvML’s superiority over state-of-the-art methods across multiple metrics. Methods And Evaluation Criteria: The proposed methods make sense for the problem. TMvML innovatively leverages a tensor classifier to encode high-order correlations across both multiple views and multiple labels, while the Laplace Tensor Rank (LTR) constraint effectively balances the preservation of critical semantic relationships and the suppression of noise. The experimental evaluation utilizes widely recognized MVML benchmark datasets and state-of-the-art baseline methods. Theoretical Claims: I checked the correctness of the proofs for the theoretical claims, including the theorems and proofs related to the effectiveness of LTR and the closed-form solutions in the optimization. Experimental Designs Or Analyses: I checked the validity of the experimental designs and analyses. Extensive experiments are conducted on six widely used MVML benchmark datasets, with results averaged over multiple runs to ensure statistical reliability. The issues are listed below under Weaknesses. Supplementary Material: I reviewed the supplementary material, which includes the code for the proposed method, and the code can reproduce the experimental results. Relation To Broader Scientific Literature: The method TMvML is the first attempt to utilize a tensorized low-rank MVML classifier for the MVML problem.
Essential References Not Discussed: There are no related works that are not currently discussed in the paper. Other Strengths And Weaknesses: The paper proposes a Tensorized Multi-View Multi-Label Classification method (TMvML), which is the first attempt to utilize a tensorized low-rank MVML classifier to achieve high-order feature correlation extraction and multi-label semantic correlation characterization simultaneously. Meanwhile, the paper designs a new Laplace Tensor Rank (LTR), which serves as a tighter surrogate of tensor rank for effectively capturing high-order fiber correlations. There are also some weaknesses:

1. In tensor-based methods, the Tensor Nuclear Norm (TNN) is commonly used to capture the low-rank structure of tensors [1,2]. The proposed Laplace Tensor Rank (LTR) should be compared with TNN in the experiments to better highlight its effectiveness and potential advantages.

2. The proposed tensor classifier construction involves merging view-specific mapping matrices and rotating the resulting tensor to align label-view interactions. While this rotating design is theoretically motivated, ablation studies would strengthen the claim that rotation is essential for capturing label consistency and view correlations. Such experiments would provide concrete evidence of the rotation operation’s contribution to the method’s overall performance.

3. In Fig. 6, the font of the coordinate axes should be enlarged.

[1] Zhao S, Wen J, Fei L, et al. Tensorized incomplete multi-view clustering with intrinsic graph completion[C] // Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(9): 11327-11335.

[2] Zhang C, Li H, Lv W, et al. Enhanced tensor low-rank and sparse representation recovery for incomplete multi-view clustering[C] // Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(9): 11174-11182.
Other Comments Or Suggestions: I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution. Questions For Authors: I would like to learn about the authors' response to the weaknesses listed above, which may give me a clearer perspective on the paper's contribution. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the feedback on our paper. We appreciate the time and effort you have put into reviewing our work. In this rebuttal, we respond to the concerns raised in the reviews.

**W1**: We agree that comparing the proposed Laplace Tensor Rank (LTR) with the widely used Tensor Nuclear Norm (TNN) is essential to highlight the advantages of our method. In fact, we have already performed a comparison of the LTR function with the TNN function, and the results are shown in Fig. 3. According to the figure, LTR provides a tighter approximation to the true rank function compared to TNN, especially for larger singular values. Theoretically, LTR’s nonconvex formulation more aggressively suppresses small (noise-corrupted) singular values while preserving larger (signal-carrying) ones, leading to a more accurate low-rank representation. This property is not fully captured by TNN, which tends to over-penalize larger singular values due to its convex nature. However, we fully agree with the reviewer that comparing LTR with TNN in the experiments would better highlight its effectiveness. Thus, we compared TMvML with its variant TMvML-TNN, where the LTR was replaced by TNN to capture the low-rank tensor structure, and we report the results of the numerical experiments:

| | | Emotions | Yeast | Corel5k | Plant | Espgame | Human |
|-|-|-|-|-|-|-|-|
|TMvML|AP| **0.811±0.020** | **0.771±0.008** | **0.440±0.008** | **0.608±0.007** | **0.306±0.001** | **0.631±0.010** |
|TMvML-TNN|AP|0.738±0.014|0.747±0.014|0.382±0.015|0.601±0.017|0.270±0.020|0.621±0.007|
|TMvML|Cov| **0.300±0.070** | **0.460±0.002** | **0.266±0.006** | **0.169±0.013** | **0.409±0.008** | **0.150±0.003** |
|TMvML-TNN|Cov|0.344±0.032|0.467±0.009|0.279±0.003|0.171±0.012|0.452±0.009|0.162±0.005|

Experimental results demonstrate that TMvML consistently outperforms TMvML-TNN across all datasets.
This compelling evidence shows that our proposed LTR is more effective than the traditional TNN in modeling the complex high-order correlations in multi-view multi-label learning tasks, particularly in preserving discriminative singular values while suppressing noise-corrupted ones.

**W2**: We agree that an ablation study validating the rotation operation in the tensor classifier construction would provide concrete evidence of the rotation’s contribution to capturing label consistency and view correlations. We compared TMvML with a variant that removes the rotation operation (denoted TMvML-NoRot), and the results are summarized below:

| | | Emotions | Yeast | Corel5k | Plant | Espgame | Human |
|-|-|-|-|-|-|-|-|
|TMvML|AP| **0.811±0.020** | **0.771±0.008** | **0.440±0.008** | **0.608±0.007** | **0.306±0.001** | **0.631±0.010** |
|TMvML-NoRot|AP|0.628±0.021|0.733±0.021|0.231±0.002|0.511±0.015|0.191±0.001|0.532±0.005|
|TMvML|Cov| **0.300±0.070** | **0.470±0.002** | **0.266±0.006** | **0.169±0.013** | **0.409±0.008** | **0.150±0.003** |
|TMvML-NoRot|Cov|0.473±0.005|0.485±0.004|0.350±0.002|0.210±0.007|0.498±0.000|0.185±0.002|

The results show that TMvML consistently outperforms TMvML-NoRot across all datasets. This significant performance gap highlights the critical role of the rotation in the simultaneous extraction of cross-view consistent correlations and multi-label semantic relationships.

**W3**: Thank you for pointing this out. We will enlarge the font of the coordinate axes to improve readability and ensure better clarity in the visualization.
Summary: This paper presents a method for Multi-View Multi-Label Learning (TMvML) which utilizes a tensorized MVML classifier to achieve high-order feature correlation extraction and multi-label semantic relationship characterization simultaneously. Moreover, a new Laplace Tensor Rank is designed to characterize a better low-rank tensor structure. Experiments show good results. ## update after rebuttal Thank you for your response. The new explanations and experimental results have strengthened the evaluation. After reading the authors' response, I would like to raise my rating to "Accept". Claims And Evidence: The claims are supported by evidence. TMvML’s superiority is evident in its consistent outperformance of existing methods. Methods And Evaluation Criteria: The method design makes sense. Motivated by the fact that tensors can characterize the low-rank structure of multi-dimensional data, a tensorized MVML classifier can deal with the MVML problem. Theoretical Claims: The proofs for the theoretical claims in the optimization are correct. Experimental Designs Or Analyses: The experimental designs and analyses are reasonable, with thorough benchmark datasets of varying scales and complexities. The experiments also include relevant ablation studies, convergence analysis, and hyperparameter analysis. Supplementary Material: I reviewed the code in the supplementary material. Relation To Broader Scientific Literature: This approach aligns with recent efforts to enhance tensor rank approximations, but goes further by integrating multi-view and multi-label learning into a unified framework. Essential References Not Discussed: All related works are cited or discussed in the paper. Other Strengths And Weaknesses:

**Strengths:**

- The organization of this article is reasonable and the paper is well-written.
- The proposed LTR offers a non-convex surrogate for tensor rank.
- Extensive experiments demonstrate TMvML’s superior performance.
**Weaknesses:**

- Recent tensor-based MVML methods should be compared, which can judge whether TMvML’s gains stem from tensorization itself. A direct comparison with recent tensor-based MVML methods would provide clearer insights into the specific contributions of tensorization and help validate the effectiveness of the proposed framework.
- The modified Laplace function used in LTR introduces an additional exponential term $e^{\delta}$ compared with the original Laplace function. The authors need to elaborate on the specific advantages of this modification.
- The paper lacks a dedicated convergence analysis with formal theoretical proofs. Especially, are there formal proofs or conditions ensuring convergence to a stationary point? Addressing these points would enhance the theoretical rigor of the paper.

Other Comments Or Suggestions:

- Recent tensor-based MVML methods should be compared, which can judge whether TMvML’s gains stem from tensorization itself. A direct comparison with recent tensor-based MVML methods would provide clearer insights into the specific contributions of tensorization and help validate the effectiveness of the proposed framework.
- The modified Laplace function used in LTR introduces an additional exponential term $e^{\delta}$ compared with the original Laplace function. The authors need to elaborate on the specific advantages of this modification.

Questions For Authors: Are there formal proofs or conditions that ensure the convergence to a stationary point? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the feedback on our paper. We appreciate the time and effort you have put into reviewing our work. In this rebuttal, we respond to the concerns raised in the reviews. **W1(C1):** Existing tensor-based methods have primarily been applied to multi-view clustering tasks for mining higher-order feature correlations, while some matrix-based methods employ low-rank constraints to capture label semantic relevance. To the best of our knowledge, our proposed TMvML represents the first attempt to utilize tensor structures for MVML tasks, designed to simultaneously model both multi-view high-order correlations and multi-label co-occurrence patterns. The tensor formulation provides a natural and effective framework for capturing the intrinsic multi-dimensional relationships in MVML data that conventional matrix-based approaches cannot fully characterize. Although direct comparisons with other tensor methods are unavailable, we validate the effectiveness of our proposed Laplace Tensor Rank (LTR) by comparing TMvML with its variant TMvML-TNN, where we replace LTR with the traditional Tensor Nuclear Norm (TNN) for low-rank tensor structure approximation. Experimental results demonstrate that TMvML consistently outperforms TMvML-TNN across all datasets. This compelling evidence proves that our proposed LTR is more effective than traditional TNN in modeling the complex high-order correlations in multi-view multi-label learning tasks. 
| | | Emotions | Yeast | Corel5k | Plant | Espgame | Human |
|-|-|-|-|-|-|-|-|
|TMvML|AP| **0.811±0.020** | **0.771±0.008** | **0.440±0.008** | **0.608±0.007** | **0.306±0.001** | **0.631±0.010** |
|TMvML-TNN|AP|0.738±0.014|0.747±0.014|0.382±0.015|0.601±0.017|0.270±0.020|0.621±0.007|
|TMvML|Cov| **0.300±0.070** | **0.460±0.002** | **0.266±0.006** | **0.169±0.013** | **0.409±0.008** | **0.150±0.003** |
|TMvML-TNN|Cov|0.344±0.032|0.467±0.009|0.279±0.003|0.171±0.012|0.452±0.009|0.162±0.005|

**W2(C2):** The introduction of the additional exponential term $e^{\delta}$ in the modified Laplace function, $f_{\mathrm{LTR}}(x)=1-\exp\left(-\frac{e^\delta x}{\delta}\right)$, provides several key advantages over the original Laplace function, $f_{\mathrm{Laplace}}(x)=1-\exp\left(-\frac{x}{\delta}\right)$.

- The modified function offers enhanced flexibility by allowing dynamic adjustment of the growth rate and magnitude through $e^{\delta}$. When $\delta$ is large, $e^{\delta}$ amplifies $x$, making the function grow faster for small singular values. When $\delta$ is small, the effect of $e^{\delta}$ is reduced, and the function behaves similarly to the original Laplace function. This adaptability makes the modified function more versatile in handling different data distributions.
- The modified function exhibits faster convergence for large values of $x$ due to the exponential scaling $e^{\delta}$, improving optimization efficiency in tasks involving large-scale data or high-dimensional tensors.

Thank you again for this valuable feedback. Please let us know if there is any additional information we can provide to assist with your evaluation.

**W3(Q1):** We agree that theoretical guarantees for convergence are critical for ensuring the reliability and robustness of the optimization framework. We have added formal theoretical proofs ensuring convergence to a stationary point and reported the convergence theorem and its detailed proof in our rebuttal to **Reviewer miAu**.
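The two Laplace penalties discussed in (W2) above can be compared numerically with a short sketch. This is our own illustration (the choice of $\delta$ and the sample singular values are arbitrary): for any $\delta>0$ we have $e^{\delta}>1$, so the modified function rises toward 1 faster, behaving more like a 0/1 rank indicator on singular values.

```python
import math

def f_laplace(x, delta):
    # Original Laplace penalty: 1 - exp(-x / delta)
    return 1.0 - math.exp(-x / delta)

def f_ltr(x, delta):
    # Modified Laplace penalty used in LTR: 1 - exp(-(e^delta * x) / delta);
    # the extra factor e^delta steepens the rise for small singular values.
    return 1.0 - math.exp(-(math.exp(delta) * x) / delta)

delta = 1.0
values = {s: (f_laplace(s, delta), f_ltr(s, delta)) for s in (0.1, 1.0, 5.0)}
```

Both penalties vanish at $x=0$ and saturate at 1 for large $x$; the gap between them is largest for small-to-moderate singular values, which is where the tighter rank approximation matters.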
Summary: This paper proposes a Tensorized Multi-View Multi-Label Classification (TMvML) method to address the limitations of existing approaches that independently model cross-view consistent correlations and multi-label semantic relationships in MVML learning. The method reconstructs multi-view multi-label mapping matrices into a tensor classifier, where tensor rotation and low-rank constraints are jointly applied to unify view-level feature consistency and label-level semantic co-occurrence. Moreover, Laplace Tensor Rank is designed as a tight surrogate of tensor rank to capture high-order fiber correlations. The experimental results demonstrate the effectiveness of the proposed framework. Claims And Evidence: In this paper, the claims are supported: 1. TMvML’s superiority over SOTA is validated via experiments (Table 2) and statistical tests (Friedman/Bonferroni-Dunn). 2. LTR’s effectiveness is justified theoretically and empirically (Figure 3, Figure 5). Methods And Evaluation Criteria: This paper uses the tensorized classifier for MVML, as tensors naturally model multi-dimensional relationships. The rotation operation cleverly reorients the tensor to align label-view interactions, addressing a key limitation of matrix-based methods. This paper evaluates the proposed framework on widely-used MVML benchmark datasets with five standard evaluation metrics and the results demonstrate the effectiveness of the method. Theoretical Claims: Theorem 3.1 is correct and clearly proven. Experimental Designs Or Analyses: 1. This paper conducts extensive experiments on several datasets, and the experimental results demonstrate the effectiveness of the proposed framework. 2. Ablation studies and parameter sensitivity tests rigorously validate the contributions of LTR and hyperparameter stability. Supplementary Material: The code in supplementary material allows for reproducibility and further exploration of the method's implementation. 
Relation To Broader Scientific Literature: The paper builds on foundational works in tensor-based multi-view learning and multi-label classification, but it uniquely integrates these two paradigms through a unified tensorized framework. It applies the tensor framework to the MVML problem for the first time. Essential References Not Discussed: There are no essential references missing or overlooked in the paper's discussion of related work. Other Strengths And Weaknesses: Strengths: - Originality: This paper applies the tensor framework to the MVML problem for the first time, advancing the field by unifying cross-view consistency and label semantics in a single tensor classifier. This addresses a critical gap in existing MVML methods, which often handle these aspects independently. - Experiments: The experiments are sufficient and the effectiveness of the proposed method is substantiated through these experiments. - Clarity: This paper is well organized and the proposed method is clearly written to understand. All experiments details are provided and the codes are also released. Weaknesses: - In Section 5.3, the authors provide a textual analysis of the convergence behavior of TMvML, supported by empirical convergence curves (Figure 7). While the empirical results demonstrate stable convergence across datasets, the paper would benefit from theoretical guarantees to further strengthen the credibility of the optimization process. Other Comments Or Suggestions: Please see the points under Weaknesses above. Questions For Authors: Please see the points under Weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **W1**: The convergence of TMvML is guaranteed by Theorem 1 below, with a comprehensive and rigorous proof.

**Theorem 1**: Let $\\{\mathcal{P}_k = ({\bf Z}_k^{v}, {\bf E}_k^{v}, {\bf A}_k^{v}, {\bf W}_k^{v}, {\bf B}_k^{v}, \mathcal{C}\_k, \mathcal{G}\_k)\\}\_{k=0}^{\infty}$ be the sequence generated by Algorithm 1. Then the sequence $\{\mathcal{P}_k\}$ satisfies the following two properties: 1) $\\{\mathcal{P}_k\\}$ is bounded; 2) any accumulation point of $\\{\mathcal{P}_k\\}$ is a KKT point of Algorithm 1.

To prove Theorem 1, we first introduce two lemmas.

**Lemma 1** [1]: Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$, norm $\\|\cdot\\|$, and dual norm $\\|\cdot\\|^{dual}$. For $y \in \partial \\|x\\|$, we have $\\|y\\|^{dual} = 1$ if $x \neq 0$ and $\\|y\\|^{dual} \leq 1$ if $x = 0$.

**Lemma 2** [2]: Let $F: \mathbb{R}^{m \times n} \to \mathbb{R}$ be defined as $F(\mathbf{X}) = f(\sigma(\mathbf{X}))$, where ${\bf X} = {\bf U} \mathrm{Diag}(\sigma({\bf X})) {\bf V}^T$ is the SVD of ${\bf X}$, $r = \min(m,n)$, and $f: \mathbb{R}^r\to\mathbb{R}$ is differentiable and absolutely symmetric at $\sigma(\mathbf{X})$. Then

$\frac{\partial F(\mathbf{X})}{\partial\mathbf{X}}=\mathbf{U}\mathrm{Diag}(\partial f(\sigma(\mathbf{X}))) \mathbf{V}^T,$

where $\partial f(\sigma(\mathbf{X})) = (\frac{\partial f}{\partial \sigma_1}(\sigma(\mathbf{X})), \dots, \frac{\partial f}{\partial \sigma_r}(\sigma(\mathbf{X}))).$

**Proof of the first part**: At the $(k+1)$-th iteration, from the updating rule of $\mathbf{E}_{k+1}^v$, the first-order optimality condition must be satisfied:
$0=\alpha\partial\left\\|{\bf E}\_{k+1}^v\right\\|\_{2,1}+\mu_k({\bf E}\_{k+1}^v-({\bf X}^v-{\bf{Z}\_{k+1}}^v{\bf A}^v+{\bf B}\_k^v/\mu\_k))=\alpha\partial\left\\|{\bf E}\_{k+1}^v\right\\|\_{2,1}-{\bf B}\_{k+1}^v.$

Thus, we have

$\frac{1}{\alpha}[\mathbf{B}\_{k+1}^{v}]\_{:,j}=\partial\left\\|\left[\mathbf{E}\_{k+1}^{v}\right]\_{:,j}\right\\|\_{2}.$

The $\ell\_{2}$ norm is self-dual, so based on Lemma 1 we have $\left\\|\frac{1}{\alpha}[\mathbf{B}\_{k+1}^{v}]\_{:,j}\right\\|\_{2}\leq1$. Hence the sequence $\\{\mathbf{B}\_{k+1}^{v}\\}$ is bounded.

Next, according to the updating rule of $\mathcal{G}$, the first-order optimality condition holds:

$\partial \\|\mathcal{G}\_{k+1}\\|\_{LTR}=\mathcal{C}\_{k+1}.$

Let $\mathcal{U} * \mathcal{S} * \mathcal{V}^{T}$ be the t-SVD of tensor $\mathcal{G}$. Based on Lemma 2 and the definition of LTR, we have

$\\|\partial\\|\mathcal{G}\_{k+1}\\|\_{\text{LTR}}\\|\_F^2\leq\frac{e^{2\delta}\min(n\_1,n\_2)}{\delta^2 n\_3^2}.$

Thus, $\\{\mathcal{C}\_{k+1}\\}$ is bounded. Based on the iterative scheme constructed in the algorithm, we can deduce:

$\mathcal{L}\_{k}(\mathbf{Z}\_{k+1}^v, \mathbf{E}\_{k+1}^v, \mathbf{A}\_{k+1}^v, \mathbf{W}\_{k+1}^v, \mathcal{G}\_{k+1}, \mathbf{B}\_k^v, \mathcal{C}\_k) \leq \mathcal{L}\_{k-1} + \frac{\rho\_k + \rho\_{k-1}}{2 \rho\_{k-1}^2} \\| \mathcal{C}\_k - \mathcal{C}\_{k-1} \\|\_F^2 + \frac{\mu\_k + \mu\_{k-1}}{2 \mu\_{k-1}^2} \sum\_v \\| \mathbf{B}\_k^v - \mathbf{B}\_{k-1}^v \\|\_F^2.$

Summing both sides shows that $\mathcal{L}\_k$ is bounded, and consequently all of its components are bounded, including $\\| \mathcal{G}\_{k+1}\\|\_{\text{LTR}}$. The boundedness of $\\{\mathcal{G}\_{k+1}, \mathbf{Z}\_{k+1}, \mathbf{A}\_{k+1}\\}$ then follows easily. Therefore, the sequence $\\{\mathcal{P}\_k\\}$ is bounded.
**Proof of the second part:** By the Bolzano–Weierstrass theorem [3], there exists at least one accumulation point of the sequence $\\{{\mathcal{P}\_k}\\}_{k=1}^{\infty}$, denoted $\mathcal{P}\_*$. Then we have

$\lim_{k\to\infty}({\bf Z}\_k^v, {\bf E}\_k^v, {\bf A}\_k^v, {\bf B}\_k^v, {\bf W}\_k^v, \mathcal{C}\_k, \mathcal{G}\_k)=({\bf Z}\_*^v, {\bf E}\_*^v, {\bf A}\_*^v, {\bf B}\_*^v, {\bf W}\_*^v, \mathcal{C}\_\*,\mathcal{G}\_\*).$

From the update rules of $\mathbf{B}\_k^v$ and $\mathcal{C}\_k$, with $\\{\mathbf{B}\_k^v\\}$ and $\\{\mathcal{C}\_k\\}$ bounded and the fact that $\lim_{k\to\infty}\mu_{k}=\infty$, we obtain:

$\lim_{k\to\infty}(\mathbf{X}^v-\mathbf{Z}\_{k+1}^v \mathbf{A}\_{k+1}^v - \mathbf{E}\_{k+1}^v) = 0 \Rightarrow \mathbf{X}^v = \mathbf{Z}\_*^v \mathbf{A}\_*^v + \mathbf{E}\_*^v,$

$\lim_{k \to \infty} (\mathcal{W}\_{k+1} - \mathcal{G}\_{k+1}) = 0 \Rightarrow \mathcal{W}\_* = \mathcal{G}\_*.$

Combining the first-order optimality conditions of $\mathbf{E}\_{k+1}^v$ and $\mathcal{G}\_{k+1}$ and taking the limit, we obtain:

$\mathbf{B}\_*^v=\alpha \partial\\|\mathbf{E}\_*^v\\|\_{2,1}, \quad \mathcal{C}\_\*=\beta\partial\\|\mathcal{G}\_\*\\|\_{LTR}.$

Therefore, the accumulation point $\mathcal{P}\_*$ generated by TMvML satisfies the KKT conditions.

[1] The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv, 2010.

[2] Nonsmooth analysis of singular values. Part I: Theory. Set-Valued Analysis, 2005.

[3] Introduction to Real Analysis. Wiley, New York, 2000.
PIGDreamer: Privileged Information Guided World Models for Safe Partially Observable Reinforcement Learning
Accept (poster)
Summary: This paper introduces PIGDreamer, a novel model-based reinforcement learning approach designed to enhance safety and performance in partially observable environments by leveraging privileged information during training. The authors propose Asymmetric Constrained Partially Observable Markov Decision Processes (ACPOMDPs), a theoretical framework that extends CPOMDPs by allowing the value function to access underlying states, thereby reducing the representation space and improving policy learning. PIGDreamer integrates privileged information through three key mechanisms: privileged representations, which align the state representations of the naive world model with privileged information; privileged predictors, which enhance the accuracy of reward and cost predictions; and privileged critics, which refine policy estimations. Empirical evaluations on the Safety-Gymnasium benchmark demonstrate that PIGDreamer significantly outperforms existing methods in terms of safety and task performance, achieving near-zero-cost performance while maintaining high rewards. The approach also exhibits superior training efficiency, with only a modest increase in training time compared to its unprivileged variant. Overall, PIGDreamer represents a significant advancement in model-based safe reinforcement learning for partially observable environments, effectively utilizing privileged information to achieve high performance and safety. Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the paper's proposed methods and evaluation criteria are well-suited for the problem and application. Theoretical Claims: I did not find any obvious issues in the proofs for theoretical claims in the paper. Experimental Designs Or Analyses: I did not find any obvious issues in the experimental designs or analyses in the paper. Supplementary Material: The supplementary material is not attached. 
Relation To Broader Scientific Literature: World Models: The use of world models to enhance sample efficiency and enable agents to learn from partial observations has been explored in several works (Hafner et al., 2019; Hogewind et al., 2022). These models learn environmental dynamics and task-specific predictions from past observations and actions. The idea of using privileged information during training to improve performance has been explored in model-free RL (Hu et al., 2024; Lambrechts et al., 2024). However, its application in model-based RL, especially for safety-critical tasks, is less explored. Essential References Not Discussed: N/A Other Strengths And Weaknesses:

**Strengths**

The paper introduces the Asymmetric Constrained Partially Observable Markov Decision Processes (ACPOMDPs) framework, which is a novel extension of existing POMDPs. This framework allows the value function to access underlying states, leading to more efficient critic updates and superior policies. This theoretical contribution is significant and provides a new perspective on leveraging privileged information in partially observable environments.

The paper demonstrates strong empirical results on the Safety-Gymnasium benchmark, showing that PIGDreamer outperforms existing methods in terms of both safety and task performance. Additionally, the method achieves these improvements with only a modest increase in training time, making it a practical solution for real-world applications.

**Weaknesses**

The paper could benefit from a more comprehensive discussion of related work, particularly recent advancements in safe RL that also leverage privileged information. This would provide a broader context for the contributions and highlight the unique aspects of PIGDreamer.

The paper evaluates PIGDreamer on the Safety-Gymnasium benchmark, but additional experiments on more diverse benchmarks or complex environments would strengthen the claims.
This would provide further evidence of the method's robustness and generalizability. While the paper reports aggregate metrics and confidence intervals, a more detailed statistical analysis, including significance tests, would further strengthen the empirical results. This would provide additional confidence in the robustness of the findings. These points highlight the key strengths and areas for improvement in the paper, providing a balanced view of its contributions and potential enhancements. Other Comments Or Suggestions: N/A Questions For Authors: Could you elaborate on the assumptions made in the ACPOMDPs framework, particularly regarding the nature of the privileged information and its impact on the generalizability of the results? How do these assumptions affect the applicability of your framework to real-world scenarios where privileged information might not always be available or reliable? In addition to the Safety-Gymnasium benchmark, have you considered evaluating PIGDreamer on other benchmarks or real-world datasets? If so, could you share any preliminary results or insights from those evaluations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer Rrit for your thoughtful comments and questions. Your insights significantly contribute to the further enhancement of our paper. **Q1. A more comprehensive discussion of related work.** Thank you for your suggestion. In the next version of our manuscript, we will discuss recent advancements in safe reinforcement learning that utilize privileged information. **Q2. Guard Benchmark.** We appreciate your valuable suggestion. We conducted additional experiments within the Guard benchmark. Our empirical results indicate that our method significantly outperforms other baseline approaches. A more comprehensive response is available in **Q3** of our reply to Reviewer ou54.

| Model | Goal_Ant_8Hazards | | | Goal_Ant_8Ghosts | | | Goal_Humanoid_8Hazards | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Training time | Cost | Reward | Training time | Cost | Reward | Training time | Cost | Reward |
| SafeDreamer | 14.27 | 1.00 | 2.54 | 16.7 | 0.93 | 10.67 | 13.94 | 10.8 | 0.34 |
| Informed-Dreamer(Lag) | 17.03 | **0.03** | 1.32 | 18.28 | 2.54 | 10.09 | 14.03 | 10.72 | -1.60 |
| Scaffolder(Lag) | 24.82 | 0.04 | 1.86 | 27.14 | **0.83** | 6.15 | 23.8 | 13.14 | **2.13** |
| Distill | 20.12 | 10.52 | 1.32 | 23.21 | 1.02 | 0.50 | 21.04 | 24.07 | 1.41 |
| PIGDreamer(Ours) | 18.45 | 0.92 | **14.18** | 20.86 | 2.28 | **15.76** | 18.12 | **10.56** | 2.09 |

**Q3. The assumptions of the ACPOMDP framework.** Thank you for your question. In the ACPOMDPs framework, we assume that the privileged information consists of the underlying states of the environment; however, this assumption is impractical in real-world scenarios, as defining the underlying states and determining how to obtain them is challenging. Consequently, we relax this assumption to incorporate certain sensors that are available and precise, albeit costly, in real-world scenarios.
For example, in the Safety-Gymnasium benchmark, we define the privileged information as including the proprioceptive state, historical actions, and the relative positions of hazards and the goal. In the Guard benchmark, the privileged information consists of the proprioceptive state and LiDAR perceptions of hazards and the goal. All the aforementioned sensors are accessible in real-world applications, making our method a viable solution for practical implementation. **Q4. Detailed Statistical Analysis.** Thank you for your suggestion. We apologize for our inability to complete all the experiments within the limited time available during the rebuttal period. Consequently, we intend to conduct a more comprehensive statistical analysis in the upcoming round of rebuttals.
Summary: The work focuses on the development of a novel safe model-based reinforcement learning method. The researchers utilize the concept of privileged information and introduce the so-called asymmetric constrained partially observable Markov decision process (ACPOMDP) task, which requires access to the actual state in addition to observations for training the critic. They utilize the basic architecture of Dreamer and extend it to function simultaneously with both observations and actual states from the environment. Two world models are created: a naive model and a privileged model. During the deployment phase, only the naive model with access to observations is utilized. During the learning process, a multi-component loss is employed, incorporating privileged information. Safety is ensured through standard regularization using a Lagrangian approach. Experiments are conducted using Safety Gymnasium, and the proposed method demonstrates comparable performance to SafeDreamer, Safe-SLAC, and other privileged approaches. ## update after rebuttal The authors have conducted additional experiments in a new Guard setting, enhancing their work's experimental part. These experiments show that PIGDreamer performs poorly in terms of cost. These and previous results do not conclusively prove a significant advantage over other baselines for me. In my opinion, safe reinforcement learning methods should first demonstrate excellent results in terms of cost and only secondarily show good performance and high rewards. Considering my other comments, I am inclined to leave my assessment unchanged for now, but I will not object to the acceptance of the article. Claims And Evidence: The authors' claims about the effectiveness of using privileged information are generally confirmed by the experiments conducted.
Methods And Evaluation Criteria: The authors use the standard Safety Gymnasium benchmark, considering many tasks there, which is the standard in this field. In general, other benchmarks could also be considered, such as SafeAntMaze based on MuJoCo. Theoretical Claims: The paper makes a fairly obvious claim that, in the presence of complete state information, a critic will form more accurate value functions than if it has only incomplete observations. However, this claim does not add any significant insight to the approach. Instead, the proof in the appendix follows the logic presented in Pineau et al. (2006). Experimental Designs Or Analyses: The experiments were conducted according to the necessary requirements, but it is difficult to say that they were convincing. PIGDreamer does not differ significantly from the baselines with privileged information in terms of reward plots and cost values. However, the disadvantage of SafeDreamer in terms of not accessing true states is obvious. Figure 4 shows that all results are within the range of variation. Supplementary Material: The authors did not attach the code, so it is difficult to assess the reproducibility of the results or the correctness of the comparisons in this regard. Relation To Broader Scientific Literature: In general, the proposed idea combines well-known methods: the SafeDreamer approach with a Lagrangian, and the idea of duplicating observations and states when training a world model. The positive aspect is that the authors have successfully formalized and trained all these elements in a single model. However, the experimental results do not seem very convincing. Essential References Not Discussed: Most of the necessary papers are correctly cited. Other Strengths And Weaknesses: The authors do not clearly explain how the naive and privileged world models are separated in terms of parameters. Based on the notation, it seems that they are trained with the same set of parameters, which is not well described.
I have no other significant comments at this time. Other Comments Or Suggestions: The authors were careless with the figure references: the ordering is clearly mixed up, and the reference on page 8 does not lead anywhere at all. Figure 4 shows strange x-axis labels, especially on the cost graph. Questions For Authors: Do the naive and privileged world models have the same set of parameters? How is the privileged part discarded during the deployment phase? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable suggestions provided by Reviewer OU54. In response, we have conducted additional experiments and clarifications, with the hope that these efforts will address your concerns. **Q1. Clarification Regarding Model Parameters and Deployment.** The naive and privileged world models are trained jointly using the same set of parameters. During the deployment phase, the parameters for both models are loaded; however, only the naive world model and the actor are employed to generate actions. **Q2. Relation Between Our Theoretical Claim and Proposed Method.** We appreciate your question. Our claim that a critic can develop more accurate value functions through direct access to the underlying states motivates our methodology in two key aspects: 1. **Enhanced Value Estimation.** We utilize an asymmetric architecture to facilitate more precise value estimations, thereby improving the policy. 2. **Mitigation of Risk Underestimation.** The analysis presented in Theorem 3.3 illustrates that reliance on partial observations results in the critic underestimating associated risks. This finding compels us to integrate privileged information into Safe Reinforcement Learning (SafeRL). **Q3. Guard Benchmark.** Thank you for your suggestion. In response, we conducted additional experiments within the Guard benchmark under a limited timeframe. The Guard benchmark is a Safe Reinforcement Learning (SafeRL) framework that includes more complex agents, such as Ant and Humanoid, as well as more challenging tasks. In our experiments, we utilized 64 x 64 pixel images as observations, along with low-dimensional sensors as privileged information. Our empirical results demonstrate that our method significantly outperforms other baseline approaches. Furthermore, we found that, in comparison to the Safety-Gymnasium benchmark, our method achieves highly competitive results within the Guard benchmark. 
We attribute this to the relative ease of the Safety-Gymnasium benchmark, which may obscure the advantages of our method.

| Model | Goal_Ant_8Hazards | | | Goal_Ant_8Ghosts | | | Goal_Humanoid_8Hazards | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Training time | Cost | Reward | Training time | Cost | Reward | Training time | Cost | Reward |
| SafeDreamer | 14.27 | 1.00 | 2.54 | 16.7 | 0.93 | 10.67 | 13.94 | 10.8 | 0.34 |
| Informed-Dreamer(Lag) | 17.03 | **0.03** | 1.32 | 18.28 | 2.54 | 10.09 | 14.03 | 10.72 | -1.60 |
| Scaffolder(Lag) | 24.82 | 0.04 | 1.86 | 27.14 | **0.83** | 6.15 | 23.8 | 13.14 | **2.13** |
| Distill | 20.12 | 10.52 | 1.32 | 23.21 | 1.02 | 0.50 | 21.04 | 24.07 | 1.41 |
| PIGDreamer(Ours) | 18.45 | 0.92 | **14.18** | 20.86 | 2.28 | **15.76** | 18.12 | **10.56** | 2.09 |

**Q4. The performance improvement of our method in the Safety-Gymnasium benchmark.** Thank you for your question. We acknowledge that our performance improvement in the Safety-Gymnasium benchmark may appear modest; however, there are several contributions of our work that we would like to highlight: 1. **Privileged Learning for Safety.** We investigated the application of privileged MBRL in safety-critical tasks, a topic that is rarely addressed within the SafeRL community. In this regard, we reimplemented the most advanced privileged MBRL methods, Scaffolder and Informed-Dreamer, to incorporate safety constraints. The benefits of integrating privileged information are convincingly evidenced by our experimental results. 2. **Efficiency.** As noted by other reviewers, our method achieves superior performance at the expense of a modest increase in training time, rendering it a practical solution for real-world applications. 3. **More competitive results.** In **Q3**, we discuss the empirical results from the Guard benchmark, which we hope will address your concerns regarding the performance of our method. **Q5. 
Other Comments.** We will update our manuscript based on your valuable suggestions. Additionally, we are in the process of developing a website for this paper, where we will include the code and videos. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing answers to my questions and comments. I also want to mention that the authors have conducted additional experiments in a new Guard setting, which enhances the experimental part of their work. These experiments show that PIGDreamer has low quality at a cost. These and previous results do not conclusively prove a significant advantage over other baselines for me, so I will leave the score unchanged for now. --- Reply to Comment 1.1.1: Comment: We thank Reviewer ou54 for responding to our rebuttal. But we think there is a misunderstanding concerning our experiments on the Guard benchmark. To allay your concerns, we have released an anonymous repository with code and videos. https://anonymous.4open.science/r/PIGDreamer-270B. --- **Guard Benchmark.**

> These experiments show that PIGDreamer has low quality at a cost.

In our experiments on the Guard benchmark, PIGDreamer incurs relatively higher costs in the Goal_Ant_8Hazards and Goal_Ant_8Ghosts tasks. However, in these tasks, PIGDreamer significantly outperforms the baselines in terms of rewards. In the Goal_Ant_8Hazards task, Informed-Dreamer (Lag) and Scaffolder (Lag) achieve the lowest costs of 0.03 and 0.04, respectively, but at the expense of extremely low rewards. In fact, the agents trained with Informed-Dreamer (Lag) and Scaffolder (Lag) develop ineffective policies, as they consistently remain stationary and fail to exhibit any movement, as evidenced by the videos attached to the anonymous repository. A similar phenomenon is observed in the Goal_Ant_8Ghosts task. Conversely, PIGDreamer learns an effective policy while maintaining costs below the threshold of 3, thereby demonstrating its significant advantages.
**Reproducibility.** To address the concern regarding reproducibility, we have attached our code and videos at the following link: https://anonymous.4open.science/r/PIGDreamer-270B. **The advantages of our method.** We wish to emphasize that our method demonstrates the advantages of efficiency and high performance: - **Efficiency.** Our method achieves optimal safety and task performance with a 23.14% increase in training time, whereas the alternative method, Scaffolder (Lag), attains the second-best performance with a 76.94% increase in training time.

| Model | SafetyPointGoal2 | SafetyCarGoal1 | SafetyRacecarGoal1 | SafetyPointPush1 | SafetyPointButton1 | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| Informed-Dreamer(Lag) | 6.42% | 1.02% | 5.98% | 9.40% | 8.01% | 6.16% |
| Scaffolder(Lag) | 71.56% | 77.62% | 68.34% | 93.28% | 73.90% | 76.94% |
| PIGDreamer(Ours) | 16.58% | 41.20% | 14.95% | 25.13% | 17.85% | 23.14% |

- **Performance.** PIGDreamer significantly surpasses the baselines on the Guard benchmark, effectively addressing tasks that alternative methods fail to conquer. Furthermore, in contrast to other model-based Safe Reinforcement Learning methods (SafeDreamer, LAMBDA, SAFE-SLAC), which exclusively experiment within the Safety-Gymnasium, we expand our experiments to encompass more complex agents, such as the ant and humanoid, demonstrating remarkable performance. --- We hope our response addresses your concerns and encourages you to reconsider our score.
Summary: Disclosure: I am an emergency reviewer, and also quite familiar with the paradigm of RL with privileged information. The authors propose to exploit privileged information for policy learning in the context of safe RL. They use a world model approach like Dreamer, where an actor-critic is trained solely in a learned MDP (the world model). They use privileged information to train a privileged world model. Then, they exploit the privileged information in a variety of ways, such as aligning the privileged WM and unprivileged WM representations, using asymmetric critics and reward estimators, and generating imagined data with the privileged WM. They evaluate their method on the Safety Gymnasium benchmark, which is state-based goal-reaching tasks, and the agent must avoid hazards. Claims And Evidence: The overall system is verified by showing improvements over baselines. The improvement in returns and safety violation, compared to baselines, is marginal to moderate in my opinion, especially compared to the most competitive baseline Scaffolder. However, this method does seem more simple and faster to run which is good. The design choice of certain components in the method are somewhat questionable, see below. The method uses an augmented lagrangian objective to balance between reward and safety cost, but these methods are known for instability. Can the authors write more about how stable this process is, and what they need to do to make it stable, etc.? Methods And Evaluation Criteria: Twisted imagination: This procedure intends to generate trajectories with privileged information, by unrolling both privileged and unprivileged world models from the same starting state, using the same policy, and recording the pairs of state $(s^+_t, s^-_t)$. However, because each world model is stochastic, it seems unlikely that the trajectories are actually paired. Imagine an environment with a fork at the start state, and path 1 gives 100 reward while path 2 gives -100. 
The first WM predicts path 1, and the second WM predicts path 2. So now your $(s^+_t, s^-_t)$ pairs should not really be paired together, since even though they are at the same timestep, they correspond to different locations in the environment. Representation alignment: Are you pulling both state encoders towards each other? Or is it just one directional? Some additional baselines could be used, see below. Theoretical Claims: I read the theoretical section but did not check it carefully. Experimental Designs Or Analyses: I am not an expert in Safe RL, but there might be some essential Safe RL baselines that are missing, like Constrained Policy Optimization, TRPO-Lagrangian, etc.? In the main paper. Luckily, I skimmed the appendix and there seems to be comparison in section D.3. I think this should be moved to the main results. Next, for privileged reinforcement learning, a simple baseline is to train a privileged teacher with RL, and then distill it into an unprivileged student. I think this simple privileged baseline is important to show, to justify the additional complexities of using a model-based privileged RL approach. All the environments seem to be done in low-dimensional state based environments. It is worth mentioning as a limitation, and image-based environments would be good future work. Supplementary Material: I skimmed the appendix. Relation To Broader Scientific Literature: It can be seen as a model-based approach to leveraging privileged information in safe RL. Previous work in privileged RL, to my knowledge, did not focus on safe RL. Although I would be sure that people have tried exploiting privileged information for safe RL, especially in robotics. Essential References Not Discussed: The authors do a fair job of surveying privileged reinforcement learning with world models. The algorithm seems quite similar to Scaffolder (ICLR24), which is mentioned and compared against. 
But could the authors give a more detailed comparison on the similarities and differences? It seems like PIGDreamer removes some parts and replaces some objectives. Another privileged MBRL method not mentioned is Wasserstein Believer (ICLR24). Other Strengths And Weaknesses: I think the application of privileged information towards solving Safe RL tasks is nicely motivated. Other Comments Or Suggestions: N/A. Questions For Authors: Could you address my questions about the design choices? Ablation experiments on the twisted imagination could be interesting, you could replace twisted imagination with the way Scaffolder does imagination? Running a simple privileged RL baseline like distillation would be helpful for convincing readers to use your method over something simpler. > Compared to previous works (Lambrechts et al., 2024; Hu et al., 2024) that directly reconstruct privileged information ...this method is significantly more robust when the privileged information is excessively informative for reconstruction I don't get what the authors are saying here, could you clarify? There are no qualitative descriptions or videos of the tasks I can see. It would be nice to highlight strengths and weaknesses of each method's policies, by showing their rollouts and comparing them. --- Overall: For now, I will lean on the positive side, assuming the authors take my feedback and address it. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to express our gratitude to Reviewer byT1 for your insightful comments. **Q1. The mismatch arising from Twisted Imagination (TI), and the ablation of TI against Nested Latent Imagination (NLI).** Thank you for your question. Indeed, the trajectories generated from TI exhibit variability due to their stochastic nature. To address this problem, the NLI proposed in Scaffolder forces the matching between two trajectories by encoding the embedding from $Path_2$ into $Path_1$. However, our ablation study comparing NLI and TI demonstrates that NLI yields no performance improvement. We attribute this phenomenon to the robustness of the TI predictors. Although the representations of the privileged world model deteriorate during the imagination process, the predictors continue to make accurate predictions from the states $s^{+}$ and $s^{-}$.

| Task | Method | Reward | Cost |
| --- | --- | --- | --- |
| SafetyPointGoal2 | NLI | 10.79 | **0.41** |
| | **TI (Ours)** | **13.59** | 0.73 |
| SafetyCarGoal1 | NLI | 14.79 | 0.64 |
| | **TI (Ours)** | **17.32** | **0.43** |
| SafetyRacecarGoal1 | NLI | **13.99** | 1.57 |
| | **TI (Ours)** | 11.38 | **0.83** |

**Q2. The implementation of Representation Alignment.** In our work, we pull both state encoders towards each other using the method proposed in DreamerV3. **Q3. Clarification regarding the privileged representation.** In our experiments, Informed-Dreamer exhibits instability in reward when the privileged information is too informative to be reconstructed from the history of partial observations. To address this issue, we enhance the state representation through **privileged representation alignment**. This approach enables us to establish correspondence between modalities (images and states), while ensuring that the training objective remains agnostic to the dimensionality of the privileged information. **Q4. Comparison with Scaffolder.** Thank you for your inquiry. 
We are pleased to compare our algorithm with Scaffolder (ICLR24) in the following aspects: 1. **Theoretical Foundation:** We propose our method within the ACPOMDP framework, while Scaffolder has made limited advancements in this area. 2. **Privileged Representations:** see **Q3**. 3. **Privileged Predictors:** Scaffolder comprises two groups of predictors: one group relies exclusively on the state of the naive world model, while the other depends on the state of the privileged world model. In our approach, we simplify these components by enabling the predictors to operate on the states of all world models. 4. **Privileged Imagination:** see **Q1**. 5. **Privileged Exploration:** We eliminate this component for the following reasons: 1. It prolongs the training time. 2. The privileged actor operates using information that is inaccessible to the naive actor, potentially creating a significant disparity between their behaviors. As a result, the trajectories collected by the privileged actor become difficult for the naive actor to learn from. 6. **Privileged Critics:** same as Scaffolder. 7. **Efficiency:** Our method demonstrates enhanced efficiency in utilizing privileged information. **Q5. The teacher-student distillation baseline.** Thank you for your valuable suggestion. We have added the teacher-student distillation baseline within the limited timeframe. Our experimental results demonstrate that our method consistently outperforms the distillation baseline by a significant margin. Furthermore, we observe that, in the absence of explicit safety constraint objectives, the student fails to meet safety constraints. 
| Task | Model | Training time | Cost | Reward |
| --- | --- | --- | --- | --- |
| SafetyPointGoal2 | Distill | 30.02 | 4.54 | 6.59 |
| | **Ours** | **29.71** | **1.31** | **11.61** |
| SafetyCarGoal1 | Distill | 29.21 | 3.08 | 11.74 |
| | **Ours** | **28.87** | **0.93** | **17.37** |
| SafetyRacecarGoal1 | Distill | **30.19** | 3.47 | 10.21 |
| | **Ours** | 31.03 | **1.17** | **10.99** |
| SafetyPointPush1 | Distill | **31.03** | 12.37 | 12.26 |
| | **Ours** | 34.87 | **1.15** | **17.10** |
| SafetyPointButton1 | Distill | 30.02 | 11.12 | **6.77** |
| | **Ours** | **29.54** | **2.01** | 5.97 |

**Q6. Experimental setup.** Thank you for your question. Our experiments actually use 64 × 64 pixel images for agent observations, while using the low-dimensional state as privileged information. We will clarify this experimental setup in the final version of our manuscript. **Q7. Wasserstein Believer.** We plan to incorporate a discussion of its relevance in the next version of our paper. **Q8. The stability of the augmented Lagrangian method.** We regret our inability to engage in further discussion on this matter due to space limitations. Given that the augmented Lagrangian method is a widely accepted practice within the Safe RL community, we have not investigated its stability.
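As context for the stability question in Q8: Lagrangian-based safe RL balances reward and cost via a multiplier updated by dual ascent — it grows while the episode cost exceeds the budget and shrinks otherwise. Below is a minimal sketch of the plain multiplier update (the learning rate, budget, and cost sequence are made up; the *augmented* variant used in this line of work additionally adds a quadratic penalty term):

```python
def lagrangian_update(lmbda, episode_cost, budget, lr=0.05):
    """Dual ascent on the multiplier: raise it when the constraint
    J_c(pi) <= budget is violated, lower it (down to 0) otherwise."""
    return max(0.0, lmbda + lr * (episode_cost - budget))

# The actor then optimizes reward - lmbda * cost, so a growing multiplier
# shifts the objective toward constraint satisfaction.
lmbda = 0.0
for cost in [5.0, 4.0, 3.5, 2.0, 1.0]:  # episode costs drifting toward the budget
    lmbda = lagrangian_update(lmbda, cost, budget=3.0)
assert lmbda > 0.0  # the multiplier grew while the constraint was violated
```

The instability the reviewer alludes to typically comes from this feedback loop: an oscillating cost drives an oscillating multiplier, which in turn perturbs the policy objective.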
Enhancing Logits Distillation with Plug&Play Kendall's $\tau$ Ranking Loss
Accept (poster)
Summary: This paper presents a method to enhance logits distillation by incorporating a ranking loss based on Kendall’s τ coefficient. The main finding is that the conventional knowledge distillation approach, which relies heavily on KL divergence, often neglects smaller logit channels that contain valuable information. The authors introduce a ranking loss that ensures the order consistency between the teacher and student logits. The main algorithmic idea is to combine the traditional KL divergence loss with the proposed ranking loss, allowing the student model to not only align with the teacher's probability distribution but also maintain the ordinal relationship among logits. ## update after rebuttal I have carefully read the rebuttal and the other comments. The rebuttal provides new experimental results and an explanation of the motivation, and clearly illustrates the scenario in which low-probability channels receive smaller gradients under the KL divergence. The rebuttal also explains the role of the ranking loss in the gradients of low-probability labels. Some of my concerns are resolved. Therefore, I have changed my score. However, some issues, such as the motivation for using a ranking loss and the novelty of the ranking loss, are still not clearly discussed. Claims And Evidence: 1. The paper merely states that using KL divergence for the distillation loss ignores the gradient information of low-probability labels, but this viewpoint lacks any theoretical support. In Equation 3, $q_i^t$ merely indicates that the gradient information for small labels will be small, which does not imply that they are ineffective during learning. Furthermore, low-probability labels themselves should not dominate the information in the distillation process. 2. 
The proof of consistency between the ranking loss and the KL divergence optimization objective has only been preliminarily explored from the perspective of the optimal solution space, without a thorough analysis of their relationship in the context of complex models and data. This insufficient analysis cannot fully explain why the ranking loss can effectively assist in knowledge distillation. The presentation merely elaborates on the advantages of the ranking loss, lacking more rigorous mathematical proof and theoretical support, and fails to provide an in-depth explanation of the internal mechanisms of the proposed method. Methods And Evaluation Criteria: The evaluation criteria employed in the paper include accuracy metrics (top-1 and top-5) on benchmark datasets like CIFAR-100, ImageNet, and MS-COCO. These datasets and evaluation are widely used in KD. Theoretical Claims: The paper does not elaborate on any theory. In Section 4.2, it discusses the advantages of the ranking loss from three different perspectives. The proofs for the first two perspectives do not contain typos, but the analysis for the third perspective is clearly incorrect. Knowledge distillation aims to enhance the generalization of the model through the KL divergence and ensures the correctness of classification through cross-entropy loss. Assigning a larger weight to the distillation loss is precisely to ensure that the logits of the student network are closer to those of the teacher network. This analysis does not demonstrate that the ranking loss can help the student network achieve better classification. This part of the analysis should instead focus on how the introduction of the ranking loss can help the student network better ensure that the optimal label dominates, reducing the risk of low-probability labels exceeding the optimal label during the learning process. Experimental Designs Or Analyses: There are three issues with the experimental design section of the paper: 1. 
The experiments conducted in the paper are insufficient, as many teacher-student combinations have not been covered, such as Res56/Res20, wrn40-2/wrn16-2, vgg13/mbv2, res32x4/shv1, etc. 2. In the experiments on ImageNet, the top-5 accuracy results are missing, and the experimental results for DKD+, CTKD+, and MLKD+ are also lacking. 3. The advantages compared to the logit-standard method are too small, with differences of only 0.1-0.2. Supplementary Material: I have read the supplementary material, including the appendix and the code (.zip file). Relation To Broader Scientific Literature: 1. The paper proposes an auxiliary ranking loss based on Kendall's τ coefficient to mitigate the neglect of low-probability channels by the Kullback-Leibler (KL) divergence loss. This ranking loss is designed to focus on the alignment of logits between the teacher and student models, taking into account both high- and low-probability channels. This contribution builds upon and extends the existing knowledge distillation framework by incorporating a new loss function that addresses a specific limitation of the KL divergence. 2. The paper demonstrates that the proposed ranking loss can be used as a plug-and-play auxiliary function in various distillation tasks, which is similar to the setting in Logit-standard KD. Essential References Not Discussed: These two works should be selected as comparison methods in the experimental setup. [1] A unified approach with normalized loss and customized soft labels [2] DOT: A Distillation-Oriented Trainer Other Strengths And Weaknesses: My major concern is the rationality of the motivation. 1. Although the KL divergence has the issue of neglecting low-probability channels, the paper does not clearly elaborate on the rationality of introducing a ranking loss in knowledge distillation. Figure 1 seems more like a hypothetical scenario and lacks real-case analysis. 2. 
Second, the differences in experimental results compared to other SOTA methods are too small to sufficiently demonstrate the effectiveness of the proposed method. 3. Third, there is no detailed analysis of the impact on model training time and computational resource consumption after adding the ranking loss. Other Comments Or Suggestions: 1. Conduct an in-depth analysis of the gradient characteristics of the ranking loss based on Kendall’s τ coefficient, for instance, by rigorously mathematically deriving and proving its stability with minimal influence from channel scale across different model structures and data distributions. 2. Utilize visualization (such as feature importance visualization, label information visualization, etc.) to conduct an in-depth analysis of the specific role that the inter-class relationship information provided by the ranking loss plays in knowledge distillation. Questions For Authors: 1. The paper mentions that the gradient of the ranking loss is less affected by channel scale and that its optimization objective is consistent with the KL divergence. Can we theoretically analyze how the ranking loss enhances the classification ability of the student network? 2. There are various types of ranking losses. What are the reasons for choosing Kendall’s τ? Can we add comparisons with other ranking losses in the experimental section? 3. In the appendix, could you provide some derivation details on how Eq. 28 is derived to Eq. 31? Code Of Conduct: Affirmed. Overall Recommendation: 3
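The gradient behavior debated in this review — low-probability channels receiving small gradients under the KL distillation loss — can be checked numerically: for $\mathrm{KL}(q^t \| q^s)$ the gradient with respect to a student logit $z_i$ is $q_i^s - q_i^t$, which is tiny when both probabilities are small. A minimal sketch with made-up logit values:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

teacher = [6.0, 3.0, 0.5, -2.0]   # hypothetical logits; the last channels are low-probability
student = [4.0, 3.5, -1.0, 0.5]   # note: student ranks channels 2 and 3 in the wrong order

qt, qs = softmax(teacher), softmax(student)
# d KL(qt || softmax(z)) / d z_i = qs_i - qt_i
grads = [s - t for s, t in zip(qs, qt)]

# The gradient on the dominant channel dwarfs those on the low-probability
# ones, even where the student's ranking disagrees with the teacher's.
assert abs(grads[0]) > abs(grads[2]) and abs(grads[0]) > abs(grads[3])
```

This illustrates the premise behind the paper's Figure 1 without settling the reviewer's objection that small gradients are not necessarily ineffective.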
Rebuttal 1: Rebuttal: We sincerely thank you for the in-depth feedback! We highly value each of your comments, and all your concerns are addressed point by point:

--- **Q1: Claims And Evidence** 1. **Small gradient**: We would like to clarify that while small gradient information does not necessarily imply ineffectiveness in the learning process, it is indeed prone to being overlooked. Our goal is for the model to learn more from the distribution of low-probability channels rather than ignoring them. Furthermore, we do not treat low-probability channels as dominant information; instead, we ensure that the model learns more consistently ranked logits under the constraint of KL divergence, thereby better capturing the knowledge hidden within low-probability channels. 2. **Optimal solution space**: We would like to clarify that the optimal solution space is discussed primarily to illustrate the compatibility between the ranking loss and KL, meaning that optimizing the ranking loss does not significantly interfere with KL. However, this is not the main reason why the ranking loss improves distillation performance.

--- **Q2: Theoretical Claims** We would like to clarify that correct ranking inherently encompasses the notion of correct classification. Notably, the teacher model generally achieves accurate classification on the training set; therefore, ensuring consistency in the ranking order between the student and the teacher, to some extent, also guarantees correct classification for the student. Consequently, the ranking loss serves to enhance the classification accuracy of the student model.

--- **Q3: The rationality of the motivation** 1. **The rationality of introducing the ranking loss**: The motivation for introducing the ranking loss is to help the student model better capture the inter-class information of small channels and to classify accurately, without affecting the optimization of KL divergence. These are the three main perspectives we discuss in the paper.
Figure 1 is not a hypothetical scenario; it has also been mentioned in works such as LSKD [1]. 2. **Limited improvement**: Our average improvement is 0.85, which we believe is effective. Considering the capacity of the student model and its gap with the teacher model, we suggest that our method effectively improves the performance of distillation. 3. **Time consumption**: **Table T4.1** shows the distillation optimization time with and without the ranking loss. It can be seen that the ranking loss only brings a small extra time consumption.

**Table T4.1**: Additional computation evaluation of the ranking loss. The full training time is reported as the evaluation metric.

| Dataset | CIFAR-100 |
|:---:|:---:|
| KD | 0.93 Hours |
| KD+Ours | 1.06 Hours |

--- **Q4: How the ranking loss enhances the classification ability of the student network** Intuitively, a tiger should be more like a cat than a fish. This relationship between channels can help the model better capture generalizable features. According to previous work [2], the essence of distillation is to help the student model learn features from more views to improve generalization. Therefore, the ranking loss can help the student model generalize more effectively. The ranking loss enhances the training classification accuracy of the student model and promotes a better understanding of the knowledge in small channels without affecting the optimization of KL divergence, thereby improving the generalization ability of the student model.

--- **Q5: Other types of ranking losses.** Goodman & Kruskal's Gamma can be understood as Kendall's τ with ties ignored, but these ties have no gradients when optimizing Kendall's τ, so the two are essentially the same when used for optimization. As for Spearman's rank correlation coefficient, its form is difficult to convert into a differentiable one. Considering the above reasons, using Kendall's τ coefficient as a ranking loss is a suitable choice.
--- **Q6: Derivation details** Based on equation 23, we can simply get equation 29. Considering that the sum of the probability output by the model is 1, we can get equation 30. Expanding equation 28 and substituting it into equations 29 and 30, we can get equation 31. We will further clarify the steps here in the revision. [1] Logit Standardization in Knowledge Distillation [2] Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning --- Rebuttal Comment 1.1: Comment: Thanks for your effort to provide the rebuttal. However, I still have a concern about the motivation since the KL divergence does not actually ignore the gradients of low-probability labels; rather, it distills knowledge based on the overall similarity between the two distributions. It seems unreasonable to separate and analyze the KL divergence in this way. Second, the response does not explain why ranking can address the gradients of low-probability labels or why it is less affected by the channel scale. Third, in the third and fourth questions, the improvement in classification still mainly relies on the cross-entropy loss function, while the purpose of the distillation loss is to generalize knowledge. I still have differing opinions regarding the motivation set by the authors. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the in-depth feedback! --- **Q1. About Motivation:** - As shown in **Figure 3 of the main paper and Section A.6 of the Appendix**, we provide both visualization and theoretical derivation demonstrating that low-probability channels receive smaller gradients under the KL divergence. Although KL divergence can capture very general knowledge through distribution learning, making it the most important loss in distillation tasks, the proposed ranking loss demonstrates stronger capabilities in capturing low-probability channels and inter-class relationships. This serves as a complement to KL divergence. 
**Figures 3 and 4 in the paper** illustrate that the ranking loss performs better in these aspects and effectively helps further reduce the distillation loss and improve performance. --- **Q2. Why ranking is less affected by the channel scale:** - In **Section A.6 of the Appendix**, we provide the derivation of the gradients for the ranking loss, which is given by $ \frac{\partial L_{RK}}{\partial z^s_i} = -\frac{k}{C(C - 1)} \sum_{j \ne i} \left[ 1 - \tanh^2\left( k (z^s_i - z^s_j) \right) \right] \tanh\left( k (z^t_i - z^t_j) \right) $ . The tanh function and an appropriate steepness parameter control the magnitude of the channel difference term, making it less sensitive to channel scale. In contrast, the gradient provided by KL divergence, given by $\frac{\partial L_{\text{KD}}}{\partial z^{s}_i} = -T \left( q^{t}_i - q^{s}_i \right)$ , includes a difference term that makes it more sensitive to channel scale. This is also illustrated in **Figure 3 of the paper**, which visualizes the gradients obtained for channels of different sizes. --- **Q3. Questions about classification:** - In fact, both KL divergence and ranking loss significantly contribute to the improvement of classification. The cross-entropy loss only constrains the target class, while the soft labels provided by the teacher contain rich inter-class information, which not only aids generalization but also facilitates global optimization for classification. KL divergence and ranking loss enhance classification performance by capturing such information. - Our further experiments also demonstrate that students trained with ranking loss achieve higher accuracy and better generalization ability, supporting our hypothesis that ranking can capture information relevant to classification. - Additionally, while cross-entropy loss aims for a predicted probability of 1 for the target class, KL divergence aims for the predicted probabilities of all classes to align with the soft labels provided by the teacher. 
There is an inherent mismatch in their solution spaces, which often necessitates assigning a smaller weight to cross-entropy during distillation. In contrast, ranking loss operates in a space aligned with KL divergence and explicitly preserves ranking consistency, thereby maintaining distillation efficiency while enhancing the consistency of decision boundaries, ultimately improving classification.
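As a concrete companion to the gradient discussion in this thread, here is a small numeric sketch (illustrative logits and hyperparameters, not the authors' code) of the two formulas quoted above: the teacher and student agree on the dominant class but swap the order of two low-probability classes, and the KL gradient on those tail channels comes out orders of magnitude smaller than the tanh-based ranking gradient.

```python
import math

def softmax(z, T=1.0):
    m = max(z)
    e = [math.exp((v - m) / T) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl_grad(z_t, z_s, T=1.0):
    # dL_KD/dz_s_i = -T * (q_t_i - q_s_i): shrinks with the channel probabilities.
    q_t, q_s = softmax(z_t, T), softmax(z_s, T)
    return [-T * (qt - qs) for qt, qs in zip(q_t, q_s)]

def rank_grad(z_t, z_s, k=1.0):
    # Gradient of the tanh-based Kendall surrogate quoted in the reply:
    # -(k / (C(C-1))) * sum_j [1 - tanh^2(k(z_s_i - z_s_j))] * tanh(k(z_t_i - z_t_j)).
    C = len(z_s)
    grads = []
    for i in range(C):
        acc = 0.0
        for j in range(C):
            if j != i:
                acc += (1 - math.tanh(k * (z_s[i] - z_s[j])) ** 2) \
                       * math.tanh(k * (z_t[i] - z_t[j]))
        grads.append(-k / (C * (C - 1)) * acc)
    return grads

# Teacher ranks the two tail classes as (-2, -3); the student swaps them.
z_t = [8.0, -2.0, -3.0]
z_s = [8.0, -3.0, -2.0]
print(kl_grad(z_t, z_s))    # tail-channel gradients are ~1e-5
print(rank_grad(z_t, z_s))  # tail-channel gradients are ~5e-2
```

The exact magnitudes depend on the temperature and steepness settings; the point is only that the KL gradient scales with the probability gap (tiny for tail channels), while the ranking gradient scales with the bounded logit-order disagreement.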
Summary: This paper introduces a plug-and-play ranking loss based on Kendall’s τ to mitigate two drawbacks of KL-based logit distillation: the neglect of low-probability channels and getting stuck in suboptimal points. By aligning channel-wise rankings between teacher and student, it leverages inter-class relationships and balances attention across all channels. The loss is theoretically shown to be compatible with KL divergence—sharing the same optimal solution—yet more robust to scale. Experiments on CIFAR-100, ImageNet, and MS-COCO, using both CNNs and ViTs, confirm that adding this ranking loss consistently enhances performance over baselines. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes. The authors compared their method with current state-of-the-art approaches on common visual distillation tasks using CIFAR-100, COCO, and ImageNet. I find the experiments to be quite comprehensive overall. A potential shortcoming maybe is that they did not evaluate the method’s effectiveness in the increasingly popular large-model distillation settings. Supplementary Material: Yes, including the experimental implementation details, such as the “Implementation Details for Transformer.” Relation To Broader Scientific Literature: This study analyzes the deficiencies in a series of previous logit-based methods based on KL divergence and proposes a novel auxiliary ranking loss based on Kendall’s τ Coefficient to mitigate the aforementioned issues. Essential References Not Discussed: To the best of my knowledge, I think the authors have considered the relevant works related to their study. Other Strengths And Weaknesses: Strengths: 1. This paper is well-written and easy to follow. 2. The experiments are extensive and conducted on CIFAR-100, ImageNet, and COCO datasets. Additionally, more visual analyses are provided to explain the effectiveness of the proposed method. Weaknesses: 1. 
The font in Figure 1 appears to be very small. 2. The proposed motivation does not seem very reasonable. I have a different opinion from the authors regarding why KL divergence would overlook the matching of low-probability channels. Intuitively, KL divergence seems to assign different learning weights to different class channels, with the target class having a higher weight, making it more important. This appears to be a reasonable arrangement. Although the authors provide an example in Figure 1, I do not believe this phenomenon occurs frequently. 3. Following the previous question, if time permits, could the authors compare the performance of using only the proposed rank loss versus using only KL divergence? This would provide a more intuitive evaluation of the effectiveness of the proposed method. Other Comments Or Suggestions: See above Questions For Authors: I don’t quite understand lines 183-185, which state: “The gradient of a logit channel primarily depends on the difference between its rank and the target rank, effectively harnessing the knowledge from smaller channels.” From Eq.9 and Eq.10, I don’t seem to arrive at this conclusion. Could the author provide further clarification? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for the in-depth feedback! We highly value each of your comments, and all your concerns are addressed point by point:

--- **Q1. Font Size in Figure 1:** - Thank you very much for your feedback regarding the formatting of our paper. We will use appropriate font sizes in the future version of the manuscript. We appreciate your attention to detail.

--- **Q2. The Motivation Seems Not Reasonable: Intuitively, KL divergence seems to assign different learning weights to different class channels, with the target class having a higher weight, making it more important.** - In the distillation task, the cross-entropy loss computed with the one-hot hard labels of the target class has already emphasized its significance. The soft labels output by the teacher model differ from the one-hot hard labels, as they contain richer inter-class relationship knowledge. While the Kullback-Leibler (KL) divergence captures this information by constraining the distribution, it tends to overlook the low-probability channels. In contrast, the proposed ranking loss exhibits a stronger capability in capturing relational information among low-probability channels, thereby providing complementary support to the KL divergence.

--- **Q3. The Motivation Seems Not Reasonable: The phenomenon in Figure 1 does not occur frequently.** - Figure 1 presents a toy case designed to illustrate the potential suboptimal optimization behavior of the Kullback-Leibler (KL) divergence. A similar suboptimal scenario is also demonstrated in **Figure 2 of the LSKD [1] paper**. To further validate whether the proposed ranking loss can address the potential suboptimality in the KL divergence, we visualize the loss landscape in **Figure 6 of our manuscript**. We observe that when only the KL divergence is used, suboptimal points exist near the global optimum, while the introduction of the ranking loss effectively mitigates this issue.
[1] Logit Standardization in Knowledge Distillation. CVPR 2024 Highlight

--- **Q4. Performance Comparison of using only Ranking Loss versus using only KL Divergence:** - We are currently conducting comparative experiments using only the ranking loss and only the KL divergence. We will promptly update our response once the results are obtained. - In fact, the proposed ranking loss serves as an auxiliary function to the KL divergence, designed to guide the KL divergence in avoiding suboptimal solutions and capturing inter-class relationship information. The ranking loss focuses solely on aligning the order of channels and imposes minimal constraints on channel values and distributions. Therefore, using only the ranking loss typically does not yield satisfactory distillation performance.

--- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and the rigorous experimental efforts. I will decide whether to revise my score after the authors address the additional experiments mentioned in Q4. Additionally, the core idea of this work appears highly similar to LSKD. However, as shown in Table 1, the proposed method's performance is notably inferior to that of LSKD. Alternatively, given the tight timeline, I wonder if integrating the authors' approach with LSKD could yield further performance improvements.

--- Reply to Comment 1.1.1: Comment: We sincerely thank you for the in-depth feedback!

--- **Q1. Additional Experiments:** - We separately test distillation using only the KL divergence and only the proposed ranking loss, with the results presented in **Table T3.1**. We observe that using only the ranking loss can achieve comparable results to using the KL divergence.
*Table T3.1: Ablation on KL.*

| Teacher -> Student | WRN-40-2 -> WRN-40-1 | VGG13 -> VGG8 | ResNet32×4 -> SHN-V1 | ResNet32×4 -> WRN-40-2 | WRN-40-2 -> SHN-V1 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Only KL | 73.54 | 72.98 | 74.07 | 77.70 | 74.83 |
| Only Ranking Loss | 74.00 | 73.72 | 75.60 | 78.04 | 75.81 |
| KL + Ranking Loss | 74.49 | 74.14 | 75.98 | 78.50 | 76.13 |

--- **Q2. Combination with LSKD:** - The LSKD results reported in **Table 1 of the paper** are for MLKD+LSKD, while MLKD+Ours outperformed LSKD in 7 out of 9 experiments, with an average performance improvement of 0.36. - Additionally, LSKD is a method that optimizes the temperature of the KL divergence to enhance its generalization ability in aligning distributions, whereas the ranking loss focuses more on aligning channel order to provide additional inter-class information. Their core ideas are fundamentally different. Moreover, unlike LSKD, which modifies the KL divergence, the ranking loss is a plug-and-play auxiliary loss. - We incorporate LSKD into our proposed method and observed a performance improvement, and the ablation results are presented in **Table T3.2**.

*Table T3.2: Ablation on LSKD.*

| Teacher -> Student | WRN-40-2 -> WRN-40-1 | ResNet32×4 -> SHN-V1 | ResNet32×4 -> WRN-40-2 | WRN-40-2 -> SHN-V1 |
| :---: | :---: | :---: | :---: | :---: |
| KD+LSKD | 74.37 | 75.12 | 77.92 | 75.53 |
| KD+Ours | 74.49 | 75.98 | 78.50 | 76.13 |
| KD+LSKD+Ours | 74.95 | 76.12 | 78.74 | 76.15 |
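The ablation rows above (Only KL / Only Ranking Loss / KL + Ranking Loss) can be mirrored in a few lines of code. This is a hedged sketch, not the released implementation: `rank_loss` is one plausible differentiable Kendall surrogate consistent with the rebuttal's description (soft pairwise sign agreement via tanh), and the logits, temperature, and weight `beta` are illustrative.

```python
import math

def softmax(z, T=1.0):
    m = max(z)
    e = [math.exp((v - m) / T) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl_loss(z_t, z_s, T=4.0):
    # Standard KD loss: T^2 * KL(q_t || q_s) on temperature-softened logits.
    q_t, q_s = softmax(z_t, T), softmax(z_s, T)
    return T * T * sum(qt * math.log(qt / qs) for qt, qs in zip(q_t, q_s))

def rank_loss(z_t, z_s, k=1.0):
    # Differentiable Kendall-tau surrogate: average soft sign agreement over
    # all ordered channel pairs; near 0 when rankings agree strongly,
    # approaching 2 when they are fully reversed.
    C = len(z_s)
    tau = sum(math.tanh(k * (z_s[i] - z_s[j])) * math.tanh(k * (z_t[i] - z_t[j]))
              for i in range(C) for j in range(C) if i != j)
    return 1.0 - tau / (C * (C - 1))

# Plug-and-play combination ("KL + Ranking Loss" row); beta is illustrative.
z_t, z_s, beta = [5.0, 2.0, 0.5], [3.0, 2.5, 1.0], 1.0
total = kl_loss(z_t, z_s) + beta * rank_loss(z_t, z_s)
```

A hard-label cross-entropy term would be added on top in practice; it is omitted here because the ablation isolates the two distillation terms.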
Summary: This paper points out two drawbacks of KL divergence in knowledge distillation: (1) it is often prone to suboptimal points; (2) it overlooks low-probability channels. The authors use Kendall's $\tau$ coefficient to mitigate these issues and better model inter-class relationships. Experiments on CIFAR-100, ImageNet, and COCO across various architectures demonstrate improved performance compared to baselines.

## Update after rebuttal

I have carefully read the authors' rebuttal as well as reviews provided by other reviewers.

- Some of my concerns have been addressed. The authors reasonably explain that KL tends to ignore low-probability channels due to their small gradients, but this point should be further elaborated in the main paper. Moreover, the authors have provided additional experimental results.
- **Nonetheless, there remains a critical concern that has not been addressed.** The motivation for introducing the ranking loss—enabling the student model to better capture inter-class relationships (IRs)—is highly related to existing methods (e.g., [WKD]). However, **the authors failed to discuss connections with and differences from WKD.** Unlike using logit ranking as qualitative information in this paper, WKD quantitatively models IRs based on feature-level statistics and employs the Wasserstein Distance (WD) to measure the distance between discrete probability distributions across all classes. *From a theoretical standpoint, WKD may offer a more principled approach than the proposed ranking loss.* Therefore, the authors are required to explicitly discuss methodological distinctions and offer theoretical insights in the main paper. Furthermore, it is important to empirically evaluate the individual performance of the ranking loss and WKD and their potential complementarity. Such discussions would help further substantiate the soundness and effectiveness of the proposed approach.
[WKD] Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation. NeurIPS, 2024.

- **Due to the unresolved concern, my actual evaluation lies between "weak reject" and "weak accept". However, as there is no such score under the current review criteria, I select "weak accept".**

Claims And Evidence: 1. According to Eq.1, the authors claim that KL overlooks low-probability channels as the teacher's probability serves as a weighting factor. However, this claim is questionable. The scale of KL is influenced by both the teacher's probability and the logarithmic term. Could the authors clarify the specific role of the logarithmic term in KL? 2. In Lines 77-80 (R1), the authors claim that low-probability channels will receive smaller gradients during optimization. However, according to Eq.5, the gradient appears to be influenced by the difference between the teacher and the student, and independent of the channel values. The analysis of Eq.5 seems to contradict the claim in R1.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N/A

Experimental Designs Or Analyses: 1. The current experimental design on ImageNet and COCO could be further refined. The current baseline, KD, has a relatively weak performance in the context of knowledge distillation. To provide a more comprehensive evaluation of the proposed method, the authors should conduct experiments on stronger KL-based baselines and analyze performance changes, such as [NKD], [CTKD], and [WTTM]. [NKD] From Knowledge Distillation to Self-Knowledge Distillation. ICCV, 2023. [CTKD] Curriculum Temperature for Knowledge Distillation. AAAI, 2023. [WTTM] Knowledge Distillation based on Transformed Teacher Matching. ICLR, 2024. 2. The paper lacks a comparison with relevant SOTA methods. For instance, [WKD] also leverages inter-class relationships for distillation and employs Wasserstein Distance (WD) instead of KL as a measurement.
A comparison with such methods would help better highlight the advantages of the proposed approach. [WKD] Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation. NeurIPS, 2024. Supplementary Material: Yes. I have reviewed the whole supplementary material. Relation To Broader Scientific Literature: It seems there is no broader scientific literature for this paper. Essential References Not Discussed: The core contribution of this paper lies in utilizing rank loss to measure the inter-class relationships. In contrast, WKD employs WD to capture inter-class relationships (refer to “Experimental Designs or Analyses”). Rank loss focuses solely on class ranking, while WKD provides a quantitative measure of inter-class relationships. The authors should further discuss distinctions and connections with related works. Other Strengths And Weaknesses: 1. The idea of introducing rank loss in knowledge distillation to overcome the limitations of KL and provide inter-class relationships is interesting. 2. The visualizations are well-executed, which is helpful to enhance the overall understanding of the paper. Other Comments Or Suggestions: There are some typos, including the incorrect citation of Fig. 5 in Line 378 (right column). The authors should carefully review and correct these issues. Questions For Authors: 1. Why does KL overlook low-probability channels? Why low-probability channels will have smaller gradients? The current theoretical analysis presented by the authors is unconvincing. A more rigorous theoretical or experimental analysis should be provided. 2. The discussion on ranking loss is rather limited. Besides Kendall’s τ, other ordinal data measures, such as Spearman’s rank correlation coefficient and Goodman & Kruskal’s Gamma, could also be considered. It is important to analyze and compare the performance of these metrics. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for the in-depth feedback! We highly value each of your comments, and all your concerns are addressed point by point:

--- **Q1. The scale of KL is influenced by both the teacher's probability and the logarithmic term. Could the authors clarify the specific role of the logarithmic term in KL?** - The logarithmic term is derived from maximum likelihood, but the coefficient of the teacher is independent of the log term. As shown in the formula $$\mathrm{KL}(P \,\|\, Q)=\sum p(x) \log \frac{p(x)}{q(x)}=\underbrace{-\sum p(x) \log q(x)}_{1}+\underbrace{\sum p(x) \log p(x)}_{2}$$ the first part is a constant, while the second part is the optimizable component. The teacher's probability, acting as the coefficient of the log term, leads to the optimization process ignoring channels where the teacher's output probability is small. The neglect of low-probability channels by the KL divergence can also be explained at the gradient level. We have provided a detailed explanation in Figure 3 of the paper and Section A.6 of the appendix to validate our observation that small channels often receive smaller gradients.

--- **Q2. The analysis of Eq.5 seems to contradict the claim in R1:** - In fact, lines 77-80 (R1) explain that when using KL divergence, low-probability channels receive smaller gradients during the optimization process. Equation 5 represents our proposed ranking loss, which we designed to be independent of channel values to ensure that low-probability channels are not overlooked. Therefore, the fact that the gradient of Equation 5 is independent of channel values does not conflict with R1; rather, this is precisely the intended goal of our design.

--- **Q3. The current experimental design on ImageNet and COCO could be further refined:** - We have individually integrated the proposed ranking loss into NKD [1], CTKD [2], and WTTM [3], and conducted comparative experiments on ImageNet.
Notably, CTKD+Ours achieved results surpassing the baseline despite being trained for 23 fewer epochs than CTKD. The experiments for WTTM are still ongoing, and we will promptly update our response once the results are obtained. The results are presented in **Table T2.1**.

*Table T2.1: Experiments on ImageNet.*

| ResNet34 -> ResNet18 | Acc@1 | Acc@5 |
| :---: | :---: | :---: |
| NKD [1] | 71.96 | - |
| NKD [1]+Ours | 72.06 | 90.57 |
| CTKD [2] (120 epochs) | 71.32 | 90.27 |
| CTKD [2]+Ours (97 epochs) | 71.43 | 90.29 |

- For object detection on COCO, WTTM does not provide relevant experiments, and the NKD paper only includes experiments related to the self-supervised method USKD, which differs from our experimental setup. Therefore, we compared our method with CTKD, and the results are presented in **Table T2.2**.

*Table T2.2: Experiments on COCO.*

| ResNet50 -> MN-V2 | AP | AP50 | AP75 |
| :---: | :---: | :---: | :---: |
| CTKD [2] | 31.39 | 52.34 | 31.35 |
| Ours | 31.99 | 53.80 | 33.37 |

--- **Q4. The paper lacks a comparison with relevant SOTA methods:** - In fact, we have already compared our method with LSKD [5], a highlight method from CVPR 2024, in the paper. We further add a performance comparison with WKD [4] in **Table T2.3**.

*Table T2.3: Comparison with WKD.*

| Teacher -> Student | WRN-40-2 -> WRN-40-1 | ResNet32×4 -> ResNet8×4 | VGG13 -> VGG8 | WRN-40-2 -> SHN-V1 | ResNet50 -> MN-V2 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| WKD-L [4] | 74.84 | 76.53 | 75.09 | 76.72 | 71.10 |
| Ours | 76.08 (+1.24) | 77.25 (+0.72) | 75.35 (+0.26) | 77.87 (+1.15) | 71.66 (+0.56) |

--- **Q5. Typos:** - We are sorry that we could not locate any citations in Figure 5 or line 378, nor did we find any incorrect citations in Table 5. If there are any errors, we would greatly appreciate it if you could point them out to us.

--- **Q6. Why does KL overlook low-probability channels?
Why low-probability channels will have smaller gradients?** - In Section A.6 of the appendix, we provide a detailed derivation of the gradients provided by the KL divergence. The gradient obtained by a specific channel is related to the difference between the student and teacher values in that channel. Since the values of the student and teacher in low-probability channels are on a smaller scale, the gradients obtained are also smaller. This is the underlying reason why KL divergence tends to overlook small channels. In Figure 3 of the paper, we illustrate the gradients provided by KL divergence for channels of different sizes, which also demonstrates that low-probability channels generally receive smaller gradients. --- **Q7. Discussions on other ordinal data measures:** - We are currently experimenting with replacing the Kendall correlation coefficient with Spearman’s rank correlation coefficient and Goodman & Kruskal’s Gamma for ablation studies. The experiments are still ongoing, and we will promptly update our response with the results as soon as they are available. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal, which I have carefully reviewed. Some of my concerns have been addressed. However, a few issues remain: First, the response in Q1 remains unclear. Contrary to the authors' explanation, I believe the first part of the equation is the optimizable component, while the second part is a constant. However, this does not justify why the optimization process ignores channels where the teacher's output probability is small. Since the scale of the first term is influenced by both the teacher and student distributions, the current explanation appears unconvincing. Second, on the ImageNet dataset, the proposed ranking loss yields only a marginal improvement of 0.1 when applied to strong KL-based distillation methods (e.g., NKD and CTKD). Such a limited gain raises concerns about the method’s practical impact and significance. 
--- Reply to Comment 1.1.1: Comment: We sincerely thank you for the in-depth feedback.

--- **Q1. Why the optimization process ignores channels where the teacher's output probability is small:** - We sincerely apologize for having mistakenly reversed the first and second terms of Q1 in our previous response. In fact, as mentioned in line 51 of our paper, the analysis of the weight term in the KL divergence formula is intuitive, and we understand your confusion. Therefore, **we would like to emphasize once again that we have provided the detailed derivation of the gradients provided by the KL divergence in Appendix A.6 of the paper. Additionally, Figure 3 in the paper visualizes the gradients obtained by channels at different scales, theoretically and experimentally explaining why the KL divergence tends to overlook low-probability channels.** Specifically, the gradient provided by the KL divergence is $\frac{\partial L_{\text{KD}}}{\partial z^{s}_i} = -T \left( q^{t}_i - q^{s}_i \right)$, which is influenced by the difference in channel values between the teacher and student outputs, making it sensitive to the scale of channel values. For low-probability channels (especially in the later stages of optimization), since both the teacher and student outputs have small channel values, the gradient becomes smaller due to the reduced difference in channel values, causing them to be neglected by the KL divergence.

--- **Q2. The ImageNet Results:** - The previously reported results for CTKD+Ours were based on incomplete training. We have now provided the fully trained results for CTKD+Ours, as well as the previously missing results for WTTM+Ours, which are presented in **Table T2.4**. The average performance improvement across the three baselines is 0.26.

*Table T2.4: Experiments on ImageNet.*

| | ResNet34 -> ResNet18 |
| :---: | :---: |
| CTKD (120 epochs) | 71.32 |
| CTKD+Ours (120 epochs) | 71.68 (+0.36) |
| WTTM | 72.19 |
| WTTM+Ours | 72.51 (+0.32) |

--- **Q3.
Discussions on other ordinal data measures:** - The missing experiment for Q7 in our previous response has been supplemented in **Table T2.5**. It is worth noting that we found Goodman & Kruskal's Gamma to be equivalent to our Kendall correlation coefficient. The Kendall correlation coefficient is calculated as (C - D) / N_pairs, while Goodman & Kruskal's Gamma is calculated as (C - D) / (C + D), where C represents the number of concordant pairs and D represents the number of discordant pairs. In our Kendall coefficient, N_pairs is the sum of concordant and discordant pairs, which is equivalent to (C + D).

*Table T2.5: Comparison with other coefficients.*

| Teacher -> Student | WRN-40-2 -> WRN-40-1 | VGG13 -> VGG8 | ResNet32×4 -> SHN-V1 | ResNet32×4 -> WRN-40-2 | WRN-40-2 -> SHN-V1 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Only KL | 73.54 | 72.98 | 74.07 | 77.70 | 74.83 |
| +Spearman's Rank | 73.79 | 73.79 | 75.51 | 78.05 | 75.14 |
| +Our Ranking Loss | 74.49 | 74.14 | 75.98 | 78.50 | 76.13 |
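The claimed equivalence of Kendall's τ and Goodman & Kruskal's Gamma in the absence of ties is easy to verify numerically. A minimal check (the score values below are made up for illustration):

```python
from itertools import combinations

def concordant_discordant(a, b):
    """Count concordant (C) and discordant (D) pairs between two score lists."""
    C = D = 0
    for i, j in combinations(range(len(a)), 2):
        sign = (a[i] - a[j]) * (b[i] - b[j])
        if sign > 0:
            C += 1
        elif sign < 0:
            D += 1
    return C, D

def kendall_tau(a, b):
    C, D = concordant_discordant(a, b)
    n_pairs = len(a) * (len(a) - 1) / 2   # divide by ALL pairs
    return (C - D) / n_pairs

def gk_gamma(a, b):
    C, D = concordant_discordant(a, b)
    return (C - D) / (C + D)              # divide by non-tied pairs only

# With no ties, every pair is either concordant or discordant, so C + D
# equals the total number of pairs and the two coefficients coincide.
t = [3.1, 0.2, 1.7, 0.9]    # illustrative "teacher" scores (no ties)
s = [2.8, 0.1, 1.9, 0.05]   # illustrative "student" scores (one pair swapped)
assert kendall_tau(t, s) == gk_gamma(t, s)
```

With ties present the denominators diverge (τ keeps tied pairs in N_pairs, Gamma drops them), which is exactly the tie-handling distinction noted above.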
Summary: This paper proposes an auxiliary ranking loss based on Kendall's tau coefficient to improve knowledge distillation by addressing the limitations of KL divergence. Traditional KL-based distillation struggles with optimization challenges and tends to overlook low-probability channels, leading to suboptimal performance. The proposed ranking loss provides inter-class relationship information and balances attention across all channels while maintaining consistency with KL divergence. It is a plug-and-play module compatible with various distillation methods. Extensive experiments on CIFAR-100, ImageNet, and COCO datasets demonstrate its effectiveness across different CNN and ViT teacher-student architectures.

Claims And Evidence: Yes, the claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense.

Theoretical Claims: Yes, the theoretical claims are proved properly.

Experimental Designs Or Analyses: The experimental designs and analyses are valid.

Supplementary Material: The supplementary material shows more detailed derivations and additional experimental results.

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: A contribution of the proposed method is to improve the object detection performance via distillation, and it is suggested to discuss several more techniques of detection improvement such as [1-2]. [1] Dynamic head: Unifying object detection heads with attentions, CVPR 2021. [2] Rethinking image restoration for object detection, NeurIPS 2022.

Other Strengths And Weaknesses: Strengths: - The idea of emphasizing the ranking information of logits through a ranking loss is both innovative and impactful. By explicitly modeling the relative order of predictions, the proposed approach enhances the training process, leading to more refined and meaningful feature representations.
- The ablation studies presented in Tables 6 and 7 are thorough, covering various aspects of the proposed method. These studies systematically analyze the contribution of each component, reinforcing the effectiveness and necessity of the design choices. - The method demonstrates a substantial performance improvement over existing baselines across multiple benchmarks. The quantitative results highlight its superiority in terms of both accuracy and robustness, confirming its effectiveness in handling divergent cases. - The role of the ranking loss is comprehensively discussed, with detailed insights into how it influences the learning dynamics. The paper provides both empirical evidence and theoretical analysis showing how ranking-based supervision enhances model generalization and optimization. - The proposed method also proves to be beneficial for distilling vision transformers, effectively transferring knowledge while preserving critical ranking information. This indicates its potential for improving the efficiency and performance of large-scale transformer-based architectures. Weaknesses: - One closely related work [1] incorporates intrinsic ranking information of logits using Pearson Correlation. Since both methods aim to enhance knowledge distillation by capturing ranking relationships, it would be valuable to compare the proposed ranking loss with this approach as an auxiliary loss with more discussions. This would provide deeper insights into their relative effectiveness and compatibility. - A discussion on the numerical range of the ranking loss would be helpful, as it would provide better interpretability and understanding of its impact during training. Clarifying whether its scale remains stable across different datasets and architectures could further enhance reproducibility. - The influence of the steepness parameter in Eq. 6 is not explicitly analyzed. 
Since this parameter likely affects the sensitivity of the ranking loss, a more detailed discussion would be beneficial to determine its optimal setting and impact on training stability and performance. [1] Knowledge distillation from a stronger teacher, NeurIPS 2022. Other Comments Or Suggestions: It is recommended to revise the functions and equations to ensure proper formatting, condensing them into a single line where possible or adjusting line breaks appropriately, particularly for Equations (6) and (10). Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank you for the in-depth feedback! We highly value each of your comments, and all your concerns are addressed point by point:

---

**Q1. Discuss more techniques of detection improvement:**
- We sincerely appreciate your suggestion. Exploring the application of the proposed method in downstream tasks is indeed one of our future directions. However, our method is primarily designed to enhance knowledge distillation. We have compared our approach with other knowledge-distillation-based object detection methods, and the results are presented in Table 5 of the paper and in Table T2.2 of our response to Reviewer vqaG. In future versions, we will include discussions and potential integrations with pure object detection methods.

---

**Q2. Comparison with Pearson correlation:**
- Thank you for your interest in our work. We also note the application of the Pearson correlation coefficient in distillation; however, unlike our proposed method, the Pearson correlation coefficient measures the linear relationship between two variables. Our method focuses on the ranking relationship of logits, which is not strictly a linear relationship. Therefore, the Kendall correlation coefficient may be more suitable for capturing this non-linear ranking information.
- We have already compared the performance of DIST and our proposed method on the distillation tasks in **Tables 1 and 2 of the paper**. To further investigate the combined effect of using both linear and non-linear auxiliary functions, we incorporated a ranking loss into DIST, which improved its performance. The results are shown in **Table T1.1**.
*Table T1.1: Experiments on combining DIST.*

| Teacher -> Student | ResNet32×4 -> WRN-16-2 | ResNet32×4 -> WRN-40-2 | WRN-40-2 -> SHN-V1 | VGG13 -> VGG8 | WRN-40-2 -> WRN-40-1 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DIST[1] | 75.58 | 78.02 | 76.00 | 73.80 | 74.73 |
| DIST[1]+Ours | 75.85(+0.27) | 78.54(+0.52) | 76.23(+0.23) | 74.06(+0.26) | 74.86(+0.13) |

[1] Knowledge distillation from a stronger teacher, NeurIPS 2022

---

**Q3. Range of the ranking loss:**
- According to **Equations 5 and 6 in the paper**, the proposed ranking loss has a value range of [-1, 1]. We set the weight of the ranking loss to be the same as that of KL divergence. For baselines using different KL divergence weights, the value of the ranking loss may vary, but its scale generally remains within the range of [-1, 1]. Across all teacher-student combinations with the same baseline and dataset, the ranking loss maintains consistent weight and scale.

---

**Q4. Analysis of the steepness parameter:**
- In Table 7 of our paper, we investigate the impact of the steepness parameter. We observe that although the optimal steepness parameter may vary across different teacher-student combinations, larger steepness parameters generally yield better results. This finding aligns with intuition: as the steepness parameter increases, the tanh function approximates the sign function more closely, making it more sensitive to ranking and less sensitive to scale differences.

**Q5. Other comments and suggestions:**
- Thank you very much for your feedback regarding the formatting of our paper. We will adjust these formulas in a future version of the manuscript. We appreciate your attention to detail.
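To illustrate the points in Q3 and Q4, here is a minimal, hedged sketch (our own illustrative code, not the authors' exact Equations 5-6) of a tanh-based soft Kendall coefficient: the coefficient lies in [-1, 1], and a larger steepness makes tanh(k·Δ) approach sign(Δ), i.e. more sensitive to ranking than to scale; we return `1 - tau` as a nonnegative loss variant.

```python
import math

def soft_kendall(student, teacher, steepness=10.0):
    """Differentiable soft Kendall coefficient in [-1, 1].

    Each pair contributes tanh(k * (s_i - s_j)) * tanh(k * (t_i - t_j));
    as steepness k grows, tanh approximates sign, so the term approaches
    +1 for concordant pairs and -1 for discordant pairs.
    """
    n = len(student)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += (math.tanh(steepness * (student[i] - student[j]))
                      * math.tanh(steepness * (teacher[i] - teacher[j])))
            pairs += 1
    return total / pairs

def ranking_loss(student, teacher, steepness=10.0):
    # 0 when the two rankings fully agree, up to 2 when fully reversed
    return 1.0 - soft_kendall(student, teacher, steepness)
```

With identical rankings the loss is near 0; with a fully reversed ranking it is near 2.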
Invariant Deep Uplift Modeling for Incentive Assignment in Online Marketing via Probability of Necessity and Sufficiency
Accept (spotlight poster)
Summary: This work investigates uplift modeling for incentive allocation in digital marketplaces, addressing two critical challenges: OOD generalization and selection bias mitigation through spurious correlation elimination. The proposed IDUM framework introduces: 1) a causal invariance learning mechanism identifying domain-agnostic features with necessity and sufficiency properties; 2) a differentiable Gumbel-Softmax feature selector for computational efficiency; 3) a distributional discrepancy regularizer aligning treatment-control group representations. Empirical validation on production datasets demonstrates IDUM's superior performance in response prediction compared to existing baselines.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: This paper addresses the challenge of testing data that is out-of-distribution.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

**Strengths**
- Methodological innovation: IDUM presents a novel paradigm for uplift modeling under distribution shifts. Its emphasis on causal invariance and feature sufficiency/necessity decomposition addresses critical gaps in real-world deployment scenarios.
- Theoretical rigor: The authors establish formal generalization guarantees through β-divergence analysis and PNS risk bounding. The integration of distributional alignment metrics (IPM, MMD) provides principled control of treatment group biases.
- Good narration: The authors skillfully present the main ideas of the article, helping the reader grasp the key contributions.

**Weaknesses**
1. The mathematical formulations underpinning the proposed methodology present certain complexities in comprehension, which could be mitigated through supplementary elucidation to strengthen theoretical transparency.
2. The work lacks an explicit discussion of the model's computational overhead during training phases and omits details regarding algorithmic complexity analysis. These omissions hinder the practical evaluation of real-world deployment feasibility. Furthermore, the absence of an exhaustive specification of architectural hyperparameters and optimization configurations compromises the reproducibility of experimental results across diverse datasets.

Other Comments Or Suggestions: No.

Questions For Authors: Please see "Weaknesses".

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer UCNa,

Thank you for taking the time to review our work; we sincerely appreciate your insightful comments, which have helped improve our paper. Below, we provide point-by-point responses to each concern raised.

1. **Mathematical Formulations**: We appreciate the valuable feedback and have incorporated a comprehensive notation table in the appendix to improve clarity. In our final revision, we will further enhance these explanations to ensure better understanding of our methodology.

2. **Computational Complexity Analysis**: We analyze the computational complexity of each module in our framework:

**Invariant Property Learning Module**
- Objective loss and negative log-likelihood: $O(n)$ (linear complexity)
- KL divergence: $O(n \times d)$, where $d$ is the feature dimensionality

**Feature Selection Module**
- Forward propagation (linear transformation + batch normalization): $O(n \times h)$, where $h$ is the hidden layer dimension
- Mask generation (Gumbel-Softmax): $O(k \times m)$ for $k$ iterations on $m$ mask elements
- Total complexity: $O(n \times h + k \times m)$

**Discrepancy Balancing Module**
- IPM with MMD distance: $O(n^2)$

The overall time complexity of IDUM is $O(n^2) + O(n \times h) + O(k \times m)$.

**Empirical Runtime Comparison**

We compare the computational efficiency of different methods on two datasets:

| Method | Production Dataset | Lazada Dataset |
|----------------|-------------------:|---------------:|
| S-Learner | 37 min | 27 min |
| T-Learner | 40 min | 28 min |
| TARNet | 58 min | 49 min |
| CFRNet-mmd | 61 min | 53 min |
| CFRNet-wass | 122 min | 117 min |
| DragonNet | 73 min | 64 min |
| EUEN | 65 min | 35 min |
| UniTE | 110 min | 86 min |
| TEED | 93 min | 89 min |
| IDUM (ours) | 85 min | 79 min |

If you have any further questions to discuss, we are willing to reply as soon as possible.
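The $O(n^2)$ cost of the MMD-based discrepancy term quoted above comes from the pairwise kernel sums. A minimal sketch (our own illustrative code with an RBF kernel, not the authors' implementation) makes the quadratic double loop explicit:

```python
import math

def rbf(a, b, gamma=1.0):
    """RBF kernel between two points given as coordinate tuples."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def mmd2(treat, control, gamma=1.0):
    """Biased estimate of squared MMD between two samples.

    The nested sums over all pairs are what makes the discrepancy
    term O(n^2) in the number of samples.
    """
    def avg_kernel(xs, ys):
        return sum(rbf(x, y, gamma) for x in xs for y in ys) / (len(xs) * len(ys))
    return (avg_kernel(treat, treat) + avg_kernel(control, control)
            - 2.0 * avg_kernel(treat, control))
```

Identical treatment and control samples give MMD² of 0, and well-separated samples give a large positive value.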
Summary: This study addresses uplift modeling for incentive allocation in online markets, aiming to resolve out-of-distribution generalization challenges and selection bias by eliminating spurious correlations. The authors propose the Invariant Deep Uplift Modeling (IDUM) framework, which innovatively integrates a cross-domain invariant learning approach to identify causal features with necessary and sufficient properties. The model employs a Gumbel-Softmax-based feature selection module to enhance computational efficiency and introduces a balanced discrepancy constraint mechanism to mitigate distributional biases between treatment and control groups. Empirical evaluations on real-world commercial datasets (e.g., Lazada) demonstrate that the proposed method significantly outperforms existing benchmarks in response prediction tasks, showcasing superior causal inference capabilities. Overall, the paper exhibits clear motivations, a rigorous theoretical framework, and well-designed experiments with practical implications, all presented with logical coherence and academic rigor.

Claims And Evidence: See summary.

Methods And Evaluation Criteria: See summary.

Theoretical Claims: See summary.

Experimental Designs Or Analyses: See summary.

Supplementary Material: See summary.

Relation To Broader Scientific Literature: See summary.

Essential References Not Discussed: See summary.

Other Strengths And Weaknesses:

Pros:
1. The IDUM framework introduces an improved approach to overcome limitations in handling out-of-distribution (OOD) challenges. Its design consists of three straightforward components: invariant property learning, feature selection, and balancing discrepancy. The method has been carefully developed and supported by mathematical analysis.
2. Extensive experiments are performed to validate the model's effectiveness. Experiments covered two widely used benchmark datasets and one real-world industrial dataset, with performance comparisons made against standard reference methods in the field.

Cons:
1. The study currently does not publicly share its implementation code, which limits reproducibility and poses a challenge for independent verification or further development by the research community.
2. Some technical sections involving mathematical formulations may require simplification or clearer explanations to improve accessibility for readers with varying levels of expertise.

Other Comments Or Suggestions: NA

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: **Dear Reviewer 4KLk**,

Thank you for taking the time to review our work; we sincerely appreciate your insightful comments, which have helped improve our paper. Below, we provide point-by-point responses to each concern raised.

**Cons**
1. **Open-source code**: We have included our code in the Supplementary Material for review. Upon acceptance of the paper, we will open-source all code and datasets to ensure reproducibility, and will provide the GitHub link in the final version.
2. **Mathematical formulation**: Additionally, we have added a notation table in the appendix for clarity, and will further expand the explanations in our final revision to enhance understanding.

If you have any further questions to discuss, we are willing to reply as soon as possible.
Summary: The paper proposes Invariant Deep Uplift Modeling (IDUM) for incentive assignment in online marketing (e.g., coupons or discounts). The model identifies features that are both necessary and sufficient under distribution shifts (e.g., changes in user demographics, time, or geography). It builds on the Probability of Necessity and Sufficiency (PNS) framework from Pearl's causality theory. A Gumbel-Softmax-based selection layer masks out irrelevant or unstable features, enabling the model to focus on "invariant" causal relationships and reduce computational overhead. It uses an integral probability metric (IPM) to control distributional difference to mitigate selection bias, aligning distributions of treated and control users in a latent representation. The experiments show that IDUM provides robust uplift prediction under out-of-distribution (OOD) shifts and outperforms multiple baselines.

Claims And Evidence: All main claims (better uplift prediction, out-of-distribution robustness, correctness of the bounding arguments) appear to be well-supported by both empirical results and theoretical analysis.

Methods And Evaluation Criteria: The proposed method (IDUM) makes sense for the problem: it specifically addresses two big challenges in real-world uplift modeling:
- Selection bias between treated/control groups in observational data.
- Distribution shift over time or across user populations.

The evaluation criteria (AUUC, QINI, Kendall) are standard in the uplift modeling literature. The chosen real-world datasets cover ID and OOD scenarios, which is a strong design choice to demonstrate generalization capacity, although more diverse evaluation datasets could help prove the model works well under different conditions. Overall, the methods align well with the practical online marketing setting, and the metrics/datasets are appropriate and standard for uplift tasks.

Theoretical Claims: The theoretical claims mostly come from Pearl's existing work and are correct. The claims specific to this paper use bounding techniques consistent with prior works (e.g., union bounds, Jensen's inequality), and the proofs given are coherent, consistent with prior literature, and appear correct under the listed assumptions.

Experimental Designs Or Analyses:
- The experiments compare IDUM to widely used uplift baselines (S-learner, T-learner, TARNet, CFRNet, DragonNet, etc.) using the same training setup, hyperparameter tuning strategies, and standard metrics.
- The authors test both in-distribution and out-of-distribution performance, which directly addresses the paper's main claim of robustness to real-world shifts.
- The ablation studies (removing the balancing discrepancy term, the invariant property learning term, or the feature selection) confirm that each piece meaningfully contributes to performance.

Supplementary Material: The supplementary material covers case studies, notations, and the proofs for the main theoretical contribution.

Relation To Broader Scientific Literature: The approach is well-situated at the intersection of causal inference, domain adaptation, and uplift modeling.
- The paper builds on prior works like S-learner, T-learner, TARNet, CFRNet, etc., addressing the standard challenge of biased observational data and distribution mismatch in real systems.
- It relates to Pearl's framework for identifying causal effects under exogeneity and monotonicity. The paper is among the few that directly incorporate the "Probability of Necessity and Sufficiency" for robust OOD generalization.
- It connects to the literature on distribution shift (e.g., IRM, domain adversarial training), but specifically tailors these ideas to uplift models in marketing contexts.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths:
- The paper builds its methodology on Pearl's causality framework, especially around the Probability of Necessity and Sufficiency. This is a clear and principled way to identify robust features for uplift.
- The authors convincingly motivate the need for out-of-distribution generalization in real-world incentive assignment and illustrate why traditional uplift approaches can fail under changing user distributions.
- The paper provides not only an invariant learning approach but also attempts to back it up with a set of domain adaptation-style bounds, which is valuable in a field that often has few formal generalization guarantees.

Weaknesses:
- While the paper does well to include two real-world datasets (Lazada and a short-video platform), these are still both from online platforms. Including more varied or publicly accessible datasets (e.g., from different verticals or domains) could strengthen the claims about broad applicability.
- The Gumbel-Softmax-based feature selection is a major part of the proposed approach, yet there is only limited discussion of which features get included or excluded and why. Additional analysis or interpretability results, especially highlighting which features consistently remain selected, could offer insights into the invariant property learning.

Other Comments Or Suggestions: Please see the weaknesses part.

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 4
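For reference, the PNS quantity discussed in this review is, in Pearl's counterfactual notation (with $y_x$ the outcome under intervention $x$, and $y'$, $x'$ their negations), and under the exogeneity and monotonicity assumptions the review mentions, identifiable from observational conditionals:

```latex
\mathrm{PNS} \;=\; P\big(y_{x},\, y'_{x'}\big),
\qquad \text{and under exogeneity and monotonicity:}\qquad
\mathrm{PNS} \;=\; P(y \mid x) \;-\; P(y \mid x').
```

This identification result is the standard one from Pearl's causality framework; the paper's PNS risk builds on quantities of this form.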
Rebuttal 1:

Rebuttal: **Dear Reviewer UDxr**,

Thank you for taking the time to review our work; we sincerely appreciate your insightful comments, which have helped improve our paper. Below, we provide point-by-point responses to each concern raised.

**Weaknesses:**

1. **Additional experiments**: Due to the inherent constraints of our model's design, we are unable to generalize our approach to domains with significantly different characteristics within the rebuttal period. However, in response to your feedback, we have conducted additional experiments on the Criteo dataset (ad click-through rate prediction) to validate our method's ability.

- Training Data: 80% (in-distribution)
- ID Test Data: 10% (in-distribution)
- OOD Test Data: 10% (unbiased sampling)

This partition allows us to evaluate both in-distribution performance and out-of-distribution generalization.

| Method | ID AUUC | ID QINI | ID KENDALL | OOD AUUC | OOD QINI | OOD KENDALL |
|-----------------|-------------------|-------------------|------------------|-------------------|-------------------|-----------------|
| S-Learner | 0.0973 ± 0.0027 | 0.0356 ± 0.0121 | 0.0091 ± 0.0077 | 0.0939 ± 0.0021 | 0.0361 ± 0.0097 | 0.0093 ± 0.0064 |
| T-Learner | 0.1025 ± 0.0022 | 0.0422 ± 0.0109 | 0.0117 ± 0.0063 | 0.0919 ± 0.0022 | 0.0411 ± 0.0115 | 0.0087 ± 0.0068 |
| TARNet | 0.1031 ± 0.0024 | 0.0411 ± 0.0145 | 0.0109 ± 0.0037 | 0.0915 ± 0.0023 | 0.0398 ± 0.0127 | 0.0093 ± 0.0072 |
| CFRNet-mmd | **0.1042 ± 0.0032** | 0.0429 ± 0.0099 | 0.0112 ± 0.0047 | 0.0924 ± 0.0019 | 0.0339 ± 0.0133 | 0.0109 ± 0.0064 |
| CFRNet-wass | 0.0922 ± 0.0022 | 0.0391 ± 0.0110 | 0.0122 ± 0.0051 | 0.0908 ± 0.0024 | 0.0401 ± 0.0117 | 0.0101 ± 0.0039 |
| DragonNet | 0.1037 ± 0.0020 | 0.0429 ± 0.0127 | 0.0112 ± 0.0044 | 0.1009 ± 0.0021 | 0.0372 ± 0.0130 | 0.0107 ± 0.0048 |
| EUEN | 0.1029 ± 0.0031 | 0.0421 ± 0.0118 | 0.0132 ± 0.0050 | 0.1007 ± 0.0029 | 0.0396 ± 0.0134 | 0.0092 ± 0.0061 |
| UniTE | 0.1033 ± 0.0036 | 0.0467 ± 0.0137 | 0.0157 ± 0.0049 | 0.0901 ± 0.0027 | 0.0401 ± 0.0125 | 0.0073 ± 0.0057 |
| TEED | 0.0923 ± 0.0021 | 0.0436 ± 0.0132 | 0.0127 ± 0.0061 | 0.0902 ± 0.0029 | 0.0335 ± 0.0117 | 0.0092 ± 0.0043 |
| **IDUM** | 0.1027 ± 0.0023 | __0.0482 ± 0.0093__ | **0.0163 ± 0.0133** | **0.1019 ± 0.0023** | **0.0442 ± 0.0105** | **0.0112 ± 0.0054** |

2. **Gumbel-Softmax features**: Both the Lazada dataset (with over 80 features) and the Production dataset (with over 100 features) exhibit high dimensionality. To address the computational challenges posed by such large feature sets, we employ Gumbel-Softmax-based feature selection, which effectively reduces computational costs while preserving predictive performance. While the Lazada dataset lacks feature interpretability, our analysis of the Production dataset reveals that certain features, such as Phone_type, Region, Silcon, and Age, etc., are consistently retained across selections. This suggests their importance in modeling uplift across diverse scenarios.

If you have any further questions to discuss, we are willing to reply as soon as possible.
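To make the Gumbel-Softmax masking idea concrete, here is a minimal, hedged sketch (our own illustrative stdlib code, not the paper's implementation): each feature gets a learnable "keep" logit, Gumbel noise is added, and a temperature-controlled softmax over the keep/drop pair yields a soft, differentiable-in-spirit keep probability. The fixed drop logit of 0.0 is an assumption of this sketch.

```python
import math
import random

def gumbel_softmax_mask(keep_logits, tau=0.5):
    """One stochastic soft binary mask over features.

    For each feature, perturb the 'keep' and 'drop' logits with Gumbel
    noise and take a temperature-controlled softmax over the two options;
    the first softmax entry is the soft probability of keeping the feature.
    """
    mask = []
    for keep_logit in keep_logits:
        pair = [keep_logit, 0.0]  # logits for keep vs. drop (drop fixed at 0)
        noise = [-math.log(-math.log(random.random())) for _ in pair]
        z = [(l + g) / tau for l, g in zip(pair, noise)]
        m = max(z)                       # subtract max for numerical stability
        exps = [math.exp(v - m) for v in z]
        mask.append(exps[0] / sum(exps))
    return mask
```

Sampling with the mask is repeated each forward pass; at low temperature the soft probabilities approach hard 0/1 selections.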
Summary: This paper introduces the IDUM method for predicting uplift in an out-of-distribution setting. It utilizes a Gumbel-Softmax-based feature selection mechanism to identify a relevant subset of features, followed by invariant property learning. Additionally, the balancing discrepancy component mitigates selection bias, improving model robustness. Through empirical evaluations, the authors claim that these three components collectively enhance uplift prediction in online marketing.

Claims And Evidence: The authors evaluate their proposed methods against baseline models using AUUC, QINI, and KENDALL metrics on two real-world datasets.

Methods And Evaluation Criteria: It would be helpful if the authors could include the definitions of the AUUC and Qini coefficients used in this work. Typically, the Qini coefficient is defined as the difference between the AUUC and the area under the curve of a random model, which differs from the authors' description that it "scales the responses in the control group."

Theoretical Claims: Looks good to me.

Experimental Designs Or Analyses: Looks good to me.

Supplementary Material: The code.

Relation To Broader Scientific Literature: This paper integrates three previous works for application in uplift modeling. The invariant property learning approach is based on Yang et al. (2024) [Invariant Learning via Probability of Sufficient and Necessary Causes], with all definitions, lemmas, properties, and theorems directly drawn from this work. The feature selection mechanism is adapted from Jang et al. (2016) [Categorical Reparameterization with Gumbel-Softmax], while the distribution discrepancy regularizer is based on Shalit et al. (2017) [Estimating Individual Treatment Effect: Generalization Bounds and Algorithms].

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths:
- Provides a comprehensive literature review of related works, including uplift modeling and its applications in online marketing.
- Conducts analysis on real-world datasets to compare the proposed model with baseline methods.

Weaknesses:
- The work appears to lack novelty, as all three components are derived from previous research. The theoretical framework is entirely based on Yang et al. (2024) [Invariant Learning via Probability of Sufficient and Necessary Causes], and the model optimization objective follows the same paper, with the addition of a distribution discrepancy regularizer.
- Uplift modeling in online marketing provides better customer targeting strategies. The authors could include an online experiment to justify the proposed methods' effectiveness.

Other Comments Or Suggestions: None.

Questions For Authors:
1. Could the authors clarify the settings of the Lazada dataset? Based on my understanding, the treatments in this dataset correspond to different voucher distribution strategies. What is the response variable (i.e., the y values) used in the model?
2. The proposed IDUM method exhibits an unusually large variance in AUUC on both the ID and OOD Lazada datasets, with a standard error of 0.02, whereas other methods have standard errors around 0.002-0.003. Could the authors investigate and explain the reason for this magnitude difference?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
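The reviewer's point about the conventional Qini definition (area under the uplift curve minus the area under the random-targeting line) can be made concrete with a small illustrative sketch. This is our own simplified code under that convention, not the paper's metric implementation; production toolkits may normalize differently.

```python
def qini_curve(scores, treated, outcomes):
    """Cumulative incremental gain when targeting users by descending score.

    treated[i] is the binary treatment flag, outcomes[i] the observed response.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_t = n_c = y_t = y_c = 0
    gains = [0.0]
    for i in order:
        if treated[i]:
            n_t += 1
            y_t += outcomes[i]
        else:
            n_c += 1
            y_c += outcomes[i]
        # treated responses minus control responses rescaled to the treated
        # count (0 when no control users have been seen yet)
        adj = y_c * n_t / n_c if n_c else 0.0
        gains.append(y_t - adj)
    return gains

def qini_coefficient(scores, treated, outcomes):
    """Trapezoidal area under the Qini curve minus the area under the
    random (straight-line) targeting policy -- the conventional definition."""
    gains = qini_curve(scores, treated, outcomes)
    n = len(gains) - 1
    area = sum((gains[i] + gains[i + 1]) / 2.0 for i in range(n)) / n
    random_area = gains[-1] / 2.0
    return area - random_area
```

A model that ranks the genuinely persuadable users first scores higher than one with the reversed ranking.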
Rebuttal 1:

Rebuttal: **Dear Reviewer xxdc**,

Thank you for taking the time to review our work; we sincerely appreciate your insightful comments, which have helped improve our paper. Below, we provide point-by-point responses to each concern raised.

**Methods And Evaluation Criteria:** We will follow your advice to include the detailed definitions of the AUUC and Qini coefficients and revise the presentation in our final version.

**Weaknesses:**

**1. Three components are derived from previous research**

Our main contribution is to solve the OOD problem in uplift modeling by introducing the PNS risk, which has not been fully investigated by previous works. In our work, we build upon existing methods to construct our model architecture. However, due to differences in application scenarios, additional components are required to address the problem effectively. For OOD uplift modeling, we account for the **in-distribution shift** between treatment and control groups by employing a discrepancy regularizer, a common solution that is not our key contribution. To address the PNS risk, we introduce a novel intervention and prediction procedure for balanced embedding learning, which distinguishes our approach from prior work. However, we observed that the **computational cost** of PNS-based feature optimization was prohibitively high. To mitigate this, we adopted a Gumbel-Softmax-based feature selection mechanism. The theoretical foundations of our work are primarily based on Pearl's causal framework, with additional insights adapted from Yang et al. (2024) to enhance rigor. However, our approach is not a direct application of their theorems, but rather a tailored extension for uplift modeling. First, our focus differs fundamentally from Yang et al. (2024): while they address image classification, we tackle uplift estimation with dual prediction heads.
These domains differ critically in objectives and assumptions; for instance, uplift modeling requires counterfactual reasoning absent in standard classification. This divergence necessitates theoretical adaptations. Second, we explicitly model the interaction between prediction heads and balanced embeddings within our theoretical framework, which is uniquely designed for our problem setting. Finally, all prior works have been properly cited to maintain academic norms and contextualize our contributions.

**2. Online experiments**

Following your suggestions, we conducted an online experiment to evaluate our proposed IDUM against the baseline CFRNet. Due to the limited time of the rebuttal and the cost of algorithm deployment, we only kept it running for three days.

**Experimental Setup:**
- In-Distribution (ID) Test: We selected two user groups with similar distributions to the training data.
- Out-of-Distribution (OOD) Test: We used two user groups with distinct distributions to assess generalization.

**Evaluation Protocol:** Due to the constraints of online inference, we could not obtain the true responses for users unaffected by the deployed algorithm. Instead, we compared the watch time improvement across experimental groups as the key metric.

- **Watch time evaluation**

| Method | ID | OOD |
|---------|-------|--------|
| CFRNet | 0 | 0 |
| IDUM | 0.012% | 0.028% |

- **Cost evaluation**

| Method | ID | OOD |
|---------|-------|--------|
| CFRNet | 0 | 0 |
| IDUM | -1.21% | -1.75% |

We report the percentage of watch time improvement relative to CFRNet's performance (baseline, set at 0). We also evaluate the cost of the two methods: we assigned fewer incentives (-1.21% and -1.75%) to the user groups and obtained competitive performance (0.012% and 0.028%). The results demonstrate that our proposed IDUM achieves further gains in the OOD testing environment.

**Questions**

**1. Clarify the settings of the Lazada dataset**

The Lazada dataset is a public dataset for uplift modeling, which can be found at: https://drive.google.com/file/d/19iSXsbRXJWvuSFHdcLb0Vi9JCP9Fu41s/view

In the dataset description, there are 86 fields in total, of which 83 are features (f0-f82). We use the column is_treat as the treatment $t$, and the column label as the response $y$. Usually, for uplift modeling, $t$ represents whether the user is assigned the discount or coupon, and $y$ represents whether the user converted.

**2. The variance**

We apologize for the error in reporting the variance. As you pointed out, the variance was incorrectly magnified by a factor of 10. We will correct this in the final version of our paper. Additionally, we plan to release the code, dataset, and training logs upon obtaining permission from our collaborating company, and the variance can be identified in the training logs. Thank you for your careful attention to this issue.

If you have any further questions to discuss, we are willing to reply as soon as possible.
An Improved Clique-Picking Algorithm for Counting Markov Equivalent DAGs via Super Cliques Transfer
Accept (oral)
Summary: The paper proposed a more efficient way to improve the existing work by Wienobst et al on counting the number of DAGs in an MEC. The authors argue that the previous approach suffers in the case where there are multiple maximal cliques in a MEC. The proposed improvement is to make use of structure overlaps between these maximal cliques. To facilitate the construction of the method, the paper introduces two key notions named super clique and super residual. The key contribution in this paper is to come up with an efficient transfer operation of super cliques from one maximal clique-directed tree to another to avoid counting redundant structures. ## update after rebuttal I have not changed my score because I support this paper to be accepted. I think the paper has made a non-trivial contribution to improving the efficiency of the clique-picking algorithm by Wienobst et al. The efficiency is significant for scalable causal discovery. Claims And Evidence: Mostly yes with some minor issues I raised in the questions. Methods And Evaluation Criteria: Mostly yes. I think the experiment should try to push for very large and dense graphs in order to show the merits. Theoretical Claims: Yes. I have checked the proofs in section D. I don't find any issue. Experimental Designs Or Analyses: I have checked the experiment presented in the paper. I don't see any issue. Supplementary Material: Yes, section D. Relation To Broader Scientific Literature: This work further improves on the previous polynomial-time counting algorithm by Wienobst et al. (2023). It is built on the previous approach with an improvement to avoid counting the number of DAGs in the overlapping members of the undirected connected components. The algorithm remains polynomial-time with respect to the number of maximal cliques. Essential References Not Discussed: No Other Strengths And Weaknesses: I think this work will have the most positive impact on large and dense graphs. 
This aligns with the original motivation for why one even needs a polynomial-time counting algorithm. The non-trivial part of this paper is coming up with the notions of super cliques and super residuals and innovatively incorporating them into an efficient transfer operation. I personally spent quite a lot of time understanding the idea and tried to come up with an alternative method, but I failed. For weaknesses, there are a few typos, but they are very minor. I think the presentation of the paper can be improved by providing a working example in the appendix, since there are various different graph concepts and it is hard to remember all the notations without reading the paper many times back and forth. The most obvious weakness is the efficiency gain over the previous work. It does not save more than half of the time even for graphs of size 4096. When r is increased, the patterns in the left and right figures in Figure 3 are almost the same. It would be better to try more extreme cases to show different behaviors of the algorithm. Other Comments Or Suggestions: - "Notably, Wienöbst et al. (2023) introduce the Clique-Picking (CP) algorithm, which is a polynomial-time algorithm for counting MECs." I think it would be better to say "determining the size of an MEC" or "counting the number of DAGs in an MEC" instead of "counting MECs". - The big-O notations are inconsistent, e.g., lines 193-194 vs. line 56. - It would be better to add an example in the paper that walks people through the algorithm in one clean way. - I suggest the authors put "Algorithm 2" instead of "SC-Trans algorithm" in Theorem 4.1 and Theorem 4.2, so that the authors don't need to explicitly say: "For our SC-Trans algorithm in Algorithm 2, we have the following results." - The "Case 1, Case 2, Case 3" in Section 6.2 can be wrapped in a numbered list. - Line 427: "Algorithm1" -> "Algorithm 1" Questions For Authors: 1.
In line 147, why are G[e] and G[f] two separate undirected connected components, rather than G[e,f], in C_{G}(K_{1})? 2. In line 201, how can C_{G}(K_{3}) contain G[b,c,d] given that K_{3} = {b,e} and C_{G}(K_{3}) is defined as the undirected connected components of G^{K}[V \ K], which is a subgraph that excludes K by definition? Do you mean G[a,c,d]? 3. Can I replace "super residuals" in Theorem 4.5 with just residuals, except for the residuals from the root? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback. We address the comments in detail below. > **Weaknesses:** *"...the paper can be improved by providing a working example in the appendix..."* Thanks for the suggestion. We plan to include a table summarizing all key notations and concepts, along with a detailed example illustrating how the algorithm works. These additions, to be placed in the appendix, will enhance the clarity and accessibility of our method for a broader audience. > *"The most obvious weakness is the efficiency gain over the previous work .... try more extreme cases to show different behaviors of the algorithm."* Thank you for this insightful comment. We agree that evaluating performance on larger and denser graphs is important for highlighting the efficiency gains. The state-of-the-art Clique-Picking (CP) algorithm, proposed by Wienöbst et al. (2023) and presented as Algorithm 1 in our paper, is already quite efficient. Our work focuses on improving Step 2 of Algorithm 1 through our proposed algorithm (ICP). For Step 2, the computational complexity of ICP and CP is $\mathcal{O}(m^2)$ and $\mathcal{O}(m(|V| + |E|))$, respectively. This represents a strict and substantial improvement when $m(>1)$ is of moderate size and $|V|+|E|$ is large. To provide additional evidence, we conducted experiments across various levels of graph edge density $r=|E|/|E_{\max}|$, where $|E_{\max}|=|V|(|V|-1)/2$. We performed experiments for $|V|=1024$ and $2048$, and the results are reported in the tables below. As the edge density $r$ increases, the difference in running time between the two algorithms becomes more pronounced. This trend is attributed to the fact that denser graphs generally correspond to a lower value of $\theta=m/(|V|+|E|)$, which enhances the performance advantage of our proposed method.
**Table1: Comparison between ICP and CP** ||||||| |:-:|:-:|:-:|:-:|:-:|:-:| ||| $\vert V\vert=1024$ |||| |$r$|0.06|0.09|0.15|0.27|0.43| |$\mathrm{CP}-\mathrm{ICP}$ (seconds)|3.752|4.40|5.71|7.10|8.73| ||| $\vert V\vert=2048$ |||| |$r$|0.04|0.06|0.10|0.22|0.49| |$\mathrm{CP}-\mathrm{ICP}$ (seconds) |19.45 |34.47|41.54|59.24|62.24| When isolating the comparison for Step 2 of Algorithm 1, the advantage of our algorithm becomes even more prominent. Let $T_{\mathrm{CP},2}$ denote the average running time of Step 2 executed as in Wienöbst et al. (2023), and let $T_{\mathrm{ICP},2}$ denote the average running time of our approach for Step 2. The table below presents both their difference and their ratio. The results highlight the substantial improvements achieved by our method in terms of efficiency. **Table2: Comparison between Step 2 of ICP and CP** ||||||| |:-:|:-:|:-:|:-:|:-:|:-:| ||| $\vert V\vert=1024$ |||| |$r$|0.06|0.09|0.15|0.27|0.43| | $T_{\mathrm{CP},2}-T_{\mathrm{ICP},2}$ (seconds) | 3.31 | 4.65 |5.66 |8.04| 9.28 | |$T_{\mathrm{CP},2} / T_{\mathrm{ICP},2}$ |7.88|19.02|26.80|65.91|159.57| ||| $\vert V\vert=2048$ |||| |$r$|0.04|0.06|0.10|0.22|0.49| |$T_{\mathrm{CP},2}-T_{\mathrm{ICP},2}$ (seconds) |17.99| 24.19|37.41 |72.44 |81.45 | |$T_{\mathrm{CP},2} / T_{\mathrm{ICP},2}$ |10.05|16.42|29.71|92.31|318.54| The full results will be included as new figures and tables. We believe these additional experiments significantly enhance the evidence to support the advantages of the proposed method. > **Other Comments Or Suggestions:** Thank you for these detailed suggestions. We will carefully revise the manuscript to address each of these points in the final version. > **Questions For Authors:** > **Q1** Thank you for the question. The orientation of the UCCG $G$ cannot form new $v$-structures or directed cycles. When $K_1$ is selected as the root, the edge between vertices $b$ and $e$ will be oriented as $b \rightarrow e$.
Suppose $f \rightarrow e$; this would lead to the $v$-structure $b\rightarrow e\leftarrow f$. Therefore, to avoid this, it must be that $e\rightarrow f$ in $G^{K_1}$, and $G[e]$ and $G[f]$ remain two separate undirected connected components. >**Q2** You are right. We deeply apologize for this typo. $\mathcal{C}_G(K_3)$ should contain $G[a,c,d]$, *not* $G[b,c,d]$. We will carefully proofread the paper and correct the typos in the final version. >**Q3** We assume your question refers to Theorem 5.3. The answer is no—replacing "super residuals" with individual residuals would lead to incorrect identification of undirected connected components, and hence incorrect counts of Markov Equivalent DAGs via Equation (2). Take the example in Figure 1: the subgraph $G[g,h,i,j]$ forms an undirected connected component in $G^{K_1}$. This component arises from merging the residuals $R_5=\{g,j\}$, $R_6=\{i\}$ and $R_7=\{h\}$ into a single super residual. However, these residuals on their own—$R_5,R_6,R_7$—do not, by definition, form undirected connected components. Therefore, super residuals are essential for correctly identifying related structures and correctly counting Markov Equivalent DAGs. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. I appreciate the new experiments posted above. Please include it in the camera-ready version.
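As background for the discussion above (the v-structure constraint in Q1 and the counting via Equation (2) in Q3), a brute-force baseline makes the quantity concrete: the MEC size of a connected chordal graph equals its number of acyclic orientations with no v-structures, which the CP/ICP algorithms compute in polynomial time. The sketch below is a naive enumerator for tiny graphs, not the paper's algorithm.

```python
from itertools import permutations

def count_amos(vertices, edges):
    """Count acyclic orientations of an undirected graph that create no
    v-structure (a -> c <- b with a, b non-adjacent). For a connected
    chordal graph this equals the number of DAGs in its MEC."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    orientations = set()
    # Every linear order of the vertices induces an acyclic orientation
    # (low -> high); keep only the moral ones and deduplicate repeats.
    for order in permutations(vertices):
        rank = {v: i for i, v in enumerate(order)}
        oriented = frozenset((u, v) if rank[u] < rank[v] else (v, u)
                             for u, v in edges)
        parents = {v: set() for v in vertices}
        for u, v in oriented:
            parents[v].add(u)
        # Moral: every pair of parents of a common child must be adjacent.
        moral = all(b in adj[a]
                    for v in vertices
                    for a in parents[v] for b in parents[v] if a != b)
        if moral:
            orientations.add(oriented)
    return len(orientations)
```

For a three-vertex path this returns 3, for a triangle 6, and for a three-leaf star 4, matching the known MEC sizes of these graphs.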
Summary: This paper proposes an improvement to the clique-picking algorithm introduced by Wienöbst et al. (2023) for counting Markov Equivalent Directed Acyclic Graphs (DAGs). The authors introduce super cliques and super residuals to reduce computational complexity when identifying undirected connected components (UCCGs) in a Completed Partially Directed Acyclic Graph (CPDAG). The proposed Super Cliques Transfer Algorithm optimizes the recursive clique-selection process, leading to a computational cost reduction from $O(m(|V|+|E|))$ to $O(m^2)$, where $m$ is the number of maximal cliques. Experiments on randomly generated chordal graphs demonstrate that the improved algorithm (ICP) outperforms the original Clique-Picking (CP) algorithm in runtime. Claims And Evidence: - The claim that the proposed Super Cliques Transfer Algorithm reduces the computational complexity of counting MEC sizes from $O(m(|V|+|E|))$ to $O(m^2)$ is supported by the theoretical analysis in Theorems 4.1 and 4.2. - The claim that the improved algorithm speeds up MEC size counting is supported by the experimental results in Table 2 and Figure 3. Methods And Evaluation Criteria: The paper follows a solid algorithmic and theoretical approach. The evaluation is conducted via simulation experiments. Theoretical Claims: The paper presents several new theoretical constructs, including super cliques and super residuals, and proves their validity. Theorems 4.1 and 4.2 ensure the correctness and complexity of the Super Cliques Transfer Algorithm. While not carefully checked, the proofs appear well-structured and logically sound. Experimental Designs Or Analyses: The experiments are conducted to demonstrate the improvement in computational complexity. They all make sense to me. Supplementary Material: I did not check the appendix very carefully.
Relation To Broader Scientific Literature: Bayesian networks are widely used for scientific exploratory analysis, and they are typically only identifiable up to a so-called Markov equivalence class (MEC). Counting the number of possible elements inside a given MEC is crucial for many downstream tasks and experimental design. Essential References Not Discussed: Not sure. But the literature seems to be well cited in the discussion in paragraphs 3-4 of the introduction. Other Strengths And Weaknesses: **Strength**: - Reducing redundancy in clique-picking through transfer operations is well-motivated. - The running examples provided in Figure 1 are very helpful for illustrating the mathematical concepts and understanding the algorithms. **Weakness**: - See suggestions and questions Other Comments Or Suggestions: - More discussion on the quantitative comparison of the computational complexity of the proposed algorithm with that of the existing work would be appreciated. Currently, the only comparison is $O(m^2)$ vs. $O(m(|V|+|E|))$. Is this a strict improvement? When is there a significant gap? Is there any special class of DAGs where the proposed algorithm is much better? - It would be more illustrative to plot the experiment results in a log-log plot. Questions For Authors: - How do we compare the proposed algorithm to other counting algorithms listed in the literature review -- third paragraph in the introduction -- in terms of both theoretical analysis and experimental comparison? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the valuable comments and remarks. We will address your questions and suggestions below. > **Other Comments Or Suggestions:** *"More discussion on the quantitative comparison of the computational complexity of the proposed algorithm with the existing work would be appreciated."* Thanks for the comments. We have conducted additional experiments and will expand the related discussion as follows. In particular, we have tested the proposed algorithm (ICP) and the previous algorithm (CP) over various specifications of **graph edge density $r$**, which is measured by $|E|/|E_{\max}|$ with $|E_{\max}| = |V|(|V|-1)/2$. We performed experiments for $|V| = 1024$ and $2048$, and the results are summarized in the tables below. As the edge density $r$ increases, the difference in running time (measured in seconds) between the two algorithms becomes more pronounced. This trend is attributed to the fact that denser graphs generally correspond to a lower value of $\theta = m / (|V| + |E|)$, which enhances the performance advantage of our proposed method. **Table1: Comparison between ICP and CP** ||||||| |:-:|:-:|:-:|:-:|:-:|:-:| ||| $\vert V\vert=1024$ |||| |$r$|0.06|0.09|0.15|0.27|0.43| |$\mathrm{CP}-\mathrm{ICP}$ (seconds) |3.75|4.40|5.72|7.10|8.73| ||| $\vert V\vert=2048$ |||| |$r$|0.04|0.06|0.10|0.22|0.49| |$\mathrm{CP}-\mathrm{ICP}$ (seconds) |19.45 |34.47|41.54|59.24|62.24| Recall that our proposed algorithm enhances Step 2 of Algorithm 1. When focusing solely on the computational cost of this step, the advantage of our method becomes even more evident. Let $T_{\mathrm{CP},2}$ denote the average running time of Step 2 as implemented in Wienöbst et al. (2023), and let $T_{\mathrm{ICP},2}$ denote the average running time of our improved approach for the same step. The table below presents both the difference and the ratio between these average running times. The results highlight substantial improvements achieved by our method in terms of efficiency.
**Table2: Comparison between Step 2 of ICP and CP** ||||||| |:-:|:-:|:-:|:-:|:-:|:-:| ||| $\vert V\vert=1024$ |||| |$r$|0.06|0.09|0.15|0.27|0.43| | $T_{\mathrm{CP},2}-T_{\mathrm{ICP},2}$ (seconds) | 3.31 | 4.65 |5.66 |8.04| 9.28 | |$T_{\mathrm{CP},2} / T_{\mathrm{ICP},2}$ |7.88|19.02|26.80|65.91|159.57| ||| $ \vert V\vert=2048$ |||| |$r$|0.04|0.06|0.10|0.22|0.49| |$T_{\mathrm{CP},2}-T_{\mathrm{ICP},2}$ (seconds) |17.99| 24.19|37.41 |72.44 |81.45 | |$T_{\mathrm{CP},2} / T_{\mathrm{ICP},2}$ |10.05|16.42|29.71|92.31|318.54| The full results will be included as new figures and tables in the final version of the manuscript. We believe these additional experiments significantly enhance the evidence to support the advantages of the proposed method. > **Other Comments Or Suggestions:** *"Is this a strict improvement? When is there a significant gap? Is there any special class of DAGs where the proposed algorithm is much better?"* In this work, we propose SC-Trans algorithm to improve Step 2 of Algorithm 1, which is the Clique-Picking (CP) algorithm by Wienöbst et al.(2023). The computational complexity of our proposed algorithm is $\mathcal{O}(m^2)$, compared to $\mathcal{O}(m(|V| + |E|))$ for the corresponding step in the CP algorithm. This constitutes a strict and significant improvement when $m (> 1)$ is a moderate value and the graph size, i.e., $|V| + |E|$, is large. The improvement is especially pronounced for large graph where the ratio $m/(|V| + |E|)$ is small. This is supported by the numerical evidence as in our response to your previous comment (please refer to the tables above). > **Other Comments Or Suggestions:** *"It would be more illustrative to plot the experiment results in log-log plot."* Thanks for the suggestions. We will adjust the plot to better demonstrate the advantages of the proposed algorithm in the final version of the manuscript. 
> **Questions For Authors:** *"How do we compare the proposed algorithm to other counting algorithms"* Thank you for the question. The Clique-Picking (CP) algorithm proposed by Wienöbst et al. (2023) is the first known algorithm with polynomial complexity for this task and currently represents the state of the art. In their work, the CP algorithm has been shown to significantly outperform previous counting algorithms. Our proposed approach builds upon and improves the Clique-Picking algorithm, and therefore also outperforms existing alternatives. --- Rebuttal Comment 1.1: Comment: I thank the authors for their helpful response. It addresses most of my concerns. I will keep the score for acceptance.
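The ratio $\theta = m/(|V|+|E|)$ invoked in the responses above can be computed directly for small graphs. The sketch below uses a basic Bron-Kerbosch enumeration of maximal cliques; it is fine for tiny inputs but is only an illustration, not the scalable routine used in the paper's experiments.

```python
def maximal_cliques(adj):
    """Enumerate maximal cliques of an undirected graph given as
    {vertex: set_of_neighbors}, via basic Bron-Kerbosch (no pivoting)."""
    found = []

    def expand(clique, candidates, excluded):
        if not candidates and not excluded:
            found.append(frozenset(clique))
            return
        for v in list(candidates):
            expand(clique | {v}, candidates & adj[v], excluded & adj[v])
            candidates = candidates - {v}
            excluded = excluded | {v}

    expand(set(), set(adj), set())
    return found

def theta(adj, num_edges):
    """theta = m / (|V| + |E|): small theta is the regime where ICP's
    O(m^2) Step 2 beats CP's O(m(|V| + |E|)) Step 2."""
    return len(maximal_cliques(adj)) / (len(adj) + num_edges)
```

A three-vertex path has two maximal cliques, so theta = 2/(3+2) = 0.4; a triangle has one maximal clique, giving theta = 1/6. Dense graphs tend to have few maximal cliques relative to their size, which is the trend the rebuttal's tables exploit.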
Summary: This submission presents an improvement of the recent polynomial-time algorithm for counting moral acyclic orientations of chordal graphs, a problem which lies at the core of counting Markov equivalent DAGs. The main idea of that algorithm compared to older iterative root-picking algorithms lay in picking root-cliques. However, the prior algorithm recomputed equivalent information multiple times when choosing a different sequence of root-cliques. The present submission can identify and avoid this behavior by using what the submission refers to as the super clique structure. Overall, this improves the efficiency of the general approach at the step where the undirected components are looked up for a specific choice of the root clique. Besides proving the improved asymptotic complexity bound, the submission includes small experiments indicating the increased efficiency empirically as well. Claims And Evidence: No concerns. Methods And Evaluation Criteria: The methods make sense for the claims being made. Theoretical Claims: I read only what is not in the appendix. Experimental Designs Or Analyses: I did not, as the experiments are not the main contribution and the mild claims made about them seem unproblematic. Supplementary Material: No. Relation To Broader Scientific Literature: The submission continues a line of research improving the runtime of algorithms for counting MAOs. This is also clearly described in the introduction of the submission. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The topic and contribution are relevant to the audience at ICML. The presentation is also satisfactory. In terms of content, I find the key ideas behind the improvement elegant and natural. I recommend acceptance of this submission. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for the positive feedback and for your valuable recognition of our work's core ideas and contributions.
Summary: The paper addresses the computational complexity of finding the size of the so-called Markov equivalence class (MEC) encoding conditional dependencies. Conditional dependence properties are captured by the d-separation property of the DAGs, and the method counts DAGs in the MEC (representing the same/equivalent causal relationships). Generally, the size of an MEC grows exponentially in the number of vertices, and efficient methods are needed. The paper improves on the recent Clique-Picking (CP) method of Wienöbst et al. (2023), presented in Algorithm 1 in the paper. The authors propose novel clique structures called super cliques (Definition 5.2) and a novel Super Cliques Transfer procedure to improve the computational cost of generating the undirected connected components $C_G(K)$ of the tree with selected root clique $K$, i.e., Step 2 of Algorithm 1. Using super cliques (SCs) enables an efficient reuse of the structures across different rooted trees, avoiding repetitive construction. As a result, after the definitions and theoretical framework derivations (Theorems 4.1, 4.2), further detailed in Sections 5 and 6, the complexity of Step 2 of Algorithm 1 is reduced from $O(m(|V|+|E|))$ to $O(m^2)$. This constitutes the main contribution of the paper, supported by numerical experiments on random chordal graphs in Section 7. This is claimed to enhance the feasibility of real-world applications such as causal inference in healthcare and AI. ## update after rebuttal: Thanks to the authors for the updates, new experiments, and clarifications regarding the proofs. I found their arguments and experimental results convincing and raise my score to 4. Claims And Evidence: Overall, the problem to solve is well motivated and laid out. The arguments are clear and readable, yet the exposition is quite technical. Figure 1 helps a lot, but further visual helpers may improve the appeal of the paper to a wider community. The experimental evidence is rather frugal but conceptually sufficient, I believe.
The clarity of presentation of the evidence in Section 7 is, on the other hand, recommended for a remake. See the comments and Weaknesses 1 and 2 in the respective sections. Methods And Evaluation Criteria: The paper's argument is mainly theoretical, so the selected experiments on random graphs are rather verification and supportive evidence. In this sense they are sufficient. For details see the respective sections below. Theoretical Claims: As for the theoretical evidence, Theorems 4.1 and 4.2, Lemma 6.1, and Propositions 6.2 and 6.3 are clearly stated, but it is unclear where the proofs can be found. The Supplementary Material contains only a Jupyter Notebook implementation of the SC-Trans algorithm. Sketches of the arguments are to be found in the related sections for the lemma and propositions, but some details are missing, e.g., 'easy to see' under Proposition 6.3, etc. $\textbf{Weakness 1}$: Overall I find the arguments convincing, but the theoretical argument would benefit from more detailed proofs in the Supplementary Material. This would make the paper more complete and self-contained, rendering it more useful to a wider research community, I believe. Experimental Designs Or Analyses: Section 7 presents experiments (Figure 3, Table 2) on random graphs with an increasing number of vertices for two dependency ratios, namely $r=0.08$ vs. $r=0.34$, where $r=m/(|V|+|E|)$ is suitably chosen to demonstrate the strong side of the proposed SC-Trans algorithm, whose efficiency should excel in settings where the number of maximal cliques $m$ is strongly smaller than the number of vertices and edges. While the text in Section 7 claims that Table 2 and Figure 3 demonstrate this well, it is in fact quite hard to see from the experiments how the compute time scales with $r$. $\textbf{Weakness 2}$: While I agree with the experiments supporting the claim, it is just quite hard to read it out from the absolute numbers presented.
It is suggested to use a relative measure, e.g., the run-time difference ICP-CP over changing $r$ or number of vertices, or other means to present the evidence for the main claim of the paper more clearly. Supplementary Material: I reviewed the SM. It contains the implementation of the proposed method and results. Proofs are not present. Relation To Broader Scientific Literature: The introduction contains relevant mentions of the literature of the field, with applications in epidemiology, biology, and economics, with references mentioned. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: novelty, contribution, addressing a timely problem; the theoretical arguments are well written (yet still lacking some rigour that is suggested to be added, see W1). Other Minor Weaknesses: applications and possible problems to be solved by the proposed method. Other Comments Or Suggestions: - typos: L076 ("product" instead of "multiplication"), L186 ("be be") Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your feedback. We are excited that you have found our approach useful and novel. Please find below our response to your concerns. > **Claims And Evidence:** *"...further visual helpers may improve appeal of the paper to wider community..."* Thank you for the helpful suggestion. We will include a table summarizing all key notations and concepts, along with a detailed example illustrating how the algorithm works. These additions, to be placed in the appendix, will enhance the clarity and accessibility of our method for a broader audience. > **Theoretical Claims:** *"...but some details are missing, e.g., 'easy to see' under Proposition 6.3, etc."* Thank you for pointing this out. We will provide more detailed explanatory text to improve clarity and rigor. In particular, for *'easy to see' under Proposition 6.3*, we will add an explanation: "The edge $K_1\rightarrow K_i$ in $T^{K_1}$ will be redirected as $K_i\rightarrow K_1$ in $T^{K_i}$. This implies $K_1$ becomes a child clique of $K_i$ in $T^{K_i}$. As the root $K_i$ is the only ancestral clique of $K_1$ in $T^{K_i}$, $K_1$ must be a clique header within $T^{K_i}$ by Definition 5.1." > **Weakness 1** *"...it is unclear where the proof can be found... Overall I find arguments convincing but theoretical argument would benefit from more detailed proofs..."* Thank you for your feedback. We would like to clarify that the technical proofs are included in the **Appendix of the main paper**, *not* in the Supplementary Material. Please refer to **Appendix D. Technical Proofs for the Main Paper** at the end of the main manuscript. To help readers locate the proofs more easily, we will explicitly highlight their locations in the final version of our manuscript: 1. The proof of Theorem 4.1 can be found in Appendix D, *Proof of Theorem 4.1* (Line 699); 2. The proof of Theorem 4.2 can be found in Appendix D, *Proof of Theorem 4.2* (Line 713); 3.
The proof of Propositions 6.2 \& 6.3 can be found in Appendix D.*Proof of Propositions 6.2 \& 6.3.* (Line 668). Lastly, we will clarify that Lemma 6.1 was established in Leimer (1993), Page 105, Proposition 2.4. (iii). > **Weakness 2** *"Clarity of presentation of the evidence in Section 7 is recommended for a remake." "...It is suggested to use relative measure over changing $r$ or vertices..."* Thanks for your comments. We have conducted additional experiments and will revise Section 7 to better demonstrate the advantage of the proposed method. We would like to clarify that the proposed algorithm (ICP) and the previous algorithm (CP) were tested over various specifications of **graph edge density** $r=|E|/|E_{\max}|$ with $|E_{\max}|=|V|(|V|-1)/2$. We performed experiments for $|V|=1024$ and $2048$, and the results are summarized in the tables below. As the edge density $r$ increases, the difference in running time (in seconds) between the two algorithms becomes more pronounced. This trend is attributed to the fact that denser graphs generally correspond to a lower value of $\theta=m/(|V|+|E|)$, which enhances the performance advantage of our proposed method. **Table1: Comparison between ICP and CP** ||||||| |:-:|:-:|:-:|:-:|:-:|:-:| ||| $\vert V\vert=1024$ |||| |$r$|0.06|0.09|0.15|0.27|0.43| |$\mathrm{CP}-\mathrm{ICP}$ (seconds) |3.75|4.40|5.72|7.10|8.73| ||| $\vert V\vert=2048$ |||| |$r$|0.04|0.06|0.10|0.22|0.49| |$\mathrm{CP}-\mathrm{ICP}$ (seconds) |19.45 |34.47|41.54|59.24|62.24| Recall that our algorithm enhances Step 2 of Algorithm 1. When focusing solely on the computational cost of this step, the advantage of our method becomes even more evident. Let $T_{\mathrm{CP},2}$ be the average running time of Step 2 as implemented in Wienöbst et al. (2023), and let $T_{\mathrm{ICP},2}$ be the average running time of our improved approach for the same step. The table below presents both the difference and the ratio between these average running times. 
The results highlight substantial improvements achieved by our method in terms of efficiency. **Table2: Comparison between Step 2 of ICP and CP** ||||||| |:-:|:-:|:-:|:-:|:-:|:-:| ||| $\vert V\vert=1024$ |||| |$r$|0.06|0.09|0.15|0.27|0.43| | $T_{\mathrm{CP},2}-T_{\mathrm{ICP},2}$ (seconds) | 3.31 | 4.65 |5.66 |8.04| 9.28 | |$T_{\mathrm{CP},2} / T_{\mathrm{ICP},2}$ |7.88|19.02|26.80|65.91|159.57| ||| $\vert V\vert=2048$ |||| |$r$|0.04|0.06|0.10|0.22|0.49| |$T_{\mathrm{CP},2}-T_{\mathrm{ICP},2}$ (seconds) |17.99| 24.19|37.41 |72.44 |81.45 | |$T_{\mathrm{CP},2} / T_{\mathrm{ICP},2}$ |10.05|16.42|29.71|92.31|318.54| The full new results will be included as new figures and tables in the revision. We believe these additional experiments will significantly enhance the evidence to support the advantages of the proposed method. > **Other Comments Or Suggestions:** Thank you for pointing this out. We will carefully proofread the paper and correct these typos in our final submission.
Fusing Reward and Dueling Feedback in Stochastic Bandits
Accept (poster)
Summary: This paper investigates the fusion of numerical and preference feedback in stochastic bandits, where both feedback types are gathered in each timestep. The authors derive a regret lower bound, demonstrating that an efficient algorithm may incur only the smaller of the reward-based and dueling-based regret for each arm. The authors propose two fusion algorithms: (1) a simple elimination fusion algorithm that leverages both feedback types to explore all arms and unifies collected information by sharing a common candidate arm set, and (2) a decomposition fusion algorithm that selects the more effective feedback to explore the corresponding arms and randomly assigns one feedback type for exploration and the other for exploitation in each round. The elimination fusion incurs a suboptimal multiplicative factor in the number of arms in its regret due to the intrinsic suboptimality of dueling elimination. In contrast, the decomposition fusion achieves regret matching the lower bound up to a constant under a common assumption. Extensive experiments validate the efficacy of the provided algorithms and theoretical results. Claims And Evidence: The claims made in this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Theoretical Claims: The theoretical results look reasonable, but I didn’t go through every proof. Experimental Designs Or Analyses: The experiments look reasonable. Supplementary Material: I didn’t read the supplementary material. Relation To Broader Scientific Literature: This paper is relevant to the literature. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. This paper is very well-written and the considered problem is well-motivated from real-world decision-making scenarios where numerical feedback and preference feedback are simultaneously collected, e.g., recommendation systems. 2.
The algorithm design and theoretical results are complete and well executed. The authors design two fusion algorithms, one of which achieves an optimal regret. Rigorous regret upper and lower bounds are provided. The presented table and figure for result summary (Table 1 and Figure 1) are clear. 3. Experimental results are presented to validate the empirical effectiveness of the proposed algorithms and the provided theoretical findings. Weaknesses: 1. (Not a weakness, just a comment) In the formulation, this paper only considers that the preference probability of choosing arm a over arm b is greater than 0.5 if arm a has a higher expected reward than arm b. Besides this ordering relation, this paper does not assume or utilize any other relation between the preference probability and expected reward. What if the preference probability is generated according to the expected reward, like the Bradley-Terry model used in the RLHF literature? Will this additional assumption allow a better regret result? More discussion is needed here. 2. What is the motivation of the regret definition (the equation above Section 2.1), which is a linear combination between the regret in classic bandits and the regret in dueling bandits? It seems that this is not very natural. Isn’t it more natural to define the regret as the loss in the expected reward due to not always selecting the optimal arm, which may need an additional assumption that the preference probability is generated according to the expected reward, like the Bradley-Terry model? Other Comments Or Suggestions: Please see the weaknesses above. Questions For Authors: Please see the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
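The $\alpha$-weighted regret questioned in Weakness 2 can be sketched per round. The averaged-gap convention for the dueling term below is one common choice in the dueling-bandit literature and is an assumption for illustration, not necessarily the paper's exact definition above Section 2.1.

```python
def fused_regret_step(alpha, gap_reward, gap_duel_pair):
    """Per-round regret under a linear alpha-combination of reward regret
    and dueling regret. gap_reward is the (sub)optimality gap of the arm
    pulled for reward feedback; gap_duel_pair holds the gaps of the two
    dueled arms, averaged here as in common dueling-bandit conventions."""
    gap_i, gap_j = gap_duel_pair
    return alpha * gap_reward + (1 - alpha) * 0.5 * (gap_i + gap_j)
```

Setting alpha = 1 recovers the reward-only regret and alpha = 0 the dueling-only regret, matching the special cases the review and rebuttal discuss.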
Rebuttal 1: Rebuttal: > *W1. [...] other relation between the preference probability and expected reward [...] like the Bradley-Terry model* **A1:** We thank the reviewer for raising this intriguing question. A relation between the preference probability and expected reward, like the Bradley-Terry model $\nu_{1,2} = \frac{\exp(\mu_1)}{\exp(\mu_1) + \exp(\mu_2)} = \frac{\exp(\mu_1-\mu_2)}{\exp(\mu_1-\mu_2) + 1}$ or utility dueling bandits $\nu_{1,2} = \frac{1 + (\mu_1 - \mu_2)}{2}$ (Ailon et al., 2014), is indeed stronger than the one in our paper and will lead to a better regret result. Specifically, with this parametric relation, one only needs to focus on one set of parameters: the reward means or the dueling probabilities. Let us consider treating the reward means $(\mu_1, \mu_2, \dots, \mu_K)$ as the basic parameters to estimate, so that all observations from dueling feedback can be translated into reward feedback via the parametric relation. From this point of view, the reward feedback provides direct observations of the reward mean parameters, and the dueling (preference) feedback between arms $k$ and $\ell$ can be considered as a "parametric reward feedback" depending on the reward mean parameters. For example, with the Bradley-Terry relation, the parametric form is a logistic function, which is similar to logistic bandits (Faury et al., 2020), while with the utility dueling relation, the parametric form is a linear function, which is similar to linear bandits (Abbasi-Yadkori et al., 2011).
With this interpretation, the authors believe that the final regret bound would depend on the reward gaps $\Delta_k^{(R)}$ and have no explicit dependence on the dueling gaps $\Delta_k^{(D)}$ (or the other way round if we consider the dueling probabilities as the basic parameters), and the regret improvement may lie in the actual dependence on the reward gaps $\Delta_k^{(R)}$ (e.g., the prefactor or the exponential order of this gap would be strictly less than the best one without the dueling feedback option). Rigorously investigating the improvement in regret bounds under these stronger assumptions (out of the scope of the current paper) is an interesting research direction. We will add this discussion in the final version of this paper to clarify this potential improvement. --- > *W2. [...] motivation of the regret definition [...] regret as the loss in the expected reward due to not always selecting the optimal arm* **A2:** We thank the reviewer again for raising this interesting question. **Motivation** The current linear combination definition---especially the parameter $\alpha\in[0,1]$---is motivated by the cost difference between querying reward feedback and dueling feedback, as dueling (relative) feedback is usually cost-efficient (Ouyang et al., 2022). Furthermore, this flexible definition covers many interesting scenarios: the case of $\alpha = 1$ is the regret from reward feedback only, the case of $\alpha = 0$ is the regret from dueling feedback only, and the case of $\alpha = \frac{1}{2}$ weighs the two types of feedback equally. We will add this discussion to this paper's final version to clarify the regret definition's motivation. **Other regret definitions** Below, we provide three perspectives on changing the definition of regret in our paper. 
First, if the reviewer means to only consider the regret from reward feedback ("define the regret as the loss in the expected reward"), and treat the dueling feedback as free side observations, then this regret reduces to the case of setting $\alpha = 1$ in our regret definition. In this scenario, our $\texttt{DecoFusion}$ algorithm achieves constant regret, as discussed in Lines 409--423 (left column). Second, if the reviewer also means to consider the regret cost due to dueling feedback but counts this part of the regret in terms of the expected reward instead of the dueling (preference) probability as in the current definition, then the new regret cost in each decision round would be the sum of the reward gaps of the three pulled arms (one for reward, a pair for dueling). For this new regret definition, our algorithm and analysis still work. The only modification is in the regret upper bound results, where one needs to change the dueling gap $\Delta_k^{(D)}$ in the numerator of Eq. (4) in Theorem 3.1 and Eq. (5) in Theorem 4.1 to the reward gap $2\Delta_k^{(R)}$. Third, if one further assumes the Bradley-Terry model relation between reward means and dueling probabilities upon the new regret definition, this would lead to a problem that is similar to logistic bandits (Faury et al., 2020), as discussed in the previous response, which is an interesting research direction. --- - Faury, L., Abeille, M., Calauzènes, C., and Fercoq, O. Improved optimistic algorithms for logistic bandits. In International Conference on Machine Learning, pp. 3052–3060. PMLR, 2020.
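As a side note on the parametric relations (Bradley-Terry and utility dueling) discussed in this thread, the point that a reward gap pins down the preference probability only once a specific parametric relation is assumed can be sketched numerically. This is our own illustration; the function names and example means are assumptions, not from the paper:

```python
import math

def bradley_terry(mu_a: float, mu_b: float) -> float:
    """Preference P(a beats b) under the Bradley-Terry relation."""
    return math.exp(mu_a - mu_b) / (math.exp(mu_a - mu_b) + 1.0)

def utility_dueling(mu_a: float, mu_b: float) -> float:
    """Preference P(a beats b) under the utility dueling relation (Ailon et al., 2014)."""
    return (1.0 + (mu_a - mu_b)) / 2.0

# Both relations respect the ordering assumption used in the paper:
# mu_a > mu_b implies a preference probability above 0.5 ...
mu_a, mu_b = 0.7, 0.4
assert bradley_terry(mu_a, mu_b) > 0.5
assert utility_dueling(mu_a, mu_b) > 0.5
# ... but they map the same reward gap to different dueling probabilities
# (~0.574 vs 0.650 here), so without committing to one such parametric
# relation, dueling observations cannot be translated into reward
# observations -- the "separation" the rebuttal refers to.
print(bradley_terry(mu_a, mu_b), utility_dueling(mu_a, mu_b))
```

Under either assumed relation, the dueling feedback becomes a parametric function of the reward means, which is what makes the logistic- and linear-bandit analogies in A1 applicable.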
Summary: This paper looks into the problem of stochastic bandits fusing reward and dueling feedback, where the regret is defined as a linear combination of the standard regret and the averaged dueling regret. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: - I checked the lower bound proof but did not review the rest, as I am not familiar with the proof of the dueling bandit. - I have a question regarding the proof of the lower bound. In the construction of the original and alternative instances, the proof seems to define the reward means and dueling probabilities separately. My question is: when the reward mean is defined, aren’t the dueling probabilities automatically determined? Experimental Designs Or Analyses: Yes, looks good to me. Supplementary Material: N/A. There is no supplementary material. Relation To Broader Scientific Literature: This problem is related to the general bandit problem beyond natural reward feedback and has applications in real-world systems such as recommendation systems. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper proposes the fusing problem and presents two algorithms to solve it. The algorithms are well-designed, and the achievable upper bounds match the order of the lower bounds. I did not find any significant weaknesses in this paper, as it is self-contained and inspiring. Other Comments Or Suggestions: - The regret bound for the No Fusion algorithm seems to be different in Table 1 and line 224 (right) - The empirical log-likelihoods seem to be defined in algorithm 2, not algorithm 1 or 4. Questions For Authors: - In the reward explore, dueling exploit stage, is there a reason to define the dueling action as both $\hat {k}_t^{R}$ - Lines 6 and 8 of the algorithm do not perform the exclusion of the reward set from the dueling set as in line 264 (right). Is there an intuition that this kind of non-approximate design works? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > *Q1: The regret bound for the No Fusion algorithm seems to be different in Table 1 and line 224 (right)* **A1:** Indeed, they are different. The regret bound in Table 1 combines two separate (optimal) regret bounds for the reward and dueling bandits, respectively. Therefore, the denominator is $\min\{(\Delta_k^{(R)})^2, (\Delta_k^{(D)})^2\}$ as both are in the best dependence. Nevertheless, the regret bound in line 224 is obtained by combining two elimination algorithms for reward and dueling bandits, respectively (for comparison with our fusion elimination algorithm). As the elimination algorithm in dueling bandits is suboptimal, the denominator becomes $\min\{(\Delta_k^{(R)})^2, (\Delta_k^{(D)})^2 / K\}$. We thank the reviewer for raising this concern. We will clarify this in the final version of this paper. --- > *Q2: The empirical log-likelihoods seem to be defined in algorithm 2, not algorithm 1 or 4.* **A2:** Yes, this is the case because the empirical log-likelihoods are only used in Algorithm 2 (and also updated in Algorithm 2 at Lines 29-30). Algorithms 1 and 4 do not need the empirical log-likelihoods. --- > *Q3: In the reward explore, dueling exploit stage, is there a reason to define the dueling action as both $\hat k_t^R$* **A3:** Yes, as the arm $\hat k_t^{(R)}$ is the current empirical best arm, we choose the empirical best arm pair $(\hat k_t^{(R)}, \hat k_t^{(R)})$ as the dueling action to exploit the empirically good arms (for the sake of minimizing the dueling regret, since we only explore via reward feedback in that case). --- > *Q4: Lines 6 and 8 of the algorithm do not perform the exclusion of the reward set from the dueling set as in line 264 (right). Is there an intuition that this kind of non-approximate design works?* **A4:** We thank the reviewer for catching this delicate but important algorithm design technique. 
In Line 264 (right), as the ground-truth decomposition is known, one should exclude the reward set from the dueling set and vice versa. However, as the actual decomposition is unknown a priori in $\texttt{DecoFusion}$, we need to be conservative (i.e., allow both sets $\hat{\mathcal K}_t^{(D)}$ and $\hat{\mathcal K}_t^{(R)}$ to have some overlap instead of taking the exclusion) to ensure enough exploration from both feedback sides for uncertain arms, so as to learn the ground-truth decomposition in the end. We will add this discussion in the final version of this paper to clarify the intuition behind this design. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will hold my rating for now and see if the other reviewers' concerns are addressed.
Summary: The paper investigates a K-armed bandit problem in which, in addition to the standard observations of arm rewards (modeled by a Bernoulli distribution), the learner has the option to select two additional arms at each time step. The feedback received includes comparisons between the selected arms, which are also based on a Bernoulli feedback mechanism, with preferences that align with the mean rewards. I.e., the paper "fuses" standard bandits with duelling bandits. Claims And Evidence: Yes, the results are supported by proofs and appear to be correct to me. Methods And Evaluation Criteria: The theoretical presentation is consistent with standard practices in the literature. The experiments utilize synthetic data. The paper leans more towards a theoretical approach. Theoretical Claims: Yes, Theorem 2.3 on the lower bound, as well as Theorems 3.1 and 4.1 on the regret performance of algorithms, make sense and appear to be correct to me. I have reviewed the proofs. Experimental Designs Or Analyses: Yes, the proofs appear correct. The experiments are standard and use synthetic data. Supplementary Material: I checked the proof of theorems. Relation To Broader Scientific Literature: The paper mainly derives theoretical regret bounds on fusion of standard and duelling bandits and also presents some basic synthetic experiments. Essential References Not Discussed: - Other Strengths And Weaknesses: The fusion of reward-based and preference-based feedback is an intriguing problem, and I assume many practitioners employ heuristics in their algorithm deployments. This paper adopts a theoretical approach and models the problem mathematically using the multi-armed bandit (MAB) framework, which could potentially be interesting. However, I have some reservations regarding the scope and contribution of the paper. First, the practical value of the paper is not clear. 
The setting and framework are too simplistic and niche, making them inapplicable to motivating examples such as large language models (LLMs), which utilize inherently different algorithms and techniques. Second, the theoretical results appear to be a combination of existing tools and findings, which makes them somewhat expected. While the lower bound seems complete and logical, the algorithms do not appear to align with this lower bound. The only aspect that seems specific to this paper is the method of fusing these two feedback models and eliminating arms based on the best statistics (relative feedback or reward), which remains an open question. Specifically, the results align with the lower bound only when the best relative arm for eliminating a suboptimal arm is the one with the highest reward. This limitation also constrains the theoretical contribution, which seems to be the primary focus of the paper. Other Comments Or Suggestions: - Questions For Authors: "The estimations of reward means $\mu_k$ and dueling probabilities $\nu_{k,l}$ are orthogonal as their observations are independently sampled from distributions with different parameters. This orthogonality makes it difficult to directly combine these two types of feedback in online learning." The statement appears to be problematic. In what sense are these estimations considered orthogonal? Does "orthogonal" here imply independence? The reward feedback can clearly be translated into relative feedback, which suggests a relationship between the two types of feedback rather than a complete orthogonality. It may be beneficial to clarify this terminology to avoid confusion. Ethical Review Concerns: - Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > *Q1: the practical value of the paper is not clear. [...] simplistic and niche* **A1:** **Practical value** While the authors agree that the bandits setting (a framework in favor of theoretical analysis) of this paper is indeed simplified, we believe that the proposed algorithms and theoretical results are still valuable for practice for the following reasons: The standard LLM training pipeline usually has two separate steps: (1) use responses from human experts (absolute feedback) to conduct supervised learning, called supervised fine-tuning (SFT), and (2) use human preferences (relative feedback) among multiple responses generated from the SFT model to conduct preference training (either direct preference optimization (DPO) or reinforcement learning from human feedback (RLHF)). Although not exactly the same, these two steps correspond to the reward (absolute) and dueling (relative) feedback in our bandits setting. Our algorithm suggests a training approach that conducts these two steps in parallel, potentially saving human labeling cost and improving training efficiency (as our regret upper bounds improve over the existing ones). Investigating this potential approach in LLM training is an interesting direction. As our ultimate goal is to effectively combine absolute and relative feedback (e.g., in realistic RLHF alignment for LLMs), the proposed algorithms and theoretical results in the bandits framework are meaningful and concrete steps toward this goal. We thank the reviewer for raising this concern. We will discuss this point in the final version of this paper. --- > *Q2: [...] the theoretical results appear to be a combination of existing tools and findings [...] 
the algorithms do not appear to align with this lower bound.* **A2:** **Novelty** First, we clarify that our paper focuses on fusing both reward and dueling feedback, each drawn from distributions governed by a different set of parameters, where the information from either type of feedback cannot be translated into the other type. Due to this separation, it is intrinsically difficult to incorporate them to optimize performance on the same bandit task. To address the difficulty, we propose two novel algorithmic frameworks: elimination fusion and decomposition fusion, neither of which is known in the prior literature. The elimination fusion is simpler and more straightforward to extend to other applications. In contrast, the decomposition fusion is optimized for the stochastic bandit scenario and achieves near-optimal regret performance. **Open Problem** Secondly, we agree with the reviewer that removing the condition that "the best relative arm for eliminating a suboptimal arm is the one with the highest reward" (i.e., $\ell_k^*=1$) is indeed an interesting direction, as we discussed in the paper (Lines 382--384 (right column) and Lines 396--408 (left column)). However, this required condition is reminiscent of the dueling bandits setting, and it is still a partially open problem (Komiyama et al., 2015). Once this problem is fully addressed in the basic dueling bandits, one can apply our two fusion algorithm techniques to the new dueling bandits algorithms to remove this condition in our setting. --- > *Q3: [...] The statement appears to be problematic. Does "orthogonal" here imply independence? [...]* **A3:** We thank the reviewer for suggesting the improvement of this phrase. 
Although there is indeed an ordering relation that "the dueling probability of arm $k$ winning over arm $\ell$ is greater than $0.5$ if and only if arm $k$ has a higher expected reward than arm $\ell$," we clarify that the reward feedback cannot be directly translated into dueling feedback in the general case, unless some stronger relations are assumed between the reward means and dueling probabilities (e.g., the Bradley-Terry model suggested by Reviewer qwFm). **Rephrase** In the final version of this paper, we will clarify the terminology and rewrite this sentence to avoid the confusion as follows: "As the reward means and dueling probabilities only have a weak ordering relation---the dueling probability of arm $k$ winning over arm $\ell$ is greater than $0.5$ if and only if arm $k$ has a higher reward mean than arm $\ell$---and their observations are independently sampled from distributions with different parameters, the estimations of reward means $\mu_k$ and dueling probabilities $\nu_{k,\ell}$ are intrinsically separated. This separation makes it difficult to directly combine these two types of feedback in online learning."
Summary: This manuscript considers a new bandit problem where the learner, at each step, simultaneously chooses any of the $K$ arms and any couple of the $K$ arms and observes an absolute reward as well as a dueling reward. The incurred regret is a convex combination (with parameter $\alpha$) of the absolute regret and the strong dueling regret. Assuming that the ordering of the expected rewards is compatible with the expected dueling probabilities, the authors prove an asymptotic lower bound on the cumulative regret and introduce two new algorithms, the second one asymptotically matching the lower bound. Interestingly, for $\alpha$ close to zero or one, this corresponds to a bandit problem with side information, in which case their algorithm DecoFusion achieves a constant regret. ## update after rebuttal I understand the authors' viewpoint that earlier papers on bandits (e.g., Auer et al., 2002) only focused on purely asymptotic regimes. However, there has been a lot of literature in the last ten years (and in particular in the duelling literature) to weaken assumptions of the form $T > \exp(cK)$; see again the introduction of Saha and Gaillard. In any case, I admit that this is perhaps a personal bias on the objective of being tightly asymptotically optimal versus non-asymptotically optimal, but there is a huge gap here between $T > \exp(cK)$ and weaker conditions such as $T > K^2$. Claims And Evidence: I feel that it is a bit overclaimed that "DecoFusion achieves the optimal regret". While the main theorem (Theorem 4.1) seems valid, the result is so asymptotic that the optimality is not achieved as soon as the number of arms $K$ is not constant (see W1). Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proof of the main theorem (Theorem 4.1). Experimental Designs Or Analyses: Yes, but the numerical experiments are only here for illustration. Supplementary Material: I mainly reviewed Appendix A (bibliography), D2 (main proof). 
Relation To Broader Scientific Literature: This manuscript considers a new bandit problem where the learner both plays an arm and a duel of arms. As such, I am not aware of any previous work on this topic. However, the authors build upon previous works on K-armed bandits (mostly Honda & Takemura (2010)) and duelling bandits (Komiyama et al. (2015)). That being said, the combination of the above approaches in DecoFusion is clearly new and original. Essential References Not Discussed: I am not aware of any, although the authors could further discuss the recent literature in duelling bandits (see e.g. the references in Saha and Gaillard or the survey of Bengs et al., 2018). Other Strengths And Weaknesses: S1) This is the first manuscript that deals with this problem, although the motivations are somewhat elusive. The authors establish almost matching asymptotic lower and upper bounds for the problem, but see W1. S2) The manuscript is mostly well written and the authors manage to provide the intuition behind their algorithms and the different rates. W1) In my view, the regret bounds are of a highly asymptotic nature. For instance, the regret bound in Theorem 4.1 matches the lower bound only in the regime where $T \geq \exp(cK)$, which is quite disappointing. Other Comments Or Suggestions: C1) $\epsilon$ is not defined in the statement of Theorem 4.1 Questions For Authors: Q1) In Section 2, you write "Later in this paper, we will show that a sublogarithmic o(log T ) regret is actually achievable (in fact, T -independent constant regret) in these two scenarios, revealing a unique phenomenon of DR-MAB". To my knowledge, this phenomenon is not unique and in fact arises for bandit problems with side information (e.g. the player plays several arms and only incurs the regret of a single arm). Q2) Following W1, I am wondering to what extent using more recent approaches in duelling bandits allows one to bypass the limitation that we need $T \geq \exp(cK^{1+\zeta})$ for optimality. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > *W1): In my view, the regret bounds are of high asymptotic nature. [...] where T\geq \exp [ c K], which is quite disappointing.* **A1:** We respectfully disagree with the reviewer's characterization of our results as "disappointing," particularly regarding the fact that our upper bound matches the lower bound asymptotically. As detailed below, achieving asymptotic optimality in this context is a well-established practice in the bandit literature. We hope the reviewer will reconsider the contributions of our paper in light of these points. Firstly, we clarify that our (gap-dependent) upper bound alone is non-asymptotic for any $T$, and it is expected to match the lower bound only asymptotically, because the (gap-dependent) lower bounds in the bandits literature (as well as our Theorem 2.3) are usually asymptotic (they hold as $T\to\infty$), and thus the upper bounds can only match them in an asymptotic manner. This is common practice for optimal bandit algorithms. For example, the regret upper bounds of the known optimal stochastic bandit algorithms, e.g., KL-UCB (Cappé et al., 2013, Theorem 1) and DMED (Honda & Takemura, 2010, Theorem 4), also match the lower bound only asymptotically. Secondly, the condition $T\ge \exp(cK)$ for matching lower bounds is common and needed in most bandit algorithms (it is omitted by default, so it often goes unnoticed). For example, the regret upper bound of the seminal UCB1 algorithm (Auer et al., 2002a, Theorem 1) can be expressed as $$R_T \le O\left( \sum_{k>1}\frac{\log T}{\Delta_k} + K \right).$$ To match the $\Omega\left( \sum_{k>1}\frac{\log T}{\Delta_k}\right)$ lower bound, it needs $T \ge \exp (cK)$ as well. This condition is required for any bandit algorithm whose finite (gap-dependent) regret upper bound has an additive $O(K)$ term---a common upper bound term in the literature. Therefore, the required condition $T\ge \exp(cK^{1+\xi})$ in our Theorem 4.1 is very mild, as $\xi$ can be as close to zero as possible. 
- Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002a. --- > *C1) $\epsilon$ is not defined the statement of Theorem 4.1* **A2:** We thank the reviewer for pointing out this typo. The symbols $\xi$ and $\epsilon$ denote the same quantity in the statement of Theorem 4.1, and we will correct this in the final version. --- > *Q1) [...] this phenomenon is not unique and in fact arises for bandit problems with side information [...]* **A3:** We thank the reviewer for mentioning this reference. We will add it to the final version of this paper. We will also clarify that the phenomenon's uniqueness is in the context of the fusion of reward and dueling feedback, in comparison to bandits with only reward or only dueling feedback. --- > *Q2) Following W1, [...] what extent using more recent approaches in duelling bandits allows to bypass the limitation [...of...] $T\ge \exp\left( cK^{1+\xi} \right)$ [...]* **A4:** Although there is indeed some recent progress on dueling bandits, e.g., the best-of-both-worlds algorithm by Saha & Gaillard (2022, Theorem 3) (whose stochastic regret upper bound has a suboptimal prefactor $4$ in the dominating $\log T$ term), to the knowledge of the authors, the best-known regret bounds for stochastic dueling bandits are still those of the RMED algorithm by Komiyama et al. (2015, Theorem 3), which also relies on the $T\ge \exp\left( cK^{1+\xi} \right)$ condition. We will add this discussion to the final version of this paper and raise it as an open question for future research on dueling bandits.
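As a complementary note to the $T \ge \exp(cK)$ discussion in this thread (our own illustration, not from the paper): the condition $T \ge \exp(cK)$ is algebraically equivalent to $K \le \frac{1}{c}\log T$, i.e., the number of arms may grow at most logarithmically in the horizon. A few lines of code make the size of these thresholds concrete:

```python
import math

def horizon_threshold(num_arms: int, c: float = 1.0) -> float:
    """Smallest horizon T satisfying T >= exp(c * num_arms)."""
    return math.exp(c * num_arms)

# T >= exp(c*K) rewrites as K <= (1/c) * log(T): the number of arms
# may grow at most logarithmically in the horizon T.
for K in (5, 10, 20):
    T = horizon_threshold(K)
    assert K <= math.log(T) + 1e-9
    print(f"K = {K:>2}  ->  T >= {T:.3e}")
```

This also makes the reviewer's point tangible: for $K = 20$ the threshold already exceeds $10^8$ rounds, whereas a condition like $T > K^2$ would only require $400$.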
CogMath: Assessing LLMs' Authentic Mathematical Ability from a Human Cognitive Perspective
Accept (poster)
Summary: The paper introduces CogMath, an evaluation framework for assessing LLMs' capabilities from a human cognitive perspective. It breaks down mathematical reasoning into three stages: problem comprehension, problem solving, and solution summarization. Each stage is further evaluated through nine detailed dimensions, with three agents designed for scientific assessment per dimension. The experiments provide valuable insights into the strengths and weaknesses of seven mainstream LLMs, guiding future development. Claims And Evidence: The experiments on seven representative LLMs are significant, and the authors investigate their performances at both the stage and dimension levels. Besides, the authors also assess the impact of some LLM-enhanced methods including CoT and ICL, which greatly enriches the conclusion. Methods And Evaluation Criteria: The proposed three stages and nine dimensions in CogMath framework are grounded in psychological research on human reasoning process. Therefore, I believe they have great foundation. Besides, the authors provide human verification results for agents in each dimension. The results in Section 4.7 clearly demonstrate their quality and effectiveness. Theoretical Claims: The core of this paper is to propose an LLM evaluation framework based on human cognition, through which various cognitive issues in LLMs are identified. Therefore, the focus and contribution of this work lie in experimental findings rather than theoretical analysis. Experimental Designs Or Analyses: I have checked the details of the experimental process. The settings and metrics appear well-founded. Besides, I have read all the analyses and I think they are reasonable and insightful. Supplementary Material: I mainly review Section A in the supplementary material to better understand the implementation of each dimension. I also review Section D, which illustrates the impact of CoT and ICL at each dimension. 
Relation To Broader Scientific Literature: Although there exist many other studies for LLM evaluation and some dimensions proposed in this paper share similarities, I believe that the organization of the three stages and nine dimensions from a human perspective is both innovative and insightful. It not only clarifies how LLMs achieve performance in each cognitive stage/dimension but also highlights areas for further improvement toward human-like intelligence. As a result, although the conclusion that LLMs are overestimated may not be surprising, the contributions of this paper remain significant for the community. Essential References Not Discussed: None. Other Strengths And Weaknesses: The main strengths of this paper include the following: 1. The motivation to evaluate LLMs’ performances at human-level dimensions/stages is important and innovative. Besides, in each dimension, the authors introduce a judge agent and a reference agent to ensure the quality of the evaluation. 2. The authors conduct extensive experiments across several LLMs. The results provide detailed insights into the advantages and disadvantages of different LLMs. The exploration of LLM-enhanced methods such as CoT and ICL provides a good insight to prompt further study. 3. The paper is well-organized and easy to understand. The authors give sufficient examples to explain their ideas and implementations. However, I have the following concerns and suggestions: 1. Some technical details in this paper require further clarification. Please refer to the questions below. 2. Since a key contribution of this paper is providing insights for improving LLMs, the authors should offer more detailed illustrations on this topic. Please refer to the questions below. Other Comments Or Suggestions: Typos: 1. Line 48, “performance rom" should be “performance from”? 2. Line 60, should "Solution" be lowercase? 3. Line 67, “refine” should be “refines” Questions For Authors: 1. 
If there are no numerical values in the problem (such as in a symbolic computation problem or when the numbers are represented in text), how should the dimension 6 work? and what will the query look like? 2. This paper reveals the performance of each LLM across different dimensions and stages. Based on these findings, the authors highlight potential directions for future improvements in LLMs. Therefore, how do the authors suggest further optimizing each LLM, such as GPT-4? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the psychological foundation of our evaluation framework, the extensive and significant experiments, the valuable insights provided by our work, and the clear organization of our paper. $\bf{Q1}$: If there are no numerical values in the problem (such as in a symbolic computation problem or when the numbers are represented in text), how should the dimension 6 work? and what will the query look like? $\bf{A1}$ : Thanks for your detailed review and valuable question, and we apologize for causing this confusion. If the problem does not contain numerical values, then our multi-agent system will not generate any inquiries for Dimension 6, as the Inquiry agent cannot produce any reasonable transformation for the problem. As a result, in such cases, as we outline in Appendix C, we omit consideration of that dimension during the evaluation. For example, the problem "If $a<b$, what is the value of $|a-b|+a+b$?" from the MATH dataset does not include any numerical values, so it is unnecessary to assess an LLM's numerical calculation ability on it. In our framework, the multi-agent system would likewise fail to produce a reasonable numerical transformation for such a problem. As a result, we would not evaluate Dimension 6 for this problem, which aligns with real-world scenarios and intuition. Based on your suggestion, we will add more clarifications in Section 3.2 to better explain our evaluation process. $\bf{Q2}$: This paper highlights potential directions for future improvements in LLMs. Therefore, how do the authors suggest further optimizing each LLM, such as GPT-4? $\bf{A2}$ : Thanks for your constructive question. Specifically, we can analyze potential optimization strategies for different LLMs based on the conclusions drawn in Sections 4.3 and 4.4. 
For example, as shown in Table 2, weaker LLMs (e.g., Llama2-13B) exhibit the lowest Pass Rates in Stage 1 (i.e., Problem Comprehension), whereas more advanced models (e.g., GPT-4, DeepSeek-V2.5) demonstrate relatively stable comprehension abilities but struggle significantly with mastering Stage 2 (i.e., Problem Solving). This suggests that improving comprehension should be a primary focus for weaker models. Further analyzing Figure 2, we observe that Llama2-13B’s main issues in Stage 1 stem from Dimension 1 (Sentence Paraphrasing) and Dimension 4 (Redundant Condition). This insight suggests that incorporating more training data involving synonymous rewrites and redundant conditions could help enhance its comprehension capabilities. For GPT-4, its main deficiencies in Stage 1 lie in Dimension 2 (Sentence Disruption) and Dimension 3 (Missing Condition), while in Stage 2, it performs poorly in Dimension 7 (Knowledge Redefinition). For Dimensions 2 and 3, as analyzed in Section 4.4, GPT-4's struggles may stem from its tendency to inherently "over-correct" unsolvable problems into solvable ones. To address this, we could equip it with critical thinking skills via reinforcement learning techniques, encouraging the model to recognize counterfactual scenarios instead of merely simulating the reasoning process based on a given input. The results in Dimension 7 indicate that GPT-4 may tend to treat knowledge as fixed memorization rather than a flexible reasoning process. This suggests the need for more adaptive knowledge learning strategies, such as contrastive learning and retrieval-augmented training, where models are exposed to dynamic and evolving knowledge sources to encourage reasoning beyond static memorization. Following your comments, we will supplement these discussions to expand the scope of our work in the revised version. $\bf{Q3}$: Some Typos. $\bf{A3}$ : Thanks for your meticulous review and pointing out these typos. 
We will carefully correct them in the revised version. --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author's response. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to read our response! We truly appreciate your valuable comments and will incorporate all our discussions into the revised version. Thank you again for your thoughtful feedback!
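The Dimension 6 omission rule described in A1 above can be sketched as a minimal filter. This is a hypothetical illustration only: the function names and the digit-based eligibility check are assumptions for exposition, not the authors' implementation.

```python
import re

def has_numerical_values(problem: str) -> bool:
    """Heuristic: a problem is eligible for Dimension 6 (numerical
    transformation) only if it contains at least one digit."""
    return re.search(r"\d", problem) is not None

def dimensions_to_evaluate(problem: str, all_dimensions=tuple(range(1, 10))):
    """Skip Dimension 6 for problems without numerical values,
    keeping the remaining dimensions for evaluation."""
    return [d for d in all_dimensions
            if not (d == 6 and not has_numerical_values(problem))]
```

On the symbolic example from A1 ("If $a<b$, what is the value of $|a-b|+a+b$?"), this filter would drop Dimension 6 and keep the other eight dimensions.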
Summary: This paper proposes CogMath to assess LLMs’ abilities at specific cognitive stages. Based on psychological research, they decompose the mathematical reasoning process into three stages: problem understanding, problem solving, and solution summarization. Then, in each stage, they further specify several detailed dimensions to test LLMs’ performance. More specifically, in each dimension, the authors design an innovative “Inquiry-Judge-Reference” multi-agent system to conduct a scientific and reliable evaluation. In experiments, they apply CogMath to several mainstream LLMs. The results reveal that the mathematical abilities of these LLMs are overestimated, and they have distinct advantages and disadvantages at different stages, providing a valuable direction for their further improvement. Claims And Evidence: Yes. In the proposed CogMath framework, the motivation for decomposing the reasoning process into three stages comes from psychological research. Moreover, the "Inquiry-Judge-Reference" multi-agent system ensures the quality of the "queries-answers" in each dimension. Therefore, the experimental results and findings are convincing. Methods And Evaluation Criteria: Yes. As mentioned above, the proposed framework has a solid psychological foundation, and the evaluation dimensions are well-suited to the problem. Besides, the proposed multi-agent system is both reasonable and effective in achieving the goals of this paper. Theoretical Claims: As this paper focuses on an LLM evaluation framework, there is no issue with theoretical claims. Experimental Designs Or Analyses: I have checked all the experimental designs, including performance analyses at both the stage and dimension levels, the impact of LLM enhancement methods (e.g., CoT and ICL), error analyses based on difficulty and problem length, and human verification of the multi-agent system. 
The experiments are well-founded, and their results clearly validate the authors' conclusions and highlight the contributions of this paper. Supplementary Material: Yes. I first review the examples and the prompts in supplementary materials to understand the details of the nine implemented dimensions. Then, I review the results in Section D to check the dimension-level effects of LLM enhancement methods. Finally, I read the authors' discussion on future directions. There are no specific issues in these sections, and I believe the authors have provided sufficient illustrations of the proposed framework in this paper. Relation To Broader Scientific Literature: I believe that the evaluation of LLMs has attracted great attention in the current community. There is much literature that intends to evaluate LLMs from different perspectives. However, as the authors state in this paper, these related works are either task-specific or rely on a single accuracy metric to assess LLMs. Therefore, I believe that this paper contributes a valuable angle to view LLMs' abilities from a human cognitive perspective, with multiple dimensions in each cognitive stage to construct a comprehensive evaluation. Moreover, the proposed framework is more than just a benchmark; it provides a general methodology for evaluating LLM performance across different cognitive stages, even when applied to datasets from various domains. Thus, this framework is extendable. Essential References Not Discussed: I think the related works are discussed sufficiently. Other Strengths And Weaknesses: Strengths: - The paper is well written, with each part detailed enough for understanding. Besides, the authors give clear explanations for their motivation, methodology, and experiments. Therefore, this paper is easy to reproduce and implement. - The perspective to evaluate LLMs from human cognition is valuable and innovative. It provides a new angle to examine LLMs' advantages and disadvantages.
Besides, the proposed framework has generalization ability. - As revealed in Section 4.2, simply introducing more test problems may be insufficient to assess the true mathematical abilities of LLMs. Therefore, this paper points out an important finding that existing related works may overlook. - The experimental results reveal how different LLMs perform at each cognitive stage and dimension. The exploration of the LLM enhancement methods is a plus for the experimental findings. It provides a new way to assess LLMs' intelligence and advancement. There remain some weaknesses that could be further improved: - The decomposition of the reasoning process into three stages could be elaborated further. - The relationships among the proposed dimensions could be discussed in more detail. Other Comments Or Suggestions: Line 18, "assess" -> "assesses" In this paper, "a LLM" -> "an LLM" In Figure 2, what is "MathExam"? Does it mean "MExam"? Questions For Authors: 1. In dimension 2, the authors disrupt the word order within each clause. Why do they choose the clause level, rather than directly disrupt the whole problem? 2. When humans prove theorems, are the three stages discussed in this paper enough to represent the whole reasoning process? 3. As for the nine dimensions in this paper, the authors seem to consider them independent. Do they try to combine some of them in one query and test LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our motivation and solid psychological foundation, the credibility of our experimental results, the significant contributions of our framework, and the good writing of our paper. $\bf{Q1}$: The decomposition of the reasoning process into three stages could be elaborated further. When humans prove theorems, are the three stages discussed in this paper enough to represent the whole reasoning process? $\bf{A1}$: Thanks for your constructive suggestion and insightful question. Our decomposition of the reasoning process into three stages is based on psychological research, which points out that humans typically undergo three general stages when reasoning about a mathematical problem: Problem Comprehension, Problem Solving, and Solution Summarization. Building on this, we carefully analyzed the objectives associated with each stage in mathematical reasoning and designed corresponding dimensions for evaluation. Moreover, these three stages represent a general evaluation framework. As discussed in Appendix E, they do not rely on specific problem types or formats and can be applied to evaluating LLMs' abilities in a wide range of tasks. For instance, in the theorem proving you mentioned, the reasoning process of human learners can also be split into understanding the given conditions (corresponding to our Stage 1), performing the proof (Stage 2), and summarizing the proof (Stage 3). Of course, for specific tasks like theorem proving, additional evaluation dimensions could be introduced. For example, in Stage 2, besides numerical computation, we could add assessments related to symbolic manipulation skills. In summary, the three stages we utilize in this paper are widely generalizable. Depending on the task at hand, we can also introduce more granular dimensions to conduct finer-grained evaluation. Based on your suggestion, we will include these discussions in the revised version.
$\bf{Q2}$: The relationships among the proposed dimensions could be discussed in more detail. The authors seem to consider them independent. Do they try to combine some of them in one query and test LLMs? $\bf{A2}$: Thanks for your valuable question. Indeed, in our paper, the design of the different dimensions is relatively independent. However, as mentioned in Section 3, only when an LLM passes all dimensions can we conclude that it has genuinely mastered the problem $P$. Therefore, these dimensions are collectively used to assess whether the model truly masters mathematical reasoning. We certainly appreciate your suggestion to combine different dimensions in a single query, which could provide us with valuable insights into the model's ability to handle multiple dimensions simultaneously. However, this approach may present challenges for assessment at the cognitive-stage level, and it could be difficult to trace which specific dimension the model is struggling with. For this reason, we have chosen to evaluate the dimensions independently in our primary experiments. However, we believe that your suggestion is very promising, and we are willing to explore this direction in future work. $\bf{Q3}$: Why do they choose the clause level, rather than directly disrupt the whole problem? $\bf{A3}$: Thanks for your thoughtful question. We agree that disrupting the whole problem is also a possible approach. In this paper, we chose to disrupt the problem at the clause level because it allows our inquiries to maintain a closer similarity to the original problem's structure while ensuring that they remain unsolvable. As a result, if the model does rely on semantics for reasoning, our approach makes it easier to observe such behaviors. Based on your concern, we will add further clarification about this in the revised version. $\bf{Q4}$: Typos, and does "MathExam" mean "MExam" in Figure 2? $\bf{A4}$: Thanks for your meticulous review and for pointing out these typos.
We will carefully correct them in the revised version.
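The all-dimensions mastery criterion discussed in A2 above (an LLM is judged to have genuinely mastered a problem only when it passes every dimension) can be sketched as follows. This is a hypothetical illustration; the function names and the aggregate rate are assumptions for exposition, not the paper's exact metric definitions.

```python
def has_mastered(dimension_results: dict) -> bool:
    """Mastery of a problem requires passing every evaluated dimension;
    a single failed dimension is enough to break mastery."""
    return all(dimension_results.values())

def mastery_rate(per_problem_results) -> float:
    """Fraction of problems on which the model passes all dimensions."""
    return sum(has_mastered(r) for r in per_problem_results) / len(per_problem_results)
```

This also makes the independence design choice concrete: because each dimension is scored separately before aggregation, a failure can be traced back to the specific dimension that caused it.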
Summary: The paper proposes CogMath, a novel evaluation framework for assessing the mathematical reasoning abilities of LLMs from a human cognitive perspective. Traditional benchmarks primarily focus on answer accuracy, often overestimating LLMs' true mathematical competence. Instead, CogMath structures evaluation into three cognitive stages—problem comprehension, problem solving, and solution summarization—with nine finer-grained evaluation dimensions. To conduct rigorous assessments, the framework employs an "Inquiry-Judge-Reference" multi-agent system, ensuring models demonstrate a deeper understanding beyond superficial pattern recognition. Applying CogMath to three mathematical benchmarks (GSM8K, MATH, and MExam) reveals that current LLMs' mathematical abilities are overestimated by 30%-40%, with deficiencies varying across the nine dimensions. Additionally, popular prompting techniques like Chain-of-Thought (CoT) and In-Context Learning (ICL) do not significantly enhance genuine reasoning ability. This highlights the need for more holistic evaluation of LLM capabilities. Claims And Evidence: 1.) The paper's main claim that, due to the single-dimensional nature of existing evaluations, the mathematical reasoning abilities of models are overestimated is well substantiated by the fact that these LLMs, when evaluated on a wider range of criteria, show significant drops in performance. 2.) In lines 177-178 (right column), the authors state that if humans have mastered a problem (conceptually), changing the numerical values should not affect their ability to solve it. However, intuitively it seems that the change in complexity of computations resulting from the change in numerical values may plausibly affect the ability to arrive at the correct final answer. This argument is even stronger in the case of LLMs, which have been shown to struggle with fundamental calculations.
It would be important to include evidence from previous literature, if any, to support the claim. The absence of it renders Dimension 6 potentially meaningless. Methods And Evaluation Criteria: Dimension 6 assumes that the problem-solving abilities of LLMs would not be affected by the change in the calculations involved in the problems. This is not true, as it has been shown that LLMs struggle at some of the most fundamental computations. Additionally, I am skeptical about the strategy used in Dimension 2. Depending on how extreme the random shuffling of the sequence of words in the original question is, the result may not assess memorization accurately (if that is the purpose of evaluation along this dimension). For example, if all the words of the original sequence are jumbled, the LLM may not be able to solve it correctly even if it has partly memorized the original problem. The evaluation criterion (Pass Rate) seems well thought out for the most part. However, if the purpose of both Dimensions 2 and 3 is not to solely evaluate memorization, the strategy of considering any answer except the original answer does not make sense. The LLM arriving at an incorrect but valid numerical answer can point to many other problems apart from memorization (such as hallucination) which also tie in to the mathematical capabilities of LLMs. Theoretical Claims: The authors make no theoretical claims. Experimental Designs Or Analyses: The authors present several experiments to demonstrate the usefulness of the framework, including stage-wise as well as dimension-wise breakdowns of the performance of different LLMs. The experiments on the effect of Chain-of-Thought and In-Context Learning on the performance of models at different difficulty levels provide interesting insights. An important detail, however, which seems to be missing from the text, is the length thresholds corresponding to Length 1 to Length 5.
Supplementary Material: I have gone through the appendix and have taken a very brief look at the supplementary material zip file attached with the submission. Relation To Broader Scientific Literature: This study falls within the broader literature on the evaluation of mathematical reasoning in LLMs by addressing the limitations of traditional accuracy-based benchmarks like GSM8K and MATH, instead proposing a cognitively inspired framework, CogMath. It aligns with cognitive science theories on problem-solving and AI evaluation methodologies. The work is also connected to recent research on AI self-verification ([4], [5]), highlighting LLMs' struggles with backward reasoning. While CogMath offers a novel multi-stage evaluation, it could benefit from integrating findings on structured reasoning methods like Tree-of-Thought or Graph-of-Thought, further bridging AI robustness and human-like problem-solving strategies. [4] Weng et al., 2023; Large Language Models are Better Reasoners with Self-Verification [5] Yu et al., 2023; Bootstrap Your Own Mathematical Questions for Large Language Models Essential References Not Discussed: Here is a list of references (although not all are "essential") that would be good to include: * Any literature related to the claim that human and LLM problem-solving abilities are not affected by changes in the computations involved (related to Dimension 6) * [1], [2] and [3] related to data contamination [1] Zhang et al., 2024; A Careful Examination of Large Language Model Performance on Grade School Arithmetic [2] Mirzadeh et al., 2024; GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models [3] Shah et al., 2024; AI-Assisted Generation of Difficult Math Questions Other Strengths And Weaknesses: I have discussed most of the strengths and weaknesses in other sections.
I discuss some additional ones below ### Strengths * Forming a more holistic evaluation of LLMs based on psychological performance analogs in humans is a novel and interesting idea. ### Weaknesses * Close to zero discussion of the new MExam benchmark introduced in the paper is present in the text. Including some minimal but critical information about the benchmark in the main paper would be important Other Comments Or Suggestions: The overall writing of the paper could be significantly improved. Below are some suggestions: * The captions of the figures could be more descriptive and self-contained * The paper contains several typos. Some typos which I noticed are: Line 49 right column: rom -> from, Line 395 (right): we focusing -> we focus * A brief description of the three stages (around lines 46-47 (right)) as described in the psychology literature would be helpful. * I would recommend explicitly stating the purpose and motivation of evaluation along each dimension; i.e., what aspect of mathematical reasoning is aimed to be targeted by each dimension, as well as its foundation in the psychology literature. Questions For Authors: * Are both Dimensions 2 and 3 specifically geared towards evaluating memorization of the training data only? If so, it seems a bit redundant (ignoring the concern that Dimension 2 may not be reliably assessing memorization, as explained in the "Methodology and Evaluation Criteria" section). Both dimensions could be leveraged to target other kinds of problems in the mathematical capabilities of LLMs (such as hallucination). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our framework's novelty, the validity of our conclusions, and the insights of our work. $\bf{Q1}$: Validity of Dimension 6. $\bf{A1}$: Thanks for your insightful comments. First, the strategy of changing numbers in problems has been widely adopted in assessing learners' abilities. For example, human educators commonly pose numerical variations in exam questions to assess students [1,2]. Similarly, it is widely employed in evaluating a model's mastery of math problems [3,4]. Thus, our Dimension 6 is meaningful for LLM evaluation. Second, we do not assume that "LLMs' problem-solving abilities would not be affected by the change in the calculations". Instead, if a model's performance greatly declines after numerical variations, our framework can identify such deficiencies in numerical processing capabilities, which indeed aligns with your observation that LLMs struggle with fundamental computations. Third, in Dimension 6, our Inquiry agent does not introduce extreme cases (e.g., a huge number) that might exceed current LLMs' processing limits (an example is shown in Table 6). Therefore, our dimension will not significantly increase the difficulty of the problem, so we do not expect any noticeable decline in model performance. [1] How to solve it: A new aspect of mathematical method. [2] Cognitive diagnostic assessment for education: Theory and applications. [3] GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers. [4] Functional benchmarks for robust evaluation of reasoning performance, and the reasoning gap. $\bf{Q2}$: If all words are jumbled, Dimension 2 may not assess memorization accurately? $\bf{A2}$: Thanks for your thoughtful question. Our goal in Dimension 2 is not to evaluate the memorization capacity of LLMs. Instead, we aim to assess whether the LLM is truly reasoning or merely relying on the semantic cues in the wording.
After randomly shuffling the sequence of words, humans can naturally recognize that the problem becomes unsolvable. Therefore, if an LLM truly masters reasoning, it should also recognize this. This is why, in Section 3.1 and in Table 6, we clarify that an LLM is considered to have "passed" Dimension 2 if it successfully recognizes the inquiry as "unsolvable", rather than whether its answer is correct (please also note that the shuffled problem does not have a well-defined correct answer!). $\bf{Q3}$: Dimensions 2 and 3 evaluate memorization only? Strategy of considering answers? $\bf{A3}$: Thanks for your insightful questions. As we clarified in $\bf{A2}$, Dimension 2 is not designed to evaluate memorization, and neither is Dimension 3. Our goal is to assess whether the LLM is engaging in genuine reasoning, regardless of whether it is responding with memorized information or other capabilities. In both dimensions, an LLM is considered to pass if it correctly identifies the given inquiry (i.e., $q_2$, $q_3$) as "unsolvable", rather than producing any specific answer. Therefore, our Pass Rate criterion refers to the proportion of cases where the model successfully recognizes such ill-posed inquiries. $\bf{Q4}$: Length thresholds. $\bf{A4}$: Indeed, we do not set fixed thresholds. As stated in Section 4.6, we divide all problems into five levels using an equal-frequency binning approach. This means that we sort all problems by length and then divide them into five equal-sized groups. This ensures that each group contains sufficient data and avoids introducing any potential biases, allowing for more reliable analysis. $\bf{Q5}$: Information about MExam. $\bf{A5}$: Thanks for your valuable question. In this work, our goal is not to create a new dataset, MExam, but rather to utilize it to validate that the overestimation of LLM capabilities is not merely due to data contamination (discussed in Section 4.2).
Thus, we explain in Appendix C how we collected MExam and the number of problems it contains. To address your concern, we conduct additional statistical analyses. Due to the space limit, please refer to our response $\bf{A4}$ to reviewer $\bf{Xd5A}$ for details. We will also make MExam publicly available if this paper is accepted. $\bf{Q6}$: Suggestions on writing. $\bf{A6}$: Thanks for your constructive feedback. We will incorporate the related studies and refine the writing in the revised version. $\bf{Q7}$: Ethics review. $\bf{A7}$: Thanks for your attention to ethics. In this work, we invited human annotators to evaluate the outputs of our Judge agents and Reference agents. As stated in Section 4.7, the evaluation protocol was approved by the Ethics Review Board, and all annotators were informed of data usage. Besides, our templates (Appendix B) do not collect private information. Thus, our study does not raise ethical concerns. We greatly appreciate your in-depth reviews and hope our explanations address your concerns. We will also include them in the revised version. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response to my review. I am convinced about the utility of Dimensions 2, 3 and 6. I am increasing my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your constructive comments and valuable feedback! We will incorporate all our discussions and your suggestions into the revised version. Thank you once again for your time and for increasing your score!
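The equal-frequency binning described in A4 above (sort problems by length, then split them into five equal-sized groups) can be sketched as follows. This is a minimal illustration; the function name and the tie-breaking of remainders are assumptions, not the authors' implementation.

```python
def equal_frequency_bins(problems, n_bins=5):
    """Sort problems by length, then split them into n_bins groups of
    (near-)equal size, so every length level has enough data."""
    ordered = sorted(problems, key=len)
    size, remainder = divmod(len(ordered), n_bins)
    bins, start = [], 0
    for i in range(n_bins):
        # Spread any remainder over the first few bins.
        end = start + size + (1 if i < remainder else 0)
        bins.append(ordered[start:end])
        start = end
    return bins
```

Because the split is by rank rather than by fixed thresholds, every bin carries the same number of problems regardless of how lengths are distributed, which is what avoids sparsely populated levels.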
Summary: This paper introduces CogMath to assess the authentic mathematical reasoning abilities of LLMs through the lens of human cognition. Specifically, the paper models human mathematical reasoning with three stages and nine dimensions, such as sentence paraphrasing, numerical transformation, and backward reasoning. A system with three agents, including Inquiry, Judge, and Reference, is used in each dimension to ensure accurate assessments. Through experiments with CogMath, this paper reveals that current LLMs' abilities are overestimated by 30%-40% and that prompting techniques do not fundamentally improve LLMs' mathematical reasoning ability. Claims And Evidence: Yes. The evidence provided includes: First, there are large-scale evaluations on the GSM8K, MATH, and MExam datasets, with 7 mainstream LLMs. Second, the authors provide fine-grained pass rate analysis at different reasoning stages and dimensions, offering clear and significant evaluation results. Third, they also conduct human validation of the multi-agent system to ensure assessment quality. Methods And Evaluation Criteria: The three-stage reasoning model aligns with established studies on human problem-solving, and the hypothesis that true problem mastery requires passing all nine dimensions is reasonable. Besides, this paper uses Pass Rate for evaluation across different dimensions, which considers both the reasoning scenario and the counterfactual scenario (e.g., Dimensions 2 and 3). Therefore, this evaluation criterion also makes sense. Theoretical Claims: The paper focuses on evaluating LLMs through the lens of human cognition and does not involve theoretical issues. Experimental Designs Or Analyses: I have checked the experimental designs in Sections 4.1, 4.5, and 4.6. They are reasonable, with evaluations across three diverse math datasets, seven different LLMs covering both open-source and closed-source models, and dimension-wise analysis to pinpoint LLMs' weaknesses in different cognitive processes.
Supplementary Material: I have reviewed the example queries for all dimensions, the prompt templates for the proposed agents, and the experiments on the dimension-level effects of LLM-enhancement methods. Overall, the supplementary material is thorough and well-organized. Relation To Broader Scientific Literature: This paper makes a contribution to the broader scientific literature on evaluating the reasoning abilities of LLMs, particularly in the domain of mathematical problem-solving. While previous benchmarks primarily rely on overall answer accuracy, CogMath provides a cognitively motivated framework that assesses LLMs across multiple dimensions of human-like reasoning. Additionally, it highlights critical weaknesses in existing LLMs, such as the "over-correct" behavior and their struggles with counterfactual reasoning. This study not only provides a more accurate assessment of LLM capabilities but also offers great insights for future model improvements. Essential References Not Discussed: No, the references are essential to understanding this paper. Other Strengths And Weaknesses: In this paper, the authors propose the CogMath framework to evaluate LLMs' abilities from a cognitive perspective and then conduct sufficient experiments with several LLMs and datasets. By structuring the evaluation around cognitive stages and dimensions, this paper offers a more fine-grained understanding of LLMs and reveals some interesting phenomena (e.g., the "over-correct" behavior). Second, the use of a multi-agent system allows for a more rigorous and systematic evaluation process, reducing the limitations of single-metric evaluations and providing deeper insights into reasoning failures. Moreover, this paper covers a diverse set of LLMs and mathematical benchmarks, making the findings broadly applicable.
My major concern about this paper is that although the three-stage reasoning process proposed in the paper is very general, some specialized types of mathematical problems may require additional stages or dimensions to capture the full reasoning process in those areas. Explaining how to expand the framework to address more problem types would further enhance this paper's applicability. Besides, several newly released advanced LLMs have emerged (e.g., DeepSeek-R1; the authors investigate DeepSeek-V2.5 in this paper). I recommend that the authors supplement more evaluations on these models. Additionally, I have a question regarding the multi-agent system. In Section 3, the authors state that they set a maximum number of iterations $\delta$ for the agent interaction. However, if after exceeding $\delta$ rounds, the Judge agent still considers the obtained inquiry to be of insufficient quality, how should the evaluation be conducted? In summary, I think this paper has good quality and good readability, presenting its ideas clearly and logically. Other Comments Or Suggestions: N/A Questions For Authors: 1. I recommend that the authors supplement more evaluations on newly released LLMs (e.g., DeepSeek-R1) to analyze their improvements over the previous versions. 2. If after exceeding $\delta$ rounds, the Judge agent still considers the obtained inquiry to be of insufficient quality, how should the evaluation be conducted? 3. How is the adaptability of the framework? I would like to hear more discussion about how the proposed three-stage reasoning process can be adapted to more mathematical problem types. 4. The current training of LLMs largely follows a "pretrain-SFT-RL" process. Given the various weaknesses identified in different LLMs in this paper, how should we optimize them (e.g., enhance an LLM's capability in the problem comprehension stage) during training? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our framework's soundness, evaluation significance, and great contributions. $\bf{Q1}$: Adaptability to more problem types. $\bf{A1}$: Thanks for your insightful question. Our CogMath is easily adaptable because: (1) it is based on the decomposition of human reasoning processes. The three stages, Problem Comprehension, Problem Solving, and Solution Summarization, reflect the general cognitive processes humans use to conduct reasoning. (2) our agents are highly flexible. As shown in Appendix B, their prompts do not impose any specific requirements on the problem types. Besides, their interactions ensure the quality of the generated inquiries and reference answers (verified in Section 4.7). (3) our evaluation metrics are reasonable and easy to use, considering both the general "answer correctly" criterion and the "unsolvable" criterion in counterfactual scenarios. $\bf{Q2}$: Evaluation on new LLMs (e.g., DeepSeek-R1). $\bf{A2}$: Thanks for your constructive suggestion. We supplement results for DeepSeek-R1 below. Due to API speed limitations, here we conduct evaluations on the widely-used public MATH and GSM8K datasets.

| DeepSeek-R1 | MATH (Avg) | Alg | Count | Geo | Itmd | Num | Pre-Alg | Pre-Cal | GSM8K |
|-|-|-|-|-|-|-|-|-|-|
| Vanilla | 0.982 | 0.992 | 0.994 | 0.956 | 0.979 | 0.980 | 0.985 | 0.972 | 0.967 |
| CogMath | 0.448 | 0.581 | 0.443 | 0.307 | 0.295 | 0.413 | 0.604 | 0.326 | 0.703 |
| $\Delta$ | -0.534 | -0.411 | -0.551 | -0.649 | -0.684 | -0.567 | -0.381 | -0.646 | -0.264 |
| Stage 1 | 0.863 | 0.942 | 0.831 | 0.737 | 0.837 | 0.881 | 0.875 | 0.837 | 0.897 |
| Stage 2 | 0.575 | 0.715 | 0.557 | 0.441 | 0.405 | 0.544 | 0.738 | 0.456 | 0.848 |
| Stage 3 | 0.753 | 0.808 | 0.773 | 0.637 | 0.694 | 0.743 | 0.815 | 0.725 | 0.856 |

First, DeepSeek-R1 achieves the best performance compared with the other LLMs in Table 1, both in the "Vanilla" setting and under the CogMath framework. This reflects its superior mathematical reasoning capabilities.
Second, the performance gap (marked as $\Delta$) suggests that DeepSeek-R1 still exhibits a certain degree of overestimation, highlighting the necessity of our proposed evaluation from the human cognitive perspective. Third, similar to other advanced LLMs, DeepSeek-R1 encounters the most challenges in Stage 2 (Problem Solving). Further investigation reveals that its primary weakness lies in Dimension 7 (Knowledge Redefinition), with a Relative Pass Rate (RPR) of 0.617. This supports the conclusion that current LLMs rely on fixed memorization rather than adapting knowledge flexibly. Lastly, DeepSeek-R1 improves significantly in Stage 3 (Solution Summarization) compared to DeepSeek-V2.5, which suggests a deeper understanding of the reasoning process. $\bf{Q3}$: How to evaluate after exceeding $\delta$ rounds? $\bf{A3}$: Thanks for your valuable question. As illustrated in Appendix C, if inquiry quality remains insufficient after $\delta$ rounds, we exclude the dimension, because it suggests that the problem may not be suitable for evaluation from that dimension. For example, the problem "If $a<b$, what is the value of $|a-b|+a+b$?" from the MATH dataset does not include any numerical values, so it is unnecessary to assess an LLM's ability in numerical calculation on it. Our multi-agent system would also fail to generate a valid transformation, so we would not evaluate Dimension 6, aligning with real-world intuition. $\bf{Q4}$: How to optimize LLMs (e.g., enhance capability in the Problem Comprehension stage) during "pretrain-SFT-RL" training? $\bf{A4}$: Thanks for your insightful question. Different phases of training correspond to enhancing different cognitive stages/dimensions of LLMs. For example, the pretraining phase focuses on developing the model's text comprehension abilities and the mastery of fundamental knowledge. The SFT phase is more about teaching the model to simulate a given reasoning strategy.
The RL stage allows the model to develop more complex abilities, such as backward reasoning, intermediate step explanations, and error identification. Therefore, in the Problem Comprehension stage, we suggest: 1) For Dimension 1 (Sentence Paraphrasing), increase the corpus used during the pretraining phase. 2) For Dimension 4 (Redundant Condition), train the model with question-answer pairs that include redundant information in the SFT phase. 3) For Dimensions 2 (Sentence Disruption) and 3 (Missing Condition), to cultivate critical thinking skills, we can allow the model to think more freely and learn to recognize such situations in the RL phase. Similarly, for the other cognitive stages, improvements can be made through different training processes. For example, knowledge acquisition in the Problem Solving stage can be reinforced during pretraining, while backward reasoning abilities in the Solution Summarization stage are often enhanced through the RL phase. Overall, our framework provides valuable insights into how to optimize LLMs at different stages. We greatly appreciate your thought-provoking suggestions and will include these experiments and discussions in the revised version.
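The overestimation gap $\Delta$ reported in the rebuttal table above is simply the difference between the CogMath and Vanilla pass rates. A minimal Python sketch (values copied from the table, not recomputed from model outputs):

```python
# Sketch: per-category overestimation gap (Delta = CogMath - Vanilla) for
# DeepSeek-R1, using the numbers reported in the rebuttal table above.
categories = ["Avg", "Alg", "Count", "Geo", "Itmd", "Num", "Pre-Alg", "Pre-Cal", "GSM8K"]
vanilla = [0.982, 0.992, 0.994, 0.956, 0.979, 0.980, 0.985, 0.972, 0.967]
cogmath = [0.448, 0.581, 0.443, 0.307, 0.295, 0.413, 0.604, 0.326, 0.703]

delta = {c: round(m - v, 3) for c, v, m in zip(categories, vanilla, cogmath)}
# The most negative gap marks the category where the benchmark score most
# overestimates the model's robust ability.
worst_category = min(delta, key=delta.get)
```

A more negative `delta` signals a larger drop once the multi-dimensional CogMath inquiries are applied.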
Summary: This paper aims to explore and evaluate the mathematical ability of LLMs. The authors propose a novel evaluation framework (CogMath) based on a human psychological design. The workflow examines the LLM's performance across three stages: problem comprehension, problem solving, and solution summarization. The experiments on real benchmarks reveal several constructive findings for different LLMs.

## update after rebuttal

The authors have adequately addressed my concerns; therefore, I am raising my score to Accept. Claims And Evidence: The main claim regarding the evaluation findings on LLMs' reasoning abilities is well supported by the extensive experiments on three benchmark datasets (GSM8K, MATH, and MExam). CogMath shows the consistent overestimation of LLMs' mathematical abilities. The in-depth analyses of 3 stages and 9 reasoning dimensions provide robust evidence for the claims about the strengths and limitations of current LLMs. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for understanding and studying the problem. CogMath introduces an "Inquiry-Judge-Reference" multi-agent system to generate multiple inquiries for assessing LLMs' math ability rather than just testing them with one question. The evaluation criteria on 9 ability dimensions with the proposed metric are appropriate for ensuring the credibility of the results. Theoretical Claims: The main claims of the paper are evaluated through extensive experiments and qualitative analysis on different datasets, rather than proofs. The proposed CogMath introduces a delicate evaluation workflow based on psychological design for the experimental analyses. Experimental Designs Or Analyses: The experimental designs and analyses appear sound and reasonable. The paper tests seven mainstream LLMs on three representative benchmark datasets, ensuring a broad evaluation of their mathematical abilities.
The in-depth analyses on nine ability dimensions with the pass rate metric are appropriate for this context. Moreover, the data and method settings are clearly provided in the paper. Supplementary Material: Yes, the paper supplies sufficient appendices, including A: test examples of the ability dimensions; B: the evaluation prompts used in the framework; C: experimental settings; D: a dimension-level study of the effects of LLM enhancement; and E: broader discussion. Relation To Broader Scientific Literature: The paper's key contributions are closely related to the broader scientific literature on LLMs and mathematical reasoning. It introduces a novel and comprehensive evaluation framework with a multi-agent system that mimics the human reasoning workflow of problem comprehension, problem solving, and solution summarization. The findings about the overestimation of LLMs' abilities and the limitations of mainstream prompting techniques add to the existing conclusions. They also have the potential to inspire further research on LLM reasoning ability exploration and strategy design. Essential References Not Discussed: The authors could cite and discuss the following works: "DeepSeekMath: Pushing the limits of mathematical reasoning in open language models," arXiv preprint arXiv:2402.03300; "DeepSeek-V3 technical report," arXiv preprint arXiv:2412.19437. Other Strengths And Weaknesses: Strengths: 1. The paper proposes a novel evaluation framework, CogMath, for testing LLMs' reasoning abilities on math problems. The core idea of the framework design aligns with psychological perspectives, constructing 3 cognitive stages and 9 ability dimensions. The key technical contribution is the multi-agent system that generates multiple inquiries for assessing LLMs' true ability, rather than just testing them with one question as many existing works do. The overall framework provides comprehensive methods for LLM evaluation.
2. The paper provides an in-depth analysis of the performance of seven mainstream LLMs across different problem types and formats. CogMath reveals consistent overestimation of LLMs' math abilities compared to only using traditional predefined benchmarks, which offers valuable insights into their capabilities and limitations. 3. The datasets released for the evaluation can contribute to further study, as they involve more diverse questions with ability labels. This has the potential to support further research on LLM reasoning ability exploration and strategy design. Weaknesses: 1. The paper could include and discuss the more recently released DeepSeek versions in the evaluation framework. 2. The scalability of the evaluation framework should be discussed. 3. The pass rate criteria used for the 9 dimensions seem to have different definitions. In Table 6, I notice the indicators of "Pass" refer to "answer correctly" and "unsolvable". I cannot find the explicit definitions in the paper; they should be provided. Other Comments Or Suggestions: 1. Add statistical analyses of the datasets used in the proposed evaluation. 2. Add references to more recent LLMs and provide discussions. Questions For Authors: 1. Can you include and discuss the more recently released DeepSeek versions in the in-depth analyses? 2. I am missing the definition of the "Pass" criteria in Table 6. How can we distinguish the different indicators across dimensions? 3. Can you discuss the scalability of the proposed framework? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your affirmation of our framework's novelty, experimental validity, and evaluation significance.

$\bf{Q1}$: Could include more DeepSeek versions.

$\bf{A1}$: Thanks for your valuable suggestion. We will incorporate more references to DeepSeek-related papers. Besides, we conduct experiments on DeepSeek-R1 as follows. Due to API speed limitations, here we conduct evaluations on the widely-used public MATH and GSM8K datasets.

| DeepSeek-R1 | MATH Avg | Alg | Count | Geo | Itmd | Num | Pre-Alg | Pre-Cal | GSM8K |
|-|-|-|-|-|-|-|-|-|-|
| Vanilla | 0.982 | 0.992 | 0.994 | 0.956 | 0.979 | 0.980 | 0.985 | 0.972 | 0.967 |
| CogMath | 0.448 | 0.581 | 0.443 | 0.307 | 0.295 | 0.413 | 0.604 | 0.326 | 0.703 |
| $\Delta$ | -0.534 | -0.411 | -0.551 | -0.649 | -0.684 | -0.567 | -0.381 | -0.646 | -0.264 |
| $Stage\ 1$ | 0.863 | 0.942 | 0.831 | 0.737 | 0.837 | 0.881 | 0.875 | 0.837 | 0.897 |
| $Stage\ 2$ | 0.575 | 0.715 | 0.557 | 0.441 | 0.405 | 0.544 | 0.738 | 0.456 | 0.848 |
| $Stage\ 3$ | 0.753 | 0.808 | 0.773 | 0.637 | 0.694 | 0.743 | 0.815 | 0.725 | 0.856 |

First, compared to Table 1, DeepSeek-R1 achieves the best performance among all evaluated LLMs, both in the "Vanilla" setting and under the CogMath framework. This shows its superior mathematical reasoning capabilities. Second, based on the performance gap (marked as $\Delta$), DeepSeek-R1 still exhibits a certain degree of overestimation, highlighting the necessity of our proposed evaluation from the human cognitive perspective. Third, similar to other advanced LLMs, DeepSeek-R1 encounters the most challenges in Stage 2 (i.e., Problem Solving). Further analysis reveals that its main weakness lies in Dimension 7 (Knowledge Redefinition), with a Relative Pass Rate (RPR) of 0.617. This supports the conclusion that current LLMs rely on fixed memorization rather than adapting knowledge flexibly. Fourth, compared to DeepSeek-V2.5, DeepSeek-R1 improves significantly in Stage 3 (Solution Summarization), suggesting a deeper understanding of the reasoning process.

$\bf{Q2}$: Scalability of CogMath.
$\bf{A2}$: Thanks for your constructive comments. CogMath is highly scalable because: (1) it is designed based on human reasoning processes, which makes the three stages independent of any specific problem types. (2) our multi-agent system is highly flexible. As shown in Appendix B, the agents do not depend on the dataset or task definition. Besides, their interactions ensure the quality of our inquiries and reference answers (verified in Section 4.7). (3) our evaluation metric is widely applicable, considering both the general "answer correctly" criterion and the "unsolvable" criterion in counterfactual situations.

$\bf{Q3}$: Definitions of Pass Rate.

$\bf{A3}$: Thanks for your valuable question. As explained in Section 4.1, the Pass Rate for Dimensions 1 and 4-9 refers to the accuracy of answering the inquiries correctly. Dimensions 2 and 3 are counterfactual evaluation dimensions. For example, Dimension 2 evaluates the LLM's response after the words of the original problem are randomly shuffled (an example is in Table 6). Ideally, this renders the problem meaningless, and the LLM should not provide the original answer. Thus, in this case, the Pass Rate refers to the proportion of cases where the LLM successfully identifies the inquiry as "unsolvable".

$\bf{Q4}$: Statistical analyses on datasets.

$\bf{A4}$: Thanks for your constructive suggestion. We present the number of problems (#P), the average problem length (Avg.P), and the average answer length (Avg.A) of the original dataset, along with the average inquiry length (Avg.$q_i$) and the average answer length (Avg.$a_i$) for each dimension $i$ in our framework. Since the reference answers $a_i$ for Dimensions 1 to 4 are the same as the original answers and the inquiry $q_2$ for Dimension 2 is simply a shuffled version of the original problem, no additional statistics are required for these cases.
| | MATH | GSM8K | MExam |
|-|-|-|-|
| #P | 5000 | 1319 | 6353 |
| Avg.P | 29.51 | 46.91 | 133.81 |
| Avg.A | 81.70 | 49.13 | 115.62 |
| Avg.$q_1$ | 33.25 | 46.30 | 132.07 |
| Avg.$q_3$ | 24.18 | 38.20 | 128.51 |
| Avg.$q_4$ | 47.28 | 63.52 | 235.71 |
| Avg.$q_5$ | 28.20 | 47.38 | 140.25 |
| Avg.$a_5$ | 133.20 | 61.67 | 200.64 |
| Avg.$q_6$ | 30.49 | 46.82 | 148.70 |
| Avg.$a_6$ | 199.99 | 106.24 | 180.59 |
| Avg.$q_7$ | 51.76 | 70.69 | 178.20 |
| Avg.$a_7$ | 266.01 | 201.88 | 580.72 |
| Avg.$q_8$ | 15.82 | 12.80 | 40.60 |
| Avg.$a_8$ | 215.46 | 111.23 | 402.66 |
| Avg.$q_9$ | 47.56 | 66.47 | 113.74 |
| Avg.$a_9$ | 1.12 | 1.01 | 1.15 |

We observe that the inquiries in Dimensions 4, 7, and 8 exhibit the most significant length differences compared to the original problems. This is expected, as they introduce additional conditions, redefine knowledge concepts, or only ask about one specific intermediate step. Furthermore, in most cases, the reference answers are longer than the original answers. Upon further inspection, we find that this is not due to an increase in problem difficulty, but rather stems from our Reference agent providing a more detailed solution, whereas the original dataset answers are more concise. We sincerely appreciate your thoughtful comments and will incorporate these experiments and discussions in the revised version.
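Length statistics like those in the table above reduce to a token-count pass over the dataset. A hedged sketch (the record fields `"problem"` and `"inquiries"` are hypothetical, not the authors' actual data schema):

```python
# Sketch: computing average-length statistics like Avg.P and Avg.q_i via a
# whitespace token count. Field names are hypothetical, not the authors'
# actual data schema.
def length_stats(records):
    """Return (avg problem length, {dimension: avg inquiry length})."""
    n = len(records)
    avg_p = sum(len(r["problem"].split()) for r in records) / n
    dims = records[0]["inquiries"].keys()
    avg_q = {d: sum(len(r["inquiries"][d].split()) for r in records) / n
             for d in dims}
    return avg_p, avg_q

# Tiny demo: Dimension-4 inquiries add a redundant condition, so they come
# out longer than the original problems, matching the trend in the table.
demo = [
    {"problem": "Compute 2 + 3 .",
     "inquiries": {4: "Compute 2 + 3 given that x = 5 ."}},
    {"problem": "Solve x + 1 = 4 .",
     "inquiries": {4: "Solve x + 1 = 4 where y is even ."}},
]
avg_p, avg_q = length_stats(demo)
```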
An Online Learning Approach to Prompt-based Selection of Generative Models and LLMs
Accept (poster)
Summary:
- This paper frames the task of optimally routing a prompt to a data generation model as a contextual bandit problem
- In doing so, the authors design a contextual bandit algorithm called PAK-UCB and prove upper bounds on its expected regret
- To overcome the computational overhead of PAK-UCB, the authors propose a variant, called RFF-UCB, based on the random Fourier features framework that approximates PAK-UCB. They prove that RFF-UCB is efficient and obtains expected regret that is not too much larger than that of PAK-UCB.
- Finally, the authors present experimental results showcasing the performance of PAK-UCB and RFF-UCB for text-to-image and image-captioning tasks.

## Update after rebuttal

I thank the authors for their response. As they have satisfactorily addressed my questions and concerns, I will maintain my positive score for this paper. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria make sense for the problem at hand. Theoretical Claims: As no proofs were provided in the main text, I did not verify the correctness of the theoretical claims. Experimental Designs Or Analyses: Yes, I checked the experimental results in the main text and the Appendix. Supplementary Material: I reviewed the experimental results section in the Appendix. Relation To Broader Scientific Literature: This paper empirically finds that different models perform better on different prompts. This motivates the design of routing mechanisms which can sequentially learn to route prompts to models so that prompts get answered by their "optimal" model. The authors frame this problem as a contextual bandit problem, where now the context is the prompt, and the arms are the different models that the prompt can be routed to.
Unlike the standard contextual bandit setting, where a different context vector is observed for each arm but a single weight vector is applied to all arms, in the authors' setting the reverse is true: there is a single context that is shared across all arms, but each arm has its own fixed weight vector. As far as I can tell (I am not an expert in this area), both the framing of prompt-based selection of generative models as a variant of the CB problem and the particular variant of the CB problem are novel, and worth further investigation. As such, a key contribution of this paper is the introduction of a new variant of the CB problem and its application to modern-day generative machine learning. Essential References Not Discussed: From what I can tell, the authors adequately cite existing work and make sure to distinguish their settings from existing work. Other Strengths And Weaknesses:

**Strengths**:
- This paper is well written and easy to follow
- The problem studied is well-motivated and the novel variant of the CB problem is interesting to me
- The experimental results for PAK-UCB seem fairly strong to me

**Weaknesses**:
- Clarity. The regret guarantees of PAK-UCB are with respect to a variant of the PAK-UCB algorithm presented by the authors in the main text, with no discussion of what this variant is and why it is needed. Moreover, it now becomes unclear whether the experimental results for PAK-UCB in Section 6 are with respect to Algorithm 2 or its variant in Appendix A.1. The same can be said about RFF-UCB. In fact, it is unclear whether Lemma 2 still holds for the variant of RFF-UCB that satisfies Theorem 2.
- Mismatch between theory and practice. According to Theorem 1 and Theorem 2, the reader gets the sense that PAK-UCB and RFF-UCB have the same regret bound and thus should perform empirically similarly.
In fact, the authors state "It can be shown that the implementation of PAK-UCB with RFF attains the exact same regret guarantees for adaptively selected feature sizes." However, the experiments tell a different story, with PAK-UCB consistently and significantly outperforming RFF-UCB across all experimental setups. Moreover, in many of these experiments, RFF-UCB doesn't seem to significantly outperform the baselines. It would be nice if the authors could provide some reasonable justification for why RFF-UCB doesn't do as well as PAK-UCB despite having similar regret guarantees. Other Comments Or Suggestions:
- I think the authors should move Remark 1 into the Contextual bandits section of the Related works to further drive home the differences between their setup and the standard CB setup
- In the equation in Lines 212-213, $\tilde{\Phi}$ and $\tilde{\Phi}^*$ are not defined anywhere.
- The authors provide theoretical guarantees on regret, but the experiments are evaluated with respect to O2B and OPR. I would be interested in seeing plots on regret as well.

Questions For Authors: (1) What are lower bounds on expected regret in your variant of the CB problem? Are your algorithms optimal with respect to the relevant problem parameters? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer 5QGu for the thoughtful feedback on our work. Below is our answer to the reviewer's comments and questions.

**1. Regret and complexity of PAK-UCB and RFF-UCB**

First, we would like to clarify that the numerical results in Section 6 are reported for PAK-UCB (Alg.2) and RFF-UCB (Alg.3) in the main text. Also, as we stated in Theorem 1, the regret bound is shown for the variant of PAK-UCB in the Appendix, which we titled Sup-PAK-UCB (Alg.4). We note that our analysis of PAK-UCB and Sup-PAK-UCB parallels the analysis in the LinUCB [1] and KernelUCB [2] references. Specifically, [1] introduces two versions of the LinUCB algorithm: "LinUCB" (Alg.1 on p. 3 in [1]) is recommended for usage in practice, whereas [1]'s regret analysis is performed for "SupLinUCB" (Alg.3 on p. 4). Similarly, reference [2] also introduces two versions of KernelUCB, "KernelUCB" (Alg.1 on p. 5) and "SupKernelUCB" (Alg.2 on p. 5), where the numerical application and the theoretical analysis target the respective algorithms. Our analysis follows a similar approach. We will include this clarification in the revised paper.

**2. Regarding Lemma 2**

We would like to clarify that Lemma 2 still holds for the variant of RFF-UCB analyzed in Theorem 2 (described in Appendix B.2). Note that Sup-PAK-UCB (Alg.4) computes the UCB values on the mutually exclusive subsets $\\{\Psi\_g^m\\}\_{m \in [M]}$, which satisfy $\sum_{m,g}|\Psi_g^m| \le t$ at the $(t+1)$-th iteration. Therefore, Sup-PAK-UCB using Compute_UCB_RFF (Alg.3) requires time at most $\Theta(\sum_{m,g}|\Psi_g^m| s^2) = O(ts^2)$ and space $\Theta(\sum_{m,g} |\Psi_g^m| s) = O(ts)$. We will add this clarification to the revised paper.

**3. Performance of RFF-UCB**

As pointed out by the reviewer, there could be a performance gap between PAK-UCB-poly3 and RFF-UCB (e.g. in Figure 2).
Note that this is not in contradiction with the regret bound of Theorem 2, because PAK-UCB-poly3 and RFF-UCB use *different kernel functions* to predict the scores: RFF-UCB uses the Gaussian kernel, whereas PAK-UCB-poly3 uses a polynomial kernel with degree 3. Therefore, RFF-UCB is *not* supposed to match the result of PAK-UCB-poly3.

**4. Discussing Remark 1 in the Related work**

Thank you for the suggestion. We will update the related work with a summary of Remark 1.

**5. Typo in lines 212-213**

We thank the reviewer for pointing this out. We would like to clarify that the tilde in the notation $\widetilde{\Phi}$ is a typo, which we will correct in the revision. In the equation, $\widetilde{\Phi}$ should be replaced with $\Phi$.

**6. Results on regret**

We appreciate the reviewer's question about the numerical performance in terms of regret values. We note that the evaluations based on O2B and the (average) regret are equivalent. This is because the *average regret* is computed as $\text{Avg.Regret}(T) := \frac{1}{T} \sum_{t=1}^T (s_\star (y_t) - s_{g_t}(y_t))$, while *O2B* is computed as $$\text{O2B}(T) := \frac{1}{T}\sum_{t=1}^T (s_{g_t}(y_t) - s_{g^\star}(y_t)) = -\text{Avg.Regret}(T) + C,$$ where $g^\star$ is the best single model with the highest expected score. Therefore, the O2B score of every policy equals the negative of its average regret shifted by the same constant $C := \frac{1}{T}\sum_{t=1}^T (s_\star (y_t) - s_{g^\star}(y_t))$. As a result, the O2B rankings of the policies in the plots are *identical* to the regret-based rankings.

**7. Regret lower bound**

First, we note that the regret (upper) bound derived in Theorem 1 matches the regret of the LinUCB [1] and KernelUCB [2] algorithms up to a factor of $\sqrt{G}$, where $G$ is the number of models.
On the other hand, we anticipate that the regret lower bound could scale with $\Omega(\sqrt{dGT})$ for a kernel function with a finite dimension $d$ (e.g., by slight modification of [Theorem 2, 1] for linear bandits without arm-specification). Formally proving a regret lower bound for the arm-specific setting is an interesting future direction for our work. We will discuss this in the revised conclusion. [1] Chu, et al. "Contextual bandits with linear payoff functions." AISTATS 2011. [2] Valko, et al. "Finite-time analysis of kernelised contextual bandits", UAI 2013. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and for addressing my questions and concerns. I will maintain my positive score. --- Reply to Comment 1.1.1: Comment: We would like to thank Reviewer 5QGu for the constructive review and feedback on our response. We are pleased that our response helped address the reviewer’s concerns. As noted, we will revise the paper accordingly to incorporate the discussed improvements.
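The O2B/average-regret identity discussed in the rebuttal above is easy to check numerically. A small self-contained sketch with synthetic scores (none of the paper's actual CLIPScore data):

```python
import random

# Sketch: numerically verify that O2B and average regret differ only by a
# policy-independent constant C, so they induce identical policy rankings.
# Scores are synthetic, not the paper's CLIPScore data.
random.seed(0)
G, T = 4, 500
s = [[random.random() for _ in range(T)] for _ in range(G)]   # s[g][t]
s_star = [max(s[g][t] for g in range(G)) for t in range(T)]   # per-prompt oracle
g_best = max(range(G), key=lambda g: sum(s[g]))               # best single model

def avg_regret(picks):
    return sum(s_star[t] - s[picks[t]][t] for t in range(T)) / T

def o2b(picks):
    return sum(s[picks[t]][t] - s[g_best][t] for t in range(T)) / T

# Policy-independent shift: C = (1/T) * sum_t (s_star(t) - s[g_best][t]).
C = sum(s_star[t] - s[g_best][t] for t in range(T)) / T
pol_fixed = [0] * T                  # always pick model 0
pol_rr = [t % G for t in range(T)]   # round-robin over models
```

For any policy, `o2b(p) == -avg_regret(p) + C` up to floating-point error, so a higher O2B always corresponds to a lower average regret.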
Summary: Generative models are increasingly being used in numerous applications. Evaluation scores are typically used when selecting a sample generation from multiple models. The drawback of evaluation scores is that different models perform better under different text prompts. The paper proposes a method to address this issue by learning the ranking of generative models for a given prompt. The proposed method goes beyond standard LinUCB and KernelUCB. Specifically, it introduces PAK-UCB, which learns an arm-specific function to predict the score of each model. Furthermore, the paper seeks to reduce expensive computation and memory overhead by incorporating Random Fourier Features into PAK-UCB. The proposed algorithm is evaluated against several baselines for prompt-based selection of text-to-image models and was shown to outperform all of them. Claims And Evidence: The paper claims that different models perform differently depending on the text prompts provided, and evaluation scores do not capture this inherent limitation. The authors present evidence of this through their experiments, which is well-known in the literature. They propose to address this issue by converting the problem of finding which model responds best to which prompt—maximizing evaluation scores—into a contextual bandit problem. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation make sense for the problem and application. Theoretical Claims: The paper claims that the proposed algorithm achieves an $\mathcal{O}(\sqrt{GT})$ regret bound. I briefly reviewed the proof for correctness. Experimental Designs Or Analyses: The paper conducts experiments using state-of-the-art models: UniDiffuser, Stable Diffusion, PixArt, and DeepFloyd. Specifically, it evaluates the performance of these models using two metrics: (i) outscore-the-best (O2B) and (ii) optimal-pick-ratio (OPR). The main results of the paper are sound, and the experimental setup is well-defined.
However, one issue is that while the models used were provided, the paper does not mention any of the datasets that were used in the experiments. In addition to the main results, the paper includes two additional ablation studies—one for adaptation to new prompts and the other for synthetic experiments. Similar to the main results, the authors do not clarify how the new prompts were selected or what the original datasets were that the models were trained on. Furthermore, the paper does not explain the overlap between the new prompts and the training prompts. The synthetic experiments also lack detail, making it unclear how these experiments were conducted. Supplementary Material: No Relation To Broader Scientific Literature: The authors are attempting to address an ongoing and challenging problem: models are sensitive to prompts, and small changes in the prompts can have significant effects on the model's decision-making. In particular, the authors are trying to improve the selection criteria for determining which model should be used for a given prompt in order to maximize performance. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: 1) What dataset did you perform your experiments on? 2) What evaluation score (e.g., ClipScore) did you use in your main experiments? 3) What features are you using for your kernel methods? 4) How does randomly selecting a model per prompt perform? 5) Are non-polynomial degree 3 algorithms suffering from the model not being expressive enough? 6) Could you elaborate on the difference between PAK-UCB and KernelUCB? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer 3Dpj for the thoughtful feedback on our work. Below is our answer to the reviewer's comments and questions.

**1. Details of the datasets**

We note that the details of the experiment settings are discussed in Appendix D. The following is a summary of the details requested by Reviewer 3Dpj, which we will include in the revised main text to improve clarity:

* **Setup 1:** Prompts are uniformly randomly selected from the MS-COCO dataset under two categories: 'dog'/'car' (Fig.2), 'train'/'baseball-bat' (Fig.3a), 'elephant'/'fire-hydrant' (Fig.3b), and 'carrot'/'bowl' (Fig.3c).
* **Adaptation to new models (Fig.4a):** Prompts are uniformly randomly selected from the MS-COCO dataset under categories 'train' and 'baseball-bat'.
* **Adaptation to new prompt types (Fig.4b):** In the first 1k iterations, the prompts are uniformly randomly selected from a pool that initially includes categories 'person' and 'bicycle' in the MS-COCO dataset. Then, categories 'airplane', 'bus', 'train', and 'truck' are added to the pool after each 1k iterations.
* **Synthetic T2I and image-captioning experiments (Fig.19 and Fig.21):** The prompts are uniformly randomly selected from the MS-COCO dataset under categories 'dog', 'car', 'carrot', 'cake', and 'bowl'.
* **Synthetic T2V task (Fig.22):** The captions are uniformly randomly selected from the MSR-VTT dataset under categories 'sports/action', 'movie/comedy', 'vehicles/autos', 'music', and 'food/drink'.

**2. Evaluation scores**

In the numerical experiments of this paper, we primarily focus on text-to-image generation tasks and use CLIPScore as the evaluation score. We note that the online selection framework can be applied to other prompt-guided generation tasks as long as we know the score values assigned to the generated samples.

**3.
Features for kernel methods**

The input to the PAK-UCB method and the other baselines is the embedded prompt output by the pretrained CLIP-ViT-B-32-laion2B-e16 model from the open_clip repository (https://github.com/mlfoundations/open_clip/tree/main). Only for the LinUCB and KernelUCB baselines, we also concatenate the one-hot encoded vector of the model index to the CLIP-embedded prompt.

**4. Performance of the random selection strategy**

We thank the reviewer for the suggestion of including the random selection strategy as a baseline. The random selection strategy would be expected to underperform for prompt-based model selection. Below, we provide the results in Setup 1 (Figure 3b), where we use PAK-UCB-poly3 as the competing strategy.

| Metric (After 5k iterations) | Random | PAK-UCB-poly3 |
|-|-|-|
| **O2B** | -0.13 | **0.86** |
| **OPR** | 0.50 | **0.77** |

**5. Expressivity of score estimation functions**

The use of a polynomial kernel with degree 3 is inspired by the kernel Inception distance (KID) in the literature on generative models [1, 2]. Please note that a higher degree can lead to higher expressivity while increasing the risk of overfitting the data. Below, we conduct an ablation study to test the effect of the degree on the performance of the PAK-UCB algorithm using a polynomial kernel. We observe that a degree of 3 achieves a better tradeoff between expressivity and generalization.

| Metric (After 5k iterations) | poly1 | poly2 | poly3 | poly4 |
|-|-|-|-|-|
| **O2B** | 0.13 | 0.30 | **0.70** | 0.39 |
| **OPR** | 0.54 | 0.58 | **0.71** | 0.61 |

[1] Stein et al. "Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models." NeurIPS 2023.

[2] Bińkowski et al. "Demystifying MMD GANs." ICLR 2018.

**6.
Comparison to KernelUCB**

In the following, we compare the processes in PAK-UCB and KernelUCB side-by-side to highlight their differences:

- **Problem Setting of PAK-UCB (ours):** We have $G$ arms where each arm represents a *fixed generative model* that remains unchanged across rounds: for example, Arm 1 represents the Stable Diffusion model in all rounds. At each round, the arms observe *one shared* context variable (i.e., the text prompt). We learn $G$ *separate kernel-based models* with *different weights* to predict the CLIPScore of a shared incoming prompt (context) for the $G$ fixed generative models.
- **Problem Setting of KernelUCB [1]:** At every round, we have $N$ arms where the expected reward of each arm is fully characterized by its context variable. The arms have *different* context variables at each round. We learn *one shared* set of weights to predict the expected reward for the $N$ observed contexts (i.e., arms) in the next round.

As explained above, in the KernelUCB [1] setting, there is no fixed model corresponding to one arm across iterations, and the arms perform independently across iterations depending on their context. However, in the setting of PAK-UCB (our method), each arm represents one fixed generative model in all the learning rounds.

[1] Valko, et al. "Finite-time analysis of kernelised contextual bandits", UAI 2013.
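The arm-specific setup contrasted above can be illustrated with one independent kernel ridge regressor per arm over a shared context. This is a simplified sketch of the idea on synthetic data, with the UCB exploration bonus omitted, not the authors' implementation:

```python
import numpy as np

# Sketch: one kernel-based score predictor per arm (generative model), all
# fed the same context (prompt embedding). Illustrates the arm-specific
# setup of PAK-UCB in simplified form; the UCB bonus is omitted and the
# data are synthetic.
def poly3_kernel(X, Y):
    """Degree-3 polynomial kernel, as used by PAK-UCB-poly3."""
    return (X @ Y.T + 1.0) ** 3

class ArmModel:
    """Kernel ridge score predictor for a single arm, updated online."""
    def __init__(self, reg=1.0):
        self.X, self.y, self.reg = [], [], reg

    def update(self, x, score):
        self.X.append(x)
        self.y.append(score)

    def predict(self, x):
        if not self.X:
            return float("inf")  # optimism: try an unseen arm first
        X = np.asarray(self.X)
        K = poly3_kernel(X, X) + self.reg * np.eye(len(self.X))
        alpha = np.linalg.solve(K, np.asarray(self.y))
        return float((poly3_kernel(x[None, :], X) @ alpha)[0])

rng = np.random.default_rng(0)
arms = [ArmModel(), ArmModel()]
# Synthetic scores: arm 0 is good on contexts with a positive first
# coordinate, arm 1 on the rest -- mimicking category-dependent quality.
for _ in range(200):
    x = rng.normal(size=3)
    arms[0].update(x, 1.0 if x[0] > 0 else 0.0)
    arms[1].update(x, 1.0 if x[0] <= 0 else 0.0)

query = np.array([2.0, 0.0, 0.0])
pick = max(range(2), key=lambda g: arms[g].predict(query))
```

Because each arm keeps its own training history over the shared context, a query that clearly belongs to one arm's strong region is routed to that arm.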
Summary: This study focuses on the task of selecting the generative model that achieves the highest reward for a given input prompt. The authors formulate this task as a contextual bandit (CB) problem, treating it as an online learning problem where past records are used to update the predictive model dynamically. They explore a UCB-based approach to solve the CB problem, specifically introducing PAK-UCB, which employs a kernel-based prediction function for each model (arm). Additionally, they present RFF-UCB, which reduces computational burden at the cost of some performance loss. The proposed methods were validated on the text-to-image generation task using models such as Stable Diffusion v1.5 and PixArt-alpha, demonstrating superior performance compared to other UCB-based approaches. Claims And Evidence: This study argues that the performance of generative models can vary significantly across different prompt categories. This claim is supported by the analysis in Figure 1 and further reinforced by the main experimental results, which show that approaches selecting the optimal model for each prompt outperform the strategy of consistently choosing a generally high-performing model (One-arm Oracle). Methods And Evaluation Criteria: In this study, the authors formulate the given task as a contextual bandit (CB) problem, which I believe is a valid problem formulation from the perspective of online model selection. Additionally, their proposed method (PAK-UCB), which defines a different kernel function for each arm, is also a reasonable approach within the CB framework. However, regarding this methodology, the authors emphasize the use of an "arm-specific" prediction function as a key distinguishing feature compared to other UCB-based approaches. However, based on my understanding, using an arm-specific prediction function is a commonly adopted approach in previous CB-related studies [1], making this claim somewhat overemphasized. 
Therefore, it would be beneficial to compare their method with other UCB approaches that also utilize arm-specific functions and discuss the differences between them. [1] A Contextual-Bandit Approach to Personalized News Article Recommendation Theoretical Claims: In this study, the authors theoretically derive the regret bound for PAK-UCB. Since they appropriately extend the proofs used in UCB-based approaches, I find this approach valid and well-founded. Experimental Designs Or Analyses: I believe that the overall evaluation setting, including the metrics and baselines, is well-designed. However, it is unclear which prompt set was used for the main experiments presented in the paper. Based on inferences from Appendix Section 7, Figure 10, and Figure 12, it seems that the experiments were conducted using only two categories from the MS-COCO dataset. If that is the case, it would be beneficial to validate the approach not only on such a clearly defined and limited category set but also on more general prompt sets, such as ImageRewardDB[1], for a more comprehensive evaluation. [1] ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation Supplementary Material: I have checked the supplementary material for the following: (1) an ablation study on hyper-parameters, (2) additional experimental results on text-to-video and image captioning, and (3) experimental results investigating adaptation to new prompts and models. Relation To Broader Scientific Literature: The "generative model selection based on prompts" emphasized in this study is a valuable research topic that could be further explored in future work. Additionally, this study has identified a meaningful domain where UCB literature can be effectively applied. Essential References Not Discussed: This study explores UCB-based approaches for the task of model selection. However, there may be alternative methodologies for this task beyond UCB. 
Notably, as presented in [1], an Agent AI approach is also a viable option. It would be beneficial to include a discussion on alternative methodologies beyond UCB as well. [1] DiffusionGPT: LLM-Driven Text-to-Image Generation System Other Strengths And Weaknesses: Strengths: Identifying the importance of generative model selection based on prompts and formulating it as a contextual bandit problem is novel and interesting. Additionally, the presentation is clear and well-structured. Weaknesses: Achieving meaningful performance requires multiple iterations, and even after several iterations, the model seems to work effectively only when the prompts fall within previously seen categories. While the appendix (Fig 12) demonstrates adaptation ability through additional training when a new category is introduced, handling unseen categories still necessitates substantial additional learning, which may limit the practical usability of the proposed approach. Other Comments Or Suggestions: While this study focuses on selecting a model based on the input prompt, another practical approach involves selecting an appropriate fine-tuned adapter for each prompt, as proposed in [1]. I am curious whether the proposed PAK-UCB method could also be applied to such scenarios, where adapters are chosen based on the prompt. Unlike models, adapters are generally more abundant (typically at least 10 or more), so demonstrating the effectiveness of the proposed method in cases where the contextual bandit (CB) has more than three arms could further highlight its generalizability. Additionally, if the concerns mentioned above are addressed, I would be willing to increase the rating. [1] Stylus: Automatic Adapter Selection for Diffusion Models Questions For Authors: As mentioned in the "Experimental Designs or Analyses" section, it would be helpful to specify the exact prompt set used in the experiments presented in the main paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer MgCP for the thoughtful feedback on our work. Below is our answer to the reviewer's comments and questions. **1. Arm-specific reward model** We thank the reviewer for pointing out the bandit algorithm in [1] that utilizes arm-specific prediction functions. To the best of our knowledge, the arm-specific bandit algorithms in the literature (References [1-3]) consider a linear pay-off. However, in our experiments, we observed that the PAK-UCB algorithm performs better when non-linear kernel functions (e.g., polynomial kernel and RBF kernel) are applied. We think this phenomenon is due to the normalized output of the CLIP embedding that is on the surface of the unit sphere, where a linear classification could be sub-optimal compared to a non-linear rule provided by the kernel-based approach. In the revision, we will discuss the existing arm-specific implementations of LinUCB and the necessity of including non-linear kernel functions to obtain better results in the case of text-to-image model selection. [1] Li et al. "A contextual-bandit approach to personalized news article recommendation." WWW 2010. [2] Fang et al. "Networked bandits with disjoint linear payoffs." KDD 2014. [3] Xu et al. "Contextual-bandit based personalized recommendation with time-varying user interests." AAAI 2020. **2. Performance on ImageReward DB dataset** Thank you for the suggestion. Due to the limited time, we report preliminary results on the ImageRewardDB ReFL dataset [1]. In the experiment, the algorithm selects among three T2I models considered in our paper: StableDiffusion v1.5, UniDiffuser, and PixArt-$\alpha$, which attain a CLIPScore of 35.54, 34.40, and 37.20 on (a subset of) the dataset, respectively. After 5k iterations, our proposed PAK-UCB-poly3 and RFF-UCB algorithms attain an OPR (ratio of picking the best model PixArt-$\alpha$) of 63.76% and 42.94%. On the other hand, the KernelUCB-poly3 baseline achieves only 34.18%. 
We will cite and discuss the dataset in the revised paper. [1] Xu et al. "ImageReward: Learning and evaluating human preferences for text-to-image generation." NeurIPS 2023. **3. Comparison with the DiffusionGPT framework [1]** We thank the reviewer for pointing out the alternative DiffusionGPT method in [1], which leverages an LLM agent for model selection. Our proposed PAK-UCB approach, which employs the UCB bandit approach to address this task, can be viewed as complementary to the DiffusionGPT method. We will cite and discuss the potential combination of the PAK-UCB and DiffusionGPT approaches as a future direction in the revised text. [1] Qin et al. "DiffusionGPT: LLM-driven text-to-image generation system." **4. Handling unseen categories** We agree with the reviewer that adaptation to unseen prompt categories would be challenging if the new prompts were *fully orthogonal* to previous prompts. On the other hand, in practice, it is often the case that the incoming prompt has *some correlation* with previous prompts. For example, a user who has generated images of cats and dogs is likely to generate images of other pets or animals in the future. Assuming that the optimal model choice changes continuously with input prompts, the PAK-UCB online learning approach would be capable of predicting the optimal arm after observing previously correlated prompts. To test this hypothesis, we conducted a numerical experiment to predict the CLIPScore of StableDiffusion v1.5. We train a poly3-based prediction model on $n=1,2,3$ categories and compute the prediction MSE on a new category after observing 10 samples in this new category. The results show that the prediction function can generalize effectively to unseen but correlated prompt categories.

|Trained categories|bird|bird, horse|bird, horse, sheep|
|-|-|-|-|
|**New category**|horse|sheep|cow|
|**Prediction MSE**|$0.22$|$0.19$|$0.08$|

**5.
Performance on a large number of arms** We appreciate the reviewer's comment on setups with a large number of arms. Please note that the primary goal of the online selection task is to minimize the regret. Our regret bounds in Theorems 1 and 2 are on the order of $\widetilde{O}(\sqrt{GT})$ for $G$ arms and time horizon $T$. In practice, the effective number of arms will be lower if the performance scores change more smoothly across arms. To numerically test the effect of a larger number of arms, we conducted an experiment to evaluate PAK-UCB-poly3 and RFF-UCB in a setting with ten arms: we added five more arms to the synthetic T2I task (Setup 3), which generate a clean image less frequently (only $10\%$ of the time). The results show that PAK-UCB-poly3 and RFF-UCB can outperform the best single arm and the KernelUCB baseline.

|Metric (After 2k iterations)|PAK-UCB-poly3|RFF-UCB|KernelUCB-poly3|
|-|-|-|-|
|**O2B**|0.84|2.05|0.22|
|**OPR**|0.13|0.16|0.11|

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. It addressed most of my concerns, so I have increased the score. It would be great if the authors could include the experimental results on unseen prompts and various prompt sets mentioned in the rebuttal.

---

Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer MgCP for the constructive suggestions and the feedback on our response. We are glad to hear that our responses could address the concerns. We will include the discussed numerical results in the revised paper.
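For readers unfamiliar with the OPR metric used in the rebuttal above (the ratio of rounds in which the best arm is picked), here is a toy illustration, assuming simple Bernoulli rewards and plain UCB1 rather than the paper's PAK-UCB:

```python
import numpy as np

def ucb1_opr(means, T, seed=0):
    """Run UCB1 on Bernoulli arms; return the Optimal Pick Ratio (OPR),
    i.e., the fraction of rounds in which the best arm was selected."""
    rng = np.random.default_rng(seed)
    G = len(means)
    counts = np.zeros(G)
    sums = np.zeros(G)
    best = int(np.argmax(means))
    picks = 0
    for t in range(1, T + 1):
        if t <= G:
            a = t - 1  # play each arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            a = int(np.argmax(ucb))
        counts[a] += 1
        sums[a] += rng.random() < means[a]  # Bernoulli reward draw
        picks += (a == best)
    return picks / T

# ten arms: one strong generator and nine weak ones
opr = ucb1_opr([0.9] + [0.1] * 9, T=2000)
```

With a large reward gap, OPR approaches 1 as rounds accumulate, while the warm-up and exploration cost of having ten arms is visible in early rounds — the intuition behind the $\widetilde{O}(\sqrt{GT})$ regret scaling discussed above.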
Beyond Topological Self-Explainable GNNs: A Formal Explainability Perspective
Accept (poster)
Summary: In this paper, the authors argue that existing SE-GNNs generate Trivial Explanations and advocate for designing GNN explanation methods capable of producing Prime Implicant and faithful explanations. After conducting theoretical analysis, they propose DC-GNNs, a dual-channel model that incorporates a non-topological rule extractor. They evaluate their method on several datasets and conclude that DC-GNNs perform on par with or better than standard SE-GNNs. ## update after rebuttal The rebuttal clarified key concepts I had misunderstood in my initial review. Therefore, I have raised my score from weak reject (2) to accept (4). Claims And Evidence: Theorem 3.2 suggests that existing SE-GNNs provide only Trivial Explanations. While this is theoretically correct, it does not always hold in practice: (1) As shown in Figure 3 of GSAT, existing SE-GNNs identify all important edges rather than strictly minimal explanations. (2) In multi-loss optimization, the hyperparameter $\lambda$ used to constrain $R$ is typically small (to maintain stable training and ensure predictive accuracy). Consequently, the model may retain more edges to preserve predictive performance, even at the cost of some conciseness (cf. Figure 10 of GSAT). Considering that the critique of existing SE-GNNs throughout this work is primarily based on Theorem 3.2, a significant discrepancy between theory and practice would undermine the impact of this paper. Methods And Evaluation Criteria: - Gap between method and theoretical analysis: From my perspective, the proposed method does not appear to have a strong connection to its theoretical analysis. While building on existing work, DC-GNNs incorporate a non-topological perspective by analyzing node features. In other words, DC-GNNs perform better on certain types of datasets, but there is no evidence that they can generate PI explanations. 
- Gap between benchmark datasets and theoretical analysis: The dataset used by the authors seems unrelated to the theoretical analysis. None of the datasets have a strong connection to FOL and Figure 1, creating a significant gap between the theoretical analysis and the experimental evaluation. Theoretical Claims: Based on the provided proof sketch, Theorem 3.2 is correct. However, starting from Theorem 3.4, I can't understand why a single $e \in E$ can be a Trivial Explanation for $g^{\prime}(G)$, which prevents me from checking the correctness of the subsequent proofs. Experimental Designs Or Analyses: - Limited metrics: This paper focuses on GNN explanation, yet the primary evaluation metrics only assess predictive accuracy. A more comprehensive comparison between DC-GNNs and SE-GNNs appears only in Appendix C.3, and even there, it is limited to just two synthetic datasets. - Unfair experimental settings: The authors introduce two new datasets, RedBlueNodes and TopoFeature, where labels are constructed using additional color information. However, the design of DC-GNNs inherently biases the model towards discovering color information, leading to an unfair comparison. - Inappropriate parameter choices: According to the GSAT authors’ recommendations, the parameter r should be set to $r \geq 0.5$. However, the authors set $r=0.3$ for RedBlueNodes (line 910), deviating from the guideline. Additionally, in GSAT, the hidden size of GIN for the Graph-SST2 dataset is 64, whereas the authors set it to 300, which may introduce inconsistencies in comparison. - Limited baselines: The baseline selection is quite limited, as the authors include only two baselines, one of which is a 2020 ArXiv paper with just 15 citations. Given that the paper extensively discusses sufficiency and necessity, it would be more appropriate to include causal-based baselines for comparison. For instance, Wu et al. (2020) is cited (line 300), so it would be reasonable to compare against it. 
Furthermore, DC-GNNs incorporate node features, yet multiple prior works also consider this aspect, such as GNNExplainer [1] and CAL [2]. So, comparing DC-GNNs with these methods might be a good way to verify the effectiveness of DC-GNNs. - Misleading experimental analysis: On real-world datasets, the advantage of DC-GNNs is limited and can sometimes even lead to a decline in performance. However, the authors claim that DC-GNNs perform on par with or better than plain SE-GNNs. Additionally, in Table 4, the authors state that DC-GNNs generalize better to OOD than plain SE-GNNs. However, on Motif OOD_2, DC-GNNs exhibit significantly worse performance compared to plain SE-GNNs. [1] GNNExplainer: Generating Explanations for Graph Neural Networks, NeurIPS 2019 [2] Causal Attention for Interpretable and Generalizable Graph Classification, KDD 2022 Supplementary Material: I carefully read the Appendix B (Implementation Details) and briefly skimmed through the other parts. Relation To Broader Scientific Literature: Investigating GNN explanations is crucial for expanding the applicability of GNNs in high-stakes scenarios. Essential References Not Discussed: I think the paper is original as I have not seen anything like it before. Other Strengths And Weaknesses: I appreciate the authors’ effort in writing such a comprehensive paper -- it is the longest paper in my batch and the one I spent the most time reviewing. I am worried that I misunderstood some parts. If the authors can address my concerns during the rebuttal phase, I would be open to reconsidering and potentially increasing my score. Other Comments Or Suggestions: - I suggest that the authors provide more details about Figure 1, as it is crucial for readers to fully understand the paper. 
I would appreciate it if the authors could clarify the meanings of the solid and dashed lines in Figure 1, illustrate what the raw graphs $G$ looks like, and specify whether the two cases presented in the figure correspond to positive or negative samples. - I see that the authors use faithfulness to evaluate the effectiveness of self-interpretable GNNs (e.g., GSAT). However, I think this is inappropriate because, according to Eqs. 7 and 8, Suf(R) and Nec(R) are computed based on $\Delta(G, G^{\prime})$ . These metrics are suitable for post-hoc GNN explanation methods, which are trained to align with the predictive results of the original graph $G$. However, self-interpretable GNNs are trained to make accurate predictions when $R$ is provided as input and do not need to align with the predictions of $G$. As a result, using Eqs. 7 and 8 to assess the effectiveness of self-interpretable GNNs suffers from a distribution shift problem. That said, I remain open-minded on this issue, as I am aware that none of the recent works have identified this problem. Questions For Authors: - In Figure 2, why do SE-GNNs fail to capture non-topological patterns? Some SE-GNNs use node features to determine edge weights, which suggests that they might have the potential to discover non-topological patterns. - If the white-box rule extractor (line 30) is a sparse linear model, why can it be described as white-box? - What does ± mean in all tables? Is the number after ± at the decimal level or the integer level? - Which metrics did you use in Table 4? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
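For concreteness, the sufficiency/necessity notions discussed in this review can be illustrated on a toy model; note this is one common formulation, and the exact definitions in the paper's Eqs. 7 and 8 (based on $\Delta(G, G^{\prime})$) may differ:

```python
# Toy stand-in for a trained classifier: predicts class 1 iff a specific
# triangle is present in the edge set (edges are unordered pairs here).
def model(edges):
    return 1 if {("a", "b"), ("b", "c"), ("a", "c")} <= edges else 0

G = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e")}
R = {("a", "b"), ("b", "c"), ("a", "c")}  # candidate explanation subgraph

suf = int(model(R) == model(G))      # sufficient: R alone preserves the prediction
nec = int(model(G - R) != model(G))  # necessary: removing R flips the prediction
```

Here the triangle is both sufficient and necessary; the distribution-shift worry raised above is that a real GNN, unlike this toy function, may behave unreliably on the truncated inputs `R` and `G - R`.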
Rebuttal 1: Rebuttal: Thank you for the feedback. See our comments below. **Summary clarification:** We are *not* advocating for extracting PI explanations from SE-GNNs. As per the Abstract and Sec 6 (line 244), PIs can be intractable to compute and as large as the input. Our analysis is not meant as criticism. In fact, we advocate for enhancing SE-GNNs rather than replacing them. We renamed “Trivial Explanation” to “Minimum Explanations”, which sounds less negative. **Theory-practice gap:** Thm 3.2 does not guarantee that SE-GNNs always achieve TEs in practice, but it shows TEs are necessary and sufficient for minimal true risk *assuming perfect predictive accuracy* for any $\lambda>0$. Optimization may not reach this solution, as also discussed in lines 666-673, Appx A.1. Still, the result remains impactful: even if Thm 3.2’s conditions are not met, SE-GNNs in Table 1 optimize for small (ideally the smallest) label-preserving subgraphs and our theoretical analysis applies identically, as such explanations are still neither PI nor faithful in general. Figure 3 of GSAT supports the analysis above: TEs for that task would comprise 3 houses, and GSAT indeed highlights 3 houses with only a few additional edges. **Method-theory gap:** Our analysis provides a formal argument for using (and enhancing) subgraph-based explanations, and it directly informs the design of DC-GNNs. **DC-GNNs & PI:** DC-GNNs are not designed to output PIs but to leverage SE-GNN strengths in existential tasks, combined with an interpretable model for non-existential properties. **Datasets not related to FOL:** Synthetic datasets are tightly linked to FOL. TopoFeature matches the formula $\exists a,b,c,d. Cycle(a,b,c,d) \land \exists^{\ge 2}x.Red(x)$. Motif follows a similar structure, while RedBlueNodes is an instance of FOL extended with counting quantifiers. Figure 1 is an illustration to provide intuition. **Why a single edge is a TE for $g′$?** $g′(G)$ checks for the existence of an edge for each node.
Thus, a subgraph consisting of only one edge (seen alone) indeed satisfies this constraint. We will clarify. **Mostly assess accuracy:** We also evaluate DC-GNNs by measuring faithfulness and rule quality by testing how well they generalize OOD (Table 4), their dependability (Table 2), and their conciseness (C.4). Plausibility is unsuitable, as the ground-truth explanation is either unknown or not a pure subgraph, a problem for all models beyond subgraph explanations (Bechler-Speicher et al. 2024; Pluska et al. 2024). **Datasets are biased for colors:** The synthetic datasets are explicitly designed to test DC-GNN's ability to decompose the task between the two channels. To showcase its generality, we also test it on standard benchmarks where color information is not present. **Hyperparams choice:** We verified that using $r \in \\{0.5,0.7\\}$ for GSAT doesn’t change our results, which will be added in the revision. GIN performs better with a hidden size of 300, aligning with Gui et al. 2022. Keeping this value fixed ensures fair comparison across our experiments. **Limited baselines:** We added Wu et al. (2020) as a new baseline. Please refer to the answer to 3Jwx for the results. **Prior works also consider node features:** Including node feature relevance in topological explanations is not the same as extending them with rules: *Example*: for a model trained on RedBlueNodes, GNNExplainer and CAL will highlight individual nodes' color features. DC-GNNs, instead, will return the formula *Red $\ge$ Blue*, which is far more compact and intelligible. **Misleading experiments:** In real-world tasks, performance drops only 1% AUC for BBBP, which is within std dev. Elsewhere, DC-GNNs match or exceed baselines. For OOD2 in Motif, the drop likely stems from OOD performance instability in models without OOD regularization (Section 7, A1) and not the unused side channel (Table 2), as shown by the higher std dev. **Fig. 
1:** It shows a positive instance G composed of a 3-clique for both formulas. Solid (resp. dashed) edges belong (resp. do not belong) to the explanation depicted in each column. We will clarify the caption. **Faithfulness is inappropriate:** We follow previous work computing SUF and NEC by feeding G and G’ into the full model, including the explanation extractor. Thus, the classifier evaluates $R'=q(G’)$, not $G’$, alleviating the OOD issue. Whether those metrics are suboptimal for SE-GNNs is beyond the scope of our contribution. **Q1:** SE-GNNs are bound to explain non-topological patterns by subgraphs, which can be ambiguous. E.g., in Figure 2 the SE-GNN highlights the red nodes by marking their incident edges, and it’s unclear if what matters are the edges, the nodes, their multiplicity, etc. **Q2:** One can understand which (few) features are most responsible for a prediction and extract rules from them (see Appx B.3) without resorting to post-hoc methods. **Q3:** It’s the standard deviation, reported at the integer level. **Q4:** Accuracy. We will clarify this. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response and for clarifying the points I previously raised. Now I have a better understanding of the paper and will increase my score :) Side note: While I feel that the final practical takeaway -- explicitly augmenting SE-GNNs with alternative modalities of explanations -- is somewhat trivial, the analysis between TE and PI contributes meaningful insights to the GNN explanation community.
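The TopoFeature labeling formula quoted in the rebuttal above, $\exists a,b,c,d.\,Cycle(a,b,c,d) \land \exists^{\ge 2}x.Red(x)$, can be checked mechanically on small graphs. A brute-force sketch (helper names are illustrative, not the dataset's actual generator):

```python
from itertools import permutations

def has_4cycle(nodes, edges):
    """Brute-force check for a simple 4-cycle a-b-c-d-a (undirected)."""
    E = {frozenset(e) for e in edges}
    return any(
        all(frozenset(p) in E for p in [(a, b), (b, c), (c, d), (d, a)])
        for a, b, c, d in permutations(nodes, 4)
    )

def topofeature_label(nodes, edges, color):
    """Positive iff the graph contains a 4-cycle AND at least two red nodes."""
    return has_4cycle(nodes, edges) and sum(color[v] == "red" for v in nodes) >= 2

nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]  # 4-cycle plus a pendant edge
color = {0: "red", 1: "blue", 2: "red", 3: "blue", 4: "blue"}
```

A purely topological explanation can highlight the 4-cycle, but the $\exists^{\ge 2}x.Red(x)$ conjunct is a counting property over node features — the kind of condition DC-GNNs delegate to the rule channel.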
Summary: This paper investigates the properties and limitations of explanations generated by Self-Explainable Graph Neural Networks (SE-GNNs). The authors formalize SE-GNN explanations as Trivial Explanations (TEs) and compare them to Prime Implicant (PI) and faithful explanations. They find that TEs, while effective for motif-based tasks, can be less informative and misaligned with a common definition of faithfulness in general graph classification. The paper highlights that TEs match PI explanations in motif-based prediction tasks when the presence or absence of one particular motif has a causal relationship with the class label. The authors proceed to propose Dual-Channel GNNs (DC-GNNs), which combine SE-GNNs with non-relational predictors, to improve explanation compactness and performance. DC-GNNs adaptively employ both channels, allowing the SE-GNN to focus on topological motifs. Experiments on synthetic and real-world datasets show that DC-GNNs perform competitively with SE-GNNs, often with more succinct rules. The study reveals that TEs can be ambiguous and unfaithful, limiting their utility in certain applications. Overall, the research formally analyzes SE-GNN explanations and introduces a novel architecture to address their shortcomings. The findings suggest that integrating non-topological information enhances the quality and interpretability of GNN explanations. Claims And Evidence: The paper makes the following claims: (1) Formal characterization of the class of explanations SE-GNNs optimize for, namely "trivial explanations" (TEs). Evidence: the evidence is not convincing, and this is an overclaim. The above claim only holds for the loss functions given in Table 1. Several SE-GNNs do not use these losses, or they use these losses but with more sophisticated subgraph extraction methods. (2) Formal comparison of TEs, PIs, and faithful explanations. Evidence: this is done well and relates these different classes of explanations to each other.
In my opinion, this is the main contribution of the paper. (3) DC-GNNs are proposed as an alternative to existing SE-GNNs. They are claimed to be better empirically. Evidence: The evidence here is empirical and weak, also because specific benchmarks are not used in the paper. Methods And Evaluation Criteria: The paper proposes a new class of models (DC-GNN) compared empirically to some existing SE-GNNs. The authors also investigate some properties of the new class, such as how well the model can indicate which type of features it uses (topological or feature-based). Theoretical Claims: The paper provides theoretical results—one theorem connecting SE-GNNs with specific loss functions to TEs. Moreover, several theorems compare TEs, PIs, and faithfulness measures. The theoretical results are correct. However, some parts of the paper overclaim the implications of some of these theorems. The theory does not support the claim that all SE-GNNs are restricted to TEs. Experimental Designs Or Analyses: The experimental design seems sound. However, comparing the models using common topological benchmarks such as BA-2MOTIFS and MUTAG_0 would be better. A worry about the proposed DC-GNN model is that it will overfit or latch on to node features (through the second channel) and, therefore, have more difficulty finding the topological features. Supplementary Material: The supplementary material provides details for reproducibility and further understanding of the experiments. It includes proofs of theorems, additional experimental details, and extended analyses that support the paper's main claims. Relation To Broader Scientific Literature: A substantial contribution of the paper is the theoretical comparison of TEs, PIs, and faithfulness measures. This has not been done before and brings interesting insights into the otherwise mostly empirically driven research direction of explainable GNNs. 
The paper is therefore of interest to the XAI-for-GNNs subcommunity and well integrated into the literature. Essential References Not Discussed: None. Other Strengths And Weaknesses: The paper consists of three parts, each with its own contribution. First, the connection between SE-GNNs and TEs. This is an interesting result, but it should be clarified that Theorem 3.2 does not generally hold for all SE-GNNs. Even in the statement of the theorem, this can be easily overlooked. Other than the imprecise claims made in the paper’s language, the theoretical results are a strength of the paper. I have a bit of a problem with the term "trivial explanation," as it might lead people to associate these explanations with "useless" or "uninteresting" explanations. This is, of course, not the case. In general, the paper has a negative tone about what they call TEs throughout the first part of the paper, showing that TEs have undesirable properties (from their point of view) compared to PIs and faithfulness measures. However, before introducing their own method, they completely change course, saying that TEs are often the preferable topological explanations (because faithful and PIs might grow very large and unhelpful for many graph problems) and then introducing a model that augments TEs with feature-based explanations. Therefore, the first (theoretical) and the second (empirical) parts are somewhat disconnected. The empirical evaluation is sound but misses several benchmark datasets, such as BA-2MOTIFS. It is strange since their proposed method is an SE-GNN with an additional feature-based explanation channel. It would also be good to compare to existing methods such as https://link.springer.com/article/10.1007/s10994-024-06576-1. Here, I assume that the method (which also incorporates features) would extract the topological motif and connect nodes whose features are important via edges to this topology. The method is also not one that falls into Table 1.
Other Comments Or Suggestions: Please rethink the term "trivial explanation." As you wrote in the paper, they are often preferable, and indeed, your proposed method also attempts to extract them. They can be very complex. Indeed, in some cases finding a specific topological motif is an NP-hard problem. You also contrast TEs with the "measures" of faithfulness which have their own problems. So, the authors might want to show more critical distance to these (rather arbitrary) notions of faithfulness and what a good explanation is when writing the paper. Please also consider, as written before, running experiments on existing benchmarks and using methods that do not follow the loss functions given in Table 1. Questions For Authors: - Why did you not use datasets such as BA-2MOTIFS? - How is your method better than a SE-GNN that extracts a subgraph, and on top of which, you can run a feature attribution method? - How does your theoretical analysis (part 1 of the paper) fit to your proposed method? Why are these two parts in the same paper? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We are glad they found our analysis of interest to the community. Below, we address their remarks. **Not all SE-GNNs are restricted to TEs:** We highlighted multiple times that our focus is on SE-GNNs with the losses of Table 1 (Line 57 in Preliminaries, Statement of Thm 3.2, Limitations). Overall, the losses in Table 1 capture the notion of “small, label-preserving explanations” in a relatively general fashion, offering insights into a wide SE-GNN family. Nonetheless, we appreciate the reviewers’ feedback, and we will clarify that our results pertain to a subset of all SE-GNNs also in the Abstract, Introduction, and Conclusions. **Rethink the term Trivial Explanation:** This paper aims to formalize SE-GNN explanation semantics, and the term 'Trivial Explanation' was meant to highlight simplicity, not negativity. On further reflection, we agree that “Trivial” may imply a bias. Therefore, we will rephrase it as 'Minimum Explanations' in the manuscript. **DC-GNN may overfit node features:** Note that SE-GNNs are considerably more expressive than our sparse side channel; therefore, it is much more likely that a standard SE-GNN will overfit to (potentially harmful) topological patterns rather than the side channel overfitting to node features [1]. Nonetheless, in DC-GNNs, both channels are jointly trained to maximize accuracy, and experiments show they balance well for generalization. [1] Graph Neural Networks Use Graphs When They Shouldn't **Add baselines beyond Table 1:** Following also the suggestion of rev. rjdA, we opted to include the model of Wu et al. (2020) as a new baseline beyond Table 1. Given the time and space constraints, we report the results for a subset of the datasets, leaving the full set of results to the revised manuscript.
|BAColor|Accuracy|Channel|
|-|-|-|
|DIR| 98 +- 01 ||
|DC-DIR| 98 +- 01 |Rule|

|AIDS|Accuracy|Channel|
|-|-|-|
|DIR| 96 +- 03 ||
|DC-DIR| 96 +- 03 |Rule|

|BBBP|AUC|Channel|
|-|-|-|
|DIR| 64 +- 02 ||
|DC-DIR| 65 +- 02 |Topo*|

|MUTAG_0|Accuracy|Channel|
|-|-|-|
|see below|||

Where “*” has the same meaning as in Table 3. Overall, these results are in line with those of other SE-GNNs, showing the generality of our proposal. **Add existing benchmarks:** We performed new experiments on MUTAG_0. The results are as follows:

|MUTAG_0|Accuracy|Channel|
|-|-|-|
|GIN| 99 +- 1 ||
|GSAT| 99 +- 1 ||
|DC-GSAT| 99 +- 1 |Topo|
|SMGNN| 99 +- 3 ||
|DC-SMGNN| 98 +- 4 |Topo*|
|GiSST| 99 +- 1 ||
|DC-GiSST| 99 +- 1 |Topo|
|DIR| 98 +- 2 ||
|DC-DIR| 98 +- 2 |Topo|

Model accuracy increases wrt the original MUTAG dataset, as expected (Serra & Niepert, 2022). Also, DC-GNNs prefer the topological channel as in MUTAG, and they match the performances of SE-GNNs. For BA-2MOTIFS, see **Q1**. **Show more critical distance to faithfulness:** Indeed, we show that explanations extracted by SE-GNNs can be misaligned wrt faithfulness metrics (Thm 5.2). Whether those metrics are problematic in general is an interesting question beyond the scope of our work. We’ll add a sentence in Section 5 pointing to this issue. **Q1:** We opted for the Motif dataset in our experiments as, despite being designed with the same goal as BA-2MOTIFS, it is more challenging: it has three classes with three distinct motifs instead of two; it has more samples; and it comes with OOD splits. **Q2:** Our DC-GNNs are different from feature attribution methods in the form of the explanation they highlight. Feature attribution methods simply highlight the node features more relevant for prediction, whereas our DC-GNN can provide rules. In this sense, feature attribution methods cannot go “beyond” topological explanations, as the explanation would remain a subgraph with refined node features.
Let us consider as an example RedBlueNodes, where graphs are of class 0 when they contain more red than blue nodes, encoded as one-hot vectors. There, a TE for an instance of class 0 contains a single red node, as it is the minimal subgraph allowing the SE-GNN to make the correct prediction. Then, running a feature attribution method can only highlight that the only relevant feature is the one related to the red color and that other features are irrelevant. However, this cannot give insight into the multiplicity of red vs blue nodes. Conversely, our DC-GNNs provide the rule *Red$\ge$Blue* as the explanation, which is more aligned with human expectations.

**Q3:** Our theoretical analysis delineates tasks that are suitable and unsuitable for SE-GNNs and shows that simply aiming for alternative notions of explanations (like PI and faithfulness) can lead to large explanations and high computational complexity. Given these observations, a natural choice is to augment SE-GNNs with alternative modalities of explanation that address SE-GNNs’ limitations — and we believe DC-GNNs are a well-motivated step in this direction. We will clarify this link in the revised Abstract and Introduction.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. I appreciate the additional experimental results. Again, trivial, according to English dictionaries, means ignorable, of little significance or value, or being the simplest possible case. The use of this word is misleading and has a negative connotation. Also, reading your abstract and introduction, a reader must believe that TEs are exactly those explanations extracted by SE-GNNs, which is incorrect. Conditional on the changes that a more suitable term will replace the term "trivial explanation" and that the abstract and intro are updated to make clear that TEs are those explanations extracted by *a subclass of* SE-GNNs, I will raise my score to 4.
Summary: This paper discusses several definitions of graph explanations and their internal connections. The authors provide analytical results on the connections between TEs, PIs, and sufficient and necessary subgraph explanations. Because they show that TEs are always less informative than other explanations and that directly learning other explanations is not optimal, they construct DC-GNN to find a middle ground between these two issues.

Claims And Evidence: The paper draws connections between different subgraph explanations with rigorous proofs. I did not read the proofs fully but have examined the statements and proof sketches, which all look good to me. The second part of the paper focuses on the construction of DC-GNN, and the authors provide empirical results over 8 datasets.

Methods And Evaluation Criteria: This paper is well-motivated and I enjoyed reading it very much. Their analysis reveals that TEs align with PI explanations for a restricted but significant family of tasks. However, in general, TEs can be less informative than PI explanations and are misaligned with accepted notions of faithfulness. Although synthetic datasets are easier to control (and we have ground truth for the subgraphs), it would be great to have more discussion and analysis on real-world datasets.

Theoretical Claims: I have only checked the proof sketches, and they all look correct and sound to me.

Experimental Designs Or Analyses: I have checked the setup; the authors use datasets widely used in the literature, and the evaluation metrics are relevant. A few questions / comments: The paper mentions that GNNs, including DC-GNNs, can be computationally expensive. However, it does not provide a detailed analysis of the computational complexity or scalability of the approach. It would be beneficial to have a more thorough evaluation of the computational resources required by DC-GNNs, so we have a better sense of whether this can scale.
Supplementary Material: Visualizations and extra experiments in Appendix C.3.

Relation To Broader Scientific Literature: The analytical part bridges several different metrics for generating subgraph explanations. This paper provides a clear answer to which subgraph explanations are important, and in what sense.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: One significant contribution is the analysis of the information gain from using a simple TE subgraph versus more complicated PI and faithful subgraphs. The authors show that TEs contain significantly less information despite their simplicity and can sometimes be very unfaithful.

Other Comments Or Suggestions: N/A

Questions For Authors: Could you provide more details on the hyperparameter tuning process for DC-GNNs? How sensitive is the model's performance to different hyperparameter settings, and what steps have you taken to ensure the robustness of your results across a reasonable range of parameter values?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for their feedback. We are glad they enjoyed reading the paper and found it well motivated. Below, we address their remarks.

**Computational resources for running DC-GNNs:** We would like to clarify a slight misunderstanding regarding our discussion of computational costs. Our argument mostly pertains to PIs being intractable to extract (Marques-Silva, 2023). In contrast, SE-GNNs aim to extract explanations with a single forward pass of the neural network, resulting in fast and tractable explanations. DC-GNNs, on their side, combine a standard SE-GNN with a side channel implemented as a linear layer. The resulting complexity is primarily dominated by the SE-GNN-based channel, and the side channel adds negligible computational overhead. Below, we report an analysis of the run time of DC-GNNs compared to existing baselines, which will be integrated into the revised manuscript.

|Graph-SST2|Time per epoch (seconds)|
|-|-|
|GSAT| 4.78 +- 0.23 |
|DC-GSAT| 4.50 +- 0.32 |
|SMGNN| 6.51 +- 0.37 |
|DC-SMGNN| 6.32 +- 0.50 |

|BBBP|Time per epoch (seconds)|
|-|-|
|GSAT| 0.55 +- 0.18 |
|DC-GSAT| 0.58 +- 0.18 |
|SMGNN| 0.53 +- 0.18 |
|DC-SMGNN| 0.71 +- 0.22 |

**More details about hyper-parameter tuning:** Hyper-parameters were optimized for performance on the validation split. For datasets with OOD splits, only the ID validation split was used for tuning. We’ll clarify this in Appx. B.2. When choosing the values to test, we followed the standard practice of choosing values within a reasonable range, e.g., powers of $10$ (0.001, 0.01, 0.1, …), or other reasonable choices like $r \in \\{0.3, 0.5, 0.7, 0.9\\}$ for GSAT. When performance was satisfactory, we kept the default hyper-parameters of the SE-GNN. We report below an additional analysis of the robustness to hyper-parameter selection.
Given the time constraints in running these experiments, we focus only on representative hyper-parameters of DC-GSAT for the Motif dataset, aggregating values across 5 seeds. We plan to complement these results with the full experimental setting in the final revision of our manuscript.

|(B)LEN hidden size|15|*30*|64|
|-|-|-|-|
|DC-GSAT| 92 +- 01 | 93 +- 01 | 93 +- 01 |

|(B)LEN num layers|*2*|3|4|
|-|-|-|-|
|DC-GSAT| 93 +- 01 | 93 +- 01 | 93 +- 01 |

|(B)LEN sigmoid temperature|0.1|*0.3*|0.5|
|-|-|-|-|
|DC-GSAT| 93 +- 01 | 93 +- 01 | 93 +- 01 |

|GSAT’s $r$ parameter|0.5|*0.7*|0.9|
|-|-|-|-|
|DC-GSAT| 90 +- 02 | 93 +- 01 | 93 +- 01 |

For comparison, we report in italic font the original value reported in the paper. The SE-GNN channel was the one selected by the training routine in all the experiments. Overall, results are stable across hyper-parameter choices, except for a small fluctuation when choosing $0.5$ as the parameter $r$ of GSAT. However, for $r \ge 0.7$, performance stabilizes.

---

Rebuttal Comment 1.1:

Comment: Thanks for the great responses here. Please update the paper with these results! My score remains.
Summary: This paper investigates Self-Explainable Graph Neural Networks (SE-GNNs) and their limitations in explanation quality. It formalizes the Trivial Explanations (TEs) generated by SE-GNNs and compares them with Prime Implicant (PI) and faithful explanations. The analysis shows that TEs align with PI explanations in some cases but are generally less informative and misaligned with faithfulness. To address this, the authors propose Dual-Channel GNNs, combining a white-box rule extractor with a standard SE-GNN. Experiments demonstrate that Dual-Channel GNNs can extract concise rules and perform better than or on par with existing SE-GNNs.

Claims And Evidence: The authors' claims are mainly threefold:

1. In Section 4, they formalize the trivial explanations (TEs) generated by SE-GNNs and compare them with Prime Implicant (PI) and faithful explanations.
2. In Section 5, they explore how Trivial Explanations can be unfaithful.
3. Based on the analysis and findings, in Section 6, they propose Dual-Channel GNNs, which combine a white-box rule extractor with a standard SE-GNN.

Claims 1 and 2 are well-supported with theoretical evidence, while claim 3 is validated through extensive experiments demonstrating its effectiveness.

Methods And Evaluation Criteria: This paper focuses on Self-Explainable Graph Neural Networks (SE-GNNs) and aims to better understand the properties and limitations of their explanations. First, in Section 4, the authors formalize the trivial explanations (TEs) generated by SE-GNNs and compare them with Prime Implicant (PI) and faithful explanations. Further, in Section 5, they explore how Trivial Explanations can be unfaithful. These discussions are accompanied by extensive theoretical evidence that addresses the problems described by the authors. Based on their analysis and findings, in Section 6, the authors propose Dual-Channel GNNs, combining a white-box rule extractor with a standard SE-GNN.
A large body of experiments demonstrates the effectiveness of the proposed method.

Theoretical Claims: This paper is well-supported by theoretical evidence. I have checked all the proofs and did not find any explicit errors.

Experimental Designs Or Analyses: I believe that both the qualitative and quantitative experiments demonstrate the performance and interpretability of the proposed method.

Supplementary Material: The supplementary material includes the code for this paper, ensuring reproducibility.

Relation To Broader Scientific Literature: I believe this paper could advance the interpretability of GNNs, but it is hard to say whether it will have a significant impact. In my view, the authors' theory is solid, but the practicality of the proposed Dual-Channel GNNs will require time to be tested.

Essential References Not Discussed: It seems that the relevant literature is sufficient. Note: I am not familiar with this field.

Other Strengths And Weaknesses:

**Weaknesses or Suggestions:** I believe the writing of this paper needs further improvement. Although the theory and techniques are sufficient, the paper lacks examples and a clear explanation of the domain background, making it difficult for readers unfamiliar with the self-explainable field, or newcomers to it, to comprehend. This is particularly evident in the following points:

1. In the second paragraph of the introduction, the authors do not clearly state that TEs (Trivial Explanations) are the key concept introduced in this paper. It is only later in the text that we realize this. This confusing logic may leave readers unsure of the relationship between TEs and existing PIs (Prime Implicants) and faithful explanations, leading them to believe that these are parallel categories of methods (perhaps with overlap). Furthermore, these concepts need to be illustrated with examples.

2. The authors need to explain the formulas in Table 1.
While most readers are likely familiar with their meanings, there are unexplained symbols in the table, and it is hard to understand what these symbols represent without further clarification.

3. The logic in Sections 2 and 3 needs improvement, especially in clarifying the motivation behind each concept and theorem. For example, in Section 2, the introduction of logical classifiers is abrupt, lacking context and further explanation. Readers may struggle to understand its role and its connection to the paper, leading to confusion. Similar issues appear in Theorems 3.2 and 3.4 and Remark 3.3. The authors should note that this is not a technical report, and the content should not simply be listed but should follow a logical structure.

4. In the main text, each theorem should be clearly explained with non-mathematical descriptions so that readers can understand its purpose and the motivation behind the derivations.

5. The authors should provide an overview of the technical aspects of self-explainable GNNs, particularly in Sections 4, 5, and 6. This would help tie together the goals of each section. Currently, each section contains a lot of content, but without an overview it is easy for readers to forget the purpose of each section and how they relate to the overall paper. It might seem like three unrelated technical modules are presented.

I will adjust my score based on the authors' response.

Other Comments Or Suggestions: Please see "Other Strengths and Weaknesses".

Questions For Authors: Please see "Other Strengths and Weaknesses".

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for taking the time to review our work. We are glad that you found it well supported by evidence and theoretically sound. We applied the following changes.

**W1: Clarify TEs are a contribution** We revised the Introduction, line 16, as follows:

> Focusing on graph classification, we introduce the notion of Trivial Explanations (TEs) as the minimal subgraphs of the input that locally ensure the classifier outputs the target prediction. We then show that some popular SE-GNNs are implicitly optimized for generating TEs.

**W1: TEs and PIs need examples** As an example, Fig. 1 illustrates the different nuances between TEs, PIs, and faithfulness with a simple example of two different classifiers. We will make this connection clearer in the revised manuscript and bring this example to the forefront of the discussion in the Introduction. We also updated Fig. 1’s description to ensure the figure is properly discussed and contextualized. This change could not be reported here due to space constraints.

**W2:** We added a description of the formulas in Table 1. This change could not be reported here due to space constraints.

**W3: Logical classifiers** In Section 2, different topics are introduced separately since they serve as background material for the main content. We updated the Logical classifiers paragraph as follows to make the context clearer:

> To prove our theoretical results, we will use basic concepts from First Order Logic (FOL) as described in Barcelo and Grohe. [...] For ease of discussion, we fix two FOL classifiers that will be used to provide examples and intuition in the remainder of the paper: [...]

**W3: Section 3 lacks context** We improved the contextualization of our results as follows. We added this in line 134 of Section 3:

> Having established a link between SE-GNNs and TEs, we proceed to analyze the formal properties of TEs, starting from the following remark formalizing the notion of *informative explanation*.
We revised line 142 of Section 3:

> The following theorem, however, shows that TEs can fail to satisfy this desideratum for certain prediction tasks, indicating potential limits in the informativeness of TEs.

We added this after Thm 3.4:

> Intuitively, this result indicates that there exist two distinct graph classifiers for which TEs are the same for any input, meaning that by inspecting explanations alone, it is not possible to distinguish the two classifiers. Hence, TEs can fail to be informative wrt Remark 3.3.

**W4:** We list our modifications below, reporting only the cases where little intuition was provided and omitting the rest due to space constraints.

After Thm 3.4:

> This theorem shows cases in which, for two distinct classifiers, TEs match for any input graph, meaning that TEs are not informative for those classifiers wrt Remark 3.3.

Line 132, column 2:

> Intuitively, Eq. 3 shows that the insight of Thm 3.4 applies even when aggregating TEs across all instances where the two classifiers yield the same prediction. Hence [...]

After Thm 4.4:

> Thm 4.4 shows that PIs can overcome TEs' limits in certain tasks. For example, Thm 3.4's classifiers yield identical TEs but differing PIs (see Fig. 1).

**W5: Technical aspects of SE-GNNs** We added a deeper technical discussion in the Preliminaries and expanded Appx. B.2 with more background on SE-GNNs and a diagram akin to Fig. 1 in (Miao et al., 2022) showing a prototypical SE-GNN. Sections 4 and 5 remain independent of SE-GNN details, focusing on a theoretical analysis of TEs, PIs, and faithful explanations.

**W5: Improve link between sections** We added the following sentences.

Line 144, Section 4:

> Having established the link between SE-GNNs and TEs in Section 3, in this section, we provide a comparative analysis between TEs and PIs for graph classifiers. [...]

Line 193 in Section 4.1:

> We now show that our previous observation that TEs equal PIs for $\exists x,y. E(x,y)$ generalizes to all existential tasks.
> This reinforces the use of SE-GNNs in various practical applications and our proposed method (Section 6).

Line 172, Section 4.2:

> Although Section 4.1 shows TEs match PIs for existential tasks, real-world properties like counting cannot be expressed by existential formulas. Here, we show that beyond existential tasks, TEs can be less informative than PIs (Remark 3.3).

Line 202, Section 4.2:

> Thm 4.2 justifies SE-GNNs' performance on many benchmark datasets. However, Thm 3.4 and Thm 4.4 show that SE-GNNs may not be informative for certain tasks, motivating an extension of SE-GNNs that preserves their performance on existential tasks but adaptively aids them on other tasks. We will exploit this observation in Section 6.

Replace line 216 of Section 5 with:

> This section reviews a general notion of faithfulness (Azzolin et al., 2024) in Definition 5.1 and shows that faithfulness, TEs (Section 3), and PIs (Section 4) can overlap – in some restricted cases – but are generally misaligned.

---

Rebuttal Comment 1.1:

Comment: Thank you for your thoughtful responses and for clarifying the points I raised earlier. I now have a better understanding of the paper, which will positively impact my score.
Primitive Vision: Improving Diagram Understanding in MLLMs
Accept (poster)
Summary: The paper focuses on the problem of math diagram understanding --- multimodal math problems with visual inputs. The paper has two main contributions: (1) GeoGLIP, a vision encoder that is more geometrically grounded; (2) PRIMITIVE, a VLM trained with GeoGLIP plus a feature selection module, which demonstrates strong performance on math diagram understanding tasks.

## Update after rebuttal

The rebuttal addresses my major concerns; I raised my score accordingly. I believe adding these results is important to make the claims more convincing.

Claims And Evidence: The main claims/contributions of the paper are stated as:

1. GeoGLIP features are better than CLIP features.
2. The Feature Router is needed and effective.

Although Claim 2 is well supported by the ablation in Table 5, Claim 1 is not very convincing:

- For Claim 1, the paper lacks important baselines that finetune the same backbone VLM, with the same data, but using the original vision encoder; this makes it unclear whether the main benefit comes mainly from a better backbone model or perhaps from better training data.

Methods And Evaluation Criteria: The method and evaluation setting are mostly clear for the claims, but some remaining questions need to be answered:

1. Additional analysis is needed to demonstrate how robust GeoGLIP (especially junction and boundary detection) is to variance/noise in the visual input, for example, changes of resolution, different line widths, different line types (solid vs dashed), different junction sizes, etc. And how will these detection errors propagate to the final performance?

2. Another concerning result is that the performance on FigureQA and VQA is not good, showing that the GeoGLIP features might not easily generalize to domains more diverse than the in-domain synthesized dataset. It is also unclear whether performance on general multimodal benchmarks such as MME and MMBench can be retained.
Theoretical Claims: N/A

Experimental Designs Or Analyses: As mentioned in Claims And Evidence, a set of important baselines is missing.

Supplementary Material: The paper provides a comprehensive Appendix, including the data synthesis pipeline, dataset statistics, additional qualitative analysis, and case studies.

Relation To Broader Scientific Literature: The key contributions relate to a long-standing problem in VLMs --- fine-grained visual perception and grounding.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Some notations for the features are a little confusing. For example, in Figure 2's caption, it seems that the four features passed from GeoGLIP to the Connector comprise one mixed feature F1* and three un-mixed features; but in the Section 3.3 text, there is no notation referring to a mixed feature (*); this makes it unclear what exactly the input to the Connector is.

Other Comments Or Suggestions: Minor: The title exceeds the allowed text length, which might violate the formatting requirements; please fix that.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

# We thank the reviewer for insightful questions that help refine our work further.

## 1. Claim 1—that GeoGLIP outperforms CLIP—is unconvincing due to missing baselines using the same VLM and data with original vision encoders, leaving it unclear whether gains come from the backbone or training data.

We have conducted an ablation study in Sec. A.4 (lines 840-848 and Tab. 6), which utilizes different visual encoders trained on the same backbone and datasets.

* We compare a **single encoder** setup using either CLIP or GeoGLIP, with the same backbone VLM (LLaVA) and dataset (Geo170K). GeoQA accuracy is 64.2 for CLIP and 66.1 for GeoGLIP.
* We evaluate **dual encoders** combining GLIP or GeoGLIP with CLIP, showing that without math-specific fine-tuning, GLIP's performance drops (67.0→65.3) due to limited geometric sensitivity, as illustrated in Fig. 10.

If the reviewer considers this ablation crucial, we are open to moving it to the main paper.

|Type|Model|Top1 Acc (GeoQA)|
|:-:|:-:|:-:|
|Dual encoders|GLIP+CLIP|65.3|
|Dual encoders|GeoGLIP+CLIP|67.0|
|Single encoder|GeoGLIP|66.1|
|Single encoder|CLIP|64.2|

## 2. Additional analysis is needed to assess GeoGLIP’s robustness to visual variations (e.g., resolution, line style/width, junction size) in junction and boundary detection, and how detection errors impact final performance.

In response to this concern, we evaluated the robustness of our model on a custom set of 500 images with various distortions, as no public benchmark exists for junction and boundary detection in mathematical diagrams. We adopted standard metrics from natural image tasks:

* Junction Detection: Recall, using a confidence threshold of 0.65.
* Boundary Detection: Intersection over Union (IoU), with boundary maps binarized using a threshold of 200 (1 for boundary and 0 for non-boundary); IoU is computed via pixel-wise logical AND/OR between prediction and ground truth.
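For concreteness, the boundary IoU described above (binarize each map at the threshold, then take pixel-wise AND/OR) can be sketched as follows. This is a minimal illustration assuming NumPy arrays, not the authors' actual evaluation code; the function and variable names are ours:

```python
import numpy as np

def boundary_iou(pred_map, gt_map, threshold=200):
    """Pixel-wise IoU between two boundary maps.

    Each map is binarized at `threshold` (1 = boundary, 0 = non-boundary),
    then IoU = |pred AND gt| / |pred OR gt|.
    """
    pred = np.asarray(pred_map) >= threshold
    gt = np.asarray(gt_map) >= threshold
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # If neither map contains any boundary pixel, the maps agree perfectly.
    return intersection / union if union > 0 else 1.0

# Toy 3x3 example: the two maps share two of four distinct boundary pixels.
pred = [[255, 0, 0], [255, 0, 0], [0, 0, 255]]
gt   = [[255, 0, 0], [255, 0, 0], [255, 0, 0]]
print(round(boundary_iou(pred, gt), 2))  # 2 shared / 4 total = 0.5
```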
Distortions Applied:

* Gaussian Noise: Variance of 0.3.
* Resolution Change: Shortest side reduced from 800 to 400 pixels.
* Line Width Variation: Increased from 1–4 to 1–8 pixels, correspondingly impacting junction size.
* Line Style Modification: Changed from solid to dashed lines.

Our results (see table) show GeoGLIP is robust to resolution and line width changes, aided by diverse training data and augmentations (e.g., varying line widths, flipping, cropping, keep-ratio resize). It is more sensitive to Gaussian noise (−4.6 junction, −3.1 boundary) and dashed lines (−3.2 junction, due to more false positives). These results highlight the need for improved data generation and training strategies to better handle visual distortions.

| Distortion Type | Junction Detection (recall \%) | Boundary Detection (IoU \%) |
|:-:|:-:|:-:|
| w/o | 85.6 | 92.3 |
| +Gaussian Noise | 81.0 | 89.2 |
| +Resolution Change | 85.2 | 91.9 |
| +Line Width Variation | 85.9 | 92.3 |
| +Dashed Lines | 82.4 | 92.7 |

As Gaussian noise and dashed lines are the most impactful factors, we apply Gaussian noise for the further reasoning evaluation, due to limitations in modifying line styles without access to the original drawing code of the evaluation images. On GeoQA, top-1 accuracy dropped by −1.3 (PRIMITIVE-7B), −1.8 (PRIMITIVE-Qwen2.5-7B), and −2.7 (PRIMITIVE-Deepseek-7B). In contrast, the baseline model using only the CLIP encoder (PRIMITIVE(-)) showed larger drops of −3.1, −4.2, and −5.1, indicating that CLIP is more sensitive to distortions and provides less reliable visual input for reasoning.

| Model | Top1 Acc (wo/w Gau. noise) |
|:-:|:-:|
| PRIMITIVE-7B | 67.0/65.7 |
| PRIMITIVE-Deepseek-7B | 72.8/71.0 |
| PRIMITIVE-Qwen2.5-7B | 79.6/76.9 |
| PRIMITIVE(-)-7B | 64.2/61.1 |
| PRIMITIVE(-)-Deepseek-7B | 66.1/61.9 |
| PRIMITIVE(-)-Qwen2.5-7B | 72.3/67.2 |

# 3. Preservation of general ability.
While our model doesn't lead on FigureQA and VQA in Table 2, it significantly outperforms the G-LLaVA baseline, with gains of +19.6 and +2.1, respectively. To address concerns about general ability, we further evaluated PRIMITIVE-7B on SEED-I and MM-Bench. On SEED-I, it maintained performance comparable to LLaVA-1.5-7B (66.2 vs. 66.9), and on MM-Bench, it achieved a +0.6 improvement. Most gains are observed in categories like instance interaction, counting, and spatial localization.

# 4. Inconsistent notations and formatting issue.

We apologize for the confusion. The four features transferred from GeoGLIP to the Connector include one mixed feature ($F^{1*}$) and three unmixed features. We will clarify this in Sec. 3.3, including revising the index notation in Eq. (1). We will also adjust the title to meet the formatting requirements.

---

Rebuttal Comment 1.1:

Comment: (copying the official comment here so that the authors can see it) The rebuttal addresses my major concerns; I will raise my score accordingly. I believe adding these results is important to make the claims more convincing.

---

Reply to Comment 1.1.1:

Comment: Esteemed Reviewer,

Thank you for your kind message and valuable comments helping us improve and refine our manuscript. Meanwhile, if there is anything else we can answer, explain, or discuss further, kindly do let us know.

Rest assured, all requested explanations and additional results will be added to the final paper.

Kind regards,
Authors
Summary: This paper addresses a significant challenge in multi-modal large language models (MLLMs), specifically focusing on their limited ability to accurately interpret geometric primitives (e.g., points, lines, boundaries, and junctions) in mathematical diagrams. The authors conduct a comprehensive analysis revealing that existing models such as GPT-4o frequently misinterpret these visual elements, negatively impacting the accuracy of subsequent reasoning tasks. To address this, the authors propose PRIMITIVE, a novel approach featuring a dedicated geometry-aware visual encoder (GeoGLIP) and a dynamic feature router. GeoGLIP utilizes a multi-task learning framework based on Mask R-CNN architectures to effectively detect boundaries and junctions. The feature router dynamically selects and integrates multi-scale features from GeoGLIP and the CLIP encoder, resulting in improved reasoning performance on several benchmarks, including MathVerse, GeoQA, and MathVista. Experimental results demonstrate substantial improvements in accuracy compared to existing methods, emphasizing the importance of precise geometric visual understanding in mathematics-related tasks.

Claims And Evidence:

- The paper clearly identifies a crucial limitation in current MLLMs regarding precise geometric visual perception, presenting a thorough and systematic analysis of existing model failures.
- Although the authors claim to perform an 'apples-to-apples' comparison, the evaluation lacks a detailed analysis of how different visual encoders affect reasoning performance when the LLM weights are kept fixed. For example, a comparison between Qwen2.5-VL and PRIMITIVE-Qwen2.5-7B, analyzing both the performance differences and qualitative differences in responses, would provide deeper insights.
Methods And Evaluation Criteria:

- The introduction of GeoGLIP as a specialized visual encoder effectively addresses the challenge of accurately detecting small-scale geometric primitives, significantly advancing the state of the art in math-oriented visual understanding.
- The proposed feature router mechanism is innovative, enabling adaptive fusion of multi-scale visual information from GeoGLIP and CLIP, and demonstrating significant empirical improvements.
- The technical contributions of the proposed PRIMITIVE pipeline appear limited, as the approach primarily integrates existing LLM architectures with incremental modifications.

Theoretical Claims: N/A

Experimental Designs Or Analyses:

- Comprehensive experimental validation across multiple challenging mathematical visual reasoning benchmarks showcases clear and convincing performance advantages over established baseline methods.
- Throughout the paper, the authors do not specify the parameter size of InternVL2, nor provide any relevant citation, which limits the reproducibility and clarity of their comparisons. Additionally, InternVL2 is currently not the most state-of-the-art open-source model; therefore, including a comparison with the latest state-of-the-art visual models would significantly strengthen the experimental validity.
- Although PRIMITIVE demonstrates promising performance and improved interpretability in Figures 14 and 15, the authors have not adequately addressed the reasoning errors that still persist in problem-solving scenarios. These errors may not necessarily stem from perceptual inaccuracies but could potentially result from hallucinations within the MLLMs' reasoning processes. It would be beneficial to include additional experiments or analyses on caption-based tasks to better understand and address these reasoning limitations.
Supplementary Material: Yes, all parts were reviewed.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

- While the authors extensively evaluate their method on mathematical visual reasoning tasks, additional qualitative analyses or visualization examples illustrating the feature-routing decisions and specific cases where PRIMITIVE significantly outperforms the baselines could further clarify the interpretability and practical effectiveness of the dynamic feature routing mechanism.
- Figure 3 and Table 5 extend beyond the page margins, which compromises the visual presentation and overall readability.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

# We thank the reviewer for insightful questions that help refine our work further.

## 1. The impact of visual encoders—e.g., comparing Qwen2.5-VL and PRIMITIVE-Qwen2.5-7B with fixed LLM weights.

To address the concern about the visual encoders' impact on reasoning, we compare variants in controlled settings, as detailed in lines 840–847 and Tab. 6 of the appendix. Although Qwen2.5-VL and PRIMITIVE-Qwen2.5-7B share the same LLM backbone, they differ significantly in training recipes and dataset scale (~2T vs. ~600K). For details, see Q2 (Ay8j). In Tab. 6, we control for training settings and datasets, isolating the visual encoder for direct comparison. See Q1 (sjVj) for the analysis.

For qualitative response differences, we analyzed the models' reasoning steps and found that the GLIP+CLIP dual encoder struggles with basic shape perception and descriptions of interrelationships, often leading to hallucinations and incorrect answers. We will include their rationale demos in the revised paper. To ensure a fair comparison with Qwen2.5-VL, we conducted new experiments resulting in Qwen2.5-VL-7B+ and PRIMITIVE-Qwen2.5-VL-7B (see Q2 for ARAy8j), with more detailed comparisons to be added in the revision.

## 2. The PRIMITIVE pipeline seems limited in contribution, as it mainly integrates existing LLMs with minor modifications.

We respectfully disagree that our contributions are merely incremental. Our core contribution is the design of a geometrically sensitive visual encoder that enhances MLLMs' fine-grained primitive perception—addressing a key bottleneck in reasoning, as shown by the analysis in Figs. 1 and 5a. Unlike single-modality LLMs, MLLMs rely on accurate interpretation of visual cues to support abstract reasoning, and misinterpretations at this stage can misguide the reasoning process. To address this, we designed the encoder with box- and pixel-level supervision, going beyond the image-level supervision used in CLIP (see Q3 for AR PhN5).
This was enabled by a custom data engine generating diagrams with shape, junction, and boundary annotations. Furthermore, we integrated the trained visual encoder into LLMs using two methods: hard coordinates (Sec. A.4) and soft visual prompts. For the latter, we introduced a feature router that dynamically selects features from semantic- to geometric-rich levels. Our designs were implemented in LLaMA2-7B, Deepseek-math-7B, and Qwen-math-7B, with ablations showing notable gains over baselines across three math reasoning benchmarks.

## 3. The paper lacks the parameter size and citation for InternVL2, limiting clarity and reproducibility; adding comparisons with recent state-of-the-art visual models would strengthen the experimental validity.

We apologize for the oversight—InternVL2 has 8B parameters, which we will clarify as InternVL2-8B. For reproducibility, we will release model weights for PRIMITIVE-7B, PRIMITIVE-Deepseek-7B, and PRIMITIVE-Qwen2.5-7B, along with training and inference code. To address comparisons with state-of-the-art models, we have integrated our design into Qwen2.5-VL (see Q2 for ARAy8j).

## 4. It lacks analysis of remaining reasoning errors, which may stem from LLM hallucinations rather than perception.

Our primary goal is to enhance the visual grounding of MLLMs to complement their reasoning, not to solve the full spectrum of mathematical reasoning tasks. We acknowledge that hallucinations in reasoning remain an important challenge, influenced by many factors such as data distribution, vision encoders, modality alignment, and the LLM's inherent knowledge [1–5]. Perceptual enhancement is one feasible path to reducing such hallucinations [4,5], and our work focuses on improving object- and pixel-level perception in mathematical contexts. We agree that other factors also impact reasoning accuracy and plan to explore them in future work.
As suggested, analyzing caption-based descriptions could provide intuitive insights into hallucinations tied to diagram elements. However, building such datasets and benchmarks is non-trivial and beyond the scope of this rebuttal, but it remains a key direction for future research.

[1] Jaemin Cho et al., Fine-Grained Image Captioning with CLIP Reward.
[2] Ziwei Ji et al., Survey of Hallucination in Natural Language Generation.
[3] Lei Huang et al., A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions.
[4] Shunqi Mao et al., Through the Magnifying Glass: Adaptive Perception Magnification for Hallucination-Free VLM Decoding.
[5] Hanchao Liu et al., A Survey on Hallucination in Large Vision-Language Models.

## 5. Additional visualization examples are needed, and Fig. 3 and Tab. 5 exceed page margins.

Thank you. We will add more qualitative comparisons between PRIMITIVE and baselines for clarity and fix the formatting issues in Fig. 3 and Tab. 5.

---

Rebuttal Comment 1.1: Comment: Thanks for your response. The response partially addressed my concerns. I would like to raise my rating.

---

Reply to Comment 1.1.1: Comment: Esteemed Reviewer, Thank you for your kind message and valuable comments helping us improve and refine our manuscript. We are glad to hear that our responses have addressed some of your concerns. If you have any further questions or suggestions, or would like to discuss further issues, kindly let us know how we can resolve them. We are very keen to engage with you and improve our work further. Kind regards, The Authors
Summary: This work proposes a novel approach named PRIMITIVE, aiming to address the deficiencies of current mathematical multimodal large language models (MLLMs) in geometric perception, thereby enhancing their capabilities in visual mathematical reasoning. Experiments conducted on three benchmarks demonstrate the effectiveness of the PRIMITIVE method.

## Update review

In the previous round, I thought the feature pyramid used by the authors lacked novelty. The authors replied that they did not consider the feature pyramid technique as a major contribution. I hope the authors can clarify this in the revision. Additionally, I still have concerns about the authors using existing models to extract junctions, which may introduce unpredictable noise, as the performance on MathVerse and MathVista is not very promising. I hope the authors will address this point in the revised version. The study of how fine-grained visual perception influences downstream reasoning in MLLMs is meaningful. Therefore, I change my opinion to 'Weak Accept'.

Claims And Evidence: No, please refer to part of [Methods And Evaluation Criteria]

Methods And Evaluation Criteria:
- 1) The authors mentioned that existing models would be used to extract junctions and boundaries as ground truth. For geometric shapes with diverse variations in shapes and domains, could this process introduce noise?
- 2) The performance on MathVerse and MathVista is not very promising. For example, the proposed PRIMITIVE-Qwen2.5-7B achieves lower performance on the MathVista evaluation set compared to InternVL2.5-8B (64.4%), Qwen2.5-VL-7B (68.2%), and even previous-generation MLLMs such as InternVL2-8B (61.6%), InternVL2-4B (57.0%), and Qwen2-VL-7B (58.2%).
- 3) The relatively low experimental results on the MathVista and MathVerse evaluation sets raise doubts about the effectiveness of the proposed approach.
Although the work claims that the approach can be integrated into multiple model baselines, the experimental results for its combination with baselines such as Qwen2.5-7B and DeepSeek-7B remain unsatisfactory.

Theoretical Claims: This research work does not propose theoretical analyses.

Experimental Designs Or Analyses:
- 1) The PRIMITIVE architecture used in this work is built upon LLaVA-1.5, which is somewhat outdated. Moreover, PRIMITIVE employs a feature pyramid structure to extract fine-grained geometric visual features, a method that is generic for natural images and not specifically tailored for geometric image tasks.

Supplementary Material: No supplementary materials were provided.

Relation To Broader Scientific Literature: This work proposes MLLMs in mathematical scenarios, which could potentially aid in analyzing the reasoning capabilities of MLLMs.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

[Strengths]
- 1) This work analyzes the proportion of perception errors in geometric question answering, which is insightful.
- 2) The authors achieved high performance on the GeoQA leaderboard.

[Weaknesses]
- 1) The authors mentioned that existing models would be used to extract junctions and boundaries as ground truth. For geometric shapes with diverse variations in shapes and domains, could this process introduce noise?
- 2) The performance on MathVerse and MathVista is not very promising. For example, the proposed PRIMITIVE-Qwen2.5-7B achieves lower performance on the MathVista evaluation set compared to InternVL2.5-8B (64.4%), Qwen2.5-VL-7B (68.2%), and even previous-generation MLLMs such as InternVL2-8B (61.6%), InternVL2-4B (57.0%), and Qwen2-VL-7B (58.2%).
- 3) The relatively low experimental results on the MathVista and MathVerse evaluation sets raise doubts about the effectiveness of the proposed approach.
Although the work claims that the approach can be integrated into multiple model baselines, the experimental results for its combination with baselines such as Qwen2.5-7B and DeepSeek-7B remain unsatisfactory.
- 4) The PRIMITIVE architecture used in this work is built upon LLaVA-1.5, which is somewhat outdated. Moreover, PRIMITIVE employs a feature pyramid structure to extract fine-grained geometric visual features, a method that is generic for natural images and not specifically tailored for geometric image tasks.

Other Comments Or Suggestions: The figure captions should be placed below the figures. Please make this correction in future versions.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

# We thank the reviewer for insightful questions that help refine our work further.

## 1. The authors use existing models to extract junctions and boundaries as ground truth—could this introduce noise given the variation in geometric shapes and domains?

For junction detection, we discuss noisy cases in lines 802–807 of Appendix A.4 and show failure cases in Fig. 11. We use a CNN model (Huang et al., 2018) trained on Man-Made Environments to generate ground truth. Some label noise arises from out-of-domain (OOD) diagrams, which we mitigated using a 0.85 confidence threshold and manual correction. While 100% accuracy isn’t guaranteed across 20K+ samples, the noise level is acceptable. The test set recall is 85.6%, and GeoGLIP remains robust to distortions (see Q2 for AR sjVj). Improving junction labeling and data synthesis, as discussed in Sec. A.7, could further boost performance. For boundary detection, we use the FoJ model (Verbin & Zickler, 2021), which applies a machine learning algorithm for generalized M-junctions. Unlike CNNs, FoJ generalizes better to OOD domains and is less dataset-biased. Its noise resilience is supported by prior work. GeoGLIP achieves an IoU of 92.3% and remains robust under various distortions (see Q2 for AR sjVj).

## 2. PRIMITIVE-Qwen2.5-7B shows lower MathVerse/MathVista performance compared to Qwen2.5-VL-7B (68.2%), and even earlier MLLMs.

MLLMs fall into two categories: generic (e.g., InternVL2.5, Qwen2.5-VL, GPT4o) and math-specific (e.g., G-LLaVA, Math-LLaVA, MAVIS, ours), differing significantly in training recipes and datasets when integrated with LLMs and visual encoders.
Generic models are trained on large-scale visual instruction datasets covering a wide variety of multimodal data, including OCR, academic questions, localization data, documents, and video descriptions (e.g., Qwen2.5-VL uses 2TB+ data), while math-specific models utilize only mathematical text-diagram pairs, which require fewer training resources and provide an efficient test base for the proposed math module designs. Our model is fairly compared with math-specific MLLMs, as it is trained solely on MathV360K+Geo170K. In controlled experiments, adding GeoGLIP and the soft router boosts the baseline G-LLaVA by 4.6% on MathVerse and 12.3% on MathVista. We've also extended our approach to DeepSeek-Math-7B and Qwen2.5-Math-7B, achieving consistent +6% gains on MathVista (Tab. 7). Note: baselines use only CLIP visual encoders and the same training setup—not generic DeepSeek-VL or Qwen2.5-VL. To compare with state-of-the-art 7B models, we implemented our designs on Qwen2.5-VL-7B and fine-tuned it using its official checkpoint. As the training code is not publicly available, significant effort was required to rewrite the functions, data loader, and training processes within the transformers and Hugging Face frameworks. For alignment, we trained only the projectors; due to time limits, we trained both projectors and LLMs using LoRA to accelerate the SFT stage on the MathV360K+Geo170K dataset. Under the same training setup, we developed two versions: (1) Qwen2.5-VL-7B+, fine-tuned with math-specific visual data; and (2) PRIMITIVE-Qwen2.5-VL-7B, integrated with our GeoGLIP and soft router. Our integrated models are lightweight (<50MB, see Sec. A.6.2) and already show performance gains. We plan to further improve results by exploring full fine-tuning (beyond LoRA) and partial unfreezing of the visual encoder with lower learning rates.
|Model|MathVista/All (acc)|MathVerse/All (acc)|
|:-:|:-:|:-:|
|Qwen2.5-VL-7B|68.2|49.2|
|Qwen2.5-VL-7B+|68.5|49.8|
|PRIMITIVE-Qwen2.5-VL-7B|69.7|51.0|

## 3. PRIMITIVE uses a feature pyramid to extract fine-grained features—a technique common in natural images but not specifically tailored for geometric tasks.

Thank you for your observations regarding the PRIMITIVE architecture. Although the feature pyramid structure effectively extracts fine-grained visual cues in natural images, it is not central to enhancing mathematical perception in our MLLM. Instead, the key factor lies in the mathematically sensitive GeoGLIP, which incorporates visual-centric mathematical training datasets and fine-grained box- and pixel-level supervision (refer to Q3 for AR PhN5). Our empirical evidence, detailed in lines 840-848 and Tab. 6 of the appendix, supports this. Specifically, coupling the original GLIP (trained on natural images) with the feature pyramid and soft router techniques decreases performance from 67.0 to 65.3 on GeoQA. Additionally, Fig. 10 (right panel) demonstrates GLIP's inability to accurately perceive basic geometric primitives. These results confirm that fine-grained primitive perception ability depends on a geometrically sensitive visual encoder, and that this ability can then be further enhanced by employing pyramid techniques. Our specific adaptations and innovations for GeoGLIP are deliberately designed to tackle the unique challenges of mathematical image tasks.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors' feedback. After reading the rebuttal, I still have concerns regarding the novelty of this work, such as the approach of combining LLaVA-1.5 with the feature pyramid, as well as the GeoGLIP performing shape grounding, boundary, and junction detection. I believe there may be potential noise in the predictions for these aspects. Therefore, I maintain my initial score.
---

Reply to Comment 1.1.1: Comment:

## Thank you for your thoughtful feedback and for engaging with our rebuttal. We address the reviewer's two main concerns regarding novelty and potential prediction noise in GeoGLIP.

## 1. Novelty and Contribution:

Our key contribution lies in identifying fine-grained visual perception as a major bottleneck in current MLLMs' reasoning ability (see Figs. 1 and 5a). To address this, we design a geometrically sensitive visual encoder and enhance perception through fine-grained supervision—using bounding box, junction, and boundary labels generated by a custom data engine. We instantiate our design (GeoGLIP + global feature router) across LLaMA2-7B, Deepseek-Math-7B, and Qwen-Math-7B. This improves reasoning by enhancing visual grounding, achieving +6.4% to +12.3% over baselines on MathVista. Our method focuses on perception enhancement, which is orthogonal to existing methods focused on improving reasoning, and we believe this opens new directions for mathematical MLLM research. We do not claim the feature pyramid as a novel contribution; it is used to enhance perception within our global feature router, enabling adaptive selection from geometric- to semantic-rich cues to aid reasoning. As noted in Q3, without a geometry-sensitive visual encoder, the feature pyramid alone cannot effectively extract fine-grained mathematical primitives.

## 2. Prediction Noise:

We address the reviewer's concern in three parts:

* **Robustness of GeoGLIP to Visual Distortions**: As shown in Q2 (AR sjVj), GeoGLIP is robust to common visual distortions such as Gaussian noise, changes in line style, line width, and resolution. Compared to the baseline CLIP visual encoder, GeoGLIP shows much smaller performance degradation under these distortions. This robustness is attributed to our diverse training data and augmentation strategies, including randomly varying line width (1–5), flipping, cropping, and keep-ratio resizing.
Moreover, it's important to note that even models trained with perfect ground truth labels cannot guarantee 100% inference accuracy. For example, in natural image object detection, no model achieves 100% mAP on standard benchmarks like COCO.

* **Design Choices to Minimize the Impact of Prediction Uncertainty**: As stated in the introduction (lines 74–77): "Given GeoGLIP’s inherent uncertainty in detecting geometric primitives, instead of directly prompting LLMs with primitive locations (e.g., hard coordinates; see Sec. A.4 for ablation), we leverage global pyramid feature maps that encode essential information for pixel-to-shape detection." In other words, we do not directly use prediction results as hard prompts for the LLM; instead, we use global feature maps as soft inputs. This design choice keeps our model lightweight—under 50MB—as it requires only a visual encoder without additional detection heads (see Sec. A.6.2).

* **Focus of This Work**: This work does not aim to develop a standalone primitive detector. Instead, we focus on how fine-grained visual understanding influences downstream reasoning in MLLMs. As shown in Fig. 2, our model generates more fine-grained attention maps, effectively highlighting not only boundaries but also detailed elements like dotted lines and right-angle symbols.

We would greatly appreciate any suggestions from the reviewer on how to design future experiments to better address your concerns.
Summary: This paper proposes PRIMITIVE, a multi-modal large language model for mathematical problem-solving. The contribution of PRIMITIVE is twofold: a mathematical vision encoder, GeoGLIP, and an MLP-based feature router. GeoGLIP is pre-trained using synthetic data with box-level and pixel-level losses within the GLIP architecture. The feature router adopts a soft weighting strategy to combine different levels of visual features adaptively. The performance of PRIMITIVE is good on different benchmarks.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: The formula (1) in Section 3.3 should contain a more comprehensive illustration of the newly defined labels.

Experimental Designs Or Analyses: The performance comparison is not entirely fair. In Section 4.1, the authors mention that they evaluate PRIMITIVE on GeoQA and MathVerse/MathVista separately using different checkpoints, that is, Geo170K and MathV360K, respectively. This is my main concern, since the standard practice in this field, and the default choice of this community, is to test all different benchmarks using one trained model, either by combining training data in one stage or by setting different training stages. If PRIMITIVE cannot perform well on different benchmarks with one checkpoint, the generalization capability of this approach will be questioned. The authors are encouraged to provide experiments or illustrations.

Supplementary Material: Yes

Relation To Broader Scientific Literature: No concern

Essential References Not Discussed: The main motivation and training approaches of this paper are similar to MAVIS, e.g., improving the vision encoder and three-stage training, although with different techniques. I suggest discussing the relation and differences between this paper and MAVIS in more depth in the Intro and Related Work parts.

Other Strengths And Weaknesses: Strengths: 1.
Exploring the role of the vision encoder in math problem-solving is interesting and reasonable, and it is mostly ignored by existing research. Although a few works in recent months also focus on this point, this paper provides unique insights with its localization loss. 2. The curated datasets for pre-training GeoGLIP are useful to the community. 3. The figures are clearly presented.

Weaknesses: 1. Please refer to the experimental fairness issue above. 2. The title of the paper is a bit overclaimed. PRIMITIVE utilizes localization information to enhance visual perception and performance, but does not enable LLMs to know where to focus within math figures.

Other Comments Or Suggestions: Overall it's a good paper, though some issues remain to be solved.

Questions For Authors: no

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

# We thank the reviewer for insightful questions that help refine our work further.

## 1. The formula (1) in Section 3.3 should contain a more comprehensive illustration of the newly defined labels.

Thank you. Duly noted. Eq. (1) describes the soft router process. The scalar routing weight for each level is denoted as $w^i$, where $i \in \{1, 2, 3, 4\}$; the symbol $\circledcirc$ denotes the operation flow from right to left. For instance, $\mathcal{G} \circledcirc F_{\text{geo}}^i$ indicates that the different levels of GeoGLIP features $F_{\text{geo}}^i$ are resized ($\mathcal{G}$), which is essential for aligning their spatial size with that of the CLIP features. $\sigma$ denotes the normalization function, which can be either a SoftMax or a Sigmoid function, depending on the feature integration process (channel-wise or sequence-wise concatenation). We have added a more detailed description of each component and its role in the revised manuscript to ensure that the formulation and its application are transparent to the readers.

## 2. In Section 4.1, the authors use two checkpoints for evaluation on GeoQA and MathVerse/MathVista, which is not considered fair for performance comparison since the standard practice in this field is to test all benchmarks using a single trained model.

Thank you for highlighting this important point. GeoQA primarily evaluates geometric problem reasoning, and our baseline G-LLaVA-7B is trained exclusively on the Geo170K dataset for this benchmark. To ensure a fair comparison, we maintain consistency in the training dataset and test on GeoQA. Regarding the MathVista and MathVerse benchmarks, which cover a diversity of subjects including scientific topics, PaperQA, and IconQA, we applied the additional MathV360K dataset for fine-tuning our model (Geo170K+MathV360K), a combination commonly used by other mathematical MLLMs like Math-LLaVA and MAVIS.
In response to this concern, we test our model trained on Geo170K+MathV360K on the GeoQA benchmark. The performance improved, with PRIMITIVE-7B's top-1 accuracy increasing to 71.3 (67.0 in Tab. 3). We adopt this version as the default choice and will release the checkpoint to the public.

## 3. The main motivation and training approaches of this paper are similar to MAVIS, e.g., improving the vision encoder and three-stage training, although with different techniques. I suggest discussing the relation and differences between this paper and MAVIS in the Intro and Related Work parts.

Thank you for your suggestion. We have expanded the discussion of our model in comparison to MAVIS in the Related Works section. Although our high-level motivation and MLLM training strategies share similarities with MAVIS, our different training techniques for the visual encoder and MLLM design enable our model to achieve comparable reasoning performance using an 8$\times$ smaller visual instruction training dataset. For visual encoder training, MAVIS utilizes 588K caption-diagram pairs with an alignment loss similar to CLIP's, which provides image-level supervision. In contrast, our model employs only 40K samples with box-level and pixel-level supervision, offering more detailed local feature perception. Researchers have shown that CLIP-style encoders fail to capture fine details (e.g., Yiwu Zhong et al., "RegionCLIP: Region-based Language-Image Pretraining"; Zhe Gan et al., "Vision-language pre-training: Basics, recent advances, and future trends"). A more intuitive demonstration is shown through the visualization of attention maps. MAVIS's visual encoder, depicted in Fig. 1(a) of MAVIS's paper, primarily highlights coarse-level boundary information and often overlooks detailed features. In contrast, our model, as shown in Fig.
2, produces attention maps that are more fine-grained, clearly highlighting not only the boundaries but also detailed elements such as dotted lines and right-angle symbols. This stark contrast underscores the superior local feature detection capability of our model. Furthermore, our model design incorporates a soft router that adaptively selects visual cues ranging from semantic-rich to geometric-rich. This adaptability enhances the model's ability to accurately solve problems by leveraging relevant visual information effectively.

## 4. The title of the paper is a bit overclaimed. PRIMITIVE utilizes localization information to enhance the vision perception and performance, but does not enable LLMs to know where to focus within math figures.

Thank you. Duly noted. We will revise the title to more accurately reflect the model's enhanced visual perception and avoid overclaiming.
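As a concrete illustration of the soft router process described in response 1 above, here is a minimal sketch in plain Python (assumptions on our part: the resize operator $\mathcal{G}$ is skipped by taking the level features as already spatially aligned and flattened to vectors, and all sizes and logit values are made up for the example):

```python
import math

def soft_router(level_feats, level_logits, normalizer="softmax"):
    """Fuse multi-level features F_geo^i into one representation using
    scalar routing weights w^i = sigma(logits), mirroring Eq. (1).

    level_feats:  one feature vector per pyramid level (resize G assumed done)
    level_logits: one scalar routing score per level
    normalizer:   "softmax" or "sigmoid", matching the two integration modes
    """
    if normalizer == "softmax":
        m = max(level_logits)
        exps = [math.exp(z - m) for z in level_logits]
        total = sum(exps)
        weights = [e / total for e in exps]
    else:  # sigmoid: independent per-level gates
        weights = [1.0 / (1.0 + math.exp(-z)) for z in level_logits]
    dim = len(level_feats[0])
    fused = [sum(w * feat[d] for w, feat in zip(weights, level_feats))
             for d in range(dim)]
    return fused, weights

# Four pyramid levels with equal routing scores -> uniform weights of 0.25.
feats = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
fused, weights = soft_router(feats, [0.0, 0.0, 0.0, 0.0])
```

With equal logits the softmax branch reduces to a uniform average of the levels; in the actual model the logits would be produced by the router from the features themselves rather than fixed by hand.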
HYGMA: Hypergraph Coordination Networks with Dynamic Grouping for Multi-Agent Reinforcement Learning
Accept (poster)
Summary: The paper proposes a new method to learn higher-order coordination patterns between agents, based on a spectral clustering algorithm and a hyper-graph convolutional network. Agents are coherently grouped together, with such groups changing only when a certain threshold is hit, and with the HGCN then combining their information to provide an additional learning signal that can be used in both value-based and policy-based algorithms. Experimental results show improved performances and better sample efficiency against SOTA baselines on a diverse set of cooperative problems.

Claims And Evidence: The paper supports its claims with a quite extensive theoretical grounding, and provides results that bound the achievement of convergence of the learned hyper-graph representation, as well as the error from the optimum. Some aspects of the algorithmic outcome are not sufficiently made clear in my opinion (such as the centralized execution requirement); please see the Questions below for more details.

Methods And Evaluation Criteria: I have some concerns about the effective assessment of the proposed empirical results and their analysis. While it is clear that the proposed algorithm achieves better performances on a wide set of problems, I think that the analysis lacks clarity in assessing the actual merits of the underlying methodology, and is sloppy in some aspects. Please see the Questions below for a more detailed breakdown of this aspect.

Theoretical Claims: I have checked the detailed proofs of the main theoretical claims in some detail, and these seem to be correct and sound.

Experimental Designs Or Analyses: I have not empirically checked the validity of the proposed experimental analyses.

Supplementary Material: Not provided

Relation To Broader Scientific Literature: The relevant connections with existing scientific literature are already appropriately discussed in the paper.
Essential References Not Discussed: Applications of (fixed) hypergraph structures in improving learning performance have already been investigated in the MARL literature (e.g., see [[Castellini et al., 2021]](https://link.springer.com/article/10.1007/s10458-021-09506-w), [[Boehmer et al., 2020]](https://proceedings.mlr.press/v119/boehmer20a.html) and [[Li et al., 2021]](https://dl.acm.org/doi/10.5555/3463952.3464044)), but this line of work is neither mentioned nor discussed. Also, attention mechanisms to focus agents solely on salient information about the others have been proposed for policy-based methods as well (e.g., [[Iqbal and Sha, 2019]](https://proceedings.mlr.press/v97/iqbal19a.html)).

Other Strengths And Weaknesses: None

Other Comments Or Suggestions:
- Equation (7): $\alpha ie\rightarrow\alpha_{ie}$ (also at the end of the following text block and in Equation (13))
- Equation (13): $\mathcal{L}att\rightarrow\mathcal{L}_{att}$
- Equation (13): $\mathcal{L}task\rightarrow\mathcal{L}_{task}$

Questions For Authors:

**Q1:** I think an aspect that is not highlighted is that the proposed overall framework is not amenable to decentralized execution: both the value-based and policy-based instantiations of it require the agents to feed on the group-aware representations $h_i$, which requires the centralized module for spectral clustering and the hyper-graph network (and the consequent centralized collection of agents' histories and actions) in order to be computed. I think that this aspect should be clearly stated and discussed, so as to avoid confusion in the reader and wrong positioning in the existing MARL landscape.

**Q2:** Your method, although learning higher-order decompositions, still produces partitions of the agents (indeed, produced by a spectral clustering algorithm), and thus limits the impact of one agent to only a single subset of the others.
In higher-order factorizations, however, it is common to have agents that are included in different overlapping factors with diverse subsets of other agents in each of them. I think the potential effects of this, and some general reflections on this aspect, should be discussed in the paper, as these help in assessing potential use-cases and general limitations of the proposed algorithm.

**Q3:** In Equation (4), the impact of $k$ is not apparent. Where does it influence the optimization problem? How are $a(i)$ and $b(i)$ using it (if they are)?

**Q4:** If my understanding of all the SMAC baselines is correct (I'm not very familiar with VAST and GoMARL), you are comparing only against CTDE algorithms, which thus retain a completely decentralized execution scheme. This does not look like an entirely fair comparison, as your proposed method instead uses centralized information to feed the agents' $Q$-functions. Although I appreciate that these are some popular and strong baselines, and comparing to them is useful in assessing the increased performance of your algorithm, I think that also comparing against some centralized-execution method would have been helpful in clearly assessing to what extent these improvements stem from the hyper-graph approach you are actually proposing, and what is instead due to simply having more information at execution time. On this point, Figure 5 is a step forward, but definitely not sufficiently discussed or interpreted.

**Q5:** "The proposed method maintains consistent coordination efficiency across both scenarios through adaptive group formation, enabling effective information flow while avoiding communication overhead." Isn't your proposed method using a form of communication after all? In the compared baselines, the communication happens between agents directly, while in yours it is between agents and the centralized components.
Given that there is an information exchange in your method as well, how is it avoiding communication overheads?

**Q6:** In the Traffic Junction problem, how is the convergence-step measure assessed? When is an algorithm considered to have converged? This aspect needs more explanation to be able to properly appreciate the results.

**Q7:** In Appendix C.1, it seems a bit obvious that the parameters you are using for the mixing network are the same as QMIX's, as the mixer network structure you use is indeed that of QMIX. However, you do not seem to take account of the additional parameters of the attention HGCN, which probably would change the following comparison and discussion on your low parameter count. Perhaps I am not understanding the way in which you count the mixing parameters here, and the hyper-graph network's ones are included there?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We sincerely appreciate your thoughtful questions that will improve our manuscript.

Q1: CTDE Compliance and Execution Requirements

Our method strictly adheres to CTDE without requiring centralized execution:
1. During execution, agents' decisions depend only on the local observation history ($\tau_i$) and group-derived features ($h_i$), not the global state.
2. The group structure converges during training (Theorem 2), after which agents operate with stable group membership, exchanging information only within established groups.
3. The HGCN functions as a structured information processor during training, creating enhanced local representations without execution-time centralization.

This approach is similar to other CTDE methods that enrich agent representations while maintaining decentralized execution. Importantly, once training is complete, the learned group structure remains fixed during deployment, ensuring true decentralized execution with only local observation exchange within established groups.

Q2: Agent Group Membership Limitations

Your observation about overlapping groups is astute. We considered allowing agents to belong to multiple groups, which better captures multidimensional coordination relationships. Implementation complexity, computational overhead, and convergence stability led us to adopt hard partitioning as a first implementation. We agree that real-world coordination is often overlapping. This limitation opens directions for future research: (1) soft/fuzzy clustering, (2) hypergraph structures with overlapping hyperedges, and (3) multi-membership representation learning. We've added a discussion of this limitation in our revised manuscript.

Q3: Role of $k$ in Equation (4)

In Equation (4), $k$ directly influences the optimization as the primary variable.
For each $k \in [k_{\min}, k_{\max}]$, spectral clustering partitions the agents into $k$ groups, which determines how $a(i)$ and $b(i)$ are calculated:
1. $a(i)$: the mean distance between agent $i$ and the other agents in its group.
2. $b(i)$: the minimum mean distance from agent $i$ to any other group.

Selecting the $k$ that maximizes the silhouette coefficient balances intra-group cohesion and inter-group separation.

Q4: Fairness of Experimental Comparisons

Our experimental comparisons remain fair because:
1. Our method maintains CTDE compliance as explained in Q1, operating under the same paradigm constraints as the baselines.
2. The ablation studies in Figure 5 show that the improvements derive from structured information processing rather than increased information access; the "Single Group" variant receives the same information but underperforms without adaptive grouping.
3. We have enhanced the ablation studies (Figure 5) with experiments comparing our approach against standard graph structures. These new results isolate the specific benefits of the dynamic hypergraph structure versus static information sharing.
4. Our experiments already include CommNet as a baseline, which implements structured communication within CTCE. We have expanded the discussion of these comparative results to highlight our method's specific contributions.

Q5: Communication Efficiency Claims

You correctly identified our imprecise statement: "avoiding communication overhead" should read "providing a more efficient communication structure." Traditional methods require $O(n^2)$ communication complexity, while our hypergraph structure reduces this to $O(n^2/k)$ through dynamic grouping, explaining the superior performance in large-scale environments. Runtime comparisons and an efficiency analysis have been added to the appendix.
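The silhouette-based selection of $k$ described in Q3 can be sketched as follows. This is a simplified illustration, not the paper's implementation: a plain k-means with deterministic farthest-point initialization stands in for clustering on the spectral embedding, and all function names are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Minimal Lloyd's algorithm with farthest-point initialization.
    # In the paper's pipeline this would run on the spectral embedding
    # of the agent-similarity graph, not on raw features.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def silhouette(X, labels):
    # Mean of (b(i) - a(i)) / max(a(i), b(i)) over agents i, where
    # a(i) = mean distance to agents in i's own group and
    # b(i) = minimum mean distance to any other group.
    groups = set(int(g) for g in labels)
    if len(groups) < 2:
        return -1.0
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)
    scores = []
    for i in range(len(X)):
        own = (labels == labels[i])
        own[i] = False
        if not own.any():          # singleton group contributes 0
            scores.append(0.0)
            continue
        a = D[i, own].mean()
        b = min(D[i, labels == g].mean() for g in groups - {int(labels[i])})
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def select_k(X, k_min=2, k_max=4):
    # Pick the k in [k_min, k_max] that maximizes the silhouette coefficient.
    return max(range(k_min, k_max + 1), key=lambda k: silhouette(X, kmeans(X, k)))
```

On two well-separated clusters of agent histories, `select_k` recovers the natural group count, illustrating why the exact $k_{\min}/k_{\max}$ bounds matter less than the silhouette criterion itself.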
Q6: Convergence Definition in Traffic Junction

In the Traffic Junction experiments, convergence is defined as the point at which an algorithm first reaches 90\% of its final performance and maintains this level for 5 consecutive evaluation epochs, capturing stable convergence rather than temporary spikes.

Q7: Parameter Analysis Completeness

Our focus on mixing network parameters was a deliberate choice to isolate the core architectural components that directly influence value factorization. We have expanded Appendix C with a comprehensive parameter analysis that includes the HGCN components. In 5m\_vs\_6m, although the HGCN adds 104,632 parameters, it increases computational time by only approximately 35\% due to efficient parameter reuse during group reorganizations. Our ablation studies demonstrate that the performance gains stem primarily from the dynamic hypergraph structure's efficient information processing rather than simply increased model capacity, highlighting the importance of architectural design over raw parameter count in multi-agent coordination.

Missing References \& Notation Corrections

Thank you for highlighting relevant prior work and notation inconsistencies. We have incorporated discussions of the suggested references in our revision and corrected all notation issues.
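The convergence criterion defined in Q6 can be made concrete with a small sketch; the function name and the evaluation-curve format are illustrative assumptions, not the authors' code.

```python
def convergence_step(curve, frac=0.9, patience=5):
    # Index of the first evaluation where performance reaches `frac` of the
    # final performance and stays at or above it for `patience` consecutive
    # evaluations; None if the run never stabilizes.
    target = frac * curve[-1]
    run = 0
    for i, v in enumerate(curve):
        run = run + 1 if v >= target else 0
        if run >= patience:
            return i - patience + 1
    return None
```

Note how a single early spike above the target resets the streak, so temporary performance jumps are not counted as convergence.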
Summary: The paper presents a novel framework that combines dynamic spectral clustering with hypergraph neural networks to address the multi-agent coordination problem in Multi-Agent Reinforcement Learning. The framework performs spectral clustering on agents' state histories, dynamically constructing and updating the hypergraph structure. It enhances information processing through a hypergraph convolutional network and incorporates an attention mechanism to improve the selective processing of information. The architecture is applicable to both value-based and policy-based paradigms. The paper claims that this method outperforms state-of-the-art MARL approaches in sample efficiency and final performance across multiple tasks.

## Update after Rebuttal

I appreciate the authors' response and their efforts to address the raised concerns. However, key issues related to the experimental section remain unresolved. The authors evaluate their method on only three scenarios from the SMAC benchmark. Given this limitation, I am maintaining my current score, which already reflects the highest assessment I can reasonably provide under the current circumstances.

Claims And Evidence: Yes. The paper derives the proposed claims through numerous formulas and proofs, and designs multiple experiments to validate the effectiveness of the method.

Methods And Evaluation Criteria: Yes. The method primarily focuses on improving multi-agent coordination. The selected baselines and experiments are commonly used scenarios in multi-agent systems.

Theoretical Claims: Yes. The paper combines a large number of references and formal proofs to verify the correctness of its claims, especially in Section 3.2, where a series of theorems demonstrates the effectiveness of the dynamic spectral clustering mechanism in both computational efficiency and learning quality.

Experimental Designs Or Analyses: Yes.
This paper includes ablation studies and various coordination environments to test the effectiveness and scalability of the proposed method. Evaluation criteria such as convergence speed, sample efficiency, and final performance are applied to assess the effectiveness of the proposed method.

Supplementary Material: N/A

Relation To Broader Scientific Literature:
1. Multi-agent group recognition using adaptive dynamic spectral clustering.
2. Using hypergraphs to represent the relationships between multiple agents has become very common.
3. Hypergraph attention convolutional neural networks have powerful feature extraction ability and can capture complex relationships between agents.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths:
1. The integration of dynamic spectral clustering with hypergraph convolution networks is novel and addresses the limitations of modeling dynamic relationships between multiple agents.
2. The formulas and theorems are introduced clearly and are proved and derived in full.
3. The paper is well-written and clear in its exposition of the methods and experiments.

Weaknesses:
1. The structure and annotations of Figure 1 are a bit confusing and difficult to understand.
2. The discussion and experiments for the implementation in value-based and policy-based frameworks are still somewhat thin. As a major innovation point, this should be discussed and compared in more depth.

Other Comments Or Suggestions: Will dynamic grouping affect efficiency and performance when the number of agents increases? Is it possible to add an experiment with different numbers of agents in the same scenario for further discussion?

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and constructive comments. Your insights have helped us identify areas for improvement in our manuscript.

Weakness 1: Regarding Figure 1

Thank you for your feedback regarding the confusing aspects of Figure 1. We have made targeted improvements to better illustrate two key elements of our approach:
1. The periodic nature of our grouping mechanism (rather than updating every timestep).
2. The mapping from identified groups to the hypergraph structure.

The stability threshold ($\delta$) was already included in the figure, and we have enhanced its visual prominence to emphasize this critical control mechanism that prevents excessive group fluctuations.

Weakness 2: Value-based and Policy-based Framework Implementations

Thank you for highlighting this important area for improvement. We have enhanced Sections 3.1 and 3.5 with:
1. More comprehensive descriptions of our method's integration into both paradigms.
2. An explanation of how HGCN-extracted features enhance Q-function expressiveness in value-based frameworks, enabling more accurate modeling of intra-group coordination patterns.
3. A parallel analysis of how HGCN-extracted features improve policy networks in actor-critic frameworks by providing higher-order relationship information for more effective action selection.

To address your concern about limited experimental comparison, we have added MAPPO as a policy-based baseline in the SMAC scenarios (achieving win rates of 0.93, 0.74, and 0.16 on 3s\_vs\_5z, 5m\_vs\_6m, and 3s5z\_vs\_3s6z respectively). This addition enables direct comparison between value-based and policy-based approaches in identical environments.
This expanded section better illustrates the versatility of our approach across different learning paradigms and strengthens this key contribution.

Other Comments: Scalability with Increasing Agent Numbers

Regarding your question about agent scalability: the SMAC environment inherently addresses this through agent deaths during episodes, creating natural variation in agent counts within the same scenario. Our method successfully handles this dynamic aspect, as demonstrated in the performance results. Our dynamic grouping mechanism maintains efficiency with larger agent numbers through several design choices:
1. Periodic pre-computation of groupings using collected state histories rather than real-time updates.
2. Strategic clustering intervals and stability thresholds that control update frequency.
3. Updates triggered only when specific conditions are met, preventing computational bottlenecks.

Our experiments demonstrate scalability across different agent populations:
1. Predator-Prey environment: 5 agents (Fig. 3a) versus 10 agents (Fig. 3b).
2. Traffic Junction: scenarios with up to 5 and 10 agents (Table 1).
3. SMAC: varying numbers and types of agents across scenarios.

The results consistently show our method's performance advantages becoming more pronounced as agent numbers increase, particularly in large-scale scenarios, validating its strong scalability. We appreciate the opportunity to address these points and have incorporated these improvements in our revised manuscript.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' response. However, there are still some concerns regarding the current version of the paper. The most critical issue, which directly affects my overall evaluation, lies in the experimental section. In the experiments, the authors only select three scenarios from the SMAC environment. However, these scenarios do not appear to have a direct connection to the proposed method, and it is unclear whether they are representative.
As a result, the rationale behind selecting these specific scenarios remains vague. Moreover, the SMACv2 benchmark, which addresses many known limitations of SMAC, is generally recommended as a replacement and should be considered. Additionally, the experimental results on Traffic Junction and GRF are presented only in tabular form, lacking intuitive learning curves that would provide more insight into the training dynamics and performance over time. In summary, I will maintain my current score, as it already reflects the highest score I can reasonably assign given these limitations.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate your continued feedback and the opportunity to address your concerns. Thank you for recognizing the novelty of our approach in your initial review. Our work focuses on integrating dynamic spectral clustering with hypergraph convolution networks to create an adaptive coordination framework for multi-agent systems.

Regarding the SMAC scenario selection, we carefully chose these three scenarios to test distinct aspects of our theoretical framework:
1. 3s_vs_5z: tests dynamic grouping effectiveness under delayed rewards and sparse coordination requirements.
2. 5m_vs_6m: evaluates hypergraph structure advantages in precise tactical coordination.
3. 3s5z_vs_3s6z: assesses our method's capabilities with heterogeneous units and complex coordination patterns.

These scenarios represent a progression of coordination complexity (from hard to super hard) that thoroughly evaluates our method's adaptability across diverse multi-agent situations. The SMAC benchmark remains widely used in recent literature and provides well-established baselines for fair comparison with existing methods. While we acknowledge SMACv2's enhancements, our selected SMAC scenarios effectively validate our theoretical contributions and demonstrate our method's advantages across diverse coordination challenges.
We appreciate your suggestion regarding SMACv2 and will explore this environment in future work.

Regarding result presentation, we adopted a multi-faceted approach to provide a comprehensive evaluation of our method. Specifically: (1) the learning curves for SMAC and Predator-Prey (Figs. 2-3) visualize temporal training characteristics, with quantitative metrics highlighting convergence speed and stability patterns; (2) the tabular results for the Traffic Junction and GRF scenarios adopt the standardized metrics (success rate, convergence steps) and presentation format established in prior baseline papers. While aligning with these prior papers, the different visualization formats provide insights that would not be obvious from any single approach. The evaluation is further strengthened by Figure 4, which analyzes the learned group structures and their evolution over time, and Figure 5, which presents ablation studies isolating the impact of our key components. By combining performance analysis with structural insights, we provide a thorough evaluation across environments with varying coordination challenges. This multi-dimensional approach demonstrates not just that our method performs well, but also why and how it achieves superior results.

The theoretical foundations of our work draw from established principles in spectral graph theory and hypergraph representation, but their novel application and adaptation to the MARL domain represent significant contributions. Our framework provides three important theoretical guarantees:
1. The Clustering Approximation theorem provides a mathematically rigorous bound on grouping quality, ensuring our dynamic spectral clustering produces near-optimal agent organizations without requiring prior domain knowledge.
2.
The Convergence theorem proves that our adaptive update mechanism achieves stable grouping structures in finite time, addressing a critical challenge in dynamic coordination systems, where frequent restructuring can destabilize learning.
3. The Quality Preservation theorem establishes a direct relationship between grouping structure quality and learning performance, providing theoretical justification for why our approach improves sample efficiency and final performance.

By adapting and extending these theoretical principles to the specific challenges of multi-agent reinforcement learning, our work provides a mathematically grounded framework for dynamic coordination. Our experimental design, spanning both value-based and policy-based paradigms across diverse environments, provides comprehensive empirical validation of these theoretical guarantees. The consistent performance improvements across all tested scenarios demonstrate that our approach effectively addresses the core challenge of dynamic relationship modeling in multi-agent systems. Our work is the first to systematically integrate dynamic spectral clustering with hypergraph convolution networks in the MARL domain, enabling both adaptive group formation and efficient information processing within an end-to-end trainable architecture. We commit to releasing our code and implementation details to benefit the research community upon acceptance. Thank you again for your valuable feedback, which has helped strengthen our manuscript.
Summary: This paper proposes a multi-agent reinforcement learning framework based on dynamic spectral clustering and a hypergraph coordination network, aiming to address the challenges of dynamic relationship modeling and efficient information exchange in complex collaborative tasks.

Claims And Evidence: The core propositions in the paper are backed by ample evidence, yet certain technical details and generalizability aspects merit further discussion. Several issues remain to be clarified: (1) the computational overhead of spectral clustering during each training phase remains unquantified, potentially impeding widespread adoption; (2) certain proofs depend on idealized assumptions (e.g., complete observation, deterministic transitions), necessitating a discussion of their robustness under real-world deviations; (3) the sensitivity to hyperparameters has not been analyzed, potentially obscuring the true contributions of the structural optimization.

Methods And Evaluation Criteria: The dynamic hypergraph coordination network and its evaluation criteria have significant implications for addressing current collaboration challenges in MARL. For the first time, the integration of spectral clustering and hypergraph neural networks addresses two critical challenges: dynamic grouping and high-order relationship modeling. Covering four types of differentiated benchmarks, the metric design balances performance and interpretability, demonstrating the superiority of the method in complex collaborative tasks. It is suitable for real-world scenarios that require flexible coordination, such as logistics and transportation, but further research is needed to optimize computation and adapt to physical constraints.

Theoretical Claims: Yes, I have checked the proofs.
Experimental Designs Or Analyses: The experimental design of the paper excels in scenario coverage and baseline diversity, bolstering the assertion of the method's effectiveness in dynamic collaborative tasks. Nevertheless, deficiencies in communication-overhead analysis, hyperparameter influence, and ablation depth could compromise the comprehensiveness and reproducibility of the conclusions. It is recommended to supplement the aforementioned analyses to bolster the persuasiveness of the experiments.

Supplementary Material: Based on the supplementary materials provided with the paper, I have analyzed the following key parts in detail:
1. Theoretical proofs (Appendix A): A.1-A.3 fully derive the three core theorems: the spectral clustering approximation ratio (an $O(\log k)$ approximation guarantee), finite convergence of dynamic grouping (expected number of updates $O(1/\delta)$), and the upper bound on the value function error ($\gamma a \log k$). The proofs adopt classical methods from spectral theory, such as the Cheeger inequality and potential-function convergence analysis.
2. Experimental details (Appendix B): detailed parameter settings for the four task types: SMAC unit attributes (such as Stalker range/attack power), Predator-Prey's negative reward mechanism, Traffic Junction's sparse observation range (5x5 grid), and GRF's action space decomposition (19 discrete actions). However, the undisclosed hyperparameter search ranges (such as the weights $\lambda_1$/$\lambda_2$) may affect reproducibility.
3. Parameter analysis (Appendix C): the comparison shows the method achieved a 90% win rate with 63K parameters in the 3s5z scenario, significantly better than QPLEX's 243K/40%. However, the computational cost of dynamic grouping is not analyzed.

Relation To Broader Scientific Literature: The paper makes several contributions that build upon and interact with the broader scientific literature in multi-agent reinforcement learning.
Essential References Not Discussed: The paper covers the most relevant literature in the field of multi-agent reinforcement learning (MARL). However, some essential references can be considered:
1. Yang, Q., Dong, W., Ren, Z., Wang, J., Wang, T. and Zhang, C., 2022. Self-organized polynomial-time coordination graphs. In International Conference on Machine Learning (pp. 24963-24979). PMLR.
2. Wang, T., Zeng, L., Dong, W., Yang, Q., Yu, Y. and Zhang, C., 2021. Context-aware sparse deep coordination graphs. arXiv preprint arXiv:2106.02886.
3. Liu, Z., Wan, L., Sui, X., Chen, Z., Sun, K. and Lan, X., 2023. Deep hierarchical communication graph in multi-agent reinforcement learning. In IJCAI (pp. 208-216).

Other Strengths And Weaknesses: This article significantly enhances the flexibility and efficiency of multi-agent collaboration by innovatively combining dynamic spectral clustering with hypergraph neural networks. The theoretical analysis is solid, and the experimental verification is ample. Although there are shortcomings related to computational overhead and practical deployment verification, its strengths in algorithm design, theoretical contributions, and application potential position it as a significant advancement in the field of MARL.

Other Comments Or Suggestions: The paper demonstrates outstanding innovation in methods and comprehensiveness in experiments but requires correction of detail errors and better reproducibility descriptions. It is recommended to prioritize terminology consistency and formula standardization, followed by supplementing experimental configuration details to enhance academic rigor.

Questions For Authors:

Question 1: Dynamic spectral clustering requires calculating similarity matrices and performing eigendecomposition in each round of training, which may result in significant computational overhead when the number of agents $n$ is large (such as $n > 50$).
The paper does not mention any optimization measures for this, such as approximate spectral clustering methods.

Question 2: How does the setting of the grouping update threshold $\delta$ in formula (5) affect performance? Is it necessary to manually tune $\delta$ for different tasks?

Question 3: How do the dynamic clustering parameters (such as $k_{\min}$/$k_{\max}$) and the number of attention heads affect performance? Do parameters need to be retuned for different tasks?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and insightful questions. Your feedback has helped us identify important areas for clarification and improvement.

Q1: Computational Overhead of Spectral Clustering

We appreciate your question regarding computational scalability. Our implementation addresses the potential spectral clustering overhead through complementary optimizations. Our method executes clustering operations only at fixed intervals, combined with a stability-threshold mechanism that triggers updates only when the proportion of agents changing groups exceeds $1-\delta$. This approach reduces computational demand as training progresses: clustering operations become increasingly infrequent as group structures stabilize, and are eventually eliminated in the later training phases. These optimizations ensure that the computational cost of spectral clustering becomes negligible in practice. While our measurements show the HGCN component introduces an approximately 35\% increase in total computational time, this trade-off is justified by the performance improvements across all test environments. We acknowledge that scaling to larger agent populations would require additional considerations, and we thank the reviewer for highlighting this important research direction, which we have addressed in the discussion of future work in the revised manuscript.

Q2: Grouping Update Threshold ($\delta$)

Thank you for your thoughtful question about the grouping update threshold in formula (5). The threshold $\delta$ functions as a stability control mechanism that determines when to trigger group restructuring: a group update occurs only when the proportion of agents that would change groups exceeds $1-\delta$. This prevents computational overhead from frequent minor reorganizations while still allowing adaptation when significant changes in coordination patterns emerge.
In our implementation, we use $\delta = 0.6$, meaning group structures update only when at least 40\% of agents would change their group assignment. This value balances computational efficiency with adaptivity. Importantly, this single threshold value performs effectively across all tested environments without requiring task-specific tuning. This robustness stems from:
1. The complementary periodic evaluation (controlled by \texttt{clustering\_interval}).
2. Quality-based group selection using silhouette scores.
3. The self-correcting nature of the reinforcement learning process.

This empirical stability aligns with our theoretical analysis in Theorem 2, which shows that $\delta$ primarily affects the expected number of updates ($O(1/\delta)$) rather than the final convergence quality.

Q3: Parameter Sensitivity

Thank you for raising this important question about parameter sensitivity.

Dynamic clustering parameters ($k_{\min}$/$k_{\max}$): these define the search-space bounds rather than fixed structural constraints. Our method employs silhouette score optimization to automatically select the optimal number of clusters within this range. We follow a consistent principle: $k_{\min} = 2$ (allowing at minimum pair-wise coordination), while $k_{\max}$ scales with the agent count (approximately $n/2$). For SMAC environments with 5 agents, we use $k_{\max} = 3$. This silhouette-based selection significantly reduces sensitivity to the exact $k_{\min}$/$k_{\max}$ values, as the hypergraph structure naturally adapts to reflect the discovered group dynamics.

Attention mechanism: our hypergraph convolution network uses a 4-head attention mechanism, balancing representational capacity with computational efficiency. This allows agents to selectively attend to different aspects of group information simultaneously.

Network capacity parameters: following standard practice, we adjust the HGCN dimensions based on environment scale (e.g., from 96 dimensions in smaller environments to 128 in larger ones).
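The stability-threshold rule described above can be sketched as follows, matching the stated example ($\delta = 0.6$, update when at least 40\% of agents would change groups). The function name is hypothetical, and the sketch assumes cluster labels have already been aligned between the two partitions.

```python
def should_regroup(prev_labels, new_labels, delta=0.6):
    # Trigger a group update only when the fraction of agents that would
    # change assignment exceeds (1 - delta), i.e. when fewer than delta of
    # the agents keep their group. Assumes labels are aligned between the
    # two partitions (label matching, e.g. Hungarian assignment, is omitted).
    changed = sum(p != n for p, n in zip(prev_labels, new_labels))
    return changed / len(prev_labels) > 1 - delta
```

With the default $\delta = 0.6$, a single agent moving in a five-agent team (20\% change) keeps the current grouping, while a full reshuffle triggers a re-clustering.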
This follows a simple principle: network capacity scales proportionally with environment complexity and agent count.

Loss function weights: regarding $\lambda_1$ and $\lambda_2$, we use $\lambda_1 = 0.001$ for the group consistency loss and $\lambda_2 = 0.01$ for the attention regularization. These values ensure the auxiliary losses provide meaningful gradients without overshadowing the primary task objective. The same values work effectively across all environments.

Despite these capacity adjustments, our method's core algorithmic parameters remain consistent across environments. The dynamic nature of our approach, with spectral clustering, attention-based information processing, and stability thresholds, creates inherent adaptability that reduces the need for environment-specific tuning. We have provided the full hyperparameter specifications in the appendix of the revised manuscript to ensure reproducibility.

We appreciate the reviewer's comprehensive feedback and reference suggestions, which we have incorporated into the manuscript to strengthen the literature review and better position our work within the field.
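The auxiliary-loss weighting described above can be written compactly as a sketch. The $\mathcal{L}_{\text{group}}$ symbol follows the notation quoted elsewhere in this discussion; $\mathcal{L}_{\text{task}}$ and $\mathcal{L}_{\text{attn}}$ are our shorthand for the underlying learner's objective and the attention regularizer, not necessarily the paper's exact notation.

```latex
\mathcal{L}_{\text{total}}
  = \mathcal{L}_{\text{task}}
  + \lambda_1 \, \mathcal{L}_{\text{group}}
  + \lambda_2 \, \mathcal{L}_{\text{attn}},
\qquad \lambda_1 = 0.001, \quad \lambda_2 = 0.01 .
```

The small weights keep the auxiliary gradients roughly one to three orders of magnitude below the task gradient, which is consistent with the authors' statement that the auxiliary losses should not overshadow the primary objective.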
Summary: This work considers the problem of coordination in multi-agent systems. It proposes to construct hypergraphs (i.e., graphs with n-ary rather than binary relations) based on agent histories and to use a graph convolution technique over this structure. This is motivated by the fact that agents need to form groups to solve tasks and change between groups dynamically. For the hypergraph construction step, the authors propose an approximate spectral clustering technique and prove certain properties regarding its convergence and solution quality. The features obtained through the convolution are used as additional inputs to Q-networks and policy networks alongside histories/actions. The authors demonstrate that the approach generally obtains better performance than a suite of CTDE and communication-enhanced MARL techniques on a variety of benchmarks.

Claims And Evidence: The claims in the abstract and introduction are supported by appropriate evidence in the paper text.

Methods And Evaluation Criteria:

M1. The way in which the similarity matrix $W$ is constructed from state histories (Section 3.2) should be specified. Furthermore, the format of the states and observations should be given for each environment.

M2. The work is compared to a variety of CTDE methods, but it is not clear to me whether the proposed method is in fact CTDE itself. This is not discussed in detail in the paper. My understanding is that it is *not*, given that the method still needs to perform the clustering step over histories to obtain the graph structure and then the features $h_i$, which requires knowing the histories of all the agents. I do not see how this can be done in a decentralised way at inference time.

M3. Furthermore, given that the spectral clustering requires knowledge of all the agent histories, the computational cost of this step will increase with longer episodes.
This potential drawback should be discussed (and might be mitigated by, e.g., limiting the history to a time window at most X steps in the past).

Theoretical Claims: I understand the statements of the theoretical claims but not the details of the proofs themselves.

Experimental Designs Or Analyses:

E1. The method is not compared with any other techniques that leverage graph structure. The need for using *hypergraphs* as opposed to standard (binary-relation) graphs should be demonstrated in an ablation, as the claims around the need for hypergraphs are central to the narrative of the paper.

E2. The work does not give any details about the hyperparameters that were used and how they were tuned. Given that the work claims improvement over state-of-the-art methods, all algorithms should be given a comparable budget for hyperparameter tuning to make the comparison fair.

E3. The overhead of the spectral clustering and graph convolution steps should be quantified. While the method does seem to obtain an improvement in sample efficiency, in practice the wall-clock time overhead may be significant.

Supplementary Material: I have briefly read all parts of the appendix.

Relation To Broader Scientific Literature: The work connects MARL and graphs in an interesting and novel way. I am not aware of similar works in MARL that use hypergraphs, and the reasoning for using n-ary relations is quite convincing.

Essential References Not Discussed: To the best of my knowledge, all essential references in this area are discussed.

Other Strengths And Weaknesses: The paper in general is fairly well organised and well written, with a novel idea and promising experiments.

Other Comments Or Suggestions:

C1. Regarding the experiment in 4.5: as far as I understand, there is no penalty for changing "teams". This may be unstable and could be problematic in more complex tasks. Could you specify how the information was aggregated for each point on the x-axis?

C2.
Consider giving your method a name so you can avoid "Our Method" etc. in tables and figures.

C3. The running title (top of each page) still uses the default template title.

C4. A space is missing before the start of many citations; consider adding a space or "~" before `\cite` commands.

C5. Subscripts in math notation use a mix of `\text` and plain characters, e.g. $\mathcal{L}_{\text{group}}$ and $\mathcal{L}_{group}$. Consider being consistent (the first is preferable in my opinion).

C6. Typos: "Figure 3 reveal", "Table 1 reveal", "Table 2 illuminate".

C7. The way of combining task-specific and structural losses is quite standard; I would not view it as a contribution of the paper (as claimed in the conclusion).

Questions For Authors: Please address M1-M3, E1-E3, and C1 above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback, which has helped us improve our manuscript significantly.

M1: Similarity Matrix Construction and State/Observation Formats

In Section 3.2, we presented our dynamic grouping framework using the normalized cut problem. The similarity matrix $W$ is constructed using a $k$-nearest-neighbors approach based on Euclidean distances between agents' state-history trajectories: $W_{ij}$ is positive only when agent $j$ is among agent $i$'s $k$ nearest neighbors, and zero otherwise. For consistent application across environments, state histories are normalized to ensure uniform scaling across feature dimensions before applying the $k$-nearest-neighbors method. This enables the identification of meaningful coordination patterns regardless of the specific state representation format. We have added these details to the manuscript and provided complete environment setup descriptions in Appendix B.

M2: CTDE Compatibility

Thank you for this thoughtful question. Our approach follows and extends the CTDE framework. Our grouping mechanism exhibits convergence properties (Theorem 2, Figure 4), with group structures stabilizing during training and eventually ceasing to update. This convergence enables decentralized execution: once training completes and the group structure stabilizes, each agent's decision process relies only on its local observations enhanced by pre-computed group-aware representations. The attention-enhanced HGCN enables agents to selectively process structured information within their established groups rather than relying on centralized information. Our contribution enhances the representational capacity of the CTDE framework by discovering and leveraging higher-order dynamic relationships. We have clarified this point in Section 3.1 of the revised manuscript.
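A minimal numpy sketch of the kNN similarity construction described in M1. The Gaussian kernel weighting and the function name are illustrative assumptions on our part, since the response above only specifies the kNN sparsity pattern and the use of Euclidean distances.

```python
import numpy as np

def knn_similarity(histories, k=2):
    # histories: (n_agents, d) array of flattened, normalized state histories.
    # W[i, j] > 0 only if agent j is among agent i's k nearest neighbours;
    # the Gaussian weight on those entries is an assumed choice.
    n = len(histories)
    D = np.linalg.norm(histories[:, None] - histories[None], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(D[i])[1:k + 1]  # skip the agent itself
        W[i, neighbours] = np.exp(-D[i, neighbours] ** 2)
    return np.maximum(W, W.T)  # symmetrize before spectral clustering
```

Symmetrizing with an element-wise maximum keeps the matrix a valid input for the normalized-cut formulation even when the kNN relation itself is asymmetric.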
M3: Computational Efficiency We have implemented several strategies to optimize computational costs:
- Limited History Window: We only utilize the most recent state\_history\_length timesteps for clustering (5,000 steps for SMAC).
- Intermittent Updates: Clustering occurs at fixed intervals (100,000 steps for SMAC).
- Convergence-based Termination: As shown in Figure 4, group changes decrease substantially in mid-to-late training phases. We implement complete termination of the grouping mechanism after 1M steps.

These mechanisms balance computational efficiency with the advantages of dynamic grouping. Implementation details have been added to Section 3.2 of the revised manuscript. E1: Hypergraphs vs. Standard Graphs Our baseline comparisons include standard graph-based methods (MAGIC, GA-Comm). In Section 4.5, following your valuable suggestion, we have added a comparison between hypergraphs and standard graphs under identical grouping strategies in Figure 5, which demonstrates the advantages of hypergraph-based representations. E2: Hyperparameters We have added comprehensive tables of hyperparameters to Appendix B.5, including settings for clustering parameters, network architectures, and training configurations. Key parameters include (taking the 5m\_vs\_6m scenario as an example): clustering\_interval: $100000$, state\_history\_length: $5000$, stability\_threshold: $0.6$, min\_clusters: $2$, max\_clusters: $3$, hgcn\_out\_dim: $48$, hgcn\_hidden\_dim: $64$, and hgcn\_num\_layers: $2$. E3: Computational Overhead Our clustering approach minimizes computational overhead through infrequent updates and an explicit termination mechanism, making the spectral clustering cost negligible in practice. Additionally, our HGCN implementation maximizes parameter reuse when group structures remain stable. In the 5m\_vs\_6m scenario, we add approximately 104,632 parameters while increasing computational time by only approximately 35\% compared to baseline (Ft-)QMIX.
We have included theoretical complexity analysis in Appendix A.4 and provided detailed parameter and runtime quantification for all SMAC environments in Appendix C.2, justifying this overhead. C1: Team Stability We implemented a "stability threshold" mechanism to prevent excessive team changes. Groups update only when the proportion of agents changing membership exceeds this threshold, balancing adaptive behavior with structural stability. For Figure 4 data aggregation, we collected group information at regular intervals throughout training. The upper portion displays group structure evolution over normalized time steps (different colors represent distinct groups), while the lower matrix presents co-occurrence probabilities calculated by averaging group memberships across collected time points. This reveals stable coordination patterns—for instance, agents 0 and 1 exhibit high co-occurrence probability, indicating consistent collaboration. We have addressed all your formatting and notation suggestions to improve our manuscript's clarity and precision, and we have named our method HYGMA (HYpergraph Grouping for Multi-Agent coordination). --- Rebuttal Comment 1.1: Comment: Thanks for your response! A few points below. Importantly, the paper pdf has not been updated, so many changes referred to in this reply are missing from the most recent OpenReview version. - M2: It is somewhat hard to tease this apart from the response, but the execution is indeed not decentralized, but "local" or "partially decentralized" given observations of agents in the same team must still be passed through the GNN. This should be clearly acknowledged and clarified. In light of this, it is hardly surprising that the method performs better, given it uses additional information compared to vanilla CTDE. Comparing with a method with the same amount of information (such as using standard instead of hypergraphs, E1) is needed in my opinion. - E1: missing from the figure.
- E2: even though the authors give the hyperparameters, tuning (if any) details were not provided, and my original point about tuning being needed to keep the comparison fair still stands. - E3: "only 35%" is a favorable interpretation. - Overall, after probing into some details, it appears there are quite a few tricks (stability, intermittent updates, ...) that are needed to make the method work. This contributes to the general feeling that details are being buried. I am retaining my score for now but would be happy to have another look if the paper is updated. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up questions! **Regarding PDF updates:** We understand that the rebuttal phase doesn't permit paper updates, which explains why the changes mentioned aren't visible on OpenReview. We hope that our explanations in this response adequately address your questions. **Regarding M2 (CTDE Compatibility):** As in our last rebuttal, our approach follows and extends the CTDE framework. We agree with your characterization of the execution phase as 'partially decentralized', which is indeed a deliberate extension of traditional CTDE. In our method, agents within the same group share information through HGCN to generate group-aware representations, a design choice that balances coordination capabilities with distributed execution. Our method maintains the core principles of CTDE: 1. Training phase leverages global information (including dynamic grouping and hypergraph construction) 2. Execution phase operates with a more restricted information structure (fixed hypergraph and intra-group communication) **Regarding E1:** The comparison in Figure 5 demonstrates that, even within the same partially decentralized execution framework, hypergraph structures provide significant advantages (win rate 95% vs. 90%; figure available at https://anonymous.4open.science/r/rebuttal-188F/).
This validates our core claim: higher-order relationship modeling is crucial for multi-agent coordination, particularly in SMAC tasks requiring coordinated agent actions. **Regarding E2:** We implemented rigorous hyperparameter tuning for all methods to ensure fair comparison. Our complete hyperparameter table is provided in the Appendix.

Table: Hyperparameters for SMAC environments

| Parameter | 3s_vs_5z | 5m_vs_6m | 3s5z_vs_3s6z |
|--------------------|----------|----------|--------------|
| Batch size | 128 | 128 | 128 |
| Buffer size | 5000 | 5000 | 5000 |
| Double Q | True | True | True |
| Epsilon anneal time| 100000 | 100000 | 100000 |
| HGCN hidden dim | 48 | 64 | 196 |
| HGCN out dim | 36 | 48 | 128 |
| HGCN num layers | 2 | 2 | 2 |
| Min/Max clusters | 2/4 | 2/3 | 2/3 |
| Clustering interval| 100000 | 100000 | 100000 |
| Stability threshold| 0.6 | 0.6 | 0.6 |
| λ₁ | 0.001 | 0.001 | 0.001 |
| λ₂ | 0.01 | 0.01 | 0.01 |

For the 5m_vs_6m scenario, we systematically tested:
- HGCN structure: hidden dimensions [32, 48, 64, 128] and layer counts [1, 2, 3], determining that hidden_dim=64, out_dim=48, num_layers=2 performed optimally
- Grouping parameters: various Min/Max cluster combinations, with Min=2 and Max=3 (approximately half the number of agents)
- Stability threshold: searched over [0.3, 0.5, 0.6, 0.7], finding 0.6 offered the optimal stability-adaptivity balance

All baseline methods underwent equivalent tuning procedures. **Regarding E3 (Computational Overhead):** Our experiments confirm additional computational overhead compared to baseline methods:

Table: HGCN additional parameters and computational overhead

| Scenario | HGCN Parameters | Computation Overhead |
|--------------|-----------------|----------------------|
| 3s_vs_5z | 65,356 | +36.47% |
| 5m_vs_6m | 104,632 | +35.33% |
| 3s5z_vs_3s6z | 391,336 | +36.95% |

This increase stems from HGCN's additional parameters, though our intermittent update design effectively manages this overhead.
While computational costs increase, our method's significant gains in sample efficiency help offset these costs. **Regarding Theoretical Foundations of Design Choices:** The design components mentioned by the reviewer each have specific theoretical or practical justifications:
- Stability threshold: In Section 3.2, Theorem 3.2 rigorously proves that grouping structures converge in finite time when using this threshold, providing solid theoretical guarantees for our dynamic grouping mechanism.
- Intermittent updates: While not directly proven in Theorem 3.2, this design applies the convergence theory in practice. Since Theorem 3.2 demonstrates grouping-structure convergence, we can reasonably set update intervals to balance computational efficiency and grouping quality, as grouping changes naturally decrease in frequency as training progresses.

These design choices demonstrate robust performance across environments. As shown in the hyperparameter table, key parameters remain consistent across environments, indicating our method doesn't rely on environment-specific fine-tuning. We greatly appreciate your thorough review and constructive feedback, which has significantly improved our work. If accepted, we will incorporate all these improvements in the final version and would be pleased to open-source our code to contribute to the MARL community.
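The stability-threshold rule discussed in C1 and in the design-choice justification above can be sketched in a few lines. This is our illustrative reading of the rebuttal's description (function name ours); in practice a label-alignment step between the old and new clusterings would precede the naive element-wise comparison, since cluster ids returned by spectral clustering are arbitrary.

```python
def maybe_update_groups(current, proposed, stability_threshold=0.6):
    """Decide whether to adopt a newly clustered grouping.

    current / proposed: lists mapping agent index -> group id.
    Per the rebuttal, groups update only when the proportion of agents
    changing membership exceeds the stability threshold; smaller
    perturbations are ignored to keep the hypergraph structure stable.
    """
    changed = sum(c != p for c, p in zip(current, proposed)) / len(current)
    return proposed if changed > stability_threshold else current
```

Combined with intermittent clustering (e.g. every 100,000 steps) and complete termination after 1M steps, this keeps the grouping overhead small while still allowing large restructurings when warranted.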
Big Cooperative Learning to Conquer Local Optima
Reject
Summary: This paper introduces Big Cooperative Learning (BCL), a strategy to circumvent local optima by exploiting multiple “views” of the same data distribution. Instead of using one global objective, BCL sets up many subtasks (e.g., marginal or conditional matching, or transformations of the features). All tasks share the same global optimum yet have distinct local minima. By randomly switching among tasks, the algorithm “destabilizes” any single task’s local optimum and converges on the global solution that satisfies all tasks simultaneously. Experiments with Gaussian mixtures show that BCL’s multi-task scheme outperforms conventional single-objective methods in avoiding mode collapse or mode-covering issues, illustrating its potential to tackle entrenched local-optima problems in both forward KL (maximum-likelihood) and reverse KL (adversarial) learning settings. Claims And Evidence: Broadly, the core theoretical claim—that multiple subtasks with a shared global optimum can help escape local minima—is backed by two-dimensional and higher-dimensional Gaussian mixture experiments. These experiments illustrate how BCL consistently converges on the true parameters in scenarios where single-objective methods often fail. In that sense, the authors do provide clear and convincing evidence for the mechanism at work within those controlled mixture-model settings. However, generalization to large-scale neural networks or “foundation model” training remains less explored. The paper does connect BCL’s multi-task idea to how modern foundation models use diverse objectives (e.g., masked or conditional predictions), but it stops short of offering the same level of direct empirical evidence in real neural architectures. 
Further, theoretical evidence proving that BCL always overcomes local optima under broader conditions is lacking, so while the main demonstration (that BCL escapes local minima in mixture models) is well supported, the broader implication—that it universally conquers local optima—would still need more thorough theoretical and large-scale empirical validation. Methods And Evaluation Criteria: Yes. The authors focus on synthetic Gaussian mixture scenarios to showcase how BCL escapes local optima, which is a fitting choice for illustrating local-minimum structures in a transparent, controlled manner. In that sense, the method and evaluation align well: they aim to show that combining multiple sub-objectives can overcome entrenched local minima, and Gaussian mixtures are a clean platform to test this claim. However, because the evaluation remains largely in synthetic domains, real-world applicability (e.g., large-scale neural network tasks) is less directly demonstrated. Nevertheless, for the paper’s stated objective—verifying BCL’s ability to avoid local minima—the chosen GMM-based setup provides appropriate and understandable evidence. Theoretical Claims: No theoretical proofs Experimental Designs Or Analyses: Strengths: Clear Problem & Setup: Effectively defines the local optima problem and motivates their solution. Controlled GMM Simulations: GMMs provide a valid, well-understood environment to isolate the method's effects. Targeted Demonstrations: Simulations effectively show the method's ability to handle local optima (forward KL) and mode collapse (reverse KL). Comprehensive Task Design: Explores joint, marginal, conditional, and transformed matching, supporting conclusions about task diversity. Concerns: Idealized Assumptions: Strong assumptions about data/model capacity limit real-world applicability. Limited Model Complexity: GMMs are simpler than DNNs; findings need further validation with DNNs. 
Lack of Statistical Rigor: Could benefit from statistical significance tests. Supplementary Material: NA Relation To Broader Scientific Literature: Local Optima: Builds on existing work addressing this problem (e.g., simulated annealing) and connects directly to studies on local optima in GMMs. Learning from Foundation Models: Leverages the success of models like GPT/BERT by analyzing their learning processes, emphasizing the importance of diverse information utilization. Task Diversity: Relates to multi-task/curriculum learning but focuses on designing tasks with different local optima to actively escape poor solutions. This idea also generalizes the data augmentation and masking approaches of foundation models (like BERT or GPT), which employ specific subsets or transformations of features in training. In essence, the paper introduces a novel learning paradigm inspired by foundation models to tackle the local optima problem, drawing upon and extending existing concepts in machine learning. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful review and recognition of the contributions of our work. Below, we systematically address each of the raised concerns with additional evidence where appropriate. We welcome further feedback. **Q1: Statistical significance tests** In the FKL experiments, Fig. 2 (b) has shown the quantitative results over 100 random runs, where BCL stably delivers the global optimum (evident from the near-zero Test Joint FKL values and supported by Fig. 2(a)), while Joint Matching fails to reach the global optimum with probability 1. To further validate BCL's effectiveness, we have conducted additional experiments: - We extend the FKL experiments to **8 real-world clustering datasets**, where BCL shows boosted performance over SOTA clustering methods. Details can be found in our responses to Reviewer YmdK’s Q4. - We repeat the RKL experiments in Fig. 3 over 100 random runs. The results show that Joint Matching almost always gets stuck in the 1-mode local optimum (with a Test Joint RKL of 3.211±0.036), while our BCL effectively conquers this strong local optimum (delivering a Test Joint RKL of **0.362±0.155** and exploring **86.6%** modes on average within 10K iterations; more iterations would improve the performance). It's worth noting that Conditional Matching in the original domain gets stuck in the same local optimum and delivers roughly the same performance as Joint Matching, similar to what is shown in Appendix Fig. 4. - We perform ablation studies for both FKL and RKL experiments, based on 10 runs. The results are summarized in the following two tables, where JM/MM/CM/RT denote joint/marginal/conditional/randomly-transformed matching, respectively. These studies clearly show that random transformations, which greatly increase the diversity of matching tasks, play a key role in BCL, especially in RKL scenarios. 
**Table 1: Ablation study for the FKL experiment**

|Training Method|Test **Joint FKL**|
|------|------|
|JM FKL|0.263±0.035|
|+MM|0.141±0.054|
|+MM+CM|0.124±0.044|
|+MM+RT (BCL)|**0.030±0.006**|

**Table 2: Ablation study for the RKL experiment**

|Training Method|Test **Joint RKL**|
|------|------|
|JM RKL|3.219±0.000|
|+MM|3.201±0.056|
|+MM+CM|3.219±0.000|
|+MM+RT (BCL)|**0.407±0.238**|
|JM RKL (Adam)|3.219±0.000|

**Q2: Generalization to large-scale neural networks … theoretical evidence proving that BCL always overcomes local optima under broader conditions … it universally conquers local optima**

Developing a unified approach that **uniformly addresses local optima across multiple representative learning paradigms** is both fundamentally valuable and challenging, even in controlled GMM scenarios. Please see our responses to Reviewer YmdK’s Q1. While BCL represents a promising starting point, generalizing what we’ve done on GMMs to DNN scenarios remains a long-term challenge, because
- in order to conquer the local optima in DNN scenarios, one must first “understand” their local optima, which unfortunately remains largely unexplored, especially considering the uniformity across multiple learning paradigms;
- for Foundation Models (FMs), even defining local/global optima is challenging given their vast training data and diverse downstream applications;
- the empirical research required is computationally intensive; and
- if one proposes a theory that universally conquers local optima under broad conditions (i.e., the theory always finds the global optimum), it revolutionizes machine learning.

We position BCL as a first step towards this ultimate goal. Our transparent research in controlled GMM scenarios is expected to help filter out promising future directions. **Q3: Strong assumptions about data/model capacity** As explained in our response to Q2, uniformly addressing the local-optima challenge across multiple paradigms is inherently difficult.
In order to clearly demonstrate the potential of BCL in a transparent and easy-to-understand way, we have made these assumptions to isolate it from confounding factors such as non-ideal data and imperfect model capacity. Please also see the additional discussions in our responses to Reviewer w4Bu’s Q1. In practical applications, approximate fulfilment of these assumptions may be sufficient. In particular: - The iid data and sufficient model capacity assumptions are well-established conventions in deep learning practice. - Existing FMs show that one can use a universal DNN to approximate different $p_{\theta}(y_{T}|y_{S})$s, which has led to widespread success in numerous practical applications. Please also see our responses to Reviewer w4Bu’s Q4. **Q4: data augmentation** The ideal data assumption inherently obviates the need for data augmentation; see Footnote 1.
Summary: This paper introduces "big cooperative learning" (BCL), a learning approach to address local optima challenges in conventional machine learning paradigms. The core concept involves diversely exploiting available information (data samples or energy landscapes) to design multiple cooperative training tasks with different local optima but sharing the same global optimum. BCL claims to destabilize local optima by randomly switching among these tasks while encouraging exploration toward the global optimum. The authors demonstrate BCL using Gaussian Mixture Models (GMMs) in tailored simulations, focusing on both forward KL (FKL) minimization (related to maximum likelihood learning) and reverse KL (RKL) minimization (related to adversarial learning). The paper positions BCL as a generalization of training methods used in foundation models, drawing parallels between BCL's diverse task creation and the varied exploitation of information in models like BERT and GPT. ## update after rebuttal While the reviewers addressed my concerns, I think my rating is still valid. While I think the paper has potential, I think it would benefit from improvements in writing and defining its story, as well as the directions and shortcomings I outlined below. Claims And Evidence: While the paper presents interesting ideas, several claims are not adequately supported: - The claim that BCL is the "missing core element" from conventional learning paradigms is overstated given the limited scope of experiments. The authors present BCL as a revolutionary approach but primarily demonstrate it on controlled GMM simulations. - The evidence for BCL's effectiveness is primarily qualitative and visual, lacking rigorous quantitative metrics that would strengthen the case. The 25-GMM simulations show promising results but do not include statistical validation across multiple runs or comparison to baseline methods. 
- The connection between foundation model training and BCL is plausible but not sufficiently validated. The authors draw parallels but don't demonstrate that the success of foundation models is primarily due to the mechanism they've isolated. For example, an ablation study over the different objectives would greatly strengthen the paper. The paper claims that BCL delivers the "emerging power of exploration" but doesn't quantitatively measure exploration capabilities compared to established techniques. Methods And Evaluation Criteria: The methods are reasonable for a proof-of-concept but have significant limitations: - The exclusive use of GMMs, while justified for interpretability, raises questions about generalizability to more complex models and real-world problems. - The evaluation lacks established metrics to quantify improvements in avoiding local optima or exploration efficiency. - The paper focuses on controlled simulations rather than challenging real-world datasets, limiting the findings' practical impact. - The authors acknowledge potential challenges in scaling BCL to DNNs (Remark 3.4) but don't adequately address how these would be overcome in practice. Theoretical Claims: The theoretical framework is generally sound but has some limitations: - The mathematical formulation of BCL in section 3.2 is coherent, but the ideal assumptions (section 2.1) significantly simplify the problem. - The theoretical justification for why task switching helps escape local optima (Remark 3.3) is intuitive but lacks formal proof or guarantees. - The paper does not provide theoretical convergence analysis or bounds on the performance improvement that BCL might offer. I do not think that a theoretical convergence analysis or bounds are necessary for acceptance, but I think the claims made in the paper have to be softened. 
Experimental Designs Or Analyses: The experimental designs are thoughtfully constructed to demonstrate the core principles of BCL: - The 2D GMM-based visualizations effectively show how diverse tasks have different local optima landscapes but share the same global optimum. - The 25-component GMM simulations appropriately test BCL in more challenging scenarios for both FKL and RKL minimization. - The deliberate initialization of the RKL experiment to encourage mode collapse (placing all components in one corner) provides a strong challenge for testing BCL's exploration capabilities. However, the experiment designs have several weaknesses: - The experiments are limited to synthetic data and controlled settings. - There are no comparisons with other methods designed to address local optima (e.g., simulated annealing, cyclic learning rates, or noise injection). - The paper lacks ablation studies that would isolate the impact of different components of BCL. I would have also liked to see more quantitative metrics beyond visual demonstrations, a sensitivity analysis to different hyperparameters of the method and statistical validation of results across multiple runs. Supplementary Material: I reviewed the supplementary material, which provides additional details on the 2D simulations and 25-GMM experiments. While it offers interesting observations about bi-level optimization and multi-scale noising, it mainly extends the controlled experiments rather than addressing the core limitations of the main paper. Relation To Broader Scientific Literature: The paper connects to relevant literature but has some gaps: - It doesn't sufficiently engage with the rich literature on methods to escape local optima beyond citing a few papers on mode collapse. The discussion of multi-task learning (section 4) acknowledges differences between classical MTL and BCL but doesn't thoroughly examine other works that use task diversity to improve learning. 
Essential References Not Discussed: To be fair, the authors are addressing a fundamental problem of Machine Learning. A comprehensive comparison with related work would be beyond the 8-page limit. Nonetheless, the current embedding of BCL is rather superficial, mentioning well-known limitations of mode-seeking and mode-covering. I think the paper would benefit from a more in-depth comparison to on-going work in this field. For addressing local optima: - Entropy-regularized methods (e.g., Entropy-SGD by Chaudhari et al., 2016) - Cyclical learning rates (Smith, 2017) For diverse information exploitation: - Multi-view learning (Xu et al., 2013) - Self-supervised contrastive learning (Chen et al., 2020), which also exploits data transformations For exploration in learning: - Thompson sampling and other exploration strategies - Curiosity-driven learning (Pathak et al., 2017) For mode-seeking/covering behavior: - More recent GAN variants addressing mode collapse (e.g., VEEGAN is cited, but PacGAN, MSGAN are not) Other Strengths And Weaknesses: Strengths: - The paper proposes an interesting conceptual framework that connects foundation model training to fundamental learning challenges. - The visualizations effectively illustrate how different tasks have different local optima landscapes. - The application to both FKL and RKL minimization demonstrates some versatility. Weaknesses: - The paper lacks empirical validation beyond GMMs and controlled settings, raising significant questions about practical applicability. - The computational feasibility of BCL for large-scale problems is not adequately addressed. - The paper doesn't provide a clear algorithm or procedure for implementing BCL in practice, particularly for DNNs. - There's a lack of comparative evaluation against alternative methods for addressing local optima. 
- The paper claims BCL as a fundamental missing element but doesn't convincingly demonstrate that it's the key factor in foundation model success, i.e., some kind of ablation study would be greatly appreciated. Other Comments Or Suggestions: - The paper would benefit from more rigorous empirical evaluation, including quantitative metrics and statistical validation. - A more concrete algorithm formulation would help clarify how BCL should be implemented in practice. - Application to at least one real-world dataset, even with a simplified model, would strengthen the practical relevance. - A comparison with other methods addressing local optima would provide context for BCL's contributions. - The computational overhead of task switching and its impact on convergence could be analyzed. - Maybe the authors presented it and it flew over my head, but it'd be great if they could clarify again how Table 1 and BCL are connected and the role of the transformation function g through an ablation study. Questions For Authors: - How would you implement BCL for DNNs in practice? Remark 3.4 acknowledges challenges in ensuring interrelationships among different $p_{\theta}(y_{T}|y_{S})$, but doesn't provide a concrete solution. Without addressing this, the practical applicability of BCL remains uncertain. - Have you conducted any experiments comparing BCL with established methods for addressing local optima (e.g., simulated annealing, momentum methods, noise injection)? Such comparisons are essential to validate the claimed advantages of BCL. - What is the computational overhead of implementing BCL compared to conventional learning? The paper doesn't address how the task switching mechanism affects convergence speed and overall training efficiency. - How robust is BCL to the choice of task distribution and transformation distribution? Are there optimal strategies for selecting these, or does BCL require extensive tuning? 
- The paper claims that BCL is a key element missing from conventional learning paradigms, but many recent advances have occurred without it. What evidence suggests that BCL, rather than other factors like model scale or data quality, is truly the missing core element? What I am most confused about is if the GPT models are so amazing with Next-Token Prediction, why not try to rephrase every problem as a next-token prediction problem. BCL is saying, mix the objectives in a multi-task manner, motivated by escaping local minima, but the connection to the success of LLMs, which use next-token prediction, is vague. To encourage the authors, I think the work done in the paper is solid and I am intrigued by the idea. I personally haven't considered mixing all these objectives and I am curious about how this could be feasible with deep neural networks. I am keen to change my score and engage in discussion, and a more in-depth ablation over the different objectives would affect my rating greatly. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your comprehensive comments with insightful future directions. Below we address your main concerns within the 5000-character limit, with some details consolidated in responses to other reviewers. We welcome further discussion. **Q1: BCL is the missing core element … is overstated. What evidence?** We will moderate our claims in the revision. From an application-agnostic data, modeling, and learning perspective, **complex** foundation models (FMs) **greatly succeed** with **imperfect** data and modeling, while **simple** conventional learning paradigms **suffer badly** even with **perfect** data and modeling. Thus, the missing core element must be associated with learning (see the last paragraph of page 1). **Q2: How are Table 1 and BCL connected?** Table 1 shows that many FM objectives are special cases of BCL. For example, BCL recovers the next-token prediction by setting $D=FKL$, $g=identity$, $T=t$, $S=<t$, $\rho=\nu$, and $q’=q$; see the paragraph after Eq. (5). BCL thus encapsulates the common learning essence of existing FMs. Our transparent GMM studies reveal that BCL’s diverse information exploitation conquers local optima and encourages exploring the global optimum. This revelation may extend to FMs to explain, e.g., the success mechanism of next-token prediction. Note also that diverse conditional matching isn't universally effective; see our responses to Reviewer YmdK’s Q7. **Q3: quantitative … validation across multiple runs … real-world problems … ablation studies** Fig. 2b has shown the quantitative results over 100 random runs. We perform additional experiments on 8 real-world clustering datasets, repeat the experiments in Fig. 3 over 100 runs, and perform FKL and RKL ablation studies. Please see our responses to Reviewer 4KZD’s Q1. **Q4: Implementing BCL for DNNs** Since BCL contains many FM objectives as special cases (see Q2), its DNN implementation can follow existing FMs. 
We expose the issue that the interrelationship (i.e., Bayes’ rule) among $p_{\theta}(y_T|y_S)$s is ignored in existing FMs; this is considered a small contribution. To address the issue, we’d use $p_{\theta}(y_{T}|y_{S})$ to generate pseudo data, which are then used to form additional regularization tasks (similar to Eq. (6)) to promote Bayes' rule compliance. **Q5: On insightful future research directions for BCL** Revealing the success mechanism of FMs, either empirically or theoretically, is challenging given their unexplored local optima and broad downstream applications. See our responses to Reviewer 4KZD’s Q2 for details. Before quantifying the conquest of local/global optima in DNN scenarios, one must first quantify the local/global optima themselves. Both tasks are clearly beyond the scope of this paper. We have shown in Fig. 2 that BCL stably delivers the global optimum in controlled GMM scenarios while Joint Matching always fails. Please see other experiments in our responses to Reviewer 4KZD’s Q1, where we have quantitatively measured exploration. **Q6: On the ideal assumptions** Please see our responses to Reviewer 4KZD’s Q3. **Q7: Comparisons with other methods across many applications** We position this paper as the first transparent research towards a unified approach that **uniformly addresses local optima across multiple learning paradigms**. See our responses to Reviewer YmdK’s Q1 and the additional experiments in our responses to Reviewer 4KZD’s Q1. Therefore, extending BCL to various applications (e.g., Bandits/RL) and making application-specific comparisons therein are orthogonal to the scope of this paper. We’ll discuss the related applications and suggested references in the revision. The way BCL conquers local optima (i.e., exploiting many cooperative objectives) is orthogonal to what existing techniques do (e.g., simulated annealing and momentum are designed for one objective). Extending BCL with these techniques is left to future research. 
Empirically, Joint Matching with Adam fails with probability 1 in the RKL experiments; see our responses to Reviewer 4KZD’s Q1. The ideal data assumption eliminates the need for data augmentation; see Footnote 1.

**Q8: Computational overhead of BCL … algorithm formulation**

Definition 3.1 says “one task at a time”. Accordingly, BCL’s computational overhead comes from sampling a task (i.e., $(S, T, g)$) and computing $y=g(x)$; both are lightweight. While BCL may require more iterations, it often produces a much better solution (see Figs. 2-3). We will add algorithm formulations to the revised appendix.

**Q9: BCL’s robustness to task scope and optimal strategy for selecting tasks**

Our experience is that if the tasks are sufficiently diverse, BCL robustly conquers local optima with minimal tuning; see Remark 3.3. While we used uniform task sampling in all experiments, smart next-task selection (i.e., learning to big learn) could definitely improve exploration, as noted in Concluding Remarks.
Summary: This paper focuses on generative models and discusses several learning objectives, independent of the model architecture and data. The authors generalize the conventional learning objective and conditional learning objective to propose Big Learning, aiming to eliminate the local-minima problem. The authors verify the effectiveness of Big Learning on GMMs.

## update after rebuttal

Thank you for the response. Some of my concerns are addressed. However, given the current presentation (writing, etc.), I still lean toward rejection. Nevertheless, I raise my rating to 2.

Claims And Evidence: 1. The authors claim that Big Learning is designed to incorporate multi-task learning. However, no experiment is designed to demonstrate the ability of Big Learning in multi-task learning. 2. Big Learning is a seemingly general learning objective that combines both joint matching and conditional matching. Despite its good intuition, there is no evident theoretical support for the superior performance of Big Learning over joint matching and conditional matching. Methods And Evaluation Criteria: N/A Theoretical Claims: No theoretical contribution. Experimental Designs Or Analyses: 1. The experiments are too toy to demonstrate the effectiveness of Big Learning. The idea of Big Learning originates from applications such as GANs, GPT, MAE, etc. At least some experiments in more realistic settings are expected to demonstrate the broader applicability of the proposed method. 2. Furthermore, in Figs. 2 and 3, Big Learning is only compared with Joint Matching. A comparison with Conditional Matching is expected. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. In my opinion, the biggest problem of this paper is its writing. It makes it difficult for the audience to capture the key idea of this paper, especially in the introduction.
In fact, the authors do not introduce the central contribution, i.e., Big Learning, until Page 5, and I finally understood it only after reading through the whole paper 3 times. 2. Another major issue is the toy settings in the experiments. The authors are encouraged to demonstrate the broader applicability of Big Learning under more realistic settings. 3. The paper lacks theoretical analysis. Overall, I lean toward a clear rejection. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your comments. As there are misunderstandings, we invite you to read our responses along with other reviewers’ comments, i.e., the comprehensive and insightful comments of Reviewer w4Bu and the concise and objective assessment of Reviewer 4KZD. We welcome further discussion and thank you for your time.

**Q1: On the value of BCL**

We are addressing the fundamental local-optima challenge of machine learning (noted by Reviewer w4Bu). It’s extremely valuable and challenging to develop a unified approach that **uniformly addresses local optima across multiple learning paradigms**. BCL is the first to demonstrate this potential. Specifically, BCL simultaneously conquers the entrenched local-optima challenge in FKL (maximum-likelihood) and RKL (adversarial) paradigms by diversely exploiting the available information, as validated in controlled GMM settings (acknowledged by Reviewer 4KZD).

**Q2: BCL … combines both joint matching and conditional matching**

Remark 3.2 states that BCL can exhaustively cover all joint, marginal, and conditional matching tasks across many transformed domains.

**Q3: On controlled GMM settings**

Following our responses to Q1, to rigorously and transparently study how to uniformly conquer local optima across multiple paradigms, we need a platform that applies to multiple paradigms and has well-studied local optima. DNNs are clearly not an option (see our responses to Reviewers w4Bu's Q5 & 4KZD’s Q2). GMMs provide this clean platform, as recognized by Reviewer 4KZD, who noted this as a strength of the paper.

**Q4: Experiments in realistic settings**

We extend the FKL experiments to 8 real-world clustering datasets (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html). Compared to SOTA clustering methods such as WM-GMM (A universal framework for learning the elliptical mixture model. TNNLS 2020) and SW-GMM (Sliced Wasserstein distance for learning Gaussian mixture models.
CVPR 2018), BCL (BigLearn+EM) delivers boosted performance (see results below), justifying its effectiveness. See also other additional experiments in our responses to Reviewer 4KZD’s Q1.

|Dataset|Metric|WM-GMM|SW-GMM|Joint-EM|BigLearn+EM|
|------|------|------|------|------|------|
|Covtype|NMI|0.101±0.0158|0.138±0.0341|0.119±0.0272|**0.182±0.0112**|
|Covtype|ARI|0.065±0.0497|0.037±0.0658|0.057±0.0216|**0.098±0.0143**|
|Covtype|Joint-LL|70.957±0.059|70.268±1.632|72.194±1.9089|**74.251±0.8449**|
|Glass|NMI|0.419±0.0726|0.426±0.0534|0.436±0.0644|**0.450±0.0504**|
|Glass|ARI|0.198±0.0511|0.195±0.0479|0.220±0.0526|**0.226±0.0434**|
|Glass|Joint-LL|7.142±0.8023|7.148±0.9546|7.008±1.0364|**7.204±1.0130**|
|Letter|NMI|0.279±0.0033|0.478±0.0186|0.492±0.0169|**0.544±0.0106**|
|Letter|ARI|0.012±0.0032|0.190±0.0202|0.193±0.0181|**0.255±0.0163**|
|Letter|Joint-LL|12.38±0.1024|19.045±0.1548|19.297±0.1877|**19.905±0.0718**|
|Pendigits|NMI|0.782±0.0233|0.744±0.0385|0.771±0.0323|**0.831±0.0109**|
|Pendigits|ARI|0.679±0.0487|0.600±0.0663|0.626±0.0622|**0.741±0.0185**|
|Pendigits|Joint-LL|10.068±0.1824|9.870±0.2198|9.960±0.2545|**10.370±0.0396**|
|Satimage|NMI|0.575±0.0256|0.598±0.0542|0.587±0.0311|**0.612±0.0195**|
|Satimage|ARI|0.498±0.0451|0.505±0.1562|0.470±0.0765|**0.520±0.0181**|
|Satimage|Joint-LL|39.214±0.0035|39.384±0.092|39.387±0.0062|**39.508±0.0437**|
|Seismic|NMI|0.167±0.0145|0.196±0.0090|0.198±0.0259|**0.224±0.0055**|
|Seismic|ARI|0.113±0.1584|0.089±0.0426|0.057±0.0292|**0.165±0.0104**|
|Seismic|Joint-LL|41.958±0.2185|42.234±0.1441|42.050±0.8780|**42.619±0.0367**|
|Svmguide2|NMI|0.098±0.0372|0.108±0.0638|0.085±0.0746|**0.215±0.0747**|
|Svmguide2|ARI|0.061±0.0348|0.087±0.0911|0.050±0.0820|**0.225±0.0918**|
|Svmguide2|Joint-LL|10.248±0.0546|**10.416±0.4158**|10.404±0.4240|10.410±0.3951|
|Vehicle|NMI|0.218±0.0152|0.178±0.0545|0.197±0.0655|**0.230±0.0330**|
|Vehicle|ARI|0.102±0.0131|0.085±0.0533|0.094±0.0476|**0.128±0.0281**|
|Vehicle|Joint-LL|22.300±1.0494|22.473±1.0635|22.896±1.3036|**23.893±1.3625**|

**Q5: BCL vs multitask learning**

As discussed in the first part of Related Work, BCL, which conquers local optima with cooperative tasks, differs fundamentally from conventional multitask learning.

**Q6: Lacks theoretical analysis**

Even for GMMs, SOTA theoretical research is still analyzing their local optima in FKL territory (Chen et al., 2024b), let alone conquering them uniformly across multiple paradigms. See also our responses to Reviewer 4KZD’s Q2.

**Q7: BCL vs Conditional Matching (CM)**

Fig. 1 and Appendix Fig. 4 clearly show BCL’s advantages over CM. Even in this simple setup, combining different CMs (Fig. 4) brings no benefits, confirming that it is not the unified approach described in our responses to Q1. Since BCL does not use CM in Fig. 2 (see Lines 401-404, left), we compare BCL with CM in the RKL experiments in Fig. 3. The results (in our responses to Reviewer 4KZD’s Q1) show that CM gets stuck in the same local optimum as Joint Matching.
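The NMI and ARI clustering metrics reported in the table above are standard; as a reference point, ARI can be reimplemented in a few lines. The helper below is a hedged illustration using the textbook contingency-table formula, not the evaluation code used for these experiments.

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two clusterings, computed from the
    contingency table (standard formula; illustrative helper only)."""
    n = len(labels_true)
    pair_counts = Counter(zip(labels_true, labels_pred))  # contingency counts n_ij
    row = Counter(labels_true)                            # row sums a_i
    col = Counter(labels_pred)                            # column sums b_j
    sum_ij = sum(comb(c, 2) for c in pair_counts.values())
    sum_a = sum(comb(c, 2) for c in row.values())
    sum_b = sum(comb(c, 2) for c in col.values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

NMI is computed analogously from the mutual information between the two label distributions, normalized by their entropies; the ARI is 1 for identical clusterings (up to relabeling) and close to 0 for random ones.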
MMInference: Accelerating Pre-filling for Long-Context Visual Language Models via Modality-Aware Permutation Sparse Attention
Accept (poster)
Summary: The paper addresses the computational bottleneck in long-context Vision Language Models (VLMs) during the pre-filling stage. The authors observe that attention in VLMs exhibits unique sparse patterns, particularly a "Grid" pattern in video inputs due to spatiotemporal locality. They also identify distinct modality boundaries in attention (No-Boundary, K-Boundary, Q-Boundary, and 2D-Boundary). Based on these observations, the authors propose MAPSparse, a permutation-based dynamic sparse attention approach that significantly reduces computation while maintaining performance. MAPSparse consists of three main components: 1) the Grid sparse attention pattern for intra-modality attention, 2) Q-Boundary and 2D-Boundary patterns for handling mixed-modality inputs, and 3) a Modality-Aware Sparse Attention Search Algorithm to optimize patterns for each attention head. Experiments on video understanding tasks, Video Needle in a Haystack (V-NIAH), and Mix Modality Needle in a Haystack (MM-NIAH) show that MAPSparse achieves up to 8.3× speedup over FlashAttention-2 and 1.7× over MInference at 1M token length while maintaining competitive performance. Claims And Evidence: 1. VLM attention exhibits unique sparse patterns compared to LLMs, including a Grid pattern. - Evidence: Visualizations in Figures 2 and 5 demonstrate these patterns clearly. Section 2 provides quantitative analysis showing VLMs require only 5.78% of attention weights to recall 95% of total attention. 2. Modality boundaries create distinct attention patterns that require specialized handling. - Evidence: Figures 2b, 2c, and 3 visualize these boundaries, and the authors provide detailed analysis of how they affect attention. 3. MAPSparse accelerates the pre-filling stage by up to 8.3× at 1M tokens while maintaining performance. - Evidence: Comprehensive experiments on video understanding tasks (Table 1) and NIAH tasks (Figure 4) show strong performance. Figure 6 demonstrates the claimed speedups. 
Methods And Evaluation Criteria: Methods: 1. The analysis of attention patterns in VLMs (Section 2) establishes the foundation for the approach 2. The MAPSparse framework (Section 3) is described algorithmically with pseudocode for each component 3. Implementation details are provided, including how permutation is applied and how dynamic sparse indices are constructed Evaluation Criteria: 1. Accuracy preservation: The authors compare performance on diverse video understanding tasks including video captioning, QA, and retrieval 2. Computational efficiency: Both theoretical FLOPs reduction and actual runtime speedups are measured 3. Scalability: Tests at different context lengths (from ~20K to 1M tokens) 4. Comparison against baselines: The method is compared against multiple sparse attention approaches and token compression methods Theoretical Claims: The paper does not make significant novel theoretical claims. It's primarily an empirical investigation of attention patterns in VLMs and an engineering solution to accelerate computation. However, the observations about Grid patterns and modality boundaries add to the theoretical understanding of how VLMs process multi-modal information. Experimental Designs Or Analyses: The experimental design is thorough and well-executed. 1. Two state-of-the-art long-context VLMs are used (Llava-Video and LongVila) 2. Multiple video understanding benchmarks plus two NIAH tasks 3. Several sparse attention methods and a visual token compression method 4. The contribution of each component is evaluated, especially in the MM-NIAH task One minor limitation is that the paper focuses primarily on accuracy and retrieval performance. Additional evaluations like robustness would strengthen the analysis further. Supplementary Material: A. Modality-Aware Sparse Attention Search Algorithm C. 
Experiment Details Relation To Broader Scientific Literature: The paper positions its work well within the broader literature on VLMs and attention efficiency. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The visualizations are outstanding and provide clear insights into attention patterns. While the paper mentions implementation details of the attention pattern visualization, more detail could be provided in the main text. Besides, are there additional attention patterns beyond those identified? Other Comments Or Suggestions: "Spars Transformer" on line 319 should be "Sparse Transformer" Questions For Authors: How stable are the identified patterns across different models and datasets? Are there additional attention patterns beyond those identified? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's recognition and thoughtful, constructive feedback. Below, we address each of the comments and concerns in detail.

1. ***"How stable are the identified patterns across different models and datasets"*** Thank you for the suggestion. We have tested these patterns across various VLMs and a wide range of datasets, including open-domain question answering, multiple-choice question answering, video captioning, and video information retrieval. Additionally, we provide further pattern analysis on two recent VLMs, Qwen2.5-VL [1] and VideoChat-Flash [2], to demonstrate that the attention patterns leveraged in our method are consistently observed across different models and tasks; see https://anonymous.4open.science/r/complementary-D5B2/. While we also observe other patterns, such as local window attention, these are in fact special cases of our existing patterns and can be effectively captured by our method.

2. ***"Typo issue"*** Thank you for your careful review. We will fix this issue in the next version.

[1] Qwen2.5-VL Technical Report, 2024.
[2] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling, 2024.

---

Rebuttal Comment 1.1: Comment: Thank you for your response! This is a strong paper, and after considering your clarifications, I am still leaning towards acceptance.

---

Reply to Comment 1.1.1: Comment: Thank you for your effort in reviewing and recognizing our work. Your feedback has been very helpful, and we will incorporate these suggestions in the next version of the paper.
Summary: This paper proposes a modality-aware permutation sparse attention method, called MAPSparse, that accelerates long-context VLMs. It features permutation-based grid sparse attention, Q-Boundary/2D-Boundary patterns for mixed-modality boundaries, and a Modality-Aware Sparse Attention Search Algorithm. Experiments demonstrate its effectiveness.

### Update After Rebuttal

I'd like to thank the authors for the clear rebuttal, which resolves most of my concerns. Therefore, I maintain my original rating towards acceptance.

Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, it makes sense. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: This work aims to accelerate attention operations on top of other vision-language models. Essential References Not Discussed: No Other Strengths And Weaknesses:

Strengths
----------
1. This work aims to optimize long-sequence processing for vision-language models, which is promising and essential in the current framework.
2. The proposed method is shown to work well in two different models for long video understanding.
3. The problem is well analyzed with good comparisons.

Weaknesses
----------
1. Because this work optimizes attention operations in VLMs, it would be better to report average latency and memory consumption in Table 1 for better comparison with other methods.
2. An efficient attention operation in a VLM should enable more frames (or higher FPS), which is essential for video understanding. The authors are recommended to provide a table that contains results and latency with an increasing number of frames.
3. It would be better to explain the meaning of the recall rates and how they are calculated in Section 2.1.

Other Comments Or Suggestions: No Questions For Authors: My questions mainly focus on the experiments and definitions, as listed in the weakness section. 1.
Because this work aims to optimize attention operations in VLMs, it would be better to report average latency and memory consumption in Table 1 for better comparison with other methods. 2. An efficient attention operation in a VLM should enable more frames (or higher FPS), which is essential for video understanding. The authors are recommended to provide a table that contains results and latency with an increasing number of frames. 3. It would be better to explain the meaning of the recall rates and how they are calculated in Section 2.1. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer’s thoughtful and constructive feedback. We respond to each of the comments and concerns below.

1. ***"it's better to report average latency and memory consumption in Table 1"*** Thank you for the suggestion. We have updated Table 1 to include a dedicated column for end-to-end latency. Regarding memory usage, since all baselines consume similar memory except for VisionZip, we did not include a separate memory column. Below, we provide a table comparing the latency and average performance of MAPSparse with other baselines. The number of frames is fixed at 256, and all tests are conducted on a single A100 GPU. As shown in Section 4.5, our method achieves greater speedup with longer video inputs and more frames.

| Method | Prefill Latency (s) | Avg. Performance |
| - | - | - |
| *Llava-Video* (Full Attention) | 17.3 | 55.5 |
| A-Shape | 12.0 | 53.1 |
| Tri-Shape | 11.6 | 54.7 |
| VisionZip | OOM | OOM |
| MInference | 15.4 | 55.2 |
| MAPSparse | 14.2 | 57.6 |

RTable 1. Performance (%) of different methods on video understanding tasks, evaluated using Llava-Video.

2. ***"longer frames"*** Thanks for the suggestion. Here we show the results of MAPSparse and the full attention baselines with various numbers of frames from 32 to 256. The results show that more frames can consistently improve the performance on video understanding tasks. MAPSparse improves system throughput by supporting more frames and higher-QPS video inputs within the same latency constraints.

| Method | Num. Frames | VideoMME (w/o sub) | VideoMME (w/ sub) |
| - | - | - | - |
| LongVILA (Full Attention) | 32 | 55.2 | 58.1 |
| MAPSparse | 32 | 56.5 | 58.1 |
| Full Attention | 64 | 59.1 | 62.0 |
| MAPSparse | 64 | 58.4 | 62.4 |
| Full Attention | 128 | 59.4 | 64.7 |
| MAPSparse | 128 | 59.0 | 64.8 |
| Full Attention | 256 | 60.1 | 65.1 |
| MAPSparse | 256 | 60.0 | 65.5 |

RTable 2. Performance (%) at different frame counts on video understanding tasks, evaluated using LongVILA.
3. ***"meaning and how the recall rates are calculated"*** The recall rate reflects how much of the original attention score is retained in the top-k sparse attention. It is computed by calculating the softmax scores over the top-k key vectors retrieved for each query vector, i.e., $\text{softmax}(qK_{\text{topk}}^T)$. We will update the paper to include a formal definition of the recall rate in the next revision.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It resolves most of my concerns. I maintain my original rating towards acceptance.

---

Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our paper. We will incorporate the updated content into the next version.
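The recall-rate definition given in point 3 of the rebuttal above can be illustrated with a short sketch. This is a hedged reconstruction, not the authors' code; the helper name and random data are assumptions. For one query vector, it computes the full softmax over all keys and measures the probability mass that falls on the top-$k$ keys.

```python
import numpy as np

def topk_attention_recall(q, K, k):
    """Fraction of the full softmax attention mass captured by the
    top-k keys for a single query vector (illustrative helper)."""
    scores = K @ q                                  # (n,) raw attention logits
    scores = scores - scores.max()                  # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()   # full softmax over all keys
    topk = np.argsort(scores)[-k:]                  # indices of the k largest logits
    return probs[topk].sum()

rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K = rng.standard_normal((1024, 64))
recall = topk_attention_recall(q, K, k=64)  # mass captured by top ~6% of keys
```

Averaged over queries, this is the quantity behind the paper's Section 2 observation that a small fraction of keys recovers most of the attention mass.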
Summary: MAPSparse provides an innovative and effective solution for accelerating the pre-filling stage of long-context VLMs. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Code. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No. Other Strengths And Weaknesses:

### Strengths

1. **Innovative Approach**: The paper introduces MAPSparse, a method that accelerates the pre-filling stage of long-context Vision-Language Models (VLMs) using Modality-Aware Permutation Sparse Attention. This approach significantly improves processing speed, achieving up to 8.3x and 1.7x speedups in a 1M-length context, surpassing FlashAttention-2 and MInference, respectively.
2. **Integration of Theory and Practice**: The paper provides a detailed analysis of different attention head patterns (such as No-Boundary, K-Boundary, Q-Boundary, and 2D-Boundary) and introduces a modality-aware sparse attention search algorithm to optimize cross-modal and intra-modal sparse patterns. These improvements are not only theoretical breakthroughs but also show excellent performance in practical applications.
3. **Comprehensive Experimental Validation**: The research utilizes two state-of-the-art long-context VLMs (Llava-Video and LongVila) and tests them on various video understanding tasks, including video captioning, video question answering, and video information retrieval. Additionally, a mixed-modal "needle in a haystack" task was designed to evaluate multimodal input performance, demonstrating that the method significantly enhances efficiency while maintaining high accuracy.
4. **Strong Adaptability**: MAPSparse effectively addresses boundary issues between different modalities and maintains the continuity of sparse distribution across modalities, supporting more complex scenarios and larger datasets.

### Weaknesses

1.
**Generalization ability of the method**: Although MAPSparse shows significant performance improvements, its implementation requires an offline search for the optimal sparse pattern of each head. The authors did not show any experiments on the generalization ability of these patterns across datasets.
2. **Generalization of the patterns to ViT**: All MLLMs contain a ViT before feeding into the LLM. Although it contributes little to the latency, it decides how many tokens are fed into the LLM. Would optimizing tokens in the ViT better reduce the pre-filling latency?
3. **Specific Application Scenarios**: The paper primarily focuses on applications in video understanding and multimodal learning. Its applicability and effectiveness for other types of data or tasks, such as pure text or non-visual modalities, have yet to be fully explored.
4. **Impact Statement**: "There are many potential societal consequences of our work, none which we feel must be specifically highlighted here." Maybe the authors could expand on this.

Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer’s thoughtful and constructive feedback. We respond to each of the comments and concerns below.

1. ***"Generalization of the patterns"*** We evaluate our method across a broad range of multimodal tasks and VLMs, including open-domain question answering, multiple-choice question answering, video captioning, and video information retrieval. The models tested include Llava-Video and LongVILA, and our method consistently demonstrates strong performance across all these scenarios. We additionally provide pattern analysis (shown in https://anonymous.4open.science/r/complementary-D5B2/) and experimental results (RTable 1) on the latest VLMs, Qwen2.5-VL [1] and VideoChat-Flash [2], further demonstrating the generalizability of both the observed patterns and our method across different models and tasks.

2. ***"Patterns in ViT"*** Optimizing tokens in the ViT can help reduce pre-filling latency, but the ViT is not the primary bottleneck in long video processing for VLMs. As shown in Figure 1(a), the ViT accounts for less than 2% of the total latency, while attention operations in VLMs dominate with over 95%. Nevertheless, MAPSparse is orthogonal to ViT optimization techniques such as VisionZip (shown in Table 1) and VideoChat-Flash [2]. We include an additional experiment based on VideoChat-Flash, and the results indicate that MAPSparse integrates well with token compression methods.

| Model | VideoDC | ActNet-QA | EgoSchema | Next-QA | PerceptionTest | VideoMME (w/o sub) | VideoMME (w/ sub) | Avg. |
|-|-|-|-|-|-|-|-|-|
| VideoChat-Flash | 3.21 | 53.6 | 57.0 | 81.2 | 69.1 | 63.2 | 70.5 | 56.8 |
| w/ MAPSparse | 3.19 | 54.3 | 57.3 | 79.8 | 69.1 | 63.0 | 70.2 | 56.7 |

RTable 1. Performance of different methods on video understanding tasks using VideoChat-Flash.

3. ***"Generalization to other modalities"*** Thank you for the suggestion.
We conducted additional experiments on the text-only long-context benchmark SCBench [3], comparing MAPSparse with full attention, as shown in RTable 2. The results demonstrate that MAPSparse generalizes well across modalities and tasks. We will include the corresponding results in the next version.

| Method | Retr.kv | En.Sum | En.QA | En.MC | ICL | Avg. |
| - | - | - | - | - | - | - |
| Full Attention | 52.0 | 38.3 | 25.1 | 65.9 | 54.1 | 47.1 |
| MAPSparse | 47.0 | 36.6 | 23.6 | 62.6 | 62.9 | 46.5 |

RTable 2. Performance of different methods on the long-context benchmark SCBench using Llama-3.1-8B at 128K.

4. ***"Impact Statement"*** We apologize for the misunderstanding. We followed the ICML [4] guidelines when writing the impact statement. Since our method does not alter the original VLM outputs, it does not introduce any additional risks.

[1] Qwen2.5-VL Technical Report, 2024.
[2] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling, 2024.
[3] SCBench: A KV Cache-Centric Analysis of Long-Context Methods, ICLR 2025.
[4] https://icml.cc/Conferences/2025/CallForPapers
Permutation Equivariant Neural Networks for Symmetric Tensors
Accept (poster)
Summary: The paper studies permutation equivariant models on symmetric tensors. It presents two characterizations of all linear permutation equivariant functions between symmetric spaces. Subsequently, the paper offers methods for generating the basis of these transformations and performing them in a memory-efficient manner. Claims And Evidence: The theoretical claims presented in this paper are substantiated by proofs presented in the main text and the appendix. Methods And Evaluation Criteria: The authors assessed the effectiveness of their method by conducting experiments on two toy tasks. They also provided detailed implementation information in the appendix, which aligns with the methods presented in this paper. Theoretical Claims: The theoretical claims presented in this paper appear to be valid. The paper’s theory is exceptionally well-written, accompanied by numerous illustrative examples throughout the main text and the appendix. These examples significantly improve the understanding of the methods presented. While I haven’t identified any major flaws in the theory, I do have a few suggestions to enhance the paper’s quality: 1. It would be helpful to include an introductory section in the appendix that provides a background on the theoretical concepts discussed in the paper, particularly symmetric powers and symmetric tensors. Although the paper already includes numerous examples that facilitate understanding, readers unfamiliar with these concepts may find the paper initially challenging. 2. It would be helpful to include a section that compares the paper’s findings with previous works. This would help highlight the novel contributions of this paper. Based on my understanding, the $(k,l)$-orbit bipartition diagram is not novel, while the $(k,l)$-bipartition diagram is novel and is more efficient for performing equivariant linear transformations. However, I would appreciate it if the authors could verify my understanding.
Experimental Designs Or Analyses: The authors validate the effectiveness of their method on two toy tasks. While this is primarily a theoretical paper, including real-world experiments would improve its completeness. In Section 7, the authors mentioned that using map label notation eliminates the need for additional memory for performing linear equivariant transformations between symmetric powers. It would be beneficial for the authors to provide empirical evidence, through experiments, to justify the time and space requirements of their method compared to the vanilla method using unrolled matrices. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper presents a general approach to characterizing linear equivariant functions on symmetric tensors. Its applications have practical relevance in various domains such as graph analysis, molecular modeling, and physical simulations. Essential References Not Discussed: There are no missing essential references for this paper. Other Strengths And Weaknesses: The strengths and weaknesses are thoroughly discussed in the preceding points. Other Comments Or Suggestions: I have no additional comments. Questions For Authors: I have no additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful and positive critique of our work. We are delighted that they recognise the “novel contributions” that are contained in our paper; particularly, the characterisation of the linear permutation equivariant layers for symmetric tensors and the map label notation that improves the memory efficiency in their implementation. We are also pleased that the reviewer found our theoretical framework to be “exceptionally well-written” and that the illustrative examples provided in both the main text and the appendix “significantly improve the understanding of the methods presented”. Furthermore, we are encouraged that the reviewer recognises the “practical relevance” of our contributions and their potential application across multiple scientific domains.

We greatly appreciate the constructive suggestions that the reviewer made to improve the overall quality of the paper. In a revised edition of this paper, we will add a background section on symmetric powers and symmetric tensors to the beginning of the appendix to aid readers who may not be familiar with these concepts. We will also include an additional section that compares the paper’s findings with previous works in order to make clear the novel aspects of our approach.

The reviewer asked us to verify their understanding regarding the novelty of our contributions. We confirm that:

- the (k,l)-bipartition diagram is novel and forms the foundation for the diagram basis, which is also a novel contribution.
- the map label notation results in a more efficient method for performing these equivariant linear transformations, since, as we said in our response to Reviewer W7Yy, it saves memory in two important ways: it eliminates the need to store large weight matrices and it improves the forward pass through the network.

Regarding the (k,l)-orbit bipartition diagram, a related concept appeared in Bastias et al. (2024) [1], but only for the case where $k = l$.
Our work extends this to general $k$ and $l$, which is not a trivial extension, as it opened up the possibility to characterise equivariant linear maps between different symmetric power spaces. More importantly, our approach to constructing these diagrams is fundamentally different:

- We introduce a new design that connects individual wires up to central nodes, whereas Bastias et al. (2024) rely on equivalence classes of diagrams for each bipartition, which are more challenging to work with.
- Our construction allows for a clear representation of the action of $S_n$ on these diagrams, showing that what appears to be a complex action on the set $S[n]^l \times S[n]^k$ can be simplified to an action of $S_n$ on the set $\{1, \dots, n\}$.
- This results in an intuitive and efficient method for constructing the orbit basis between symmetric power spaces.

Additionally, our algorithm for generating all (k,l)-bipartition diagrams having at most n blocks, as presented in Appendix B, is novel. This algorithm plays a crucial role in constructing the diagram basis efficiently, and adds to the theoretical contributions of the paper.

In noting that “this is primarily a theoretical paper”, we felt that the reviewer really understood the purpose of our paper. Whilst we agree that including real-world experiments would improve its completeness, unfortunately we have been limited to exploring these networks’ potential on synthetic data, owing to the lack of established real-world datasets consisting of symmetric tensors. Consequently, we have designed experiments with the goal of validating our theoretical findings. We also acknowledge the reviewer’s suggestion to provide empirical evidence to justify the time and space requirements of our method compared to the vanilla approach using unrolled matrices.
The following results show that what we have said in theory (namely that the standard “weight matrix times vector” approach becomes unfeasible as $n, k$ and $l$ become larger owing to constraints on storing the weight matrix in memory) is true in practice, particularly in the training of the network, with a 60x speedup for the $S_{12}$ task and a 200x speedup for the $S_8$ task: - $S_{12}$ task: - SymmPermEquiv (map label implementation): training time: 2.26 seconds, inference time: 0.01 seconds - SymmPermEquiv (weight matrix implementation): training time: 127.55 seconds, inference time: 0.15 seconds - $S_8$ task: - SymmPermEquiv (map label implementation): training time: 12.01 seconds, inference time: 0.026 seconds - SymmPermEquiv (weight matrix implementation): training time: 2451.45 seconds, inference time: 1.25 seconds We will include some additional commentary on this comparison in a camera ready version of the paper should it be accepted for publication. [1] Bastias, K. O., Martin, P., and Ryom-Hansen, S. (2024) On the spherical partition algebra. arXiv:2402.01890. (To be published in the Israel Journal of Mathematics)
Summary: The paper derives equivariant weight matrices for symmetric tensors, for example, a covariance matrix. The authors derive the condition for permutation equivariance and express its solutions in two bases: the orbit basis and the diagram basis. The diagram basis is more efficient to compute, although both span the same space. Claims And Evidence: The theoretical and empirical evidence are convincing. Methods And Evaluation Criteria: The method makes sense. Theoretical Claims: I did not check the proofs, but I could understand that the proposed bases achieve permutation equivariance for symmetric tensors. Experimental Designs Or Analyses: The experimental design and analyses are valid. However, my only concern is that the experiments were conducted only on synthetic datasets and the target function seems too simple. Real or more complicated data would help demonstrate the necessity of symmetric tensor permutation equivariant NNs to the machine learning community. Supplementary Material: I checked the implementation details, but the specification of the MLPs, such as their depth and width, should be reported. Relation To Broader Scientific Literature: The paper invented a permutation equivariance for symmetric tensors, which had never been tackled before. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for providing a positive critique of our work. We are pleased that the reviewer recognises that our work “invented a permutation equivariance for symmetric tensors, which had never been tackled before”, and found our theoretical and empirical evidence “convincing”. Given that our work is primarily a theoretical contribution, the goal of our numerical experiments was to empirically validate our theoretical findings. Specifically, we aimed to demonstrate the advantages of our linear layer characterisation over both a standard MLP layer and a generic permutation equivariant layer (as introduced in [1]) in terms of test MSE. We showed that the empirical data aligned with our theoretical results. Whilst we acknowledge the importance of real-world experiments for demonstrating practical applicability, given the novelty of our characterisation and the lack of any suitable real-world datasets, we have been limited to exploring these networks’ potential on synthetic data. In doing so, we followed the approach of prior works in permutation equivariance (e.g., such as Maron et al. [1]) which have also relied on synthetic experiments. We believe that our results provide a solid foundation for future investigations on real-world datasets, and we look forward to exploring them in future work. Finally, we appreciate the suggestion to provide details of the specification of the MLPs, hence we will add this information in a revised edition of the paper. [1] Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. (2019). Invariant and Equivariant Graph Networks. In International Conference on Learning Representations.
Summary: This work introduces an exact characterization of all linear permutation equivariant functions between symmetric power spaces. The authors introduce the map label notation, which makes it possible to express a transformation as a series of equations, thereby eliminating the need to store the weights explicitly. They validate their approach on two toy datasets. Claims And Evidence: The claims are supported by extensive proofs, remarks, and examples. Methods And Evaluation Criteria: The two toy datasets used seem to be appropriate for the problem at hand. There are, however, no real datasets on which the proposed method demonstrates its effectiveness and efficiency. Theoretical Claims: I did not check the correctness of the theoretical claims. Experimental Designs Or Analyses: I did not check the soundness of the experimental design. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: There is a growing literature on equivariant neural networks. While symmetric tensors appear in various scientific domains, this is the first work to obtain an exact characterization of permutation equivariant linear functions applied to symmetric tensors. Essential References Not Discussed: There are no essential references missing. Other Strengths And Weaknesses: The paper is extremely hard to follow, unless someone is well-versed in the theory of permutation equivariant neural networks. On the other hand, the paper is well-written and quite concise, with multiple examples and nice figures that bring intuition to the theory. The experimental evaluation is severely lacking. There are only two datasets on which the proposed method is evaluated. In general, it is extremely unclear to me how this work would be employed in real-world applications, and what advantages it brings in applications with symmetric tensors as inputs. 
Other Comments Or Suggestions: None Questions For Authors: [1] The experimental evaluation is severely lacking. I would expect to see evaluation on some real-world datasets. What advantages (efficiency/effectiveness) does the proposed method bring in real applications with symmetric tensors as inputs? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their critique of our work. We are pleased that they recognise that our work is “the first […] to obtain an exact characterization of permutation equivariant linear functions applied to symmetric tensors”. This aligns with the opinions of all of the other reviewers. We also appreciate their positive remarks on the clarity of our writing, the conciseness of the paper, and the role of examples and figures in building intuition. We acknowledge that our paper is mathematically rigorous. However, we made a conscious effort to make our contributions as accessible as possible to the machine learning community by supplementing the theory with concrete examples and detailed figures. We have also committed to Reviewer dHkL’s suggestion to include a background section on symmetric powers and symmetric tensors in the appendix to further aid the general reader’s understanding. While the primary contribution of our work is theoretical, it has direct applications in fields where symmetric tensors naturally arise, such as molecular modelling, learning on graphs and hypergraphs, simulations in physics, and statistical modelling, amongst others. The main advantage of our method is that it provides an exact characterisation of permutation equivariant layers for symmetric tensors. This ensures that models that use symmetric tensors as inputs can explicitly enforce permutation equivariance; this is a crucial property that generic permutation equivariant layers or standard MLPs do not guarantee. Additionally, our method provides significant computational benefits: rather than storing a weight matrix, we leverage our map label notation for efficient implementation, which allows for 1) exact permutation equivariance, 2) improved performance (lower test MSE), and 3) an ability to generalise to different input sizes, as demonstrated by our experimental results. 
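As a concrete sketch of what "exact permutation equivariance" means for order-2 symmetric tensors, the toy layer below (an illustrative construction of ours, not the implementation from the paper; the function name and the coefficients a, b, c are hypothetical) combines three operations on symmetric matrices that each commute with simultaneous row/column permutation, and the check confirms equivariance numerically:

```python
import numpy as np

def equivariant_layer(X, a=0.5, b=-0.2, c=0.1):
    """Toy linear layer on symmetric n x n matrices built from three
    operations that each commute with simultaneous row/column permutation:
    the identity, the diagonal projection, and the global sum."""
    n = X.shape[0]
    return a * X + b * np.diag(np.diag(X)) + c * X.sum() * np.ones((n, n))

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
X = (A + A.T) / 2                  # a random symmetric tensor of order 2

perm = rng.permutation(n)
P = np.eye(n)[perm]                # permutation matrix for S_n acting on indices

# Equivariance: f(P X P^T) == P f(X) P^T, exactly (up to float round-off)
lhs = equivariant_layer(P @ X @ P.T)
rhs = P @ equivariant_layer(X) @ P.T
assert np.allclose(lhs, rhs)
```

Each term corresponds to a simple parameter-sharing pattern; a generic MLP layer would only satisfy this check approximately after training, whereas a constrained layer of this kind satisfies it by construction.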
On the concern about experimental evaluation, we refer the reviewer to our response to Reviewer gZqA, where we explain that, given the novelty of our characterisation and the lack of widely available real-world datasets consisting of symmetric tensors, we followed the standard approach that was adopted in prior works on permutation equivariance where synthetic datasets were used. Nevertheless, we agree that future work could explore real-world applications to further demonstrate the broader practical benefits of our contributions. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. My concerns regarding the experimental section and the practical applicability of the method still remain after the rebuttal. As the authors note, symmetric tensors naturally arise in many applications including "molecular modelling, learning on graphs and hypergraphs, simulations in physics, and statistical modelling, amongst others". While I understand that it is quite nuanced to apply the proposed method to real-world datasets, I would expect more detailed experiments on synthetic datasets that resemble real-world applications, with a clear indication of how they relate to them and how they can be used by researchers in these domains. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments. However, we would like to re-emphasise that the main contribution of our paper is the exact theoretical characterisation of permutation equivariant linear functions for symmetric tensors, which, as noted by the other reviewers, is a novel contribution to the field of equivariant neural networks. We specifically designed synthetic experiments to demonstrate both the feasibility and the advantages of our approach, namely that it guarantees exact permutation equivariance and comes with an efficient computational implementation (via our newly introduced map label notation). 
While experiments that model real-world applications could complement the work, the focus of our paper is theoretical, and we believe that the contributions that we have made are both substantial and sufficient at this stage. We are confident that our paper provides a strong foundation for future work to explore the real-world applicability of our method.
Summary: This paper introduces permutation equivariant neural networks for symmetric tensors. In particular, this paper provides two different characterizations of all linear permutation equivariant layers between symmetric tensors: the orbit basis and the diagram basis. Of these two characterizations, the diagram basis leads to a more practical way to construct these equivariant networks by using a “map label notation” method introduced in the paper, where the parameter-sharing scheme is stored as a set of equations, making the overall weight matrix cheaper to store in memory. Experimental results on the S12-invariant and S8-equivariant tasks show performance gains compared to prior methods. Claims And Evidence: Looks good to me Methods And Evaluation Criteria: The introduction of the paper motivates the need for permutation equivariant networks for symmetric tensors with several practical applications, but the paper does not provide any practical applications in experiments. Even the synthetic experiments seem limited. Why is that? Is it applicable in more scenarios and datasets where it can be tested? Theoretical Claims: Looks good to me Experimental Designs Or Analyses: In the experiments, how does the number of parameters and training and inference time compare for the proposed model vs existing equivariant and non-equivariant models? Supplementary Material: Went through most of it for understanding the main results Relation To Broader Scientific Literature: This is related to equivariant machine learning with interest to the broader research community Essential References Not Discussed: Looks good to me Other Strengths And Weaknesses: Strengths: 1. Provides equivariant network design for a new and relevant problem. Provides a complete characterization of the linear space with the constraint of permutation equivariance for symmetric tensors 2. Proposes a practical method that can be implemented easily 3. 
Provides experimental results showing performance gains both in terms of sample efficiency (for the S12-invariant task), and MSE reduction and generalization for the S12-equivariant task. Weaknesses: 1. Clarity: I understand the paper is primarily focused on the theoretical aspect of solving for an equivariant network and providing an algorithm to construct such networks. But I would request the authors to provide better clarity on a few aspects that seemed confusing to me. A. What is the significance of the two characterizations: the orbit basis and the diagram basis? Is there any reason for introducing the orbit basis, since the diagram basis seems more useful and is the one used for implementation? B. I am a bit confused about how the map label notation is saving memory: is it saving memory during the use of the network, e.g., during the forward pass, or is the memory saved only when storing the weights? Can you explain with an example, e.g., for DeepSets, what the map label notation looks like and how the memory is saved? 2. In the experiments, how does the number of parameters and training and inference time compare for the proposed model vs existing equivariant and non-equivariant models? 3. The introduction of the paper motivates the need for permutation equivariant networks for symmetric tensors with several practical applications, but the paper does not provide any practical applications in experiments. Even the synthetic experiments seem limited. Why is that? Is it applicable in more scenarios and datasets where it can be tested? Other Comments Or Suggestions: None Questions For Authors: Please see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive review. We are pleased that they recognise that our work “[p]rovides equivariant network design for a new and relevant problem”, and “provides a complete characterization of the linear space with the constraint of permutation equivariance for symmetric tensors.” In particular, we felt that they understood the purpose of our paper, namely that it is “primarily focused on the theoretical aspect of solving for an equivariant network and providing an algorithm to construct such networks.” We welcome the reviewer’s positive comments on the ease of implementing our method and the performance gain and generalisation that we demonstrated for the $S_{12}$-invariant task. We address the points where the reviewer sought additional clarity. 1A) The significance of the two characterisations is as follows: the orbit basis is needed to be able to define the diagram basis and then prove that it is a basis (see equation (21), Theorem 5.7 and Proposition 5.9). This follows the historical development of “standard” permutation equivariant layers, where the orbit basis was first introduced by Maron et al. [1] and only later was the diagram basis discovered by Godfrey et al. [2]. The orbit basis emerges naturally from the “equivariance equation” (10), whereas the diagram basis requires additional work to be constructed. The diagram basis, however, is more practical for implementation purposes because, with the map label notation, the resulting transformations are easier to vectorise, leading to improved computational efficiency. 1B) The map label notation saves memory in two important ways: 1. Standard neural network methods would require us to store an explicit $n^l \times n^k$ weight matrix in memory whereas, with our approach, we eliminate the need to store large weight matrices by using each (k,l)-bipartition diagram to encode the weight-sharing scheme directly in the map labels. 2. 
This leads to significant efficiency during the forward pass: since the structure of the transformation is encoded directly in the (k,l)-bipartition diagrams, this means that instead of performing a standard “weight matrix times input vector/tensor”, we can apply transformations directly to input vectors/tensors by using the diagrams to “pick out” exactly which elements of the input appear in the output and how they are combined (i.e. summed). This results in a more efficient forward pass since only the necessary computations are performed on the inputs. For DeepSets, i.e. permutation equivariant functions from $\mathbb{R}^n$ to $\mathbb{R}^n$, in our method we need to calculate all possible (1,1)-bipartition diagrams with at most n blocks. These are given in equation (55) of our paper. From these diagrams we can immediately see that the corresponding map labels for the unrolled basis matrices are $i \leftarrow i$ and $i \leftarrow \sum_{j=1}^{n} j$. Hence we get that $D_{\pi_1}(T)_i = T_i$ and $D_{\pi_2}(T)_i = \sum_{j=1}^{n} T_j$ for an input $T$ in $\mathbb{R}^n$, and so $D_{\pi_1}(T) = T$ and $D_{\pi_2}(T)$ is $\sum_{j=1}^{n} T_j$ times the all-1s vector in $\mathbb{R}^n$. Note that we have not had to create an $n \times n$ weight matrix to do this, but we can see by inspection that the operations themselves correspond to the identity matrix and the all-ones matrix, respectively. Consequently we have recovered the Deep Sets characterisation while using significantly less memory ($O(n)$ vs $O(n^2)$). 
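The DeepSets recovery described above can be sketched in a few lines of NumPy (an illustrative toy of ours, not the paper's implementation; the function name and the weights w1, w2 are assumptions for the example). The two diagram operations are applied directly to the input, and the explicit weight matrix is built only to verify agreement:

```python
import numpy as np

def apply_diagram_basis(T, w1, w2):
    """Apply the two (1,1)-diagram operations without ever forming an
    n x n weight matrix: D_pi1 is the identity map and D_pi2 sums all
    entries into every output slot (O(n) memory instead of O(n^2))."""
    return w1 * T + w2 * T.sum() * np.ones_like(T)

rng = np.random.default_rng(0)
n, w1, w2 = 6, 0.7, -0.3
T = rng.normal(size=n)

# The same layer via an explicit weight matrix: w1 * I + w2 * (all-ones)
W = w1 * np.eye(n) + w2 * np.ones((n, n))
assert np.allclose(apply_diagram_basis(T, w1, w2), W @ T)

# Permutation equivariance holds exactly: f(T[perm]) == f(T)[perm]
perm = rng.permutation(n)
assert np.allclose(apply_diagram_basis(T[perm], w1, w2),
                   apply_diagram_basis(T, w1, w2)[perm])
```

The map-label-style computation itself never materialises W; only the two scalars w1 and w2 are stored, alongside the O(n) input.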
Regarding timings and parameter counts for the models (we will include these numbers in a revised version of our paper): - $S_{12}$ task: - SymmPermEquiv: training time: 2.26 secs, inference time: 0.01 secs, number of parameters: 3 - SimpleMLP: training time: 1.46 secs, inference time: 0.006 secs, number of parameters: 1728 - $S_8$ task: - SymmPermEquiv: training time: 12.01 secs, inference time: 0.026 secs, number of parameters: 7 - SimpleMLP: training time: 4.38 secs, inference time: 0.008 secs, number of parameters: 4096 - PermEquiv: training time: 12.17 secs, inference time: 0.024 secs, number of parameters: 15 Finally, on the synthetic experiments: to the best of our knowledge, there are currently no widely available real-world datasets consisting of symmetric tensors. Consequently, we designed simple experiments to validate our theoretical results in a controlled setting, following the standard approach that was adopted in prior works on permutation equivariance where synthetic datasets were used (e.g. in [1]). However, we recognise that real-world applications of our results would further demonstrate their impact, and so we look forward to exploring them in future work. [1] Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. (2019). Invariant and Equivariant Graph Networks. In International Conference on Learning Representations. [2] Godfrey, C., Rawson, M. G., Brown, D., and Kvinge, H. (2023). Fast computation of permutation equivariant layers with the partition algebra. arXiv:2303.06208.
Fundamental Limits of Visual Autoregressive Transformers: Universal Approximation Abilities
Accept (poster)
Summary: This paper shows that single-head, single-layer VAR transformers are universal approximators for Lipschitz image-to-image mappings, enabling them to approximate continuous transformations. This establishes their theoretical expressiveness and sets a new image synthesis benchmark, outperforming methods like Diffusion Transformers. The findings highlight key design principles for efficient and scalable generative models in computer vision. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. This paper includes analysis, but no experiments are supplied. Supplementary Material: Yes. I reviewed all parts of the supplementary file. Relation To Broader Scientific Literature: The paper's key contributions relate to the broader scientific literature by providing foundational design principles for VAR Transformers, advancing the understanding of efficient and scalable architectures for image generation or other related areas. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: This paper stands out for its theoretical novelty, proving the universality of simple VAR transformers and introducing a scalable "next-scale prediction" framework. It achieves state-of-the-art performance in image synthesis, outperforming existing methods like Diffusion Transformers, and provides practical design principles for efficient and effective model development, with broad applicability in generative modeling. Weakness: 1. While VAR Transformers are widely popular and recognized for their power in various domains, e.g., image generation, the paper does not explicitly establish a clear connection between proving the universality of VAR Transformers as function approximators and their practical application in image generation. Exploring this connection would strengthen the relevance of the theoretical findings to real-world use cases. 2. 
The paper focuses solely on theoretical analysis without providing experimental results. Including quantitative or qualitative evaluations would help validate the effectiveness of the proposed approach and provide a more comprehensive understanding of its performance. 3. The caption of Figure 1 lacks sufficient detail to clarify the data flow of the Pyramid Up-Interpolation Layer. Specifically, it is unclear where $X_1$ and $X_2$ originate from, and the distinctions between $X_{1/2}$ and $X_{init}$ are not explained. Adding a clear description in the caption or main text would improve the paper's clarity and accessibility for readers. Other Comments Or Suggestions: The descriptions in Sec 3.2 of the main paper and in Fact A.1/2/3 of the supplementary material appear repetitive. I suggest removing one of them to eliminate redundancy. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and recognition of our theoretical contributions. We appreciate your detailed feedback and would like to address the weaknesses you highlighted: ### Weakness 1: On connecting theory to practical applications You raise an important point about establishing a clearer connection between our universality results and practical applications in image generation. The universal approximation property theoretically guarantees that VAR Transformers can represent any continuous image transformation function given sufficient capacity. In practice, this explains why VAR models excel at diverse image generation tasks - they have the representational capacity to learn complex image distributions and transformations. The "next-scale prediction" framework leverages this flexibility by decomposing the generation process into a sequence of progressively refined predictions, each of which benefits from the universal approximation capability. ### Weakness 2: Lack of experimental results We acknowledge this limitation in our current paper. Our focus was on establishing the theoretical foundations, but we agree that empirical validation would strengthen our claims. In future work, we plan to conduct experiments demonstrating how the universal approximation capabilities translate to practical performance on image generation tasks, possibly showing how approximation quality scales with model complexity. ### Weakness 3: Figure 1 caption Thank you for noting this issue. You're right that the figure caption lacks necessary details. In the figure, $X_1$ and $X_2$ represent token maps at different resolutions, while $X_{init}$ represents the initial token. The diagram shows how a single token $(X_{init})$ is expanded to create token maps at progressively higher resolutions through the up-interpolation process. ### Other Comments Or Suggestions: Redundancy between Section 3.2 and the supplementary material: We appreciate your suggestion. 
This redundancy was inadvertently introduced to ensure the main paper was self-contained while providing additional details in the supplementary material. We agree that streamlining this content would improve the paper. Thank you again for your constructive feedback. We believe addressing these points would strengthen our paper considerably.
Summary: The paper examines the fundamental limits of Visual Autoregressive (VAR) transformers, proving that single-head VAR transformers with a single self-attention layer and a single interpolation layer are universal approximators. By adapting established techniques from function approximation and neural network theory to VAR transformers, the authors demonstrate how a minimal VAR transformer is sufficient to approximate any Lipschitz sequence-to-sequence function with arbitrarily small error. The results provide insights into the theoretical expressiveness of VAR transformers, showing how VAR can be utilized as an efficient and expressive architecture for high-quality image synthesis. Claims And Evidence: The claims made in the submission are supported by theoretical proofs. Methods And Evaluation Criteria: By investigating a minimal VAR Transformer design, the paper makes it clear why VAR Transformers are universal. Theoretical Claims: I checked the correctness of the two theorems (Theorem 4.3 and Theorem 4.4) about the universality of VAR Transformers. Experimental Designs Or Analyses: The paper does not contain any experiments. Supplementary Material: I reviewed the parts of the supplementary material about the proof of the universality of VAR Transformers. Relation To Broader Scientific Literature: The key contribution of the paper, that VAR Transformers are universal approximators, provides a theoretical foundation for the VAR architecture as an efficient and expressive choice for image synthesis tasks. Essential References Not Discussed: The paper cited/discussed the essential related works. Other Strengths And Weaknesses: Strengths: 1. The paper is well-organized and clearly written. 2. The theoretical proof in the paper is sufficient. Weaknesses: The paper has no experiments, limiting its practical value. Other Comments Or Suggestions: No, I have no other comments or suggestions. 
Questions For Authors: Can you design an experiment with VAR Transformers to demonstrate how they can be universal approximator for image-to-image tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our paper. We appreciate your recognition of the theoretical contributions and clear organization of our work. Regarding your question about designing experiments to demonstrate VAR Transformers as universal approximators for image-to-image tasks: This is an excellent suggestion. While our current paper focuses on theoretical foundations, empirical validation would indeed enhance the practical value of our work. For such an experiment, we envision the following design: 1. Select diverse image-to-image transformation tasks (e.g., style transfer, super-resolution, colorization, and semantic transformations) 2. Train minimalist VAR models (with single attention/interpolation layers) on these tasks 3. Compare their performance against more complex architectures and theoretical bounds 4. Measure approximation quality using metrics like PSNR, SSIM, and FID We believe such experiments would demonstrate how even simple VAR architectures can approximate complex image transformations, providing empirical support for our theoretical claims. The experiments would also help identify practical limitations and the relationship between theoretical expressivity and sample efficiency. We agree that including these experiments would strengthen our paper, and we're considering this direction for future work. Thank you for this valuable suggestion.
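As a small illustration of step 4 in the experiment design above, approximation quality between a target image and a model output could be scored with PSNR. The helper below uses the standard textbook definition of the metric; the synthetic data, names, and the dB threshold are purely illustrative assumptions, not taken from the paper:

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) between two images in [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
target = rng.random((32, 32))                       # hypothetical ground-truth image
approx = np.clip(target + rng.normal(scale=0.01, size=target.shape), 0, 1)

assert psnr(target, approx) > 30                    # small residual error -> high PSNR
assert psnr(target, target) == float("inf")         # identical images -> infinite PSNR
```

SSIM and FID would be computed analogously with library implementations, since both require more machinery (local statistics and a pretrained feature extractor, respectively) than fits in a short sketch.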
Summary: This paper aims to understand transformer-based models in image generation, focusing on Visual Autoregressive Transformers (VAR). Transformers have already been shown to be universal approximators in certain settings (e.g., language tasks via prompt tuning [1]), but it is not clear whether their visual counterpart (VAR) can approximate any continuous image-to-image transformation. The paper proves that the simplest form of VAR transformer (a single self-attention layer and a single interpolation layer) is a universal approximator for Lipschitz continuous functions. [1] Hu, Jerry Yao-Chieh, et al. "Fundamental limits of prompt tuning transformers: Universality, capacity and efficiency." _arXiv preprint arXiv:2411.16525_ (2024). Claims And Evidence: The third claim regarding broader implications for the CV community (`We provide insights into the broader implications of our findings for generative modeling, particularly in computer vision, where efficient and expressive architectures are essential for high-quality image synthesis.`) lacks support. The paper provides no theoretical analysis or empirical results on efficiency or generation quality. While VAR's empirical success is noted in prior work [2], this work focuses purely on universality. Without connecting approximation capacity to practical efficiency or image quality metrics, this claim remains speculative. [2] Tian, Keyu, et al. "Visual autoregressive modeling: Scalable image generation via next-scale prediction." _Advances in neural information processing systems_ 37 (2024): 84839-84865. Methods And Evaluation Criteria: Not applicable. This paper proposes a new theoretical understanding of VAR. No experiments are presented. Theoretical Claims: I found that the core universality proof (Section 6) inherits assumptions from Hu et al. [1], which analyzes prompt-tuned Transformers where the base model is frozen. However, VAR training typically updates all trainable parameters. 
I found that this discrepancy raises questions about the proof's applicability. [1] Hu, Jerry Yao-Chieh, et al. "Fundamental limits of prompt tuning transformers: Universality, capacity and efficiency." _arXiv preprint arXiv:2411.16525_ (2024). Experimental Designs Or Analyses: Not applicable. This paper proposes a new theoretical understanding of VAR. No experiments are presented. Supplementary Material: I reviewed all parts of the supplementary materials. Relation To Broader Scientific Literature: Prior works have already shown that transformers are universal approximators in certain settings. This paper extends that understanding to visual autoregressive models. Essential References Not Discussed: This paper cites Hu et al. (2024) [1] but inadequately distinguishes its contributions. While Hu et al. focus on prompt tuning for language tasks, this work targets VAR’s image-to-image mapping. A deeper discussion is needed on why the universality of prompt-tuned models implies universality for fully trained VARs. [1] Hu, Jerry Yao-Chieh, et al. "Fundamental limits of prompt tuning transformers: Universality, capacity and efficiency." _arXiv preprint arXiv:2411.16525_ (2024). Other Strengths And Weaknesses: Strengths: I found the question the authors aim to study timely and underexplored. In terms of theory, the Transformer model is often studied in the context of language modeling. The setting of image generation is much less studied. Weaknesses: - **Overreliance on Prompt Tuning Theory**: The proposed theory heavily relies on Hu et al. (2024) [1]. However, Hu et al. assume the transformer is fine-tuned with prompt tuning. This contradicts the VAR setting, which updates all parameters. - **Single-Layer Architecture**: The conclusion states that one layer suffices for universality. However, VAR’s hierarchical up-scaling (Def. 3.6) implies multiple up-scaling steps. When the transformer layer is a single layer, does it imply we can achieve universality with one up-scaling step? 
Other Comments Or Suggestions: Terms like "sequence-to-sequence" and "image-to-image" are used interchangeably (e.g., Abstract vs. Section 4). This causes ambiguity. It would improve the clarity if the differences and similarities were discussed. Questions For Authors: My main question to the authors is regarding how prompt tuning assumption affects the theoretical analysis of the VAR model. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for these insightful comments, and we would like to address the reviewer’s concerns as follows. ### Claims And Evidence: On the broader implications claim We acknowledge that our claim regarding broader implications for CV could be better supported. Our intention was to highlight that understanding theoretical expressivity provides a foundation for more practical research on efficiency and quality. ### Theoretical Claims & Essential References Not Discussed & Weakness 1: On the applicability of prompt tuning theory to VAR This is an insightful question. While our proof builds on techniques from [1], we've carefully adapted them to the VAR setting. The universality result doesn't actually depend on the training method (prompt tuning vs. full fine-tuning) but rather on the architectural expressivity. The key insight is that if a model family can approximate any function when only a subset of parameters are tuned (prompt tuning), then it can certainly do so when all parameters are tunable (full training). The prompt tuning framework provides a convenient theoretical framework to establish lower bounds on expressivity. ### Weakness 2: On single-layer architecture and up-scaling You raised an important point about the relationship between the transformer layer and up-scaling steps. To clarify, Theorem 4.3 and 4.4 state that a single self-attention layer and a single interpolation layer are sufficient for universal approximation. This does not contradict VAR's hierarchical nature. The up-interpolation layer (Definition 3.6) can have multiple internal up-scaling steps while still being considered a single layer from the architectural perspective. Our proof shows that even with minimal architecture (single attention + single interpolation), the model class has universal approximation capabilities. 
### Other Comments Or Suggestions: On terminology inconsistency Thank you for noting the inconsistent use of "sequence-to-sequence" and "image-to-image." This was indeed a source of potential confusion. Since VAR operates on tokenized images, both terminologies are technically correct - images are processed as sequences of tokens. In our theoretical analysis, we view images as structured sequences. We should have been more explicit about this connection in our manuscript. We appreciate your feedback and would be happy to address any follow-up questions you might have.
EA-PS: Estimated Attack Effectiveness based Poisoning Defense in Federated Learning under Parameter Constraint Strategy
Reject
Summary: This paper proposes a client-side defense method in federated learning, EA-PS, that constrains the perturbation range of local parameters while minimizing the impact of attacks by formulating the defense as an optimization problem. The paper further provides convergence and robustness analysis and validates the algorithm through experiments.

Claims And Evidence: This paper claims that EA-PS, combined with a server-side defense method, can achieve robust and stable performance under attack. The claims are clearly supported through theoretical and empirical results.

Methods And Evaluation Criteria: The experiments are extensive, with varying parameters of the algorithms, supporting the claims of the paper.

Theoretical Claims: I have checked the proof of Theorem 4.1; no issues discovered.

Experimental Designs Or Analyses: I have checked the experiments comparing the proposed method against other baselines, varying \beta, varying \alpha, different \lambda, and different \gamma. No issues discovered.

Supplementary Material: I have reviewed Appendix A.1, A.2, Appendix B, C, and D. No issues discovered.

Relation To Broader Scientific Literature: The key contribution of the paper is to constrain the local parameter updates in federated learning under attack, which reduces the variance during the learning process.

Essential References Not Discussed: There are no essential references not discussed to my knowledge.

Other Strengths And Weaknesses:

Strengths: Experiments are extensive with strong theoretical guarantees.

Weaknesses: The preliminary knowledge isn't explained enough, making the paper hard to follow. For example, why can A_t - A_{t-1} be interpreted as long-lasting attacks? What is the definition of long-lasting attacks? Figure 2 doesn't illustrate the idea of the parameter constraint strategy.

Other Comments Or Suggestions: None.

Questions For Authors:
1. Why can we assume \lambda is a linear set of A? What will be sacrificed with this assumption?
2.
How is \tilde{H} calculated? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough analysis and constructive feedback on our paper. We appreciate the opportunity to clarify the points raised and to provide additional insights into our research.

> 1. What is the definition of long-lasting attacks? Why can $A_t - A_{t-1}$ be interpreted as long-lasting attacks?

Response:
- We appreciate the opportunity to clarify the definition of long-lasting attacks. In our work, we follow FL-WBC's observations on the long-lasting attack effect. "Long-lasting" describes the effects of an attack in the current round that can persist through multiple rounds of training. Our wording was therefore slightly misleading, and we will correct "long-lasting attack" to "attack with long-lasting effects".
- $A_t - A_{t-1}$ in our work is the difference across rounds in the coefficient of attack effects, where $A_t$ is defined as the coefficient of attack impact between two rounds. $A_t - A_{t-1}$ forms a chain structure that better measures the accumulation of attack impact over different periods (i.e., long-lasting attack effects). For example, Theorem 4.1 shows that minimizing $A_t - A_{t-1}$ yields a smaller optimization upper bound than a traditional method such as LeadFL, and Theorem 5.2 (Certified Radius analysis) shows that reducing $A_t - A_{t-1}$ improves the robustness of the model against long-lasting attack effects. The experimental results also support the above proofs.

> 2. Figure 2 didn't illustrate the idea of parameter constraint strategy.

Response:
- We believe that Figure 2 illustrates the idea of the parameter constraint strategy. Based on the last paragraph of Section 3, we provide additional insights into our strategy here. The goal of the parameter constraint strategy is to enhance the stability of poisoning-attack defense by constraining the perturbation range of model parameters.
The key components are: 1) the optimized manifold space $A$; 2) the unit space $I$; 3) the rank constraint $\lambda$.
- The optimized manifold space $A$ represents the unconstrained parameter space of the model, which may involve high-dimensional or complex parameter distributions. In this space, malicious attacks (e.g., backdoor attacks) can create long-lasting effects through parameter perturbations. As formalized in equation (16) ($I = B^{-1}AB$), the manifold space $A$ is mapped into a simpler, low-dimensional unit space $I$. By constraining parameter perturbations within a bounded region (rank constraint $\lambda$, formalized as equation (17): $AB = \lambda B$), the strategy suppresses the cumulative effects of adaptive or persistent attacks. This ensures stable defense performance under long-lasting attack effects.

> 3. Why can we assume $\lambda$ is a linear set of $A$? What will be sacrificed with this assumption?

Response: Building on the references and descriptions on page 5 of our work, we provide additional insight. We assume that $\lambda$ is a linear set of $A$ mainly because a linear decision rule transforms complex uncertainty descriptions into a more tractable linear form, yielding a computationally solvable robust optimization model. Specifically, the linear assumption simplifies the complex parameter constraints into a linear combination of historical information. Moreover, linear approximations simplify the proofs of the convergence and robustness guarantees. However, setting $\lambda$ as a linear combination of $A$ ignores possible nonlinear relationships, which may result in suboptimality of the linear decision rule (Bertsimas et al., 2019), as noted on page 5 of our work.

> 4. How is $H$ calculated?

Response: We appreciate the opportunity to add details about how $H$ is calculated.
We will revise the equation for $H_{t,e}^k$ in Section 4.2 to make the computation explicit:

$$H_{t,e}^k \triangleq \nabla^2 F(\theta_{t,e}^k) = (\theta_{t,e+1}^k - \theta_{t,e}^k - \Delta\theta_{t,e}^k)/\eta_t.$$

We hope this response adequately addresses your points and welcome further discussion. We are thankful for your contribution to the manuscript's refinement.
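As a reader's aside, the finite-difference estimate in the rebuttal can be sketched in a few lines. The function and variable names below are ours, invented for illustration only; EA-PS's actual implementation may differ.

```python
def estimate_hessian_effect(theta_next, theta_curr, delta_theta, eta):
    """Sketch of the rebuttal's estimate, applied elementwise to flattened
    parameter vectors:
        H_{t,e}^k ~= (theta_{t,e+1} - theta_{t,e} - Delta theta_{t,e}) / eta_t
    """
    return [(tn - tc - d) / eta for tn, tc, d in zip(theta_next, theta_curr, delta_theta)]

# Toy check with exact binary fractions: if the observed step equals the
# benign update delta_theta, the estimated effect is zero.
theta = [1.0, 2.0]
benign = [-0.25, 0.5]
clean = estimate_hessian_effect([t + d for t, d in zip(theta, benign)], theta, benign, eta=0.5)
print(clean)  # -> [0.0, 0.0]

# An extra perturbation on one coordinate shows up scaled by 1/eta.
tampered = [theta[0] + benign[0] + 0.125, theta[1] + benign[1]]
print(estimate_hessian_effect(tampered, theta, benign, eta=0.5))  # -> [0.25, 0.0]
```

The point of the toy check is that the estimator isolates deviations from the expected local update, which is what the defense then constrains.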
Summary: To combat persistent adaptive attacks, the authors propose EA-PS, a client-side defense that enhances server-side methods for robust, stable performance. By limiting attack impact and constraining local parameter perturbations, EA-PS mitigates backdoor poisoning. Theoretically, it achieves a lower upper bound, a smaller certified radius, and a larger convergence upper bound. Evaluations on FashionMNIST and CIFAR-10 show EA-PS reduces attack success rates by up to 14.9% and improves stability with up to 40% lower variance compared to other client-side defenses.

Claims And Evidence: The paper presents strong empirical and theoretical evidence supporting the effectiveness of the proposed method. However, two key aspects are missing: (1) the cost of implementing client-side defense, such as communication overhead, should be compared to pure server-side defense; (2) the efficiency of the proposed method is not thoroughly evaluated. While the convergence results provide some insight, empirical experiments are needed: specifically, how much additional time does the client-side defense require compared to standard FedAvg? Additionally, it would be beneficial to show model accuracy and backdoor accuracy throughout the FL process to illustrate whether this defense slows down main-task training, which is just as crucial as security in practice. Overall, most of my concerns are from an empirical perspective; I appreciate that the authors offer theoretical guarantees.

Methods And Evaluation Criteria:
1. I would suggest adding different types of backdoor attacks (e.g., distributed trigger, adaptive backdoor) as baselines, rather than only using a fixed pattern, since the theoretical results suggest a general defense.
2. MultiKrum and Bulyan are designed to defend against model poisoning attacks. Defenses that include a post-training stage, such as CRFL, should also be considered as baselines.
Theoretical Claims: N/A Experimental Designs Or Analyses: See "Methods And Evaluation Criteria" Supplementary Material: I haven't checked the proof details Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Minor: Backdoor attacks are typically considered a specific type of targeted attack in previous FL security papers, which is slightly inconsistent with Section 2.1. Other Comments Or Suggestions: It is better to clarify the defender's knowledge and ability considering the difference between client-side defense and server-side defense. To make it more practical, how clients and/or sever exchange knowledge, information should be specified. Questions For Authors: The attack impact measures the differences between two rounds. I wonder why multiple rounds are not considered, as in a real-world FL system, malicious clients may not be selected every round. Additionally, I suggest measuring the tradeoff between security and training speed, as the current optimization goal may slow down the training process. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our work.

> 1. The communication overhead should be compared to pure server-side defense.

Response: We'd like to address the concern regarding the communication overhead in our work. Since nothing but the parameter constraint strategy is added in this work, the communication overhead per round is the same as with pure server-side defense.

> 2. I will suggest adding distributed trigger, adaptive backdoor, and CRFL as baselines.

Response: For distributed trigger and adaptive backdoor, we added DBA [2] and A3FL [1]. We also added CRFL to compare with Multi-Krum and Bulyan. The results are as follows.

| CIFAR-10 (%) | | | DBA | | A3FL | |
|--|--|--|--:|--:|--:|--:|
| Setting | Client | Server | MA | BA | MA | BA |
| IID | EA-PS | MultiKrum | 34.51 | 76.72 | 32.15 | 57.96 |
| | | Bulyan | 33.72 | 80.26 | 32.87 | 59.37 |
| | | CRFL | 27.49 | 12.53 | 27.29 | 26.42 |
| | Lead-FL | MultiKrum | 35.68 | 77.08 | 33.96 | 58.49 |
| | | Bulyan | 34.94 | 49.14 | 32.65 | 52.8 |
| | | CRFL | 26.89 | 24.73 | 27.18 | 39.83 |
| Non-IID | EA-PS | MultiKrum | 34.18 | 38.94 | 32.64 | 47.49 |
| | | Bulyan | 35.24 | 30.02 | 34.74 | 41.38 |
| | | CRFL | 26.29 | 10.59 | 25.82 | 13.67 |
| | Lead-FL | MultiKrum | 35.18 | 45.95 | 34.53 | 53.62 |
| | | Bulyan | 35.01 | 41.64 | 33.59 | 49.86 |
| | | CRFL | 26.41 | 11.89 | 26.25 | 17.84 |

> 3. How much additional time does the client-side defense require compared to standard FedAvg?

Response: As the convergence analysis proved, our method is slightly less efficient than other methods. We appreciate your suggestion to add time overheads (seconds per round, averaged), and the experimental results on CIFAR-10 with IID distribution and FEMNIST with natural non-IID distribution are as follows.
| Time (s/round) | CIFAR-10 | | | | FEMNIST | | | |
|--|--:|--:|--:|--:|--:|--:|--:|--:|
| | FedAvg | Krum | Bulyan | CRFL | FedAvg | Krum | Bulyan | CRFL |
| EA-PS | 27.98 | 28.66 | 28.21 | 45.13 | 80.75 | 82.63 | 78.89 | 118.51 |
| Lead-FL | 22.55 | 23.11 | 22.83 | 40.62 | 56.51 | 58.74 | 58.02 | 109.36 |
| FL-WBC | 20.95 | 21.34 | 21.27 | 38.32 | 44.17 | 50.44 | 49.38 | 98.27 |
| NULL | 19.74 | 20.94 | 20.61 | 37.68 | 43.17 | 47.55 | 46.57 | 97.46 |

> 4. Backdoor attacks are slightly inconsistent with Section 2.1.

Response: We appreciate the opportunity to revise the slightly inconsistent expression in Section 2.1 to "**one of the specific types of** targeted attacks (known as backdoor attacks)".

> 5. It would be beneficial to show model accuracy and backdoor accuracy to illustrate whether this defense slows down main task training.

Response: Details of MA (main-task accuracy) are in the Appendix. We will move the MA results into the experiment section.

> 6. The attack impact measures the differences between two rounds. I wonder why multiple rounds are not considered, as in a real-world FL system, malicious clients may not be selected every round.

Response: We appreciate the opportunity to clarify the multiple-round attack impact. Our work follows the settings of previous works such as Lead-FL and FL-WBC, where "in each adversarial round, malicious clients are randomly selected and participate in the training". We will highlight this in the experimental setting.

> 7. It is better to clarify the defender's knowledge and ability considering the difference between client-side defense and server-side defense. To make it more practical, how clients and/or server exchange knowledge and information should be specified.

Response: 1) Only aggregation information is exchanged between clients and the server. 2) To clarify the defender's knowledge and ability for client-side versus server-side defense, we will add the comparison in a table as follows.
| Component | Client-Side Defense (Ours) | Server-Side Defense |
|--|--|--|
| Knowledge | Local model parameters and gradients; local training data distribution | Global aggregated model; aggregated update statistics (e.g., gradient norms) |
| Capability | Can apply local parameter masking/smoothing; cannot modify server aggregation logic | Can modify aggregation rules (e.g., clip gradients, weight averaging) |
| Assumptions | Clients may be malicious; server is honest | Server is fully trusted; clients may be malicious |

The added code will be open-sourced at the original link in the manuscript. We are grateful for the chance to discuss improvements to our work, and wish to thank you again for your valuable input.

References:
[1] Zhang, Hangfan et al. "A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning." Neural Information Processing Systems (2023).
[2] Xie, et al. "Distributed Backdoor Attacks against Federated Learning." International Conference on Learning Representations (2020).
Summary: This paper proposes EA-PS (Estimated Attack Effectiveness-based Poisoning Defense with Parameter Constraint Strategy), a client-side defense designed to constrain the perturbation range of local parameters while minimizing the impact of attacks. The authors prove that their method has an efficiency guarantee with a lower upper bound, a robustness guarantee with a smaller certified radius, and a larger convergence upper bound.

Claims And Evidence:
- Efficiency guarantee with a lower upper bound. Evidence: Section 4.2 and Appendix A.2.
- A robustness guarantee with a smaller certified radius. Evidence: Section 5.3 and Appendix A.4 (theoretical analysis).
- A larger convergence upper bound. Evidence: Section 5.2 and Appendix A.3 (theoretical analysis).

Methods And Evaluation Criteria:
1. The paper introduces an enhanced objective function (EA-PS−).
2. It proposes a client-based defense approach named Estimated Attack Effectiveness based Poisoning Defense method under Parameter Constraint Strategy (EA-PS), which minimizes the long-lasting backdoor attack effect with a parameter constraint strategy, enhancing stability by constraining the perturbation range in the parameter space.

Evaluation: main task accuracy (MA) and backdoor accuracy (BA). I think the evaluation metrics are reasonable.

Dataset: FashionMNIST and CIFAR-10 under both IID and non-IID settings. Although widely used in FL, these datasets do not contain real-world non-IID splits and are too simple. Could the authors also consider a dataset like FEMNIST, which contains natural non-IID splits?

The attack method used here is only one pattern. I encourage the authors to also check the performance for untargeted attacks.
Theoretical Claims:
- Efficiency guarantee with a lower upper bound. Evidence: Section 4.2 and Appendix A.2.
- A robustness guarantee with a smaller certified radius. Evidence: Section 5.3 and Appendix A.4 (theoretical analysis).
- A larger convergence upper bound. Evidence: Section 5.2 and Appendix A.3 (theoretical analysis).

I don't see flaws in the theoretical analysis, but I am not an expert in theory; please refer to other reviewers' assessments.

Experimental Designs Or Analyses: As mentioned before, the datasets (FashionMNIST and CIFAR-10 under both IID and non-IID settings), although widely used in FL, do not contain real-world non-IID splits and are too simple. Could the authors also consider a dataset like FEMNIST, which contains natural non-IID splits? The attack method used here is only one pattern. I encourage the authors to also check the performance for untargeted attacks and other targeted attacks, such as the following:

[1] Xiaoyu Cao and Neil Zhenqiang Gong. 2022. MPAF: Model poisoning attacks to federated learning based on fake clients.
[2] DBA: Distributed Backdoor Attacks against Federated Learning.

Supplementary Material: Yes, theoretical analysis.

Relation To Broader Scientific Literature: It supplements client-side poisoning defense with theoretical guarantees.

Essential References Not Discussed: I don't know the related literature for client-side defense well, but I think the attacks evaluated are limited, as mentioned above.

Other Strengths And Weaknesses: The Table 1 caption is not quite clear: benign accuracy / (attack success rate)?

Other Comments Or Suggestions: See above.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your recognition of our work and for your insightful comments.

> 1. Table 1 caption is not quite clear: benign accuracy / (attack success rate)?

Response: The metric used in Table 1 is backdoor accuracy, which is the attack success rate for backdoor attacks. We will change it to "backdoor accuracy" in the revision.

> 2. Can the author consider FEMNIST, which contains natural non-IID splits? I encourage the author to check the performance for untargeted attacks (MPAF) and other targeted attacks (DBA).

Response:
- We appreciate your suggestion to include the natural non-IID dataset (FEMNIST), MPAF, and DBA. In addition to the methods suggested above, we also added Spectrum (targeted) and Label-Flip (untargeted) to further strengthen the experiments. The results are as follows.

| FEMNIST (%) | | 1-pixel | | 9-pixel | | Spectrum (ours) | | DBA (suggested) | | Label-Flip (ours) | MPAF (suggested) |
|--|--|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|
| Client | Server | MA | BA | MA | BA | MA | BA | MA | BA | MA | MA |
| EA-PS | MultiKrum | 87.76 | 43.2 | 88.48 | 53.84 | 87.92 | 4.24 | 87.57 | 18.79 | 87.95 | 88.01 |
| | Bulyan | 86.25 | 49.43 | 84.76 | 66.43 | 86.84 | 4.082 | 87.67 | 7.03 | 87.81 | 87.47 |
| Lead-FL | MultiKrum | 88.31 | 65.75 | 88.38 | 59.01 | 88.57 | 4.61 | 88.32 | 25.24 | 87.7 | 87.94 |
| | Bulyan | 88.25 | 65.09 | 87.81 | 76.62 | 88.53 | 4.32 | 87.17 | 9.35 | 87.44 | 87.17 |

- We also apply the suggested attack methods and our added methods on the CIFAR-10 dataset to further illustrate the performance. The results are as follows.
| CIFAR-10 (%) | | | Spectrum (ours) | | DBA (suggested) | | Label-Flip (ours) | MPAF (suggested) |
|--|--|--|--:|--:|--:|--:|--:|--:|
| Setting | Client | Server | MA | BA | MA | BA | MA | MA |
| IID | EA-PS | MultiKrum | 32.41 | 73.06 | 34.51 | 76.72 | 13.39 | 14.15 |
| | | Bulyan | 32.18 | 49.76 | 33.72 | 80.26 | 14.8 | 14.94 |
| | Lead-FL | MultiKrum | 33.95 | 76.77 | 35.68 | 77.08 | 10.9 | 10.47 |
| | | Bulyan | 33.44 | 40.85 | 34.94 | 49.14 | 14.42 | 14.27 |
| Non-IID | EA-PS | MultiKrum | 33.75 | 46.12 | 34.18 | 38.94 | 15.9 | 15.67 |
| | | Bulyan | 34.81 | 40.13 | 35.24 | 30.02 | 16.08 | 15.94 |
| | Lead-FL | MultiKrum | 33.95 | 55.42 | 35.18 | 45.95 | 13.42 | 13.19 |
| | | Bulyan | 32.1 | 41.33 | 35.01 | 41.64 | 16.24 | 15.35 |

- It is important to note that none of the existing client-side defense methods focus on untargeted attacks. Through the added experiments, we found that although our method struggles to defend against untargeted attacks, it still slightly outperforms the state-of-the-art client-side defense methods.
- For the newly added targeted attacks, our method outperforms the state-of-the-art client-side defense methods when combined with server-side defense methods.

The added code will be open-sourced at the original link in the manuscript. We hope this response has addressed your concerns effectively. We are grateful for your valuable input.

References:
[1] Wang, Tong et al. "An Invisible Black-Box Backdoor Attack Through Frequency Domain." European Conference on Computer Vision (2022). (Spectrum)
[2] Zhang, Mengmei et al. "Adversarial Label-Flipping Attack and Defense for Graph Neural Networks." 2020 IEEE International Conference on Data Mining (ICDM) (2020): 791-800. (Label-Flip)
How Expressive are Knowledge Graph Foundation Models?
Accept (poster)
Summary: The manuscript introduces a Knowledge Graph Foundation Model (KGFM) termed MOTIF, which extends ULTRA’s relation graph into a relational hypergraph using manually defined motifs to incorporate additional information for computing a relation’s conditional representation. The authors theoretically establish a condition under which a motif enhances the model’s expressivity and demonstrate that both larger path motifs and broader star motifs improve the expressiveness of the encodings, though this finding is somewhat trivial. Experimental results indicate that the proposed MOTIF outperforms ULTRA in terms of both expressivity and overall performance on real-world datasets. However, the manuscript does not provide a direct comparison with TRIX, an existing KGFM that also addresses expressive power, raising questions about the significance of its contributions. ## Update after Rebuttal This reviewer appreciates the authors' detailed responses. However, some concerns remain regarding the proposed method's heuristic nature, practicality and scalability. Claims And Evidence: Some claims in the manuscript are unclear and lack sufficient supporting evidence: 1. The authors reference [1], yet they assert that their work provides the first rigorous analysis of the expressive power of Knowledge Graph Foundation Models (KGFMs). However, [1] already investigates the expressive power of KGFMs and demonstrates that the proposed model, TRIX, is more expressive than ULTRA in distinguishing triplets. While the authors acknowledge [1], they overlook the contribution of [1] on the expressive power of KGFMs. 2. The authors claim that “relational hypergraphs are a generalization of KGs used to represent higher-arity relational data”. However, existing literature predominantly employs n-ary relational graphs or hyper-relational knowledge graphs for this purpose. 3. Theoretical results presented in the manuscript do not hold for some instances of MOTIF. 
The authors claim that MOTIF can represent a variant of InGram when the weighted relation graph is replaced. However, InGram requires that INIT_1 and INIT_2 be set to random initialization for all entities and relations. Given that all links are already differentiated through distinct representations of entities and relations, incorporating a motif that cannot be covered via a core-onto homomorphism from any motif already in $\mathcal{F}$ cannot contribute additional expressiveness to the encodings. This indicates that the configuration of MOTIF used to produce theoretical results does not appropriately generalize to InGram. 4. Certain statements in the manuscript are unclear. For example, the authors claim that “while the set of links distinguished by *any instance* in MOTIF($\mathcal{F}$) is the same as those in MOTIF({h2t, h2h, t2t}) or MOTIF({t2h, h2h, t2t}), the ULTRA architecture is a strict subset of MOTIF($\mathcal{F}$), so we cannot directly transfer these results to ULTRA.” in lines 250-255. This implies that ULTRA is a strict subset of MOTIF($\mathcal{F}$) but not an instance of it, which is contradictory. How can model A be a subset of B while simultaneously not being an instance of B? [1] TRIX: A More Expressive Model for Zero-shot Domain Transfer in Knowledge Graphs, LoG 2024. Methods And Evaluation Criteria: The proposed method has the following drawbacks: 1. The proposed method is heuristic. Although the authors establish a condition for motifs that enhances expressive power, verifying whether this condition holds for a given motif requires manually comparing existing motifs with candidate motifs. This limits the practicality and scalability of the approach, particularly when applying it to improve the expressive power. 2. The list of possible motifs is limited. The authors define and utilize only two types of motifs: path motifs and star motifs. Furthermore, only path motifs are employed in experiments on real-world datasets. 
A more comprehensive analysis should include ablation studies on the selection of motifs and provide additional examples of possible motif types to better assess their impact.

Theoretical Claims: I have checked the logical flow of the proof.

Experimental Designs Or Analyses: Yes, I have examined the soundness and validity of the experimental designs and have the following concerns:
1. There is a potential issue related to test leakage. The authors pretrain MOTIF on FB15K-237, WN18RR, and CoDEX Medium. However, it is unclear whether there is any test leakage into the inductive datasets derived from these pretraining datasets, particularly those based on FB15K-237 (e.g., FB-v1\~v4, FB-25\~100) and WN18RR (e.g., WN-v1\~v4). The authors should clarify whether any overlap exists between the training and test sets to ensure the validity of the evaluation. Additionally, the proposed method should be re-evaluated in a setting that explicitly prevents test leakage.
2. A direct comparison with TRIX is necessary, as TRIX outperforms MOTIF in the zero-shot setting and achieves comparable performance when fine-tuned. For example, on the inductive $e,r$ setting with 23 graphs, TRIX achieves an MRR of 0.368 in the zero-shot scenario, surpassing MOTIF's MRR of 0.349. When fine-tuned, both models achieve an MRR of 0.401. These results suggest that TRIX is at least as competitive as MOTIF, raising questions about the significance of the proposed model's contributions from a practical perspective.

Supplementary Material: Yes, I reviewed the supplementary material. I have skimmed through the provided code.

Relation To Broader Scientific Literature: The paper builds upon prior work on Knowledge Graph Foundation Models (KGFMs), particularly ULTRA, by proposing an extension that aims to enhance expressiveness. ULTRA itself was introduced as a relation graph-based approach for predicting missing links in arbitrary knowledge graphs.
The paper aligns with prior work, such as TRIX, which has already explored the expressive power of KGFMs, demonstrating that models like ULTRA have inherent limitations in distinguishing triplets. The authors claim to extend ULTRA’s expressiveness by leveraging relational hypergraphs, but given prior work [1], the novelty of this contribution requires careful consideration. Additionally, its reliance on motifs raises questions about scalability and practicality concerns that need to be further investigated. [1] TRIX: A More Expressive Model for Zero-shot Domain Transfer in Knowledge Graphs, LoG 2024. Essential References Not Discussed: Although the authors have cited [1], a paper that discusses the expressive power of KGFMs, they do not provide a direct comparison with it. Given that [1] explicitly analyzes the expressive power of KGFMs and demonstrates its superiority over ULTRA in distinguishing triplets, the manuscript should clearly compare its approach with [1], particularly in terms of expressive power. Without this comparison, it is difficult to assess whether the proposed method provides a significant theoretical or practical advancement over existing work. [1] TRIX: A More Expressive Model for Zero-shot Domain Transfer in Knowledge Graphs, LoG 2024. Other Strengths And Weaknesses: Strength The proposed relational hypergraph is a reasonable extension of relation graphs, allowing for the capture of more complex relationships between relations. Weaknesses 1. Please refer to the concerns and recommendations discussed in ‘Claims and Evidence’, ‘Methods and Evaluation Criteria’, and ‘Essential References Not Discussed’ for further elaboration on weaknesses regarding the manuscript's clarity, theoretical claims, and comparative analysis. 2. Different concepts are notated in the same way. For instance, the authors use a character followed by parentheses (e.g., a()) to represent functions, (hyper)edges, and indicators, which could cause confusion for readers. 
Other Comments Or Suggestions: Typos 1. Section 4, lines 151-152: "We present MOTIF, as a general framework for KGFMs" -> "We present MOTIF, a general framework for KGFMs". 2. Section 4.1., lines 127-128: "and G=(V,E,R) a KG" -> "and G=(V,E,R) be a KG". Questions For Authors: 1. Please provide a direct comparison with TRIX in terms of expressive power and performance. 2. The current description of the theoretical claims is confusing, as it suggests that the theoretical results apply to all instances of MOTIF, while the theoretical findings presented in Section 6.1 do not hold for some instances of MOTIF. Please clarify it. 3. Can the procedure for adding motifs be automated? If so, how might this be achieved? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We note that TRIX is a **contemporaneous work** first published on 16 Nov 2024, i.e. **within 4 months** of ICML submission. We nevertheless provide answers to each of the raised concerns. --- **Claims And Evidence** 1. TRIX [1] is the first to provide an expressivity analysis and we will acknowledge this. However, none of our results can be derived from the TRIX paper. While [1] analyzes the expressivity of TRIX in relation to existing models, it does not provide a general tool for studying expressive power and therefore offers limited insight for evaluating future KGFMs. In contrast, the motivation of our work is to provide a more general framework (MOTIF), which captures existing KGFMs (e.g. ULTRA) and their extensions with higher-order motifs. Our theoretical results advance the understanding of KGFMs beyond known architectures. 2. These graph models serve distinct purposes and generalize KGs to higher-arity data. While MOTIF uses relational hypergraphs, designing KGFM architectures using n-ary or hyper-relational KG is an interesting avenue for the future. 3. (Q2) To clarify, the variant of InGram we discussed assumes that both initializations are node invariants, as stated in **Remark 1** and **Remark 2**. Thus, the theoretical results we present only apply under these conditions and do not apply to InGram using random initialization because this initialization breaks node invariance. We will emphasize this to avoid confusion. 4. The confusion stems from the ambiguous use of "ULTRA", which refers both to a framework and a specific architecture. To clarify: ULTRA as a **framework** is a subset of MOTIF—every instance it defines is also a MOTIF instance. ULTRA as an **architecture** (evaluated empirically) is one such instance. We’ll revise the manuscript to make this distinction explicit. --- **Methods** 1. (Q3) Current KGFMs typically use small motifs (e.g., short paths or stars), making manual verification straightforward. 
The motif-selection process via core-onto homomorphisms can also be automated, as shown in modern database engines: EmptyHeaded [Aberger et al., TODS 2017] used generation of small motifs followed by automated homomorphism checks. 2. We performed an ablation study (Table 2) showing that progressively removing motifs -- from 3-paths down to none -- consistently worsens expressivity and performance. As noted in **App. F**, we only focus on path motifs for real-world datasets due to their efficient construction via sparse matrix multiplication. While our framework can generalize to arbitrary motifs using precomputation of SQL queries and database engines, we leave empirical exploration to future work due to practical constraints. --- **Experimental Designs** 1. We follow prior work (ULTRA) by pretraining on the same three datasets and setup to ensure fair comparison. - There is no test leakage: all relation graphs in the train set and zero-shot inference set are different, as well as the original graph structures. While some datasets are derived from others, this only affects models with entity/relation-specific embeddings, which ULTRA and MOTIF do not use. Moreover, extracting subsets alters neighborhood structures enough to become significantly different from their larger transductive counterparts. - Nevertheless, we pretrained solely on YAGO310 and evaluated averaged zero-shot performance over all inductive datasets for ULTRA and MOTIF(3-path), to re-confirm the consistent performance improvement:

| | MRR | H@10 |
|-|-|-|
| ULTRA | 0.350 | 0.522 |
| MOTIF | **0.358** | **0.530** |

2. (Q1) Our main experimental goal is to validate that richer motifs enhance expressivity within the MOTIF framework (**Q1, Sec. 6**), rather than chasing SOTA results. We use ULTRA for comparison since it is a MOTIF instance with binary motifs, making it ideal for testing our theoretical claims. 
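For illustration, the path-motif construction via matrix products mentioned in point 2 can be sketched as follows. This is a toy sketch of ours rather than the paper's code; dense NumPy matrices stand in for the sparse matrices used in practice, and the node/relation names are invented:

```python
import numpy as np

# Each relation r is represented by a |V| x |V| adjacency matrix with
# A[u, v] = 1 iff r(u, v) holds in the KG.
n = 4  # toy KG with nodes 0..3

def rel(edges):
    A = np.zeros((n, n), dtype=int)
    for u, v in edges:
        A[u, v] = 1
    return A

def path_motif_exists(adj, rels):
    """True iff the k-path motif r_1(x_0,x_1), ..., r_k(x_{k-1},x_k) has a match."""
    prod = adj[rels[0]]
    for r in rels[1:]:
        prod = prod @ adj[r]  # composes consecutive relations along the path
    return bool(prod.any())  # any nonzero entry = at least one match

# Toy KG: r1(0,1), r2(1,2), r3(2,3)
adj = {"r1": rel([(0, 1)]), "r2": rel([(1, 2)]), "r3": rel([(2, 3)])}
print(path_motif_exists(adj, ["r1", "r2", "r3"]))  # True: 0 -r1-> 1 -r2-> 2 -r3-> 3
print(path_motif_exists(adj, ["r2", "r1"]))        # False: no r2-then-r1 composition
```

Replacing the dense arrays with sparse matrices gives the efficient construction referred to above, since the product of sparse boolean adjacencies only materializes reachable pairs.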
MOTIF and TRIX differ fundamentally in expressiveness: each can distinguish relation pairs the other cannot, rendering them incomparable: - TRIX can implicitly count homomorphism matches, whereas MOTIF only checks for existence. E.g., in a KG with E = {r₁(u₁,v₁), r₂(v₁,w₁), r₃(u₂,v₂), r₄(v₂,w₂), r₃(u₃,v₃), r₄(v₃,w₃)}, TRIX distinguishes (r₁, r₂) from (r₃, r₄) based on their frequency, while MOTIF cannot. - MOTIF supports arbitrary motifs, such as the PARA with edges {α(x,y), β(x,y)} from RMPI paper, enabling finer structural distinctions. In a KG with E = {r₁(x₁,x₂), r₂(x₁,x₂), r₁(x₃,x₄), r₂(x₃,x₄), r₃(y₁,y₂), r₄(y₁,y₄), r₃(y₃,y₂), r₄(y₃,y₄)}, TRIX cannot distinguish (r₁, r₂) from (r₃, r₄) due to isomorphic relation graphs under h2h/t2t, while MOTIF can distinguish them via PARA, which maps only to (r₁, r₂). Incorporating higher-order motifs into TRIX could similarly boost its expressivity, highlighting our framework as a **complementary** way for improving KGFMs. --- **Weaknesses** We will refine our notation to ensure clarity and distinctness among these different concepts in the revised paper.
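As a concrete check, the homomorphism-counting example above can be reproduced with a few lines of code (a sketch of ours; the entity and relation names follow the example):

```python
from itertools import product

# The example KG: E = {r1(u1,v1), r2(v1,w1), r3(u2,v2), r4(v2,w2), r3(u3,v3), r4(v3,w3)}
E = [("r1", "u1", "v1"), ("r2", "v1", "w1"),
     ("r3", "u2", "v2"), ("r4", "v2", "w2"),
     ("r3", "u3", "v3"), ("r4", "v3", "w3")]

def two_path_matches(ra, rb):
    """Number of matches of the 2-path motif ra(x, y), rb(y, z) in the KG."""
    return sum(1 for (r1, h1, t1), (r2, h2, t2) in product(E, E)
               if r1 == ra and r2 == rb and t1 == h2)

print(two_path_matches("r1", "r2"))  # 1 match
print(two_path_matches("r3", "r4"))  # 2 matches
# Existence alone is identical for both pairs, so an existence-only check
# cannot tell them apart, while counting can:
print((two_path_matches("r1", "r2") > 0) == (two_path_matches("r3", "r4") > 0))  # True
```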
Summary: This paper presents a modified design for existing KGFMs, incorporating arbitrary motifs rather than being limited to binary motifs, as in previous approaches. This enhancement increases expressive power. Synthetic and real-world experiments validate the proposed improvements. Claims And Evidence: Yes. However, some unclear definitions may impact the comprehension of these claims to some extent, as outlined in the 'Other Weaknesses' section later. Methods And Evaluation Criteria: Somewhat, yes. The problem formulation is overly general, and the motivation behind the proposed modification does not sufficiently justify its necessity. Specifically, more explanations or concrete examples are needed to illustrate the limitations of binary motifs, particularly in the context of downstream tasks. Theoretical Claims: I did not thoroughly verify the proofs, as the theoretical claims are not critical to this paper. My primary concerns lie in the motivations. Experimental Designs Or Analyses: Yes, especially since I find the synthetic experiments interesting, as they effectively demonstrate the inferior performance of binary motifs. It would be beneficial for the authors to provide similar examples using real-world data to better justify the necessity of this modification. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: KGFMs are inherently closely connected to scientific literature. The proposed modification can be viewed as an enhancement that improves analogical reasoning across different scenarios by incorporating more complex structural patterns in KGs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Weaknesses: 1. The definition of relational hypergraphs is problematic. For instance, R and V are ambiguously defined. It would be helpful to explicitly specify the domain and range of all functions involved. 2. The sentence between lines 139 and 142 is unclear. 
Additionally, more intuitive explanations of link/relation invariants are needed. 3. More generally, the writing could be improved for better clarity, particularly in explaining the method (e.g., through additional examples). Furthermore, the paper should provide deeper insights into the intuition behind the modification and the necessity of implementing it. 4. The experimental results on real-world data are not very convincing. It would be beneficial to better demonstrate the necessity of the proposed approach. Other Comments Or Suggestions: 1. In line 56, it would be helpful to provide a clearer explanation of the term "binary motifs," as it appears to be a key concept. 2. More generally, how are different entities and relations matched across various KGs? Additionally, it would strengthen the paper if the authors could provide support or evidence for the effectiveness of "structural similarities" (line 47) to better ground KGFM in the introduction. 3. A minor suggestion: In lines 111–113, it may be clearer to use distinct notations for factual links and potential links. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > *“**Methods**: ...The problem formulation is overly general, ... ”* The reason our problem formulation is deliberately general is that our primary motivation is to rigorously analyze and broadly improve the expressive power of existing KGFMs, including widely used models like ULTRA. Our theoretical framework (MOTIF) thus intentionally generalizes beyond specific architectures. > *“**Experimental Designs**: …It would be beneficial for the authors to provide similar examples using real-world data ….”* > *“W4. The experimental results on real-world data are not very convincing....”* To provide more concrete insight into why higher-order motifs are necessary, we specifically analyzed scenarios where higher-order motifs excel in downstream tasks. Empirically, we conducted the synthetic experiments in **Table 1** to pinpoint exactly where binary motifs are not enough, as evidenced by the failure of ULTRA on all constructed datasets. For real-world experiments, we found that MOTIF, equipped with higher-order motifs, significantly outperforms binary motifs on datasets containing relatively few relations, such as WordNet-based datasets, which contain only around 10 relations. This is reflected in the average performance over WN v1–WN v4 under both zero-shot and end-to-end settings:

| Setting | Model | MRR | H@10 |
|-|-|-|-|
| Zero-shot | ULTRA | 0.575 | 0.679 |
| Zero-shot | MOTIF | **0.601** | **0.701** |
| End-to-End | ULTRA | 0.480 | 0.656 |
| End-to-End | MOTIF | **0.607** | **0.717** |

In **Sec. 7.3**, our detailed investigation of these cases revealed that binary motifs (as used by ULTRA) often construct relation graphs with structurally very similar nodes, thereby failing to distinguish different relations adequately, as shown in **Fig. 8**. In contrast, higher-order motifs frequently break such node invariances, leading to more discriminative and informative relation representations. 
We will discuss these and demonstrate the necessity of higher-order motifs using real-world examples. >*“**Theoretical Claims**: I did not thoroughly verify the proofs, as the theoretical claims are not critical to this paper..."* We would like to respectfully emphasize that one of the primary motivations and key contributions of this work is precisely to **establish a rigorous and systematic theoretical framework to analyze and understand the expressive power of existing KGFMs**, such as ULTRA. We believe these theoretical results significantly deepen our fundamental understanding of why certain KGFM variants outperform others in practice, and thus we respectfully suggest that they should not be overlooked. --- **Strengths and Weaknesses:** >*“W1. The definition of relational hypergraphs is problematic… W2. … more intuitive explanations of link/relation invariants are needed. W3. More generally, the writing could be improved for better clarity,...”* We thank the reviewer for carefully pointing out these areas for improvement. We agree that the clarity and precision of our definitions, explanations, and examples can be improved for better presentation. We will explicitly clarify the definitions of relational hypergraphs (W1), provide clearer and more intuitive explanations for link/relation invariants (W2), and enhance overall readability with additional illustrative examples (W3). >*“W4. The experimental results on real-world data are not very convincing....”* Please see our detailed response on experimental design and analysis. --- **Other comments:** >*“S1. In line 56, it would be helpful to provide a clearer explanation of the term "binary motifs," as it appears to be a key concept.”* We will clarify this in the revised manuscript by explicitly defining a binary motif as a motif $P = (G_M, \bar{r})$ with a motif graph $G_M = (V_M, E_M, R_M)$ containing exactly two relation types ($|R_M|=2$). >*“S2. 
More generally, how are different entities and relations matched across various KGs? ...”* Our framework inherently matches similar relations across different KGs by constructing similar relational hypergraphs based on their structural roles. Specifically, relations with structurally similar contexts in different KGs will yield similar embeddings thanks to the conditional MPNN, which effectively captures the similar induced hypergraph neighborhoods without manually matching entity and relation IDs. The same principle applies to entities: entities embedded in structurally similar local neighborhoods involving similar relations will naturally obtain similar embeddings across KGs. **Fig. 1** in our manuscript already illustrates how structurally similar relations across different KGs (e.g., provide ↔ supply, research ↔ produce) receive similar embeddings due to similar relational contexts. We will further expand this figure with additional explanatory details in the revised manuscript to ground this key concept. >*“S3. A minor suggestion: In lines 111–113...”* We will modify these in the updated manuscript. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your responses. I appreciate the theoretical contributions of your submission. However, I find that the paper still requires significant effort to follow. Despite these considerations, I have decided to maintain my original score due to the following concerns: 1. Alignment between theory and practical knowledge graphs: While the main text presents only the average performance, a closer look at the appendix (Tables 7–10) reveals that different datasets exhibit distinct properties—some aligning with MOTIF, while others do not. This suggests that improved expressiveness does not necessarily lead to better performance on downstream tasks. 
Unlike synthetic experiments, where graph structures are transparent, real-world knowledge graphs vary significantly, requiring a more comprehensive discussion of these differences. 2. Transferability of structural insights across domains: Closely related to the previous point, it is crucial to clarify what kind of information can be reliably transferred between knowledge graphs from different domains. Figure 1 does not sufficiently establish the validity of structural similarity, as the dashed lines may not hold in other contexts, even if the other connections are preserved. Additionally, as noted by Reviewer UN87, proper acknowledgment of prior work is essential. Addressing this issue thoroughly will likely require more time to ensure due credit is given. Best regards, --- Reply to Comment 1.1.1: Comment: >*“1. Alignment between theory and practical knowledge graphs...”* We understand the reviewer’s concern and generally agree with this point. In fact, this is exactly why we additionally experimented with synthetic datasets to precisely validate the theoretical expressiveness. There are many confounding factors when experimenting on real-world data, which makes the theory harder to validate, though still possible. In our case, we find that the expressiveness gains of MOTIF **DO** translate into real-world improvements, particularly on datasets with richer structural patterns that binary motifs cannot capture. Specifically, **Metafam** is a dataset deliberately constructed to capture **conflicting and compositional relational patterns**, commonly present in real-world multi-relational settings (e.g., social or genealogical graphs). Consistent with our theoretical insights, MOTIF achieves significantly better zero-shot performance on these datasets—most notably, a **45% improvement on Metafam** over baselines. 
Similarly, in **WIKITOPICS-MT**, where different knowledge subgraphs are aggregated to form multi-task scenarios, MOTIF outperforms existing models by effectively leveraging structural cues across topics, as shown below:

| Model | Avg. MRR | Avg. Hits@10 |
|-|-|-|
| ULTRA | 0.331 | 0.442 |
| MOTIF | **0.358** | **0.481** |

Please note that this result is the average across 8 datasets, and in many cases the gains are much more prominent; e.g., on MT1-tax, MOTIF shows a 50% relative improvement. This suggests that enhanced expressiveness is directly beneficial when the dataset structure aligns with MOTIF’s capabilities. We agree that not all datasets exhibit such structure, which is why we include a broad set of benchmarks with 54 datasets to illustrate this variability (whereas other papers often provide comparisons only on the datasets on which they show gains). Nonetheless, the strong performance gains on structurally expressive datasets provide concrete evidence that our theoretical contributions can meaningfully impact real-world tasks. We will elaborate further on these dataset-specific differences and clarify the practical implications of model expressiveness in the revised manuscript. > *“2. Transferability of structural insights across domains...”* We would like to emphasize that **Figure 1 is not intended to claim that the dashed link (e.g., produce(Intel, SemiConductors)) universally holds across all knowledge graphs sharing similar structure**. Rather, it illustrates that **structural motifs provide the model with richer contextual evidence**, *enabling it to learn whether such a link is likely to hold based on observed patterns during training*. We fully agree that structural similarity does not guarantee link existence. 
Our point is that, with expressive motifs, the model can now better recognize when a candidate link appears in a similar structural context to those seen in training—even across disjoint relational vocabularies—and learn to distinguish between cases where the link should or should not be predicted. In this sense, we **do not assume the transferability of individual links** but rather propose that **relation invariants derived from structural motifs offer a more nuanced basis for generalization across domains**. We will revise the manuscript to clarify this and prevent any misinterpretation. > *“Additionally, as noted by Reviewer UN87, proper acknowledgment of prior work is essential. Addressing this issue thoroughly will likely require more time to ensure due credit is given.”* As discussed in our detailed response to Reviewer UN87, **TRIX [1] is a contemporary work that was first publicly released on November 16, 2024**, less than four months before our own submission. With that being said, we **acknowledge TRIX as the first to provide an analysis of expressive power** within its specific modeling setup. We have also provided the **fundamental expressive differences** between TRIX and MOTIF, rendering them incomparable. However, our work offers a principled framework (MOTIF) for studying and designing expressive KGFMs that strictly **extend** the capabilities of existing models such as ULTRA and others. Moreover, we emphasize the versatility of our approach of using arbitrary motifs: our framework can be easily adapted to extend the capabilities of other models that are not directly comparable to ULTRA, such as TRIX. We believe this is an interesting avenue for future research. We also clarify that **none of our theoretical results can be derived from the TRIX paper**, as TRIX lacks the general tools we develop for analyzing and improving expressiveness through arbitrary motifs. 
We will revise the manuscript to acknowledge TRIX explicitly and clarify distinctions where necessary.
Summary: The authors in this paper introduce MOTIF, a framework for enhancing the expressiveness of KGFMs. The authors have conducted a rigorous theoretical study of the expressive power of KGFMs, which have been designed to generalize to unseen KGs with different relational vocabularies, making them highly useful for inductive link prediction. They argue that previous methods rely mostly on binary motifs, limiting their generalization capability; to address this, they propose MOTIF, which allows for higher-order relational motifs that enable richer relation interactions. Claims And Evidence: Yes, the authors rigorously support their claim that the choice of motifs determines the expressive power of KGFMs. Methods And Evaluation Criteria: Yes, but how much overhead does MOTIF introduce? Are these human-readable? Do MOTIFs scale to large-scale KGs without affecting the performance? Theoretical Claims: Yes, however, what is the lower bound on the computational costs? Also, the denser the KGs are, the more chances are that there would be noise. So how does the framework adapt to this? The argument assumes that more expressive models lead to better generalization. Experimental Designs Or Analyses: Yes. 1. Which motif type contributes the most? 2. Was there an ablation study conducted? 3. Can smaller graph modifications affect the performance? 4. How should a reader comprehend the upper bound of the motif complexity? At what upper bound does the performance start to degrade? Supplementary Material: Yes, experimental setup details in Appendix K. Relation To Broader Scientific Literature: The authors extend prior work on KGFMs by introducing MOTIF, a framework that generalizes these models using higher-order relational motifs to improve inductive link prediction and relation generalization. Essential References Not Discussed: None. 
Other Strengths And Weaknesses: The authors in this paper present a strong argument based on theoretical and empirical insights. This can be a strong contribution to the KGFM domain. I would like to understand the flexibility of incorporating the framework into existing KGs. Other Comments Or Suggestions: There are some minor typos that would need to be revised to improve readability, such as, “because this tuples”, “Fist we note”. Questions For Authors: Q1: How much overhead does MOTIF introduce? Q2: What is the computational cost of using denser, richer motifs? Q3: Are motifs interpretable by humans? Q4: Is there an upper bound to adding motifs that start to deteriorate the performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the Reviewer for considering ours a "strong contribution". >*“**Methods.** Yes, but how much overhead does MOTIF introduce? Are these human-readable? Do MOTIFs scale to large-scale KGs without affecting the performance?”* >*“Q1: How much overhead does MOTIF introduce?”* This question is **Q4** in our experimental section. A detailed comparison of theoretical computational complexity and practical scalability is given in **App. H** and **App. G**. While introducing higher-order motifs rapidly increases the number of constructed hyperedges in relational hypergraphs, the MOTIF instances considered in our experiments still scale effectively to large-scale KGs with improved performance, thanks to our custom implementation of message passing with Triton kernels, which largely reduces the computational overhead. > *“Q3: Are motifs interpretable by humans?”* If this refers to interpretability, motifs correspond directly to small and well-defined subgraph patterns, such as paths or stars, which describe specific patterns appearing among relations and help capture the semantic meaning of relations. E.g., an "h2h" motif aims to identify whether the heads of two relations semantically share the same node type. > “**Theoretical Claims.** Yes, however, what is the lower bound on the computational costs? Also, the denser the KGs are, the more chances are that there would be noise. So how does the framework adapt to this?...” We have provided a detailed analysis of the computational complexity in **App. H**. To handle noisy KGs, the framework can easily be adapted by adding regularization techniques known for GNNs, such as edge dropout. In fact, we have conducted preliminary experiments using edge dropout, and MOTIF maintains stable performance, though the gains from these robustness techniques were marginal. >*“**Experimental Designs.** Yes. 1. Which motif type contributes the most? 2. Was there an ablation study conducted? 3. 
Can smaller graph modifications affect the performance? 4. How should a reader comprehend the upper bound of the motif complexity? At what upper bound does the performance start to degrade?”* >*“Q2: What is the computational cost of using denser, richer motifs?”* >*“Q4: Is there an upper bound to adding motifs that start to deteriorate the performance?”* Thank you for these questions. We address each below: - *“1. Which motif type contributes the most? 2. Was there an ablation study conducted?”* We have performed an ablation study that systematically removed motifs, transitioning from the full set of 3-path motifs to simpler sets (2-path motifs, the single motif "h2t", and no motifs), as shown in **Table 2** of the main paper. We observe that even with a single motif ("h2t"), the model retains some predictive ability, as essential structural information necessary for relation representation is still captured. However, richer motifs consistently provide the strongest performance improvements due to enhanced expressivity. We are happy to include additional experiments over the other binary motifs “h2h” and “t2t” in the updated manuscript. - *“3. Can smaller graph modifications affect the performance?”* We are not entirely sure what the reviewer refers to, as our method does not modify the graph structure. Nevertheless, to validate whether perturbation of the KG affects performance, we experimented by randomly dropping half of the edges each time during training. We observed that this did not lead to noticeable performance degradation. - *“4. How should a reader comprehend the upper bound of the motif complexity? Q2: What is the computational cost of using denser, richer motifs?”* Regarding motif complexity, computing and storing higher-order motifs incur additional complexity, which scales with the number of relations. 
For instance, computing k-path motifs has a computational complexity of $O(|V||R|^{k-1}(|V|+|R|))$ without using sparse matrix multiplications. We include our detailed explanation in **App. H**. - *“Q4: Is there an upper bound to adding motifs that start to deteriorate the performance?”* Theoretically, adding a motif that cannot be mapped via a core-onto homomorphism to an existing one provably increases expressiveness. Empirically, whether such additions lead to performance improvements or deterioration depends significantly on the dataset and specific task characteristics. Intuitively, introducing an excessive number of motifs could reduce the signal-to-noise ratio in the constructed relation hypergraphs, potentially deteriorating the model's performance. >*“**Other Strengths And Weaknesses.** … I would like to understand the flexibility of incorporating the framework into existing KGs.”* Our framework is flexible and has been tested on 54 KGs from various domains and task settings, as shown in the experimental section. >*“**Other Comments.** ...minor typos ...”* We will fix these in the updated manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and concerns. I will maintain my recommendation.
Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo
Accept (poster)
Summary: This paper is about solving inverse problems in a training-free manner using latent denoising diffusion priors and sequential Monte Carlo methods. It proposes a novel probabilistic model that enables inference of the hidden states with latent diffusion priors given the observation. Specifically, this probabilistic model relies on combining two "orthogonal" ideas previously introduced in the literature. On one hand, there is a popular approximation of the observation likelihood, whose gradient is required to implement a diffusion posterior model given the noisy state: $p(y \mid x_t) \approx p(y \mid \mathbb{E}[x_0 \mid x_t])$. Another approach instead consists of drawing noisy observations according to the forward process and then conditioning, using various heuristics, the reverse process on these observations. This paper essentially combines these two approaches and blends them in an SMC scheme. This is done by defining a specific probabilistic model in which there are hidden states of which we have observations. Given a hidden state $z_t$, the observation $y_t$ is assumed to be obtained by first decoding the hidden state $z_t$, applying the forward operator to it, and then adding Gaussian noise with std $\tau$. The proposed algorithm is then SMC applied to this probabilistic model. Claims And Evidence: The main claim in the paper is that adding auxiliary observations, in combination with the DPS approximation, should help in capturing the "large scale semantics" as well as the "finer details". While this is demonstrated quantitatively through superior performance on various metrics, it is not really studied qualitatively. Can the authors exhibit an example in which this can be understood precisely? Methods And Evaluation Criteria: Besides some issues with the methodology that can be fixed and which I discuss below, there are some design choices that are rather odd and are not explained in the paper. 
For example, in the probabilistic model introduced, the authors assume that the observation is obtained by applying the forward operator to the decoding of the current noisy state. Doesn't this introduce some stability issues that could lead to blurry images? The decoder has been trained on clean data only, and evaluating it at noisy samples should result in odd behavior. Can the authors further discuss this? I would have assumed that a more reasonable approach would be to evaluate the decoder at the denoised hidden state. In this case one could set the variance to be $1 - \bar\alpha_t$ instead of letting it be a hyperparameter. Similarly, the decoder is applied on highly noisy samples in the proposal transition. Also, the Gaussian likelihood approximation of $p(y_0|x_t)$ has variance $1 - \bar\alpha_t$. This is quite odd since $y_0$ is the clean data; why would one assume such a large variance? In Wu et al. 2024 the variance is that of $p(y_0 \mid x_0)$; could you explain what you mean by "the variance term is taken to be the variance of the forward diffusion process"? Theoretical Claims: While the idea is interesting and the methodology sounds reasonable, I believe that there are several issues with it that need to be clarified/addressed. - First, in the probabilistic model considered (defined in 4.1) the joint distribution of the hidden states is **not** necessarily Markovian due to the fact that the authors use DDIM. Unless $\eta = 1$, the hidden process is not Markovian. Still, the authors assume that it is Markovian, as evidenced by the computation that starts at line 240 in the second column. Unless I am missing something, this derivation does not hold unless $\eta=1$. - Regarding the same derivation, in SMC one only needs to propagate the particle approximation of $p(z_{t:T} | y_{t:T})$ but here the authors instead perform what is known as *online smoothing*; they derive particle approximations of the hidden states conditioned on all the remaining observations. 
This is arguably harder than filtering and it seems to me that this is not needed. Actually, the authors assume that $$ p(z_{t:T} \mid y_{0:T-1}) \approx p(z_{t:T} \mid y_{t:T-1}, y_0) $$ which significantly simplifies the computations. Still, this requires computing $p(y_0 \mid z_t)$, but this is approximated using the DPS approximation. It seems to me that smoothing is considered here because the authors want to further condition on the initial observation. Now I would like to emphasize that the probabilistic model considered in the paper is **arbitrary**, meaning that the observation likelihood could have been different. Hence, since the authors want to condition on both $y_0$ and $y_t$ at each step of the diffusion process, they could have changed their probabilistic model so that there are **two** observations $(y^1_t, y^2_t)$ for the same hidden state $z_t$. If their likelihood is given by $$ p(y^1_t, y^2_t \mid z_t) = \mathcal{N}\big(y^1_t \mid \mathcal{A}(\mathcal{D}(\overline{z}_0(z_t))),\, 1 - \overline{\alpha}_t\big)\, \mathcal{N}\big(y^2_t \mid \mathcal{A}(\mathcal{D}(z_t)),\, \tau^2\big), $$ then a classical particle filter with this model, with observations $y^1_t = y_0$ for all $t$ and observations $y^2_t$ generated as in the paper, would yield, I believe, the exact same algorithm if all the steps are performed conditionally on $y^1_{0:T}$, which is not restrictive at all. The catch however is that the observation model for $y^1_t$ is misspecified and the targeted marginal distribution is no longer the posterior distribution of interest. - The authors mention that the proposed algorithm is a blocked Gibbs sampling procedure, but I am unsure that this is the case. A blocked Gibbs sampler in this case would proceed by first sampling the hidden states given the observations, then the observations given the hidden states. The proposed algorithm however has one more step where the hidden states $z_{1:T}$ are sampled conditionally on $\hat{z}_0$ only and not the observation. 
I do not see how this can be a Gibbs sampler. Shouldn't one draw a whole trajectory from the particle smoothing approximation and then use it to draw the observations? Experimental Designs Or Analyses: - The method is tested on the standard imaging benchmark that is used in most diffusion posterior sampling papers. The method is also compared to standard latent diffusion posterior sampling methods. The analysis of the results is sound. - The images generated with the various algorithms are much blurrier than one would expect. For example, posterior sampling with pixel-space diffusion yields images with much better quality. I believe that a discussion of this matter is warranted. Supplementary Material: I have reviewed the additional experimental details in the supplementary paper. Relation To Broader Scientific Literature: The paper borrows various ideas from existing diffusion posterior sampling methods such as [1,2,3]. The methodology presented in the paper is still interesting and is not merely a simple combination of these papers. [1] Dou, Z. and Song, Y. Diffusion posterior sampling for linear inverse problem solving: A filtering perspective. [2] Trippe, B. L., Yim, J., Tischer, D., Broderick, T., Baker, D., Barzilay, R., and Jaakkola, T. Diffusion probabilistic modeling of protein backbones in 3d for the motif-scaffolding problem. [3] Song, B., Kwon, S. M., Zhang, Z., Hu, X., Qu, Q., and Shen, L. Solving inverse problems with latent diffusion models via hard data consistency. Essential References Not Discussed: While the algorithms to which the method is compared are still relevant, there are nonetheless recent works from last year that achieve much better performance on posterior sampling with latent diffusion. [1] Zhang, B., Chu, W., Berner, J., Meng, C., Anandkumar, A. and Song, Y., 2024. Improving diffusion inverse problem solving with decoupled noise annealing. [2] Moufad, B., Janati, Y., Bedin, L., Durmus, A., Douc, R., Moulines, E. and Olsson, J., 2024.
Variational Diffusion Posterior Sampling with Midpoint Guidance. Other Strengths And Weaknesses: no further strengths and weaknesses. Other Comments Or Suggestions: see above. Questions For Authors: no further questions, see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort put into the review. We are encouraged that the reviewer found our approach interesting and the analysis sound. We address the reviewer’s comments in the following. All answers and suggestions will be incorporated in the paper. **(1)** > “The main claim in the paper [...] is not really studied qualitatively” The main claim in this paper is that combining auxiliary variables with the DPS approximation enables better sampling for inverse problems. This point is supported by our experiments. The “large scale” vs. “fine detail” distinction is our qualitative observation on LD-SMC performance, and not the main claim itself. An illustrative example of this can be seen at https://tinyurl.com/2u6mmj2x, specifically the generated writing in the third image. **(2)** > Behaviour of the decoder and stability issues Thank you for raising this great question. We agree with the reviewer that evaluating the decoder on noisy latents, especially for large $t$ values, can generate non-natural images. However, it does not imply stability issues as long as the gradients of the decoder are well-behaved, which is verified empirically. Importantly, the latents, even if noisy, carry information about the label $\mathbf{y}_0$, which, via the auxiliary labels, guides the sampling process closer to the desired image. Please see also comments (4) & (7) to reviewer yJi6, which touch on this point. **(3)** > Evaluating the decoder at the denoised hidden state Thank you for this suggestion. In our early experiments, we tried using DPS for generating the labels and initialization of LD-SMC. Since it did not improve the results, we used the proposed simple alternative, which is also computationally cheaper. **(4)** > What do you mean by “the variance term is taken to be the variance of the forward diffusion process”? The variance of the conditional distribution $p(\mathbf{z}_t|\mathbf{z}_0)$, which is $1-\bar{\alpha}_t$.
**(5)** > Large variance of the Gaussian likelihood The variance aims to reflect the stochasticity in $\mathbf{z}_0 | \mathbf{z}_t$; it starts small and grows with $t$, but is capped at 1. This choice is sensible, although other options are also applicable. Either way, the weighting mechanism can correct for that approximation. **(6)** > “In the probabilistic model [...] the authors assume that it is Markovian” The forward process is indeed not Markovian, but, in the backward process, $\mathbf{z}_t$ depends only on $\mathbf{z}\_{t+1}$, as evident in Eqs. 10 & 12 of the DDIM paper. **(7)** > Methodology - the SMC procedure and Gibbs sampling We thank the reviewer for the valuable and important comments on the LD-SMC methodology. Following the reviewer’s comments, we show here that the empirical distribution over samples converges to the true target of interest $p_\theta(\mathbf{z}_0 | \mathbf{y}_0)$ in the large-compute limit. This addition leads to a few modifications to the algorithm presented in the paper. Importantly, since the Gibbs sampling process was applied only once, the modifications from the current presentation are minor. In addition, we stress that these modifications preserve the results reported in the paper (and even slightly improve them). Link to the algorithm: https://tinyurl.com/2t9rpf3j. The main changes are: (1) Generate the auxiliary label $\mathbf{y}_T$, and use it in the SMC procedure to correct the initial sampling step at time $t=T$; (2) Sample the full chain $\mathbf{z}\_{0:T}$ using SMC for the Gibbs sampling procedure. These modifications allow us to show the following result. Theorem (informal). Let $\mathbb{P}_N(\mathbf{z}\_{0:T}) = \sum\_{i=1}^N w\^{(i)}_0 \delta\_{\mathbf{z}\_{0:T}\^{(i)}}(\mathbf{z}\_{0:T})$ be the discrete measure obtained by the function $\mathbf{SISR}$ in Algorithm 1, where $\delta$ is the Dirac measure.
Under regularity conditions $\mathbb{P}_N(\mathbf{z}\_{0:T})$ converges setwise to $p\_\theta(\mathbf{z}\_{0:T} | \mathbf{y}\_{0:T})$ as $N \rightarrow \infty$. Furthermore, the stationary distribution of the Gibbs sampling process is $p\_\theta(\mathbf{z}\_{0:T}, \mathbf{y}\_{1:T} | \mathbf{y}_0)$, and $p\_\theta(\mathbf{z}_0 | \mathbf{y}_0)$ is the limiting distribution of the $\mathbf{z}_0$ subchain. Link to the full claim and proof: https://tinyurl.com/3rdvpnad. Importantly, this result does not depend on *any* of the approximations made to derive our model. **(8)** > Recent works from last year Thank you for referring us to these studies. We will discuss them in the paper. We added here a comparison to LatentDAPS (Zhang et al., 2024) under our experimental setup. Please see the results and discussion in comment (3) to reviewer yJi6. **(9)** > The images generated are blurry The quality of images heavily depends on the prior diffusion model. We used LDM VQ-4, which is not as strong as recently released diffusion models. This point was also shown in Table 1 (and discussed in the appendix) of Zhang et al. (2024). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. If I have understood correctly your modifications, you have: - changed the probabilistic model to the one I've proposed in my review and then used a particle filter (instead of approximating a particle smoother) to obtain the particle approximation - removed the steps in which you sample $z\_{1:T}$ conditionally on $\hat{z}\_0$. Now you simply draw the observations conditionally on the resampled trajectory $z\_{0:T}$. - These changes yield a principled SMC-within-Gibbs algorithm and thus theoretical guarantees, as you've provided in your link. I welcome these changes and now think that the methodological part is in better shape!
My only minor concern now is that I think that saying $p_\theta(z_0 | y_0)$ is the limiting distribution is a bit misleading since the reader might assume that the distribution in question is $\propto p(y|z_0) p_\theta(z_0)$, which is not the case. The limiting distribution is the marginal obtained by integrating over $y_{1:T}$ and $z_{1:T}$ and this distribution is not the same as $\propto p(y|z_0) p_\theta(z_0)$. > "The forward process is indeed not Markovian [...]" I still don't agree with the argument here. My point is that in DDIM the joint distribution $$ p\_0(z\_0) p(z\_T | z\_0) \prod_{t = 1}^{T-1} p(z_t | z_{t+1}, z_0) $$ is not necessarily a Markov chain (unless you take for example the parameterization in eq. (16) of the DDIM paper with $\eta = 1$) and hence cannot be written as a backward Markov chain starting at $T$. Hence, the probabilistic model defined at the beginning of section 4.1 is **not** the same as the one to which you apply the particle filter (after having plugged the parametric approximation). This is not a major issue and can be fixed by directly defining the probabilistic model backwards, i.e. with $z_{t-1} | z_t \sim \int p(z_{t-1} | z_t, z_0) p(z_0 | z_t) dz_0$. I insist that the probabilistic model in this case is **different** from the one you define, unless $\eta = 1$ for example. And, the backward distribution I define is essentially the one approximated by DDIM. The additional experiments are a plus and I think they strengthen the paper. I have raised my score from 2 to 4. --- Reply to Comment 1.1.1: Comment: We are glad our additional clarifications and results successfully addressed most of your concerns. We will integrate all the changes in the paper. Thank you for your insightful and constructive feedback, and for raising the score! Please see our answers regarding your comments, > “If I have understood correctly your modifications, you have [...] ” Yes, this is correct. > “[...] 
The limiting distribution is the marginal obtained by integrating over $\mathbf{z}\_{0:T}$ and $\mathbf{y}\_{0:T}$ and this distribution is not the same as $\propto p(\mathbf{y}|\mathbf{z}_0)p\_\theta(\mathbf{z}_0)$” We agree; we make this point precise in the full proof at the provided link. The Gibbs sampling procedure allows us to take samples from the joint distribution over $\mathbf{z}\_{0:T}, \mathbf{y}\_{1:T} | \mathbf{y}_0$, but ultimately we care about samples from the marginal of that distribution, namely $\mathbf{z}_0 | \mathbf{y}_0$. We will clarify this point in the main text to prevent any confusion. > “[...] the probabilistic model defined at the beginning of section 4.1 is not the same as the one to which you apply the particle filter (after having plugged the parametric approximation) [...]” Thank you, this is a great point. In DDIM the backward process is trained to mimic the non-Markovian forward process. We implicitly assumed that this approximation is accurate in the generative model, which we will make explicit. In addition, we will follow your suggestion and modify the generative model according to it in the paper.
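To make the SMC-within-Gibbs structure agreed upon in this thread concrete for readers, here is a toy numpy sketch on a scalar linear-Gaussian state-space model. All quantities (`a`, `c`, the noise scales, the observation-refresh step) are hypothetical stand-ins for illustration; this is not the authors' algorithm or the latent-diffusion setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar linear-Gaussian state-space model -- a hypothetical stand-in,
# NOT the paper's latent diffusion model: states evolve as z_t = a*z_{t-1} + noise,
# and each auxiliary observation is y_t = c*z_t + noise.
T, N = 20, 200          # chain length, number of particles
a, c = 0.9, 1.0
q_std, r_std = 0.5, 0.3


def particle_filter(ys):
    """Bootstrap SISR: propagate, weight, resample; returns N ancestral trajectories."""
    z = rng.normal(0.0, 1.0, size=N)
    traj = np.zeros((T, N))
    for t in range(T):
        z = a * z + rng.normal(0.0, q_std, size=N)       # propagate particles
        logw = -0.5 * ((ys[t] - c * z) / r_std) ** 2      # Gaussian log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                  # multinomial resampling
        traj[: t + 1] = traj[: t + 1][:, idx]             # resample ancestral lines
        traj[t] = z[idx]
        z = z[idx]
    return traj


# One Gibbs sweep alternates: (i) draw a trajectory given the current auxiliary
# observations via SMC; (ii) redraw the auxiliary observations given that trajectory.
ys = rng.normal(0.0, 1.0, size=T)                         # arbitrary initial observations
for sweep in range(3):
    traj = particle_filter(ys)
    z_star = traj[:, rng.integers(N)]                     # select one trajectory
    ys = c * z_star + rng.normal(0.0, r_std, size=T)      # refresh observations
```

Each sweep mirrors the two Gibbs blocks discussed above: sampling the full chain given the observations, then regenerating the observations given the sampled chain.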
Summary: The authors propose a sampling method based on Sequential Monte Carlo (SMC) in the latent space of diffusion models. The proposed approach leverages the forward process of the diffusion model to introduce additional auxiliary variables (e.g. noisy measurements), followed by SMC sampling as part of the reverse process. The method is evaluated on commonly used benchmarks: ImageNet and FFHQ, demonstrating improvements in Gaussian deblurring, inpainting, and super-resolution tasks. The results suggest that incorporating SMC in the latent space of diffusion models may enhance sampling efficiency and reconstruction quality at the cost of expensive gradient updates (see Eq. (4)) and hyper-parameter tuning (see Sec 4.2.3). While the approach is promising, further clarification on the computational trade-offs and runtime analysis would strengthen the contribution. Additionally, an ablation study on the role of auxiliary variables and each gradient update from Eq. (4) in improving sample quality could provide deeper insights into the method's effectiveness. Claims And Evidence: The claims are not supported with clear evidence. Please see the weaknesses below. Methods And Evaluation Criteria: Proposed method and evaluation criteria make sense. However, the extent of the evaluation is limited and relevant baselines in particle filtering area are missing (please see weaknesses below). Theoretical Claims: This is an empirical paper. Experimental Designs Or Analyses: Yes, I have checked the soundness/validity of experimental designs or analyses presented in Section 5. Supplementary Material: Yes, I have reviewed Appendix A-E in the supplementary material. Relation To Broader Scientific Literature: The main idea of noising the measurements and performing a gradient step to minimize the measurement error in the noisy latent space has been previously explored (see for example ADIR: https://arxiv.org/pdf/2212.03221). 
Additionally, multi-particle sampling approaches have also been extensively studied in prior works, as discussed in the related works of this paper (Sec 3). Given these prior works, the contribution of the proposed approach appears limited. Essential References Not Discussed: Essential references have been discussed. However, the experimental results lack comparison with these methods. Other Strengths And Weaknesses: ### Strengths The experimental results demonstrate improvements over prior works in Gaussian deblurring, super-resolution, and inpainting tasks. These findings suggest that the Sequential Monte-Carlo (SMC) approach effectively enhances posterior sample quality in diffusion models. ### Weaknesses 1. Given that the proposed method employs a multi-particle sampler, it would be valuable to compare its performance against existing particle filtering methods discussed in related works, such as PFLD and FPS. Although the compared baselines are relevant, a direct comparison with these particle-based methods could provide insights into the advantages and potential limitations of the proposed approach, particularly in terms of sampling efficiency, particle diversity, and robustness in diffusion-based inverse tasks. 2. Why is it important that the initial guess for $\hat{z}_0$ (Sec 4.2.1) is taken as the argmin of Eq (3)? Can’t we just use the original image itself? Especially because the VAE encoder will produce a valid latent if the input is an image, whereas the solution of the optimization problem in Eq (3) may not correspond to an image. 3. The main idea of noising the measurements and performing a gradient step to minimize the measurement error in the noisy latent space has been previously explored (see for example ADIR: https://arxiv.org/pdf/2212.03221).
Given these prior works, the contribution of the proposed approach appears limited. 4. Line 243 (right col): Why is $y_t$ independent of $y_0$ in the first equality? 5. In Eq (4), the gradient wrt $\hat{z}_t$ can be decomposed by applying the chain rule with variable $\mu_{\theta}(\cdot,\cdot)$. In this case, the multiplying factor $\gamma_t$ seems to be the Jacobian of $\mu$ wrt $\hat{z}_t$. So the second and third terms are roughly on the same scale, up to some scaling factors and additive constants. In that case, what is the justification for using them both? Besides, why would the third term give any meaningful gradients given the fact that $D$ is not trained to produce valid images given noisy latents? ############## Post-rebuttal ############## The reviewer thanks the authors for responding to the raised concerns. Based on the newly added results, the reviewer is raising the score to borderline reject. Please see below for the remaining concerns. The reviewer has the following concerns regarding the contributions of the paper post rebuttal. The authors claim that a paper very similar to theirs, called PFLD, is a concurrent work (although it first appeared on arXiv in Aug 2024). The idea of noising measurements has also previously appeared in ADIR. The authors argue that this technique cannot be extended to latent diffusion models due to the non-linearity of the encoder-decoder. However, this is the case for most of the compared baselines, including the proposed method. For example, $y_t$ will not provide a useful signal unless the decoder preserves some structure of the original image (as shown in Figure 1) through linear approximation. Therefore, the main contribution of this paper appears limited given the fact that it combines these two ideas without properly crediting these prior works. The argument that $y_t$ is independent of $y_0$ is also not necessarily true in general. It is a mere implication of the model structure assumed by the authors.
There is no guarantee that this is the true structure. The gradient in the second term of Eq. (4) can be decomposed by applying the chain rule with $\mu_\theta$. The majority of the relevant baselines were omitted from the main paper submission and only appeared during the rebuttal. Given these concerns, the reviewer believes the paper needs a major revision before getting accepted. Other Comments Or Suggestions: Please see the weaknesses above. Questions For Authors: Please see the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort put into the review. We are encouraged that the reviewer found our approach specifically, and SMC in general, promising. We address the reviewer’s comments in the following. All answers and suggestions will be incorporated in the paper. **(1)** > “Further clarification on the computational trade-offs and runtime analysis” Thank you for this valuable suggestion. Please see comment (4) to Reviewer AxT3. **(2)** > “The role of auxiliary variables and each gradient update from Eq. (4)” Thank you for raising this point. Please see the following figures on box inpainting that analyze this. ImageNet - https://tinyurl.com/yrht7az9; FFHQ - https://tinyurl.com/2snc4vdz. Similar to Table 3 in the paper, the figures show the tradeoff between FID and PSNR when varying $s$, the hyper-parameter that balances between the terms. Taking $s=0$ is equivalent to using only the first gradient term, which leads to a large degradation in the FID. Conversely, taking $s$ to be too large results in a steep degradation in the PSNR between $s=333$ and $s=500$. Using $s=333$ strikes a good balance between the FID and PSNR in inpainting tasks. **(3)** > Contribution and relation to prior works Thank you for referring us to ADIR; we will address it in the paper. Similar to FPS, ADIR noises the measurements based on the assumption of a linear corruption operator. As such, it is not readily clear how to use these methods with latent diffusion models due to the decoder's non-linearity. A main contribution of our paper is the ability to use auxiliary labels in latent diffusion models. The auxiliary labels are then used as an integral part of our SMC procedure. Regarding particle-based sampling approaches: as far as we know, there are only five relevant studies, MCGDiff (Cardoso et al., 2023), FPS (Dou & Song, 2024), TDS (Wu et al., 2024), SMCDiff (Trippe et al., 2023), and PFLD (Nazemi et al., 2024).
Specifically, MCGDiff and FPS both rely on the assumption of a linear corruption operator and cannot be used with latent diffusion models due to the non-linearity of the encoder-decoder. TDS is extensively compared against. As for SMCDiff, it was designed mainly for motif-scaffolding, not general inverse problems. Nonetheless, the proposal used in that study, namely the prior diffusion model, can be used in our case as well. PFLD is a concurrent study that uses the PSLD update (which we did compare to in the paper) as a proposal distribution. Following the suggestions of this reviewer and reviewer 326R, we add here a comparison to PFLD, LD-SMC with the prior as a proposal distribution, and LatentDAPS (Zhang et al., 2024). The comparison is presented in Tables 1 & 2 at https://tinyurl.com/p6hh6bjy. Per comment (7) to Reviewer 326R, please note that there are small changes from the results reported in the paper for LD-SMC. From the tables, LD-SMC significantly outperforms PFLD and LD-SMC with the prior proposal in all metrics. In comparison to LatentDAPS, LD-SMC has a clear advantage in FID, NIQE, and LPIPS, while LatentDAPS is better in PSNR and SSIM. **(4)** > Initial guess for $\mathbf{\hat{z}}_0$ Thank you for raising this interesting question. Indeed, there are multiple valid ways to initialize $\mathbf{\hat{z}}_0$. We did not use the original image since in some tasks (such as super-resolution) it cannot be applied due to a mismatch in the image dimensions. The important part of the initialization is that it carries information about the measurement $\mathbf{y}_0$, which can then be used to guide the sampling process via the auxiliary labels. Fig. 5 in the paper shows that $\mathbf{y}\_{1:T}$ are sensible. Tables 9 & 10 at https://tinyurl.com/p6hh6bjy show an advantage for the LD-SMC initialization compared to the reviewer's proposal, perhaps because the latter does not take the corruption operator into account.
**(5)** > ”Why is $\mathbf{y}_t$ independent of $\mathbf{y}_0$?” Due to the model structure, when conditioning on $\mathbf{z}_t$ these variables become conditionally independent. **(6)** > ”In Eq. (4) [...] what is the justification for using them both?” We would appreciate further clarification in case we misunderstood the question. The terms use different labels, and we do not see how these gradients or their scales are the same. Moreover, $\gamma_t$ is a free scalar parameter and it does not have to be connected to the Jacobian. **(7)** > ”Why would the third term give any meaningful gradients [...]?” Thank you for raising this question. As the decoder wasn't trained on noisy images, we do not expect its reconstructions to be natural images, especially for large $t$ values. However, as a fixed function, its gradients are meaningful even on out-of-distribution data, as they carry information on how the function reacts to small changes. As such, the gradients help LD-SMC move the latent encoding toward one whose reconstruction is closer to the desired image, as verified empirically.
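On point (7): the claim that a fixed decoder has well-defined gradients even at noisy, out-of-distribution inputs can be sanity-checked with finite differences. A minimal sketch, using a hypothetical smooth nonlinear function as a stand-in for a trained decoder (not the actual VAE):

```python
import numpy as np

rng = np.random.default_rng(1)


def decoder(z):
    """Hypothetical smooth nonlinear 'decoder' stand-in (not a trained VAE)."""
    return np.tanh(2.0 * z) + 0.1 * z ** 2


def loss(z, y):
    """Measurement-consistency loss ||y - D(z)||^2."""
    return float(np.sum((y - decoder(z)) ** 2))


def grad_analytic(z, y):
    # d/dz sum (y - D(z))^2 = -2 * (y - D(z)) * D'(z), elementwise
    dDdz = 2.0 / np.cosh(2.0 * z) ** 2 + 0.2 * z
    return -2.0 * (y - decoder(z)) * dDdz


# Evaluate at a heavily "noised" latent, far outside typical decoder inputs.
z_noisy = rng.normal(0.0, 5.0, size=8)
y = decoder(rng.normal(0.0, 1.0, size=8))

g = grad_analytic(z_noisy, y)

# Central finite differences: the gradient stays well-defined and accurate
# even though the input is out-of-distribution for the "decoder".
eps = 1e-6
g_fd = np.array([
    (loss(z_noisy + eps * e, y) - loss(z_noisy - eps * e, y)) / (2 * eps)
    for e in np.eye(8)
])
```

The agreement between `g` and `g_fd` illustrates the rebuttal's point: a fixed differentiable map yields usable guidance gradients regardless of whether its input lies in the training distribution.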
Summary: The work proposed a Sequential Monte Carlo-based sampling algorithm for solving imaging inverse problems with latent diffusion models. The writing is clear and easy to follow. Claims And Evidence: The claim is solid. Despite achieving the highest perceptual quality, this work suffers from a significant loss in distortion quality. This critical trade-off needs to be explicitly addressed in the main paper (moving distortion quality into the tables in the main paper). Failure to do so would be misleading. Methods And Evaluation Criteria: Yes, the evaluation is fair and well-designed. To help better understand the computational cost, please provide a computational cost comparison with baseline methods, including memory and inference time per image. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes, the comparison is fair and well-designed. Supplementary Material: I checked the code. It looks good. Relation To Broader Scientific Literature: No. Essential References Not Discussed: It has discussed most of the important works. Other Strengths And Weaknesses: Weakness: (1) This work presents a method that builds upon existing techniques, notably TDS, by extending the sampling process to the latent space, with a method introduced in the ReSample baseline. While this extension successfully introduces Sequential Monte Carlo to the latent space and demonstrates performance improvements, it comes at a significant computational cost. The reliance on multiple particle sampling and decoder operations during guidance presents a practical challenge. Other Comments Or Suggestions: No more. Questions For Authors: (1) The number of particles used in the proposed method is 5; is there any reason for choosing this number? An ablation study and a corresponding discussion would be helpful to determine whether scaling up the particles can further improve the performance.
(2) By using the weighting and resampling method introduced in this paper, I think there will be an improvement in the worst-case performance, which may be the main reason for the improvement in the table. I am wondering if the authors can add an experiment or discussion to cover this part. Code Of Conduct: Affirmed. Overall Recommendation: 4
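The reviewer's intuition about weighting and resampling can be illustrated with standard SMC machinery: when the importance weights become skewed, the effective sample size (ESS) drops and resampling prunes low-weight (worst-case) particles. A generic sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)


def effective_sample_size(w):
    """ESS = 1 / sum(w_i^2) for normalized weights; equals N when weights are even."""
    return 1.0 / np.sum(w ** 2)


def systematic_resample(particles, w):
    """Low-variance systematic resampling of a weighted particle set."""
    n = len(w)
    positions = (rng.uniform() + np.arange(n)) / n   # one shared uniform offset
    cumw = np.cumsum(w)
    idx = np.searchsorted(cumw, positions)           # map positions to particles
    return particles[idx]


# Skewed weights: one dominant particle -> low ESS -> resampling duplicates it
# and discards most of the low-weight (worst-case) particles.
particles = np.linspace(-1.0, 1.0, 10)
w = np.array([0.82] + [0.02] * 9)
ess = effective_sample_size(w)
resampled = systematic_resample(particles, w)
```

After resampling, the dominant particle is replicated roughly in proportion to its weight, which is exactly the mechanism that culls poor samples and lifts worst-case quality.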
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort put into the review. We are encouraged that the reviewer appreciated LD-SMC's perceptual quality and the evaluation part. We address the reviewer’s comments in the following. All answers and suggestions will be incorporated in the paper. **(1)** > “This work suffers from a significant loss in distortion quality.” Across 24 comparisons of distortion metrics (including free-form inpainting in comment (2) to Reviewer sS3Y), LD-SMC is first 3 times, second 10 times, third 10 times, and fourth one time. Hence, in our view, LD-SMC is comparable to baseline methods in distortion metrics. In general, there can be a trade-off between metrics that represent perceptual quality and those that represent distortion (Blau & Michaeli, 2018). As mentioned in the paper, we gave stronger emphasis to the perceptual metrics, since the goal, as we see it, is to obtain high-quality images. Conversely, we could have put more emphasis on distortion metrics, for example when performing a hyper-parameter search, but it would have come at the expense of the perceptual quality. **(2)** > Moving distortion metrics to the main text Thank you for the suggestion. Due to lack of space, we report in the main text the FID and NIQE, which are considered perceptual metrics, and LPIPS, a distortion metric. Note that it is common in the literature to report only the FID and LPIPS in the main tables (e.g., Chung et al., 2023b; Dou & Song, 2024). Nevertheless, we will include all the metrics in the next version of the paper. **(3)** > “The number of particles used in the proposed method is 5, is there any reason for choosing this number?” Thank you for this suggestion. We chose 5 particles because of run-time and memory considerations. In SMC one can expect an improvement in performance when increasing the number of particles, yet, as the reviewer mentioned, in the latent space it can be costly due to encoder-decoder operations.
Following the reviewer's suggestion, Table 6 in https://tinyurl.com/p6hh6bjy shows an ablation on the number of particles. The table shows a general trend of improvement in perceptual and distortion metrics when increasing the number of particles. Importantly, even taking only one particle results in favorable performance for LD-SMC compared to baseline methods at a similar computational cost. Per comment (7) to Reviewer 326R, please note that there are small changes from the results reported in the paper for LD-SMC. In addition, we examine the performance of LD-SMC using one particle and multiple Gibbs iterations in Tables 7 & 8 at https://tinyurl.com/p6hh6bjy. From the tables, results can be improved even when using one particle (which reduces the computational demand of the method) by using multiple Gibbs iterations. **(4)** > “Please provide a computational cost comparison with baseline methods including memory and inference time per image.” Thank you for this suggestion. We add here a comparison between the methods in average run-time (seconds) and memory (GB) over 10 trials for sampling a single image. For LD-SMC we inspect several variants, including using only 1 particle and multiple Gibbs iterations. As can be seen in comment (3), using 1 particle allows us to almost match the results of LD-SMC with 5 particles in inpainting tasks, and when allowing multiple Gibbs iterations the performance can be further enhanced in Gaussian deblurring and super-resolution. From the table, the run-time is roughly linear in the number of particles and Gibbs iterations. Yet, importantly, it can be controlled by the practitioner to trade off performance (which can be good with one Gibbs iteration and one particle) and computational demand. In addition, please note that our code is not properly optimized and we believe that large improvements can be made in the run time of LD-SMC.

| Method | Run time (sec.) | Memory (GB) |
|----|---|---|
| Latent DPS | 105.5 | 8.123 |
| Latent TDS | 418.5 | 19.86 |
| ReSample | 333.4 | 5.769 |
| PSLD | 129.8 | 9.590 |
| LD-SMC (1 particle) | 136.3 | 9.213 |
| LD-SMC (3 particles) | 375.1 | 15.11 |
| LD-SMC (5 particles) | 537.2 | 21.16 |
| LD-SMC (10 particles) | 1013 | 35.78 |
| LD-SMC (1 particle; 2 Gibbs iterations) | 271.2 | 9.213 |
| LD-SMC (1 particle; 4 Gibbs iterations) | 541.0 | 9.213 |

**(5)** > “By using the weighting and resampling [...] an improvement in the worst-case performance, [...] I am wondering if the authors can add an experiment or discussion to cover this part.” Thank you for this valuable feedback. Indeed, the weighting and resampling correct for the approximations in the proposal distribution and posterior (Section 4.2.2). We formalize this claim in comment (7) to Reviewer 326R. --- Rebuttal Comment 1.1: Comment: Dear authors, I appreciate your response. You've effectively resolved all my questions. I will raise my score to accept (4). --- Reply to Comment 1.1.1: Comment: We are glad that our additional clarifications and results successfully addressed your concerns. We will integrate all the changes in the paper. Thank you for raising the score and for your valuable and supportive feedback!
Summary: This paper studies inverse problems using sequential Monte Carlo sampling in a latent space. Generative models are great priors for inverse problems. Although diffusion models achieve great performance, their computationally expensive reverse process and sequential nature make leveraging them in inverse problems technically challenging. To address the technical difficulty of sampling from the exact posterior distribution, the authors proposed a sequential Monte Carlo-based sampling method with a new posterior approximation and proposal distribution. The proposed method achieves competitive performance. Claims And Evidence: Claim 1. The authors claim that the proposed method is effective in capturing both large-scale patterns and local fine-grained details. Incorporating diffusion models into inverse problems is technically challenging, especially for the joint posterior of the original image x at all time points, namely, p(x_0, … x_T | y). In the literature, recent works have attempted to address this challenge in various ways. One line of approaches, based on P(Y|E[x_0|x_t]), captures large-scale patterns, and the other line, with auxiliary y_{1:T}, captures fine-grained local details. The authors claimed that the proposed method achieves the best of both. Figure 2 demonstrates the traits of the two different approaches, and the proposed method outperforms both lines of methods in the inpainting task. Figure 4 shows additional results on deblurring, but these are not analyzed quantitatively. Methods And Evaluation Criteria: Method. The proposed method is reasonable and is a variant of recent approaches. Sequential Monte Carlo has proved effective in sampling with diffusion models. The approximation in Section 4.2.2 is introduced but no analysis of this approximation is provided. Evaluation. The evaluation was performed similarly to previous works. 256 x 256 ImageNet and FFHQ 256 x 256 with 1024 samples were used.
The authors have reported the performance in various evaluation metrics: FID, NIQE, LPIPS, SSIM, PSNR. The authors mainly presented perceptual quality metrics; distortion metrics (PSNR, SSIM) are presented in the appendix. Overall, the proposed method and evaluation are reasonable. Theoretical Claims: No theoretical results are presented. Experimental Designs Or Analyses: The experimental setting is pretty standard. No discussion is needed. However, more baselines could be included. Supplementary Material: I read the Appendix, including experimental details, details of the proposal distribution, and full results with distortion metrics from Table 4 to Table 9. Several of the proposed method's qualitative results were reviewed. Relation To Broader Scientific Literature: No relation to any findings in the scientific literature is explicitly discussed by the authors, and I did not observe any either. Essential References Not Discussed: More baselines: RED-diff, DDRM, DDNM$^+$, $\Pi$GDM, and so on. To make the comparison more comprehensive, the authors may want to include other diffusion model-based inverse problem methods even if they are not performed in the latent space. Other Strengths And Weaknesses: Strengths 1. Interesting exploration. This paper studies a new sampling scheme, inspired by prior work on sequential Monte Carlo sampling, for inverse problems with a new proposal distribution and its update. Weaknesses 1. Weak experimental results. Although the proposed method achieves competitive performance in inpainting, its efficacy in other inverse problems is not fully proven. The method exhibits poor performance in deblurring and does not outperform baselines, especially in distortion metrics. Also, experiments on other inverse problems are missing, such as super-resolution, motion deblurring, colorization, and so on. 2. No analysis of the main contributions: approximation and proposal distribution.
The accuracy of the approximation and the efficiency of the proposal distribution (computational cost, distance to the posterior distribution or gap induced by the proposal distribution, the number of rejections if rejection sampling schemes are utilized) are not discussed.

Other Comments Or Suggestions: If performance on all metrics were provided in the main paper, it would be easier for readers to understand the strengths and weaknesses of the proposed method. The difference between the proposed method and existing SMC methods needs to be highlighted.

Questions For Authors:
Q. Are there any quantitative results to separately evaluate the quality of restored images in terms of large patterns and local details? Regarding the distortion metrics PSNR, SSIM, and LPIPS in Table 4 and other tables in the supplement, ReSample is overall comparable to, and often outperforms, LD-SMC. Please provide more explanation.
Q. Analysis of the proposal distribution? If a proposal distribution is closer to the posterior distribution and easy to sample from, then the proposal distribution is ideal. The effect of the proposal distribution update is reported in Table 3, but this kind of thorough analysis is not presented. Also, compared to other simple proposal distributions, the performance gain is not fully analyzed.
Q. Similar to other methods, computational costs (the number of parameters, latency) are not compared. Does the proposed method induce additional computational overhead?
Q. Why does the proposed method exhibit weak performance on Gaussian deblurring?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort put into the review. We are encouraged that the reviewer regarded our approach as *Interesting exploration* and overall appreciated it. We address the reviewer’s comments in the following. All answers and suggestions will be incorporated in the paper.

**(1)**
> ”More baselines: RED-diff, DDRM, DDNM$^+$, $\Pi$GDM [...]”

Thank you for pointing us to these methods. We will make sure to reference them in the paper. These baselines were not included since the values reported in them are based on a different diffusion model and hence are not comparable. As such, due to computational limitations, we picked the set of recent works that were closest to our approach and could be applied with latent diffusion, and we ran all of the baselines ourselves under the exact same experimental setup. Please see comment (3) to reviewer yJi6 for additional baselines added here in the rebuttal.

**(2)**
> ”Weak experimental results [...] poor performance in deblurring [...] especially in distortion metrics. Also, experiments in other inverse problems are missing [...]”

Thank you for the comment, but we believe otherwise regarding the results of LD-SMC. First, we would like to stress that there can be a trade-off between metrics that represent perceptual quality and those that represent distortion (Blau & Michaeli, 2018). We gave a stronger emphasis to the perceptual metrics, since the goal, as we see it, is to obtain high-quality images. Second, in image generation, the metrics can be biased, and visual inspection should be taken into account as well. While we did not achieve SoTA results on Gaussian deblurring, visually most methods yield good and comparable reconstructions. This makes the distinction between methods on Gaussian deblurring very nuanced, and it is not obvious whether the metrics used are sufficient to capture the differences between all methods.
We, therefore, focused our efforts on the more challenging inpainting task, where LD-SMC significantly outperforms baseline methods, both visually and in terms of perceptual metrics. Regarding additional tasks, we present results for super-resolution in the paper. In addition, we add here results for free-form inpainting on ImageNet and FFHQ following the protocol suggested by Saharia et al. (2022). For numerical results please see Tables 3 & 4 in https://tinyurl.com/p6hh6bjy. Qualitative examples can be seen at https://tinyurl.com/2u6mmj2x. From the tables, LD-SMC outperforms baseline methods in terms of perceptual metrics and is comparable to baseline methods in terms of distortion metrics. Visually, as in box inpainting tasks, LD-SMC reconstructions better preserve fine details compared to baseline methods. In addition, despite having good values in several metrics, ReSample presents artifacts that make the images look non-natural.

**(3)**
> “No analysis on the main contributions: approximation and proposal distribution.”

Thank you for this valuable feedback. Please see comment (7) to reviewer 326R, where we touch upon this point. Regarding the proposal, the theory is very general and allows large leeway to pick an appropriate distribution. Ideally, we would want a proposal that generates samples that are in agreement with $\mathbf{y}_0$ and have a high likelihood. We experimented with numerous formulations and picked the one that worked best.

**(4)**
> Difference from existing SMC methods

Thank you for the suggestion. Please see comment (3) to Reviewer yJi6, in which we discuss other SMC methods.

**(5)**
> ”Any quantitative results to separately evaluate the quality of restored images in large patterns and local details.”

We believe common metrics such as FID and NIQE reflect that. It can be witnessed visually as well.
**(6)**
> “Resample is overall comparable or often outperforms LD-SMC”

In terms of perceptual metrics, LD-SMC outperforms ReSample (13/16 comparisons in favor of LD-SMC). In terms of distortion metrics, ReSample indeed has an advantage (15/24 comparisons in favor of ReSample). Visually, ReSample presents significant artifacts in all tasks, especially inpainting. An additional visual comparison on ImageNet Gaussian deblurring is here: https://tinyurl.com/54pec8kn. Zooming in on ReSample images, noticeable artifacts can be seen in the generated images.

**(7)**
> “Compared to other simple proposal distributions, the performance gain is not fully analyzed”

Thank you for this suggestion. Table 5 in https://tinyurl.com/p6hh6bjy shows a comparison of LD-SMC with the proposed proposal distribution and two alternatives: DPS as a proposal, namely taking the hyper-parameter $s=0$, and the prior as a proposal distribution. From the table, the LD-SMC proposal outperforms both alternatives in most metrics.

**(8)**
> Computational cost of LD-SMC

Thank you for this suggestion; please see comment (4) to Reviewer AxT3, who also raised this point.

---

Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response. The rebuttal addressed some of the concerns raised in my initial review. In particular, the authors revisited their proposal distribution and compared the Prior proposal across Tables 1 to 5 in the 'Additional Results' section on the anonymous GitHub repository. However, its empirical benefit is not fully demonstrated. Furthermore, the proposed method was not compared against alternative methods and failed to achieve competitive performance. In addition, the main contribution of this work is the extension of SMC to the latent space. Overall, the technical contributions remain somewhat limited. Therefore, I will stick to my original rating.

---

Reply to Comment 1.1.1:
Comment: Thank you for the additional feedback. Please see our comments below.

**1.
Empirical benefit of the proposal:** First, please note that in Table 5 we compare our proposal distribution to the prior proposal distribution **and** to DPS as a proposal. Indeed, other proposal distributions can be constructed, but this one yielded perceptually good images, especially in inpainting tasks. In addition, per comment 2 to reviewer yJi6, in the figures at https://tinyurl.com/yrht7az9 (ImageNet) and https://tinyurl.com/2snc4vdz (FFHQ) we analyze the effect of varying the hyper-parameter $s$. Taking $s=0$ leads to a large degradation in the FID. Conversely, taking $s$ to be too large results in a steep degradation in the PSNR between $s=333$ and $s=500$. Using $s=333$ strikes a good balance between the FID and PSNR in inpainting tasks. We will be happy to perform additional comparisons to other proposal distributions according to the reviewers’ suggestions.

**2. The proposed method was not compared against alternative methods:** Here in the rebuttal we added two recent baselines, PFLD and LatentDAPS. Both are suited for diffusion models in the latent space, and we were able to run both under exactly the same experimental setup as LD-SMC. The comparison is shown in Tables 1 and 2 here: https://tinyurl.com/p6hh6bjy. LD-SMC significantly outperforms both in FID, NIQE, and LPIPS. LatentDAPS has an advantage in PSNR and SSIM, but this does not disqualify the merits of LD-SMC. Regarding the initial proposal of the reviewer, “RED-diff, DDRM, DDNM$^+$, $\Pi$GDM [...]”: although the numbers in these papers are not comparable to ours due to a mismatch in the setup (e.g., a different diffusion model), we will add them and other common methods to the paper for completeness. Specifically, DDRM is designed for linear inverse problems and hence is not applicable to our use case.
The other baselines can be applied with non-linear operators but, to the best of our knowledge, were not tested using latent diffusion with a decoder network, which is highly non-linear and imposes a significant computational burden. The different setup, including the decoder involvement, may require non-trivial changes and an extensive hyper-parameter search for these baselines; nevertheless, we will examine how to implement them in our case as well. Due to lack of time, we cannot do this by the end of the rebuttal.

**3. LD-SMC performance:** In comparison to the methods presented in the paper, in terms of the perceptual metrics, LD-SMC is first 8 times, second 5 times, and third 3 times. In distortion metrics, LD-SMC is first 4 times, second 13 times, and third 7 times. In terms of perceptual metrics, LD-SMC has a clear advantage, and in terms of distortion metrics it is competitive. However, as we stated, we chose to put more emphasis on perceptual quality, which is reflected in the perceptual metrics and visual inspection. Conversely, we could have made other design choices (e.g., in the proposal distribution) or tuned hyper-parameters to favor distortion metrics, but we believe the former is more important.

**4. Technical contribution:** There are several novel contributions in this work. First, we show how to combine auxiliary variables with latent space diffusion models. Second, we construct a generative model and perform inference using Gibbs sampling, of which the SMC is only one part. Third, per comment 7 to reviewer 326R, we theoretically show that LD-SMC is asymptotically accurate, namely that it can sample from $p_\theta(\mathbf{z}_0 | \mathbf{y}_0)$ despite all the approximations made (link to proof: https://tinyurl.com/3rdvpnad). Lastly, we show significant improvements in perceptual quality in inpainting tasks, arguably one of the most challenging inverse problem tasks.
Overall, we believe LD-SMC lays a good foundation for SMC methods for inverse problems in the latent space of diffusion models. We will be happy to provide further clarifications, and we kindly ask the reviewer to reevaluate our paper and the score based on our comments and the merits of our method.
Proto Successor Measure: Representing the Behavior Space of an RL Agent
Accept (poster)
Summary: The paper constructs a linear framework, "Proto Successor Measures", for classifying the space of Q functions, a generalization of successor features. The paper provides some theoretical results on PSM, showing how they can be learned from offline data and used for inference at test time. Experiments show the advantage of PSM over Laplacian and Forward-Backward baselines on gridworlds and locomotion tasks.

Claims And Evidence:
Claim: Visitation distributions lie on a convex hull defined by affine constraints. Evidence: Theorems 4.1, 4.2.
Claim: Better generalization properties. Evidence: Theorem 4.4, Fig 1, Table 1.
Claim: The optimal policy can be produced for any reward function. Evidence: This seems over-stated. At best, further samples are needed from the environment to infer a reward function. This could be problematic especially in sparse environments.

Methods And Evaluation Criteria: The discrete codebook seems practically relevant and improves efficiency. The evaluation on diverse environments showcases the efficacy of the method in a variety of settings.

Theoretical Claims: Many results stem from the linearity of the framework. Making use of the Bellman flow equation is nice.
Theorem 4.1: Its proof, based on the Bellman flow equation and related arguments, appears sound.
Cor 4.2: The affine set proof follows straightforwardly.
Theorem 4.4: The notation is somewhat confusing for me here. E.g. maybe there are missing brackets around $span \\{ \Phi^{vf} \\}$? Fixing typography $span \to \textrm{span}$ can also help readability. The proof seems to follow with a straightforward rearrangement of terms, but it is a bit opaque (to me): can you provide some further intuition on terms like $\beta^\pi, k$, and their ratio? L692 is a bit confusingly worded, I feel. Do you mean to say that you've shown the same $V^\pi$ is represented in $\\{ span \\{\Phi \\} r \\}$?
The construction of $\Phi(s,s')$ from the indicator function seems trivial, but I might be missing something deeper here. Some further explanation would be appreciated! If I understand correctly, the proof is highlighting that only factorized $\Phi$ lead to value functions represented in the smaller span?
- Lets $\to$ Let's
- L709, Value $\to$ value

Experimental Designs Or Analyses: Experiments are aligned with the claims.
- Visualization of the Q function in Fig 3 is nice, but how close is this to the optimal Q function?
- Fig 4: What are update steps? $3$ seeds is quite small -- adding more could strengthen the statistics.
- Table 3 helps understand different representation sizes (based on comments in final section)
- Table 4: Do you know how such errors scale with size of maze?
- How is there any "smoothness" between goals in the DMC tasks? They seem quite distinct, so how is there a generalization effect among tasks?

Supplementary Material: Please carefully re-read for typos and readability (in main text too); I found many portions poorly worded or with typos.

Relation To Broader Scientific Literature: This paper nicely positions PSM as a "successor" to the successor features framework, which has proven itself in recent years. Generalizing the basis with policy-independence is a useful step forward for such zero-shot solutions and understanding the MDP geometry.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A (please see other comments)

Other Comments Or Suggestions:
- Could you further clarify the use/meaning of the bias vector and how its presence departs from typical SF?
- Is there any generality lost in the linearity of the framework? What are possible downsides/alternatives/missing generalizations/adversarial reward functions to keep in mind?
- Can you provide an experiment showing the result of small datasets/dataset scaling?

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We thank the reviewer for providing detailed feedback on our paper. We would like to address the concerns raised by the reviewer:

> At best, further samples are needed from the environment to infer a reward function. This could be problematic especially in sparse environments.

We agree with the reviewer’s observation that sparse-reward environments can be problematic in practice. Mathematically, our method can be used to obtain the near-optimal policy for any reward function. But in practice, the quantity $\sum_{s'} \phi(s, a, s')r(s')$ is approximated by $\mathbb{E}_{s' \sim \rho}[\phi(s, a, s') r(s')]$. This restricts the knowledge of the reward function to only samples from the function, which could affect optimality in some situations. We will soften the claim in the camera-ready version.

> Theorem 4.4: The notation is somewhat confusing for me here. Fixing typography can also help readability.

We thank the reviewer for pointing out that the theorem statement might be confusing. We shall rewrite the theorem statement in the camera-ready version. Through this theorem, in line with the reviewer’s understanding, we wished to convey that spanning successor measures using a basis covers a larger set of RL solutions than spanning value functions using a basis.

> The proof seems to follow with a straightforward rearrangement of terms, but it is a bit opaque (to me):

The proof establishes that for every instance of a $V^\pi \in \mathrm{span}\\{\Phi^{vf}\\}$, there exists an instance of $V^\pi$ in $\mathrm{span}\\{\Phi\\} r$ for some reward, while the reverse is not always true. $\beta^\pi$ denotes the weights of the linear combination of $\phi^{vf}$ when representing $V^\pi$, and $k_i = \sum_{s'}r(s')$ is used for algebraic manipulation. We appreciate the reviewer’s concern and will make the proof clearer in the camera-ready version by adding explanations for the different algebraic manipulations used.

> Fig 4: What are update steps?
> $3$ seeds is quite small -- adding more could strengthen the statistics.

Update steps means the number of learning updates made to the bases and bias. $\phi$ and $b$ were evaluated on their zero-shot performance after every fixed number of training steps. We thank the reviewer for pointing this out and will add more seeds for the camera-ready version.

> Table 4: Do you know how such errors scale with size of maze?

There are 50 and 68 cells respectively in the grid world and the four-room maze, so the percentages of wrong predictions for PSM are 4.1% and 16.97%. This is in comparison to 29.06% and 42.35% for FB and 38.56% and 56.57% for Laplace.

> How is there any "smoothness" between goals in the DMC tasks? They seem quite distinct, so how is there a generalization effect among tasks?

We are not sure we understand the reviewer’s question: most of the tasks we evaluate for DMC use dense rewards as opposed to goals. These tasks are the standard ones that prior methods (Touati et al., 2023; Park et al., 2024) used for evaluating unsupervised RL performance. PSM uses the unlabelled transition dataset to learn all possible behaviors (stitched trajectories). Near-optimal policies for each of these tasks correspond to a particular stitching.

> Could you further clarify the use/meaning of the bias vector and how its presence departs from typical SF?

The bias vector comes from the fact that the Bellman equation is affine and not linear. The significance is that a zero visitation/successor measure is not permitted, which might be permitted in SF. Note that it is possible to recover the bias by representing successor measures as $\phi w$ (if the last coordinate of $w$ is learnt to be 1), but the bias term forces a mathematically correct representation. What truly departs from SF is the absence of the assumption of a linear mapping between rewards and policy embeddings.

> Is there any generality lost in the linearity of the framework?
> What are possible downsides/alternatives/missing generalizations/adversarial reward functions to keep in mind?

The PSM objective ideally represents all possible successor measures, contrary to SF-based methods, which focus on learning successor measures optimal for a particular class of reward functions (linear in the state representation). This means they will fail to capture a successor measure if it is not optimal for some reward function belonging to that class. Second, they assume linearity between rewards and zero-shot policy embeddings, fundamentally disallowing a many-to-many mapping between policies and rewards.

> Can you provide an experiment showing the result of small datasets/dataset scaling?

We ran PSM on a reduced dataset (selecting only 10% of the trajectories) of the RND dataset for walker and cheetah. While the performance on walker remains 656.08 (full dataset performance: 689.07), the performance on cheetah dips to 211.53 (full dataset performance: 626.01). We will add detailed experiments in the camera-ready version.
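The approximation discussed at the top of this rebuttal — replacing $\sum_{s'} \phi(s,a,s') r(s')$ with a sample-based estimate under $\rho$ — is easy to illustrate on synthetic arrays. Everything below (the sizes, the random $\phi$, the uniform $\rho$) is a toy stand-in of my own, not the paper's learned basis; with uniform $\rho$, the expectation recovers the sum up to a known factor of $S$.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 20, 4                         # toy state/action counts

phi = rng.normal(size=(S, A, S))     # stand-in for a learned basis component
r = rng.normal(size=S)               # reward queried only at sampled states
rho = np.full(S, 1.0 / S)            # data distribution over s' (uniform here)

# Exact quantity: sum over s' of phi(s, a, s') * r(s').
q_exact = phi @ r

# Sample-based estimate using only reward evaluations at states drawn from rho.
# With uniform rho, E_{s'~rho}[phi * r] = (1/S) * sum, hence the factor S.
n = 100_000
idx = rng.choice(S, size=n, p=rho)
q_mc = (phi[:, :, idx] * r[idx]).mean(axis=-1) * S

err = np.abs(q_mc - q_exact).max()
```

The estimate only ever touches `r` at sampled states, which is why sparse rewards that $\rho$ rarely hits can make this step unreliable, as the rebuttal concedes.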
Summary: The paper introduces Proto Successor Measure (PSM), a basis set to represent all possible behaviors of an RL agent in an environment. The key insight is that any valid successor measure must satisfy the Bellman flow equation. By rearranging the Bellman flow equation, one gets an affine equation. Any solution to this affine equation can be represented as an affine combination of a basis set, where the bases and the bias term are independent of the policy. Hence, by learning this basis set, one can represent the successor measure of any policy by finding the corresponding set of linear weights $w$. Given a downstream reward, one can solve a linear program to obtain the $w$ corresponding to the successor measure for the optimal behavior, obtain the optimal Q function, and then back out the optimal policy. With these theoretical foundations, the paper introduces a practical algorithm to learn the basis functions using reward-free interaction data. This involves using a seeded RNG to draw samples from the policy space. The method is evaluated on a selection of manipulation and locomotion zero-shot RL problems from FetchReach and ExoRL, where PSM exhibits superior transfer behavior compared to prior zero-shot RL baselines.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Empirically, the paper claims that PSM leads to more desirable value functions and better transfer performance. The visualizations of value functions in the Gridworld and Four-room environments (Figure 3) show that PSM value functions are more concentrated than the FB and Laplace value functions. On FetchReach and ExoRL (Figure 4, Table 1), PSM achieves higher episode returns than baselines when transferring to downstream rewards. For theoretical claims, please refer to the discussion in the "Theoretical Claims" section.
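For concreteness, the policy-independence argument summarized above can be written out in the tabular case (my notation; the paper's symbols may differ). Summing the Bellman flow equation over actions eliminates the policy:

```latex
\[
\sum_{a} d(s,a) \;-\; \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a')
  \;=\; (1-\gamma)\,\mu_0(s) \qquad \forall s .
\]
% Stacked over states this reads $A\,d = (1-\gamma)\mu_0$, a linear system whose
% coefficients never mention $\pi$. Every valid visitation therefore lies in the
% policy-independent affine set
\[
d \;=\; \Phi\, w + b ,
\]
% with $\Phi$ a basis of $\operatorname{null}(A)$ and $b$ any particular
% solution; only the weights $w$ vary with the policy.
```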
Methods And Evaluation Criteria: This paper applies PSM to the problem of zero-shot reinforcement learning, which aims to learn a representation of policies from a reward-free dataset and quickly obtain the optimal policy for some test-time reward functions without additional environment interaction. The evaluation domains properly represent this problem, where the methods are trained on some large reward-free datasets and transferred to multiple different reward functions.

Theoretical Claims: I checked the correctness of the proofs for the theoretical claims. The paper provides three sets of theoretical claims:
1. (Theorem 4.1, Corollary 4.2, Corollary 4.3) Deriving proto successor measure as the basis set for solutions to an affine equation.
2. (Theorem 4.2) The basis value functions represent a smaller space of value functions than proto successor measures.
3. (Theorem 6.1, Lemma 6.2, Theorem 6.3) Connects PSM to successor features, showing that one can decompose PSM into successor features corresponding to some state features.

These proofs largely follow the mechanisms in [1]. The proofs for these claims appear to be sound.

[1] Ahmed Touati, Jérémy Rapin, Yann Ollivier. Does Zero-Shot Reinforcement Learning Exist? ICLR 2023.

Experimental Designs Or Analyses: I checked the soundness of the experimental designs, including the Gridworld analysis, the FetchReach environment, and the ExoRL benchmark.

Supplementary Material: I checked all parts of the supplementary material, including proofs and additional details.

Relation To Broader Scientific Literature: The key contributions are built on a line of prior works on zero-shot reinforcement learning. Particularly relevant is the FB representation [1,2], which proposes to decompose a successor measure into a product of a forward representation and a backward representation.
Unlike FB, where the representations are conditioned on some policy representation $z$, PSM learns policy-independent basis functions, which is more desirable.

[1] Ahmed Touati, Yann Ollivier. Learning One Representation to Optimize All Rewards. NeurIPS 2021.
[2] Ahmed Touati, Jérémy Rapin, Yann Ollivier. Does Zero-Shot Reinforcement Learning Exist? ICLR 2023.

Essential References Not Discussed: The paper includes sufficient context for the reader to understand its contributions.

Other Strengths And Weaknesses:
**Strengths**
1. This work proposes a principled approach to zero-shot RL, a problem of great interest to the community.
2. The derivations are driven by first principles and mathematically sound. The practical implementation achieves superior empirical performance compared to relevant baselines.
3. This work improves upon prior zero-shot RL methods by learning representations truly independent of the policy and in principle capable of representing all behaviors in an RL environment.

**Weaknesses**
1. The inference for the linear weights involves solving a linear program with constraints. This is harder to solve than the linear regression in prior works.
2. PSM does not directly produce policies. It produces the successor measure corresponding to the optimal policy for the downstream reward. To back out a policy for a continuous domain, one needs to recover the Q value and then perform several iterations of policy optimization to take actions that maximize the Q values. In some prior work, the definition of zero-shot RL is no additional policy optimization. PSM would violate this strict definition of zero-shot RL.
3. The paper does not address the data assumption. In principle, the dataset needs to have full coverage of the state and action spaces to learn a basis set that spans the entire space of successor measures.

Other Comments Or Suggestions:
1. Line 43 typo "familia r"
2. Line 89 typo "ge neralization"
3.
Line 177 "Successor measures are more general than state-action visitation distributions as they encode the visitation of the policy conditioned on a starting state-action pair." This statement is inaccurate because successor measures are a special case of state-action visitation distributions (equation 2).
4. Line 189 "we do not need the first constraint on the linear program (in Equation 3) anymore" -> the second constraint.

Questions For Authors:
1. How do you solve (4) in continuous space? Do you sample (s, a) from a dataset?
2. How do you determine the dimensionality of the basis set? Can you provide an ablation of the dimensionality of the bases?
3. In hyperparameters (Table 2), why does w have 3 layers? Is it represented by a neural network?

## Updates After Rebuttal
The authors have adequately addressed my questions. I will maintain my score.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We appreciate the reviewer for providing detailed feedback on our paper. We would like to clarify all the questions raised by the reviewer:

> The inference for the linear weights involves solving a linear program with constraints. This is harder to solve than linear regression in prior works.

Yes, we agree that the full PSM objective requires solving a constrained LP, which can be harder than the simple linear regression used in some of the prior work. However, other methods relying on linear regression rely on the assumption that policy embeddings $z$ have a linear relation to reward functions. As we have shown in Theorem 6.3 (Line 318-328, right column), enforcing this assumption in PSM also enables inference through linear regression.

> PSM does not directly produce policies. It produces the successor measure corresponding to the optimal policy for the downstream reward. To back out a policy for a continuous domain, one needs to recover the Q value and then perform several iterations of policy optimization to take actions that maximize the Q values. In some prior work, the definition of zero-shot RL is no additional policy optimization. PSM would violate this strict definition of zero-shot RL.

Following our explanation in comment 1 above, the full objective of PSM does require additional optimization to obtain a policy from successor measures, but these optimizations are much easier than solving an RL problem and are similar to a policy evaluation problem. With the additional assumption of rewards linear in the state features obtained by PSM, we can enable true zero-shot inference without additional optimization. The key benefit of PSM lies in its flexibility to accurately represent all behaviors of an agent; with an additional assumption, it can also outperform prior works in zero-shot RL.

> The paper does not address the data assumption.
> In principle, the dataset needs to have full coverage of the state and action spaces to learn a basis set that spans the entire space of successor measures.

The capabilities of any unsupervised learning algorithm are intrinsically tied to the dataset it is learning from. PSM and its unsupervised RL counterparts FB and HILP are all limited by the dataset, but PSM aims to learn to represent a more diverse range of skills by virtue of its objective. This very property can be beneficial in practice: when using real-world datasets to learn skills, we would avoid focusing on skills that are irrelevant in the real world. With complete coverage of the space, PSM will learn the basis that attempts to span the entire space of successor measures.

> Typos

We thank the reviewer for pointing out these typos. We shall fix them in the camera-ready version.

> How do you solve (4) in continuous space? Do you sample (s, a) from a dataset?

In both continuous and discrete spaces, we sample (s, a) from the dataset.

> How do you determine the dimensionality of the basis set? Can you provide an ablation of the dimensionality of the bases?

We have provided an ablation for the dimensionality of the bases in Appendix C.1. The dimensionality of the bases can potentially vary depending on the dynamics. For instance, if the dynamics are very regular (symmetric and Lipschitz, as a lot of real-world domains are), the dimensionality may be far less than the mathematical limit of S x A. An example can be constructed similar to the one in Section B.2 in [1].

[1]: Touati, Ahmed, and Yann Ollivier. "Learning one representation to optimize all rewards." Advances in Neural Information Processing Systems 34 (2021): 13-23.

> In hyperparameters (Table 2), why does w have 3 layers? Is it represented by a neural network?

During pretraining, i.e., obtaining the bases of PSM, we represent w as a function of the policy representation: the discrete code c.
As a result, we represent $w$ as a neural network that maps $c$ to $\mathbb{R}^d$.
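To ground the LP-vs-linear-regression discussion in this exchange, the occupancy-measure LP that PSM-style inference builds on (the classic dual LP of an MDP, here written over the visitation $d$ directly rather than over the learned weights $w$) can be sketched as follows. The MDP below is a random toy of my own, and `scipy.optimize.linprog` stands in for whatever solver one would actually use.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s'] transition kernel
mu0 = np.full(S, 1.0 / S)                    # initial state distribution
r = rng.normal(size=(S, A))                  # downstream reward, revealed at test time

# Bellman flow constraints A_eq d = (1 - gamma) mu0, with d >= 0.
A_eq = np.zeros((S, S * A))
for s in range(S):
    for sp in range(S):
        for ap in range(A):
            A_eq[s, sp * A + ap] = (1.0 if sp == s else 0.0) - gamma * P[sp, ap, s]

# Maximize expected discounted reward r . d  (linprog minimizes, so negate).
res = linprog(-r.ravel(), A_eq=A_eq, b_eq=(1 - gamma) * mu0,
              bounds=(0, None), method="highs")
d_opt = res.x.reshape(S, A)
greedy_policy = d_opt.argmax(axis=1)         # deterministic policy read off d
```

In PSM the search would instead run over the low-dimensional weights $w$ with $d = \Phi w + b$, so the flow constraints are satisfied by construction and only the positivity constraints remain; the structure of the program is otherwise the same.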
Summary: The paper investigates representation learning in RL with the aim of performing zero-shot learning: computing optimal policies on downstream tasks without any further training. Building on earlier works about representation learning in a reward-free setting, especially on successor representations, it proposes a new representation called "proto-successor measure" (PSM) and gives a procedure to learn this representation. It is based on the observation that successor measures (the discounted distribution of states under a given policy) satisfy a Bellman equation which is independent of the policy. As solutions of linear systems, successor measures and related objects thus span an affine subspace. The proto-successor measure then consists in computing an approximation of this affine subspace. This theory is completed with experimental work in gridworld and continuous control environments, exhibiting better performance than forward-backward, HILP, and Laplacian representations, which are popular representations that have attracted a lot of attention recently.

## Update after rebuttal:
I thank the authors for addressing my last questions. I agree with the authors that PSM may bring a complementary and novel idea to the FB representation of Touati et al., so I will maintain my score (weak accept).

Claims And Evidence: The authors claim their work on the PSM representation gives a "novel, mathematically complete perspective" on representation learning in RL, together with an efficient algorithm. While I think the present work has an interest, I am a bit more reserved about its ground-breaking aspect. From a theoretical perspective, I think the main contribution is the observation that there exists a Bellman equation independent of the policy. This is a simple fact which could nonetheless have powerful consequences, and to the best of my knowledge this is indeed new.
However I don't think the rest of the theoretical content goes much deeper than this: the main result Theorem 4.1 and its corollaries, for instance, are simple statements that solutions of a linear system span an affine subspace. So I think the "mathematically complete" mention is a bit exaggerated here. The paper would require a much more quantitative analysis to deserve that mention. When it comes to the experiments, the authors underline the apparently better performances of PSM over other representations like the forward-backward (FB) model of Touati \& al, but I am not sure this comparison is really fair. I believe the generalization power of PSM is a bit different: as the authors write in Theorem 6.3, the PSM representation factorizes the successor measure as $M^{\pi}(s,a,s^{+}) = \phi(s,a,s^{+}) w^{\pi}$, while the FB model adds a further factorization in that $M^{\pi}(s,a,s^{+}) = \psi^{\pi}(s,a) \phi(s^{+})$ (I consider a tabular setting). Therefore I think PSM allows a generalization over policies, but it remains intrinsically a high-dimensional ($S^2 A$), more complex representation than FB, which generalizes both over policies and state-actions. Thus it hardly comes as a surprise that PSM provides more expressive representations than all other benchmarks considered here, and I suspect that in a tabular setting the sample-complexity of learning PSM could scale as $\Theta(S^2 A)$. Methods And Evaluation Criteria: The overall method to learn the PSM representation makes sense. I believe however there is a typo in the loss function p.5, I think there is a square as well as a $(1-\gamma)$ factor missing. Also I haven't quite understood the idea of the discrete codebook of policies, and in particular why the "approach provably samples from among all possible deterministic policies". For evaluation criteria I don't have concerns other than the one above: I am not sure PSM really compares with FB, the Laplacian, and the like. 
Theoretical Claims: I haven't checked all the proofs but I think the results are simple enough to be correct. Experimental Designs Or Analyses: I don't have more to say than in "Methods and Evaluation Criteria". Supplementary Material: I have looked briefly at the appendix. I think I noticed a typo in Eq. (17), it seems to me that $M_{\pi}$ should be $M_{\pi}^{\top}$. Relation To Broader Scientific Literature: The paper seems to be heavily inspired by works on Laplacian representations as well as the recent line of work of Touati \& al who proposed the forward-backward model representation to perform zero-shot learning. The present paper starts by considering the same object, the successor measure, which for a fixed policy is the operator that maps a reward to its value function. It has appeared quite often in the literature. The forward-backward model of Touati \& al. aims to compactly represent the successor measure of all optimal policies through what is essentially a low-rank matrix in the tabular setting. The PSM representation bears resemblance in that it attempts to represent successor measures for different policies, but the representation is no longer low-rank. It leverages the fact that successor measures span an affine subspace, which I think has not been observed in the work of Touati \& al. Essential References Not Discussed: I think the paper already discusses the most important references in relation to the subject. Other Strengths And Weaknesses: The main idea of the paper is simple but interesting. The paper is in general pretty clear. Other Comments Or Suggestions: Overall I think the basic idea of this paper, that successor measures span an affine subspace, has a good potential and the experiments are promising. However I think the theoretical content is quite simple, so I would suggest putting the focus on the experimental work. For instance, I don't think Section 3.2 on affine spaces really brings a lot to the paper. 
I would assume most researchers in machine learning know what an affine space is. Secondly, the paper puts the emphasis on state-action distributions and regards the successor measure as secondary. I believe this should be the other way around: the successor measure is the central object and state-action distributions a byproduct of it. Finally I have noticed a few typos here and there: "familia r' line 043, "ideplogy" p.8 Questions For Authors: 1. I'd primarily like the authors to address the concern raised above: is it true, as I think, that PSM and FB do not play in the same league exactly, since the features of PSM are functions of $(s,a,s')$? Is it really possible to learn PSM as efficiently as FB? Code Of Conduct: Affirmed. Overall Recommendation: 3
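The affine-subspace observation at the center of this review is easy to verify numerically. The following is a small self-contained sketch (the toy MDP, its sizes, and the discount factor are arbitrary illustrative choices, not taken from the paper): every policy's discounted state-action occupancy satisfies the same linear Bellman-flow constraint, so the set of occupancies lies in one affine subspace.

```python
import numpy as np

# Tiny random MDP (sizes and discount are arbitrary choices for illustration).
rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)   # P[s, a, s'] transition kernel
mu = np.full(S, 1.0 / S)             # initial state distribution

def occupancy(pi):
    """Discounted state-action occupancy d^pi, flattened to length S*A."""
    # State-action -> state-action transition matrix under policy pi.
    Ppi = (P[:, :, :, None] * pi[None, None, :, :]).reshape(S * A, S * A)
    rho0 = (mu[:, None] * pi).reshape(S * A)
    # d^T = (1 - gamma) rho0^T (I - gamma Ppi)^{-1}
    return (1 - gamma) * np.linalg.solve(np.eye(S * A) - gamma * Ppi.T, rho0)

# Policy-INDEPENDENT Bellman-flow constraint matrix: E @ d = (1 - gamma) * mu.
E = np.zeros((S, S * A))
for s in range(S):
    for sp in range(S):
        for ap in range(A):
            E[s, sp * A + ap] = float(s == sp) - gamma * P[sp, ap, s]

# Occupancies of arbitrary policies all satisfy the SAME affine constraint,
# i.e. they lie in the affine set {d : E d = (1 - gamma) mu}.
for _ in range(5):
    pi = rng.random((S, A))
    pi /= pi.sum(axis=-1, keepdims=True)
    d = occupancy(pi)
    assert np.allclose(E @ d, (1 - gamma) * mu)
```

This is exactly the "solutions of a linear system span an affine subspace" fact discussed above: the constraint matrix `E` depends only on the dynamics, never on the policy.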
Rebuttal 1: Rebuttal: We thank the reviewer for taking time to review our paper and providing detailed feedback. We would like to address the concerns of the reviewer below: > I think the "mathematically complete" mention is a bit exaggerated here. The paper would require a much more quantitative analysis to deserve that mention.: By “mathematically complete,” we mean that the representation learning framework is derived without making any assumptions on (1) dynamics, (2) reward functions, or (3) the complexity of the state or action space. Since the practical algorithm has some approximations to make the framework more tractable, starting with the low-rank approximation of the bases space itself, we thank the reviewer for pointing this out and understand it might be construed to mean something stronger. We shall soften and clarify this language in the camera-ready version. > I am not sure PSM really compares with FB, the Laplacian, and the like: We respectfully disagree with the reviewer that PSM does not really compare with FB and the Laplacian. In all our experiments, the respective functions ($\phi, b$ for PSM or $F, B$ for FB) are represented using neural networks with a similar number of parameters, thus working with similar hypothesis spaces. While PSM requires training the bases with networks taking in (s, a, s’), FB requires training two networks, one with input (s, a, z) and the other with (s). The dimension of z is similar to (often larger than) that of s (128 in our experiments). This observation, in some ways, makes FB require more parameters than PSM. But, in any case, FB and other SF-based approaches like Laplace and HILP seem to be the most relevant prior work in terms of zero-shot RL that PSM can compare against. > Typos: We thank the reviewer for pointing out the typos. We shall correct them in the camera-ready version. 
> I think the theoretical content is quite simple so I would suggest putting the focus on the experimental work.: We are glad to see the reviewer likes our observation that successor measures span an affine space. While the idea may seem simple in hindsight, it has (to the best of our knowledge) remained unexplored in the RL literature. Hence we decided to spend more time building towards insights we can obtain from the affine-space idea of successor measures - for instance, we are able to show that with the same basis dimensions, we can span a larger space of values than methods which learn a basis space across value functions. > The paper puts the emphasis on state-action distributions and regards the successor measure as secondary. I believe this should be the other way around, the successor measure is the central object and state-action distributions a byproduct of it.: We are aligned with the reviewer's observation that successor measures are the main object we end up learning to represent. For unsupervised learning, it is preferable to learn to represent successor measures as they capture more information than visitation distributions. Our choice of starting with visitation distributions was motivated by the large number of works that explore the idea of RL as a linear program, and it goes to show that the same idea can be used for unsupervised representation learning. The familiarity of readers with that literature motivated us to start from visitation distributions to facilitate understanding. We will clarify the writing to place more emphasis on the successor measure. > Is it true as I think that PSM and FB do not play in the same league exactly, since the features of PSM are functions of (s, a, s’)? Is it really possible to learn PSM as efficiently as FB?: As discussed earlier, FB and PSM are parameterized with neural networks with a similar number of parameters; in fact, depending on the sizes of s and z, FB may be parameterized with a larger network. 
Empirically it has been observed that FB is difficult to stabilize as it is optimizing for a moving reward function ($B^{-1} z$), while PSM does not face this issue as it does not tie the policy latents to reward functions. Moreover, FB uses the policy optimized for the corresponding reward (through Q function maximization) to sample actions during its training. This means that FB is susceptible to overestimation issues occurring due to Q function maximization. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. My concern about the comparison of PSM and FB has been answered, but I still have a few comments. The authors did not address my remark regarding the discrete codebook. About FB and PSM: I understand that in the experiments, PSM shows better performance with a similar number of parameters, but these remain specific examples. I don't take this as a proof that PSM will in general always perform better than FB. When saying that I am not sure FB compares with PSM, I did not mean it as a criticism, but more as the suggestion it could be a complementary idea. I wonder for instance if the fact that all successor measures satisfy the same affine equation could shed new light on FB and Laplacian representations. About the discrete codebook: I probably could have been clearer in my review. I understand the idea of the codebook but I wasn't sure I understood Eq. (8), which I interpreted literally. I guess the authors mean that for each state the actions are sampled uniformly at random with a seed depending on z and s? With hindsight and from the other reviewers' remarks, I think the discrete codebook idea is a key feature of the paper and perhaps the biggest difference with FB. There should be a discussion about the size, as suggested by reviewer MScc. The sampling of policies is also what makes me think FB could still be preferable in some cases, since it directly attempts to find optimal policies. 
On the other hand, why would the codebook of sampled policies give a meaningful representation if the codebook is too small? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We address all the concerns raised by the reviewer as follows: > The authors did not address my remark regarding the discrete codebook. We apologize for missing your remark about the discrete codebook. The discrete codebook was a neat technique to achieve a sampling distribution of all possible deterministic policies. It is implemented by sampling an integer $z$, constructing a seed using this integer and the hash of the state, and using this seed to sample from a uniform distribution. If $z \in \mathbb{Z}^+$, or $z$ covers all seeds, we get all possible deterministic policies. In practice, $z \leq 2^{16}$ has worked well for all the domains. > I don't take this as a proof that PSM will in general always perform better than FB. The reviewer is correct that PSM performing better than FB empirically does not prove that PSM will in general always perform better than FB. In our experiments across multiple domains (both discrete and continuous), we found PSM to consistently outperform FB while also requiring much less hyperparameter tuning. Theoretically, we hypothesize the following reasons why PSM is better than FB: (1) FB inherits limitations from SF-based approaches due to a similar linear mapping between rewards and policy embeddings. This mapping is incorrect as the rewards-to-policy mapping is many-to-one rather than one-to-one. (2) During training, FB samples from policies that are optimized (through Q function maximization) for the corresponding reward ($B^{-1} z$). This maximization step is prone to overestimation. PSM, on the other hand, uses a milder form of off-policy evaluation that is more stable. (3) FB optimizes its actor for a changing reward function ($B^{-1} z$) as B gets updated every iteration. 
We will temper our claim to say that PSM is a competitive alternative, as there are a number of hard-to-interpret factors in deep representation learning that might favor one method over the others. > I wasn't sure I understood Eq. (8), which I interpreted literally. I guess the authors mean that for each state the actions are sampled uniformly at random with a seed depending on z and s Yes, the reviewer is correct. We specify the seed using the state $s$ and the integer code $z$ to approximate the distribution over all policies. > Discrete codebook is perhaps the biggest difference with FB While a discrete codebook in the practical implementation of PSM is a new idea relative to FB, we believe it is only one of the possible ways to implement PSM. PSM is a more general framework that leverages the structure of visitation distributions and successor measures in representation learning. Our previous answer also points out some reasons that PSM is potentially a stronger approach than FB, theoretically. > The sampling of policies is also what makes me think why FB could still be preferable in some cases, since it directly attempts to find optimal policies. On the other hand, why would the codebook of sampled policies give a meaningful representation if the codebook is too small? In practice, FB requires a number of practical implementation tricks to make learning stable and performant. As an example, to make test-time policies performant, FB samples from a prior over reward functions that consists of 50% rewards that are goal-reaching. Besides that, FB also requires careful architecture design, such as using a layernorm in the first layer. These are by-products of some of the theoretical reasons we list in our first comment. PSM is practically stable to train and does not require Bellman optimality backups, which are prone to overestimation in the offline setting. We believe it's better to think of PSM and FB as complementary approaches, and future work can build on both of these ideas together. 
For the codebook, we use a codebook of $2^{16}$ policies, which we found to be sufficient for good representations; the size of the codebook has minimal effect on the computational requirements for training.
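The seeded-sampling mechanism described in this thread (an integer code plus the state determines a seed, which determines the action) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the exact seed-mixing function, the function name, and its interface are my own illustrative choices.

```python
import random

def codebook_policy(z: int, state: int, num_actions: int) -> int:
    """Deterministic policy indexed by the integer code z.

    The action at each state is drawn from a uniform distribution whose
    seed combines z with the state, so a fixed z always maps a state to
    the same action (i.e. z indexes one deterministic policy), and
    sweeping z over a range of integers samples deterministic policies.
    """
    seed = z * 1_000_003 + state  # simple hash; the exact mix is my choice
    return random.Random(seed).randrange(num_actions)

# Same (z, state) always gives the same action: one z = one fixed policy.
assert codebook_policy(7, 3, 4) == codebook_policy(7, 3, 4)
```

With a large enough range of codes (the rebuttal reports $z \leq 2^{16}$), this effectively samples from among all deterministic policies without ever storing a policy table.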
Summary: This paper studies the reward-free RL problem and proposes the concept of proto successor measure (PSM), which is built on the idea of proto value functions in (Mahadevan and Maggioni, 2007) and serves as the basis set of all the possible visitation distributions in a given RL environment. By learning this basis set, given a reward function at the inference stage, solving for an optimal policy is equivalent to solving a constrained linear program, where both the objective function and the Bellman flow constraint can be expressed in terms of this basis. Accordingly, this paper proposes to implement the PSM by integrating two components: (1) Using the measure loss similar to that in (Touati & Ollivier, 2021), one can learn the weight that corresponds to a given policy under some learned PSM. (2) To learn the basis set in PSM via pre-training, this paper proposes to construct a discrete codebook of policies, which serves as a way to simulate uniform sampling over policies. The proposed PSM is evaluated on both discrete and continuous control environments, including gridworld-like problems, Fetch-Reach tasks, and the ExoRL suite. Through experiments, the PSM is shown to be comparable to or to outperform the recent benchmark reward-free RL methods. ## update after rebuttal I appreciate the authors for the additional response and for answering my follow-up questions. The additional sanity check of FB and the clarification on linear mapping are helpful. 
On the other hand, I am still not convinced by the explanation about overestimation bias from the off-policy learning perspective (FB in (Touati et al., 2023) was implemented based on TD3, and there are existing techniques to largely avoid overestimation) and the argument “...This instability is absent in PSM as PSM performs off-policy evaluation using samples from the codebook policy…” After going through all the reviews and the rebuttal responses again, overall, the idea of PSM appears to be a useful addition to the reward-free RL literature with good empirical support. Hence, I have increased my score accordingly and lean towards acceptance. Claims And Evidence: The main theoretical claims of this paper (Theorems 4.1 & 4.4, Corollaries 4.2-4.3, and Theorems 6.1 & 6.3) are supported by the proofs in the appendix. Methods And Evaluation Criteria: - Like the existing reward-free RL literature, this paper evaluates the capability of zero-shot generalization by testing the performance of PSM and other baselines across a few different reward functions (e.g., for the Walker environment, take the reward functions of the four underlying tasks Stand, Run, Walk, and Flip). One concern is that it is unclear whether this small set of reward functions is already representative enough. While I know this evaluation procedure is also adopted by some prior works (e.g., (Touati & Ollivier, 2021) and (Touati et al., 2023)), to make a more thorough comparison, it would be good to compare a more diverse set of reward functions for each environment. - All the pre-training is done on a shared offline dataset constructed using a fixed behavior policy, and this looks fair to all the algorithms. Theoretical Claims: - The main theoretical results of the paper are two-fold: (1) The set of possible visitation distributions forms an affine set (Section 4); (2) The classic successor features can also be represented using a similar basis set. 
- I have checked the proofs in the appendix up to Appendix A.3, and I did not spot any major issues regarding the correctness of the analysis. Experimental Designs Or Analyses: I have checked the experimental designs in Section 7. Most of them look reasonable. However, there are two things that require further explanation: - Representative reward functions at test time: As mentioned above, one concern is whether the small set of reward functions considered in Section 7 for each environment is indeed representative enough. Specifically, like the existing reward-free RL literature, this paper evaluates the capability of zero-shot generalization by testing the performance of PSM and other baselines across a few different reward functions (e.g., for the Walker environment, take the reward functions of the four underlying tasks Stand, Run, Walk, and Flip). While I know this evaluation procedure is also adopted by some prior works (e.g., (Touati & Ollivier, 2021) and (Touati et al., 2023)), to make a more thorough comparison, it would be good to compare a more diverse set of reward functions for each environment. - Comparison with successor feature methods: The PSM approach is highly related to the general method of Successor Features (SF). Therefore, it would be helpful to compare PSM with SF-based approaches, e.g., (Alegre et al., 2022), to demonstrate the benefits of learning this basis set. - Regarding the qualitative results in Figure 3, it appears that FB cannot generalize well in the four-room environment. However, this is quite different from the results provided in the original FB paper (Touati & Ollivier, 2021). It is unclear where the differences come from. It would be good to describe this in more detail to ensure a fair comparison. (Alegre et al., 2022) Lucas N. Alegre, Ana L. C. Bazzan, and Bruno C. da Silva, “Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer,” ICML 2022. 
Supplementary Material: I have checked some of the proofs in the appendix, but I did not review the source code in the supplementary material. Relation To Broader Scientific Literature: The main contribution of this paper is to offer another perspective on solving reward-free RL by pre-training a basis set called Proto Successor Measure. While this line of research can be traced back to the classic proto value functions (Mahadevan and Maggioni, 2007), this paper offers a practical implementation that scales the idea of proto value functions to a broader class of RL problems, including high-dimensional robot control. Given the recent attention on RL foundation models, I find this paper to be quite nice in contributing to the general literature of RL foundation models. Essential References Not Discussed: As far as I know, most of the recent relevant works on reward-free RL are cited in this paper. That being said, to better describe the contributions of this paper, there are two things to clarify: - Comparison with successor feature methods: While Section 6 points out that SF can be viewed as a special case of PSM, it is not well explained why it is beneficial to consider PSM in general. Theoretically speaking, using SF, like in (Lehnert & Littman, 2020; Hoang et al., 2021; Alegre et al., 2022; Reinke & Alameda-Pineda, 2021), can already be sufficient to achieve similar zero-shot generalization. - Comparison with FB representation: While I can appreciate the idea of PSM, it remains unclear to me why PSM is a better approach than the FB representation. Indeed, as also shown in Table 1, the two approaches are just comparable in most of the tasks. Moreover, compared to FB, PSM requires solving an additional constrained linear program to get a policy at inference time. Accordingly, it appears that FB could be preferred in practice. 
While it is mentioned in Lines 155-163 that “FB ties the reward with the representation of the optimal policy derived using Q function maximization which can lead to overestimation issues and instability during training as a result of Bellman optimality backups,” I do not find this argument convincing. It would be helpful to provide more evidence to justify this argument. Other Strengths And Weaknesses: All the strengths and weaknesses are provided in the sections above. Other Comments Or Suggestions: - The statement of Theorem 4.4 is quite hard to parse, and the presentation of it can be improved. This is partially because the notations are quite confusing. - Equation (3) can be reorganized. There are two separate “such that” which can be merged into one. - An ablation study on the need for the discrete policy codebook would be helpful. Questions For Authors: Please see the above questions on the comparison with SF-based and FB-based methods, the baselines for evaluation, as well as the experimental configuration for the test-time evaluation. I will be willing to reconsider my evaluation if the authors can address these issues. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We would like to address all the concerns of the reviewer below: > It would be good to compare a more diverse set of reward functions for each environment.: We appreciate the reviewer’s suggestion and agree that our original evaluation using four tasks per environment, though consistent with prior works, could be further expanded for stronger claims. To directly address this concern, **we have conducted additional experiments on a significantly larger and more diverse set of reward functions (10+ tasks) in the Walker environment**. Our extended results confirm that PSM consistently outperforms or matches the baselines. The mean performances on these 10+ tasks (across 5 seeds) are PSM: 523 +- 48.86, FB: 516.95 +- 45.61, and Laplace: 250.43 +- 30.58. We shall include detailed results with diverse tasks for each of the domains in the camera-ready version. > Comparison with successor feature methods: We would like to point out that the baselines that we considered are in fact SF-based approaches. “Laplace” uses Laplace eigenvectors as state features while HILP has its own objective for learning state features. Both of them learn Successor Features for these corresponding state features. Moreover, FB is also built off an SF-based approach. We thank the reviewer and will make this clearer in the paper. Additionally, **we are adding experiments on a couple of other SF-based approaches (the best performing as per Touati et al., 2023)**: one that uses one-step dynamics predictability (method named “Trans”) and one that uses SVD decomposition of the successor representation (method named “SVD”). The mean performances for these baselines across all DMC tasks (across 5 seeds) are Trans: 522.10 and SVD: 524.96. PSM outperforms them by a significant margin with a mean performance of 607.60. We will add the detailed results in the camera-ready version. 
> However, this is quite different from the results provided in the original FB paper. It is unclear where the differences come from. : We thank the reviewer for noting this difference. We would like to clarify that the original FB paper does not report quantitative performance metrics comprehensively on the four-room environment but rather highlights qualitative successes with select examples. Additionally, to ensure fair comparison with FB, we deliberately modified its training strategy. The original FB method assumes that tasks are goal-conditioned, implicitly biasing the training to a smaller set of reward functions. Since PSM makes no such assumption, we adapted FB training to uniformly sample reward functions to ensure an apples-to-apples comparison. This explains the observed discrepancy in FB performance. We will clearly document these modifications in the final version. We shall include the results for the biased sampling in FB in the final version. > While Section 6 points out that SF can be viewed as a special case of PSM, it is not well explained why it is beneficial to consider PSM in general. : We appreciate the reviewer highlighting the need for clearer explanation of the advantages of PSM over traditional SF approaches. SF-based methods rely on two restrictive assumptions: (1) Sufficient Feature Representation: SF assumes either predefined state features or features learned through an auxiliary objective, presuming these features adequately span all possible reward functions which limits representational expressiveness. (2) Linear Mapping between Rewards and Policies: SF-based methods condition policies linearly on feature weights, assuming a one-to-one mapping between optimal policies and rewards. Practically, the mapping is many-to-many; multiple rewards can share the same optimal policy and vice versa, making such linearity limiting. 
In contrast, PSM avoids both assumptions and directly aims to accurately capture all feasible visitation distributions. > It remains unclear to me why PSM is a better approach than the FB representation: (1) FB inherits limitations from the SF-based approaches described above due to a similar linear mapping. (2) During training, FB samples from policies that are optimized (through Q function maximization) for the corresponding reward ($B^{-1} z$). This maximization step is prone to overestimation. (3) FB is unstable because it optimizes for a changing reward function as B gets updated. > The statement of Theorem 4.4 is quite hard to parse, and the presentation of it can be improved. This is partially because the notations are quite confusing. : We thank the reviewer for pointing this out. We agree and will revise the statement and notation of Theorem 4.4 in the final version to improve clarity. > An ablation study on the need for the discrete policy codebook would be helpful.: The discrete codebook is an essential component of our method that makes the PSM training tractable. We will add an ablation on the size of the codebook in the final version. 
That said, PSM also ultimately relies on linear mapping (Corollary 4.2), right? Specifically, if the state and action spaces are continuous, then using a set of finite-dimensional basis also presumes an approximation based on linear mapping (Section 5.1). Please correct me if I missed anything. As for (2), has this been observed in practice? Moreover, even if overestimation indeed occurs, that is a consequence of Q-learning, not the FB decomposition itself. There are already many existing techniques that can mitigate the overestimation issue. That’s why this point is not that convincing in my opinion. Regarding (3), if I understand it correctly, PSM can also have a similar issue, i.e., when $\Phi$ and $b$ get updated during training, the resulting $w^{\pi}$ has to change accordingly if the occupancy measure has to be matched by $\Phi w^{\pi}+b$ (Equation (7)). Can the authors comment on this? **Regarding the discrete codebook** Thanks for the response. I believe it would be helpful to describe how to determine the codebook size and how it affects the performance as this appears to be an important hyperparameter in practice. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response. We address all the concerns raised by the reviewer as follows: > The modifications to FB and the results for the biased sampling in FB shall be included as a sanity check. We performed this experiment, and the errors are: 2.13 +- 0.73 (gridworld) and 12.9 +- 1.63 (four room). While these are better than the adapted FB, they are still slightly worse than PSM: 2.05 +- 1.20 (gridworld) and 11.54 +- 1.07. We shall include these results in the camera-ready. > I agree with (1). That said, PSM also ultimately relies on linear mapping (Corollary 4.2), right? Specifically, if the state and action spaces are continuous, then using a set of finite-dimensional basis also presumes an approximation based on linear mapping (Section 5.1). Please correct me if I missed anything. 
We agree with the reviewer that both PSM and FB lose information under finite dimensions. But, the concern that we raise for FB (and all SF-based works) also holds when dimensions are infinite. While PSM projects successor measures to a linear space, the mapping between rewards and policies is not linear, and is obtained after solving a linear program. On the contrary, FB (and SF-based methods) assume a one-to-one linear relationship between policy embeddings and reward functions. > As for (2), has this been observed in practice? Moreover, even if overestimation indeed occurs, that is a consequence of Q-learning, not the FB decomposition itself. There are already many existing techniques that can mitigate the overestimation issue. That’s why this point is not that convincing in my opinion. PSM not only differs from FB in its decomposition but also in the training. While both these methods use off-policy learning, PSM uses a relatively mild form of off-policy learning that is more stable than the one derived by maximization. Similar observations have been made by [1]. FB relies on the Q-maximization as it uses the actor learnt from this maximization to generate samples for learning successor measures. PSM on the other hand does not use a learnt actor for its off-policy evaluation, hence being more immune to the overestimation issues. [1]: Farebrother, Jesse, et al. "Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks." The Eleventh International Conference on Learning Representations. > Regarding (3), if I understand it correctly, PSM can also have the similar issue, i.e., when $\Phi$ and $b$ get updated during training, the resulting $w^{\pi}$ has to change accordingly if the occupancy measure has to be matched by $\Phi w^{\pi}+b$ (Equation (7)). Can the authors comment on this? Yes, we agree that the resulting $w^\pi$ has to also change along with $\Phi$ and $b$, but in practice all these networks are trained concurrently. 
This is the same as training $F$ and $B$ jointly in FB. This step can be thought of as decomposing the representation of $M$ based on the representative biases. Given an input $\pi$ (uniquely identified by its integer code), we learn $\Phi$, $w$, and $b$ jointly to optimize for a fixed target of the successor measure of $\pi$. The difference with FB is that FB uses an actor that is trained to optimize reward $B^{-1}z$ to sample actions when learning successor measures for policy $\pi_z$. As $B$ is constantly updated, the reward changes, making the actor optimization unstable. This instability is absent in PSM, as PSM performs off-policy evaluation using samples from the codebook policy.

> It would be helpful to describe how to determine the codebook size and how it affects the performance as this appears to be an important hyperparameter in practice.

We will be adding an ablation on the codebook size in the camera-ready. The codebook size used for our experiments is $2^{16}$; that is, the integer codebook representation $z \leq 2^{16}$. These representations are implemented as binary strings and converted to integers, which are used as the seed for sampling when queried.
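For concreteness, the seeding mechanism described above can be sketched in a few lines. This is purely our illustration of one way a seeded codebook policy could work; `codebook_policy`, the seeding scheme, and all constants are assumptions, not the authors' implementation:

```python
import random

CODEBOOK_BITS = 16  # codebook size 2**16, as stated above

def codebook_policy(z: int, state: int, n_actions: int) -> int:
    """A fixed 'random' policy indexed by the integer code z.

    Seeding an RNG with (z, state) makes the sampled action a deterministic
    function of the code and the state, so the same z always denotes the
    same policy and can be replayed for off-policy evaluation.
    """
    assert 0 <= z < 2 ** CODEBOOK_BITS
    rng = random.Random(f"{z}:{state}")  # string seeds are deterministic
    return rng.randrange(n_actions)

# A binary-string representation converted to its integer code
z = int("1010001111010110", 2)
a1 = codebook_policy(z, state=7, n_actions=4)
a2 = codebook_policy(z, state=7, n_actions=4)
assert a1 == a2  # querying the same code twice yields the same action
```

The point of the sketch is only that an integer code suffices to pin down an entire policy, which is what makes the codebook representation compact.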
RISE: Radius of Influence based Subgraph Extraction for 3D Molecular Graph Explanation
Accept (poster)
Summary: The paper proposes a new instance-level method to “explain” the predictions of 3D molecular GNNs. Following earlier work, this is formulated as an optimization problem over subgraphs, where the objective is to minimize the loss of predictive power when removing edges, under a given budget of edges. The authors' main observation is that the edges between close-by atoms generally should affect the performance of 3D GNNs more. They support this with experiments by deleting random edges at a certain distance and showing that the shorter the deleted edges, the stronger the drop in performance. Based on this they formulate the optimization problem using a “radius of influence”, which is a radius around each atom beyond which the edges are cut. This contrasts with previous literature, which optimizes soft masks without any distance-based inductive bias. The authors highlight two major benefits: 1) The radius of influence is smooth and therefore optimizable with gradient-based methods, which is claimed to eliminate the need for thresholding soft masks into hard masks. 2) The budget can be enforced strictly without relying on penalty terms. The authors test the method by measuring the drop in predictive power when a certain percentage of edges is removed and show that their method performs best compared to previous methods. They also claim that their method is the only one that preserves known important features such as covalent bonds.

### update after rebuttal

Thank you for the clarifications. I think the experiments are well conducted, but I am still unconvinced as to what the usefulness exactly is. The authors say, “The main purpose is not to identify novel chemical interactions unknown to chemists, but rather to understand how ML models make decisions.
This is particularly important for scientific applications, where explainability is crucial for domain scientists to trust ML models.” Unfortunately, at the current point in time, I do not trust the ML models more based on the explanations given by RISE. Showing that the model is most sensitive to covalent bonds is expected by the design of the networks, which have a strong near-sightedness bias built in. However, I can imagine that my opinion would change if the authors could present less trivial cases where the method picks up known non-trivial interactions, which would make me believe that the model learns physical many-body interactions. Another convincing experiment would be if the method could explain a negative case, i.e., a case where the ML model fails, and the explainer shows that the model pays too much attention to some unnecessary/unphysical edge. This way the absence of weird edges could make me trust the model more. It would also expand the usefulness of the method to let practitioners find failure points and correct them in a targeted fashion. The idea of the annuli for long-range interactions makes sense but needs to be proven. Again, I think this has a lot of potential to be impactful, but at the current point I am not convinced enough and am therefore keeping my score.

Claims And Evidence: The claim that atoms close to each other have a stronger influence is generally accepted and even enforced by 3D GNN architectures using a cutoff function that decreases a message's magnitude with distance. Since RISE builds on this observation, it is generally a sensible idea, which is supported by the QM9 and GEOM experiments. However, the central claim that the subgraphs found by RISE are more interpretable is less well supported; it is not even clearly defined what is meant by "interpretable". The biggest piece of evidence is given in Figures 4 and 8, where RISE only leaves the covalent bonds, whereas the other methods also include other, less interpretable edges.
However, all approaches include the covalent bonds, so the fact that there are additional edges found by the other approaches could also just be due to the bigger budget afforded to them.

Methods And Evaluation Criteria: The authors use the drop-off in QM9 and GEOM performance given a certain budget of edges as a figure of merit and show that their method retains better performance than previous methods. This shows that the subgraphs picked out by RISE are more predictive on average on the given datasets. However, it is not clear to me if this would still be the case for systems where long-range and non-local effects are important, since RISE would be forced to generate very large radii of influence to find these interactions, thereby including many potentially unnecessary edges, whereas previous methods could find only a few important edges that mediate the effects.

Theoretical Claims: The authors say that other methods have to threshold soft edges, which induces a mismatch between optimization and resulting graphs, whereas RISE does not require this thresholding. As far as I understand, RISE also includes a form of soft masks due to Equation 7, such that there is still a mismatch between the mask and the optimization. The other claim about exactly enforceable budgets seems solid though.

Experimental Designs Or Analyses: Discussed above.

Supplementary Material: I looked at the supplementary materials except for the section about previous methods.

Relation To Broader Scientific Literature: The paper is an adaptation of existing ideas to 3D GNNs.

Essential References Not Discussed: The paper does a good job of discussing the most relevant explainability literature. I think a section explaining the key pieces of the used 3D GNNs, in particular how messages are constructed, would be a good addition.
Other Strengths And Weaknesses: The paper is original and the presentation is good.

Other Comments Or Suggestions: In my eyes, the two most pressing suggestions are to contextualize the vision of the paper better. Why do we care about these explainers? Are there any unexpected explanations made by RISE that a chemist wouldn't have thought of easily? Show some examples where RISE picks out a non-trivial edge that correlates with interesting physics. The paper mentions several times that the number of edges in a 3D GNN grows exponentially. I am not sure if I misunderstand what the authors are trying to say; however, the number of edges in a fully connected graph is quadratic in the number of nodes, not exponential. Another small suggestion to improve the flow of the paper is that the paper talks about search spaces (line 48) before explaining that these explanation methods are formalized as optimization problems. Adding a short sentence in the beginning to give this context would be helpful.

Questions For Authors: How would RISE deal with long-range and non-local interactions? Are there any non-trivial explanations found by RISE beyond covalent bonds that can be seen as a known physical interaction? What is the overall vision with these explanation approaches? What are they useful for? What kind of actionable insights are we hoping to gain?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you so much for your detailed and constructive comments! We provide our responses here.

## Claims And Evidence

> Interpretability of explanation

- The radii of influence can be interpreted as **the spatial extent within which an atom can significantly affect its surroundings**.

> Additional edges by other approaches due to the bigger budget

- There is a misunderstanding here. **The budget is exactly the same for all methods.** This is stated in the caption of Fig. 8. We will include clearer statements in the revision to further emphasize this.
- In terms of interpretability, **the edges found by all explanation methods are directed as in their original works**. RISE successfully finds both directions of the covalent bonds, while other methods find one direction of the covalent bonds and some less interpretable edges. We will include this clarification in the revision.

## Methods And Evaluation Criteria

> Long-range interactions

- The current form of our method is mainly for small molecules, as we note in our discussion of future work in Sec. 5. **Our approach also has the potential to be extended to scenarios with long-range interactions.** We provide some key insights below and will include such discussions in the Appendix in the next revision.
- We could allow multiple radii of influence for nodes, represented as annular regions that collectively adhere to a predefined budget. For example, for a given atom, the model could identify edges with distances in the ranges [0.0, 0.3] and [0.6, 0.8] as important. This extension aligns well with chemical principles, as different types of interactions, such as covalent bonding, van der Waals forces, and electrostatic interactions, tend to dominate at specific distance ranges.
**As a result, to capture important long-range interactions, only relevant annuli will be included; it won't result in unnecessary edges.**

## Theoretical Claims

> Relaxation to Soft Masks

- **There might be some misunderstanding here.** Indeed, both RISE and baseline methods relax the weights of edges from discrete values to continuous values. However, **there is a fundamental difference**. **The loss function of RISE is a function of radii; it can be extremely close to discrete values with the relaxation, e.g., Eq. (7). Similar approaches, e.g., applying Eq. (7) to edge/node masks, won't change the minima of the loss function of baseline methods, still resulting in a mismatch.**
- **RISE optimizes the radius masks, which are naturally continuous.** In Eq. (7), we reformulate the radius masks to be edge masks to allow gradient-based optimization. **As explained in the sentences above Eq. (7), we can apply a function such that the edge masks will be extremely close to $0$ or $1$.**
- Consider a node with three neighbors at distances $0.5$, $0.6$, and $0.7$, respectively. Suppose the optimized radius is $0.55$. **The resulting edge masks using Eq. (7) with $k=1,000$ are $1.00$, $1.93\times 10^{-22}$, and $7.18\times 10^{-66}$**, respectively.
- Additionally, we present below the percentage of edge masks in different ranges resulting from real experiments on RISE and GNNExplainer, respectively. The closer the edge mask values are to $0$ or $1$, the smaller the mismatch between the mask and the binary optimization objective. **It can be seen clearly that the mismatch is almost negligible for RISE, but very severe for GNNExplainer.**

|Edge Mask Value|GNNExplainer (%)|RISE (%)|
|-|-|-|
|[0, 0.1] and [0.9, 1.0]|0.003|99.314|
|[0, 0.2] and [0.8, 1.0]|11.463|99.438|
|[0, 0.3] and [0.7, 1.0]|54.508|99.741|

Experiments were conducted on the first 1,000 molecules in the QM9 test dataset for lumo using SchNet. On average, there are 314 edges per graph.
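For readers who want to check the numbers in the worked example above: a sigmoid relaxation $\sigma(k(r_i - d_{ij}))$ of the hard indicator $\mathbb{1}[d_{ij} < r_i]$ reproduces them. This specific functional form is our assumption for illustration; see Eq. (7) in the paper for the exact definition.

```python
import math

def edge_mask(r: float, d: float, k: float = 1000.0) -> float:
    # Smooth relaxation of the hard indicator 1[d < r]; a large k pushes
    # the mask toward 0 or 1. (Assumed sigmoid form, for illustration only.)
    return 1.0 / (1.0 + math.exp(-k * (r - d)))

r = 0.55                   # optimized radius of influence for the node
for d in (0.5, 0.6, 0.7):  # distances to the three neighbors
    print(f"d = {d}: mask = {edge_mask(r, d):.3g}")
# The masks come out close to 1.00, 1.93e-22, and 7.18e-66, matching the
# worked example above.
```

The near-binary values show why, at $k=1{,}000$, the relaxed mask and the hard subgraph are essentially indistinguishable.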
## Essential References Not Discussed

> Discussion of 3D GNNs

- **We have provided a discussion of different 3D geometric GNNs, including the ones used in this paper, in Appendix E.** We will include more details, such as how messages are constructed, in the next revision.

## Other Comments Or Suggestions:

> The need of explanation

- **In the introduction, we have briefly introduced the motivation and elaborated on the need for 3D GNN explanation methods.**
- We will include further discussion in the revision. **The main purpose is not to identify novel chemical interactions unknown to chemists, but rather to understand how ML models make decisions**. This is particularly important for scientific applications, where explainability is crucial for domain scientists to trust ML models.

> The number of edges

- We do not mean that the number of edges is $O(C^N)$ ($N$ is the number of nodes), but rather use the word as a term to describe the large number of edges 3D GNNs have. **We will change the word to "rapidly" in the next revision.**

> Search spaces before introducing optimization

- We will make the necessary modifications.

## Questions For Authors:

The questions have similar responses to the content of 'Other Comments Or Suggestions'. Please refer to the section above.
Summary: This contribution introduces RISE (Radius of Influence based Subgraph Extraction), an innovative explanatory approach for 3D geometric Graph Neural Networks (GNNs) in molecular learning. RISE's principal contribution is the allocation of a "radius of influence" to each atom (node). This delineates the confined area where message passing encapsulates the most significant spatial and structural interactions for the model's predictions. RISE reconfigures 3D networks into directed proximity graphs (DPGs), whereby each node possesses a designated radius that governs the establishment of edges. By optimizing radii of impact, RISE extracts subgraphs that accurately reflect the model's predictions and are chemically interpretable. The submission evaluates RISE across several 3D GNN architectures (SchNet, DimeNet, and SEGNN) and datasets (QM9 and GEOM), demonstrating its constant superiority over established explanation techniques, such as GNNExplainer, PGExplainer, and LRI-Bernoulli. Significantly, RISE generates chemically interpretable explanatory subgraphs that correspond with genuine chemical linkages, whereas current methodologies frequently give chemically uninterpretable outcomes. ## Update after rebuttal The authors provided detailed discussions to address my concerns. I appreciate their effort and would like to retain my original score: accept this submission. Claims And Evidence: The claims made in the submission are well supported by empirical evidence. Claim 1: Proximity (distance) determines the importance of message passing in 3D GNNs. To support this claim, the author conducted two experiments: The first one is removing edges from different distance-based annular bins and measuring the model performance. Secondly, randomly removing edges within each annulus to assess the variance in importance. 
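The two removal protocols described above can be sketched as follows. This is a simplified stand-in with toy data; the edge list, distances, and function name are illustrative, not taken from the paper:

```python
import random

def remove_annulus(edges, dists, lo, hi, frac=1.0, seed=0):
    """Drop a fraction of the edges whose length lies in the annulus [lo, hi).

    frac=1.0 removes the whole annular bin (first protocol); frac<1.0
    removes a random subset within the bin (second protocol). The pruned
    graph would then be re-scored by the trained 3D GNN to measure the
    change in MAE.
    """
    rng = random.Random(seed)
    kept = []
    for edge, d in zip(edges, dists):
        if lo <= d < hi and rng.random() < frac:
            continue  # edge falls in the annulus and is removed
        kept.append(edge)
    return kept

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # toy edge list
dists = [1.1, 1.4, 2.6, 3.2]              # corresponding edge lengths
print(remove_annulus(edges, dists, lo=1.0, hi=2.0))  # -> [(0, 2), (2, 3)]
```

Sweeping `lo`/`hi` over consecutive bins and comparing the resulting MAE drops is what isolates distance as the controlling variable.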
The results in Table 1 clearly show that "removing closer annuli leads to a more significant drop in MAE, as values in most cells strictly decrease compared to the previous row." Figure 5 further demonstrates that "edges at similar distances have comparable importance, as indicated by the relatively flat trend lines with minimal fluctuation." Claim 2: RISE outperforms existing explanation methods for 3D GNNs. This claim is supported by comprehensive experimental results across multiple backbone models (SchNet, DimeNet, and SEGNN) and datasets (QM9 and GEOM). Tables 2 and 3 show that "RISE consistently outperforms all baselines across various budgets" for most molecular properties and model configurations. The authors even note that they set "the baseline models' budgets... such that they strictly preserve more edges than RISE, making the comparison even more advantageous for the baselines," which strengthens the credibility of their results. The third major claim is that RISE produces chemically interpretable explanatory subgraphs, unlike existing methods. This claim is supported by visualizations in Figure 4, which clearly show that "only RISE yields chemically interpretable results that conform to interpretable chemical structures." The authors provide a concrete example with the ethane molecule (CH₃CH₃), where "the radii of influence from our experiments assign the C of interest with a radius of 1.532 and the H of interest with a radius of 1.171," allowing RISE to extract the chemical bonds precisely. Methods And Evaluation Criteria: The methodological approaches and assessment criteria employed in this submission are well-suited and thoughtfully constructed. The submission formulates the instance-level graph explanation, described as identifying "a subgraph $G^S \subseteq G$ that is important to the target $Y$." The authors effectively highlight the shortcomings of current techniques when implemented with 3D GNNs. 
The RISE methodology demonstrates considerable theoretical rigor. By recasting 3D graphs as directed proximity graphs (DPGs), where "each node $v_i$ has an associated radius $0 \leq r_i$, and a directed edge $e_{i\rightarrow j}$ exists if and only if $d_{ij} < r_i$, where $d_{ij}$ is the distance between $v_i$ and $v_j$," the authors create a framework that corresponds well with established physical principles of molecular interactions. This approach acknowledges that "interactions between nodes separated by large distances are typically negligible due to the rapid decay of force magnitudes." The evaluation strategy is appropriate and direct. The use of MAE as the primary performance indicator offers a straightforward measurement of how effectively the explanatory subgraph maintains the model's predictive accuracy. Visual representations of the explanatory subgraphs offer compelling qualitative evidence of RISE's capacity to generate chemically meaningful interpretations. The researchers demonstrate that "under a small budget, when explanation methods can preserve only a limited number of edges, RISE is the only method that selectively retains edges corresponding exclusively to chemical bonds," representing a significant advancement for molecular science applications. Theoretical Claims: The theoretical foundations presented in this submission regarding RISE demonstrate logical consistency. The introduction of directed proximity graphs (DPGs) as a mathematical extension of 3D graphs presents a rigorous framework. This formulation establishes a suitable mathematical structure for representing molecular graphs with variable influence radii. 
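The quoted DPG rule ("a directed edge $e_{i\rightarrow j}$ exists if and only if $d_{ij} < r_i$") is compact enough to state in code. The coordinates and radii below are hypothetical, chosen only to show how non-uniform radii make the graph directed:

```python
import math

def build_dpg(positions, radii):
    """Directed proximity graph: edge i -> j exists iff d_ij < r_i."""
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and math.dist(positions[i], positions[j]) < radii[i]]

# Three atoms on a line (illustrative coordinates, arbitrary units)
pos = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.5, 0.0, 0.0)]
radii = [1.6, 2.1, 0.5]  # per-node radii of influence (hypothetical)
print(build_dpg(pos, radii))
# -> [(0, 1), (1, 0), (1, 2)]: node 1 reaches node 2, but not vice versa,
#    so non-uniform radii yield a directed graph.
```

With a single shared radius the edge set becomes symmetric, recovering the ordinary cut-off graph, which matches the paper's remark that cut-off graphs are DPGs under uniform radii.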
The authors' assertion that "any 3D graph constructed based on node radii can be viewed as a directed proximity graph" and that "a 3D graph constructed based on a cut-off distance (the same radius for all nodes) is a proximity graph under uniform radii" (page 5) follows directly from the DPG definition and exhibits proper mathematical derivation. The paper establishes that RISE maintains "consistency in optimization, as it does not require converting discrete values into continuous ones for optimization and does not include penalty or regularization terms to promote discreteness or enforce the budget" (page 5,6). This claim receives adequate support through the formulation of RISE, which optimizes continuous influence radii directly rather than relaxing binary masks to continuous values. The bound provided to illustrate inconsistencies in existing methods, "$L(Y; \Phi(G^S)) \leq L(Y; \Phi(X, M_{soft} \odot A)) + L(\Phi(X, M_{soft} \odot A); \Phi(X, M \odot A))$" (page 4), correctly identifies the optimization objective discrepancy that remains unaddressed during the optimization phase in soft mask-dependent methods. Experimental Designs Or Analyses: The experimental methodology and analytical procedures presented in this submission are sufficient to demonstrate the effectiveness of RISE. Two complementary approaches effectively examine the relationship between proximity and message passing significance in 3D. This experimental protocol appropriately isolates distance effects while maintaining control over confounding variables, thereby testing the proximity-importance hypothesis with methodological rigor. The experimental outcomes, presented in Table 1 and Figure 5, demonstrate that "removing closer annuli leads to a more significant drop in MAE" (page 7) and that "edges at similar distances have comparable importance" (page 7). These findings provide substantial empirical support for the theoretical basis of RISE and its radius-of-influence conceptualization. 
Methodological fairness is ensured by configuring "the baseline models' budgets... such that they strictly preserve more edges than RISE" (page 7), thereby introducing a conservative bias against the proposed method. This experimental constraint strengthens the validity of the comparative analysis by demonstrating RISE's superior performance despite operational disadvantages in edge preservation capacity. The visualization analysis of explanatory subgraphs in Figure 4 offers qualitative evidence supporting RISE's capacity to generate chemically meaningful interpretations. The authors observe that "under a small budget, when explanation methods can preserve only a limited number of edges, RISE is the only method that selectively retains edges corresponding exclusively to chemical bonds" (page 8). This observation underscores the practical relevance of RISE for molecular science applications. The experimental methodology and analytical procedures presented in this manuscript provide compelling evidence for RISE's effectiveness in explaining 3D molecular GNNs through rigorously designed and systematically executed evaluations. Supplementary Material: The supplementary materials in this submission strengthens the central findings. Appendix B contains an animated visualization (Figure 6) depicting the radius of influence optimization process, elucidating RISE's operational dynamics. Appendix D supplies additional explanatory subgraph visualizations that further demonstrate RISE's capacity to generate chemically meaningful interpretations. The other sections provide additional supporting materials for the effectiveness of the RISE. This well-structured supplementary documentation significantly enhances the manuscript's reproducibility, technical clarity, and empirical foundation through systematic presentation of supporting evidence and methodological details. 
Relation To Broader Scientific Literature: This submission effectively positions its contributions within the domains of molecular graph learning, graph neural networks, and explainable artificial intelligence scholarship. Building on established literature concerning the limitations of molecular representations, the authors articulate that "chemical behaviors and biological functions of molecules are largely determined by their 3D geometric structures" (page 1) while simultaneously recognizing contemporary advancements in three-dimensional graph learning methodologies. The work extends the foundations established by previous explanation frameworks, such as GNNExplainer and PGExplainer. The authors identify a substantial methodological gap in the current literature, noting that "existing approaches struggle to effectively explain 3D GNNs" (page 2). They appropriately acknowledge the previously published research specifically addressing 3D GNN explanation while critically assessing its methodological constraints and limitations. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: 1. The paper focuses on relatively small molecular graphs. How would RISE scale to larger molecular systems that would exhibit long-range interaction? 2. How does RISE compare with attention-based explanation methods that might be adapted for 3D GNNs? Are there insights from attention mechanisms that could be incorporated into RISE? 3. Have you investigated the robustness of RISE explanations to small perturbations in molecular geometry? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your efforts reviewing our work. We respond to your questions below. ## Questions For Authors: > Extension to larger systems with long-range interactions - Even in larger atomic systems, short-range interactions typically dominate chemical bonding and molecular stability. Covalent bonds, hydrogen bonds, and van der Waals forces are strongest at short distances, meaning **our method remains highly applicable and effective in most cases**. - However, there are scenarios where long-range interactions play a crucial role. For example, in proteins, tertiary and quaternary structures depend heavily on long-range interactions such as salt bridges, hydrophobic packing, and π-π stacking. **We have acknowledged this in the discussion of future work in Sec. 5.** - **Our approach also has the potential to be extended to such scenarios.** Instead of using a single value as the radius mask, we could allow multiple radii of influence, represented as annular regions that collectively adhere to a predefined budget. For example, for a given atom, the model could identify edges with distances in the ranges [0.0, 0.3] and [0.6, 0.8] as important. This extension aligns well with chemical principles, as different types of interactions—such as covalent bonding, van der Waals forces, and electrostatic interactions—tend to dominate at specific distance ranges. > Attention-based explanation methods - There are some attention-based methods in the literature, with Graph Attention Networks (GAT) being a representative. **Attention-based methods are ante-hoc**, e.g. [1]. Ante-hoc methods are built into the model itself, making it inherently interpretable (like based on evolving attention scores) while **our work's focus is on post-hoc explanation of any existing 3D GNNs** that use radius cut-off graphs. The scope of our work and that of attention-based explanation are different, so there is not a direct comparison. 
- In terms of incorporating insights from attention mechanisms, attention-based GNNs (e.g., GAT) provide explicit edge importance scores for each node with respect to all other nodes. This is different from subgraph extraction, where we try to find the important edges (substructures) with respect to the entire graph. Therefore, we think that attention mechanisms might be incorporated into RISE as a joint optimization process. We would jointly optimize attention weights and explanation masks, where the explanation seeks to refine or filter the attention. **Although this is a very interesting perspective, it is beyond the scope of our current work.**

> Small perturbations in molecular geometry

- This is a good point! We did not test RISE under small perturbations in molecular geometry in the original version, because **small perturbations will alter the ground-truth values of chemical properties, making evaluation intractable.** Since the exact chemical property values after perturbation are unknown (intractable for us to compute each time a small perturbation is introduced), it is not possible to faithfully assess or compare different explanation methods in such cases.
- Assuming minor perturbations in atomic positions do not alter chemical properties, we conducted a robustness test on the same 4 molecules as in the qualitative results in Appendix D (discussed in the last paragraph of Sec. 4.2). Results indicate that when adding normal Gaussian noise to a small fraction (1/5) of atoms, RISE maintains explanation stability. While perturbations affect the radii of influence, the binary edge mask remains unchanged, which means the subgraphs extracted by RISE remain the same. **This demonstrates the robustness of RISE under small perturbations** (again, under the assumption that minor perturbations in atomic positions do not alter chemical properties).
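A simplified version of this robustness check can be sketched as follows. This is our illustration only: bounded uniform noise stands in for the Gaussian noise described above, the radii are hypothetical fixed values, and `hard_mask` plays the role of the binary edge mask:

```python
import math
import random

def hard_mask(positions, radii):
    # Binary edge mask induced by the radii of influence: i -> j iff d_ij < r_i.
    n = len(positions)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and math.dist(positions[i], positions[j]) < radii[i]}

def perturb(positions, frac=0.2, scale=0.05, seed=0):
    # Perturb a fraction of the atoms with small bounded noise
    # (a stand-in for the Gaussian perturbation described above).
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(positions)),
                            max(1, int(frac * len(positions)))))
    return [tuple(c + rng.uniform(-scale, scale) for c in p) if i in chosen else p
            for i, p in enumerate(positions)]

pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (3.5, 0.0, 0.0)]
radii = [1.7, 1.7, 0.5]  # hypothetical optimized radii of influence
assert hard_mask(perturb(pos), radii) == hard_mask(pos, radii)  # mask is stable
```

The check passes here because every edge distance sits well away from its radius threshold; stability is exactly the property that, as stated above, small geometric noise does not flip the binary mask.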
[1] Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism, Siqi Miao et al., ICML 2022 --- Rebuttal Comment 1.1: Comment: Thanks for your detailed responses, which address all my concerns. I have no further comments. --- Reply to Comment 1.1.1: Comment: Thank you for your response and your kind, positive feedback on our work. We're delighted to hear that you're satisfied with our reply.
Summary: This paper proposes a novel explanation method that localizes interpretability within each node’s immediate 3D neighborhood. By defining a "radius of influence," the approach constrains message passing to spatially and structurally relevant subgraphs. This enhances interpretability while maintaining alignment with the physical dependencies in molecular applications. ## update after rebuttal The author's response has resolved many of my questions, and I will increase the score. Claims And Evidence: This paper provides a detailed analysis of the differences between 2D and 3D models, proposes a graph explanation method for the 3D domain, and demonstrates its superiority through experiments. Methods And Evaluation Criteria: Yes Theoretical Claims: I am so sorry that I am not an expert in this field. Based on the reasoning from 2D to 3D, these formulas might be correct, but I am not entirely sure. Experimental Designs Or Analyses: The experiments compare the proposed RISE with some 3D baselines. However, from the paper, it can be seen that the proposed 3D model is an extension of the 2D model. Shouldn't an experiment be designed to compare the differences between the 2D and 3D models and examine the advantages of the 3D model in terms of interpretability? Supplementary Material: No Relation To Broader Scientific Literature: Not found Essential References Not Discussed: No Other Strengths And Weaknesses: None Other Comments Or Suggestions: Section 4.1 is missing the bolded Results. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback! We provide our responses below.

## Experimental Designs Or Analyses

> Comparison with baselines on 2D GNNs

- **First of all, we want to emphasize that our work is not just an extension of its 2D counterparts.** Our work reformulates 3D graphs as directed proximity graphs (DPGs), and based on this formulation, we determine the radii of influence. This is a principled approach specifically designed for 3D GNNs based on their unique characteristics.
- We are not sure whether you are suggesting that we compare our method with 2D explanation baselines, even though they are designed for 2D GNNs, or that we evaluate our method on 2D GNNs.
- **If you are suggesting a comparison with baselines for 2D GNN explanation, we have already included them in our work.** GNNExplainer and PGExplainer are the two flagship works for 2D GNN explanation. We have also compared our method with LRI, the only available explanation method designed for 3D GNNs.
- **If you are suggesting an evaluation on 2D GNNs, our method is not applicable to them, as it explicitly considers the properties of 3D geometric graphs and 3D GNNs.** 2D topological GNNs cannot be formulated as directed proximity graphs, and there is no concept of distance or radius in 2D topological graphs. Our method aims to find the radii of influence for 3D geometric GNNs.

## Other Comments Or Suggestions

> Missing bolded 'Results'

- Thanks for pointing this out! We will revise it in the next version.
Summary: The paper presents RISE, a method for explaining 3D molecular GNNs by identifying key substructures using a radius of influence for each atom. RISE formulates the explanation process as an optimization problem that finds a compact, chemically interpretable subgraph while maintaining prediction fidelity. Instead of soft edge masks, RISE optimizes the radius of influence for each atom, ensuring consistency between the explanation and the model’s learning process. The authors evaluate RISE on the QM9 and GEOM datasets, comparing it against GNNExplainer, PGExplainer, and LRI-Bernoulli. The results show that RISE produces more interpretable explanations, capturing actual chemical substructures like functional groups and bonds while maintaining high explanation fidelity.

Claims And Evidence: 1. Claim: **Unlike traditional 2D molecular graphs, 3D molecular graphs introduce implicit dense edges based on spatial proximity, making existing explainability methods ineffective.** However, the experiments in this paper show that RISE doesn't significantly outperform GNNExplainer and PGExplainer. Since both baselines are very old, the motivation of this paper (that existing explainability methods are ineffective on 3D graphs) doesn't seem to be valid.

Methods And Evaluation Criteria: 1. While the idea of using spatial relationships in 3D GNN explanations is valuable, the approach is mainly a reformulation of standard explainability techniques rather than a fundamentally new method. 2. RISE is compared against GNNExplainer, PGExplainer, and LRI-Bernoulli. The comparisons are reasonable, but the lack of additional subgraph-level explainability baselines limits the scope of the evaluation since this is a subgraph-based method. (For example, SubgraphX, MAGE) 3. Unlike other methods that produce random subgraphs, RISE extracts explanations that correspond to actual chemical bonds, making them useful for domain experts.
However, the authors are encouraged to support this claim with comparative results against existing methods. Theoretical Claims: No Experimental Designs Or Analyses: 1. The paper demonstrates that distance is a key factor in message-passing but does not explore alternative explanation strategies or compare different distance-based formulations. 2. The experiments are not thorough; see "Claims And Evidence". Supplementary Material: No Relation To Broader Scientific Literature: Recognizing Learning Differences in 3D GNNs: Rather than treating graphs as fixed node-edge structures, RISE treats them as node-radius systems, which better aligns with molecular interactions. Essential References Not Discussed: [1] Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra, 2023 This paper also evaluates on QM9. How does your method compare with it? Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for reviewing our work. We respond below. ## Claims And Evidence - The baselines are given budgets **preserving more edges than RISE, favoring them in comparison**. - Despite this, RISE **consistently** shows strong performance. > Baselines old; existing methods ineffective on 3D graph - **The issues of applying existing methods to 3D GNNs have also been noted in Sec. 2 of the LRI paper [1].** - **There are not too many explanation methods that can be used.** - The provided reference **MAGE is not comparable to instance-level methods (our work)**, as it is **model-level**. MAGE does not generate subgraphs but identifies key motifs **across the whole dataset** for a prediction class. - The other reference, SubgraphX, is older than some baselines like PGExplainer and LRI. **Nevertheless, we have included its results (see "Methods and Evaluation Criteria")**. ## Methods And Evaluation Criteria > Not a fundamentally new method - Our method is **not just a reformulation** of existing methods. - **Eq. (1) defines a well-established general framework for subgraph extraction.** Many works [1-4] reformulate Eq. (1) to Eq. (4) for gradient-based optimization with budget and sparsity constraints. - We reformulate Eq. (1) to Eq. (5) using **directed proximity graphs (DPG) tailored for 3D graphs**. **This is the first and only reformulation to 3D GNNs that is directly optimizable using radii of influence. This is fundamentally different from Eq. (4), which requires additional constraints to address issues from soft mask relaxation.** > Comparison with subgraph-level explainability baselines - **The provided reference may not be directly comparable to our method.** - **MAGE is a model-level explanation method** that identifies the most influential motifs **across the entire dataset** for a given class label. Our method is instance-level, identifying **a subgraph for each graph**. 
- SubgraphX searches through various node combinations, and the search space for hard mask optimization scales exponentially with the number of nodes. **Our method and all baselines relax hard masks to enable efficient gradient-based optimization.** Consequently, **SubgraphX is 5× slower than our method on QM9, with an even larger gap for larger molecules**. - Nevertheless, SubgraphX is still instance-level; we compare it with RISE. **The results below clearly show that RISE outperforms SubgraphX** (metric is MAE; lower is better). We present results on the first 1,000 molecules from the QM9 test data due to time constraints. We will include the full results in the revision if you feel it is necessary. - **Beyond time complexity, SubgraphX shares the same limitation as all other node-masking methods (see Appendix C).** Its exponential scaling prevents extension to edge masking. |Method|μ(0.3)|μ(0.4)|μ(0.5)|α(0.3)|α(0.4)|α(0.5)| |-|-|-|-|-|-|-| |SubgraphX|10.844|10.528|10.354|2.550|2.632|2.816| |RISE|0.482|0.320|0.134|1.983|1.311|0.525| |Method|HOMO(0.3)|HOMO(0.4)|HOMO(0.5)|LUMO(0.3)|LUMO(0.4)|LUMO(0.5)| |-|-|-|-|-|-|-| |SubgraphX|0.387|0.282|0.272|0.610|0.557|0.540| |RISE|0.375|0.190|0.095|0.474|0.251|0.114| > RISE extracts meaningful results - **We have already provided several qualitative results in Appendix D**, as discussed in the last paragraph of Sec. 4.2. If you believe it is necessary, we will include more in the next revision. ## Experimental Designs Or Analyses > Different distance-based formulations - Our work is motivated by the finding that distance is a key factor in message passing for 3D geometric graphs. **No existing explanation method uses a distance-based formulation**. - That said, we are unsure if you suggest designing alternative distance-based formulations for comparison with RISE. **Our work is among the first to explore 3D GNN explanation. RISE is currently the best method we can propose based on our insights. 
Our findings can inspire the community to develop more geometry-oriented subgraph extraction methods.** ## Essential References Not Discussed > Comparison with the provided reference - **The scope of the provided reference [5] is very different from our work.** - **[5] does not explain 3D geometric graphs but rather 2D graphs.** - "*Molecular graphs were generated from the SMILES strings of the molecules*" (p2, 1st paragraph under Methods, 2nd last sentence) - While they also use QM9, their focus is on explaining models predicting the X-ray absorption spectrum (XAS). In contrast, we propose a general method for 3D GNN explanation and evaluate it across multiple quantum-level properties. [1] Interpretable Geometric Deep Learning via Learnable Randomness Injection [2] Parameterized explainer for graph neural network [3] GNES: Learning to explain graph neural networks [4] Stratified GNN Explanations through Sufficient Expansion [5] Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra
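As a concrete illustration of the directed proximity graph (DPG) construction discussed above, here is a minimal sketch. This is illustrative only: the function name and the edge-direction convention are ours, not taken from the paper's code.

```python
import numpy as np

def dpg_edges(pos, radii):
    """Directed proximity graph: add edge j -> i when atom j lies within
    atom i's radius of influence (one possible convention)."""
    n = len(pos)
    # pairwise Euclidean distances between 3D atom coordinates
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return [(j, i) for i in range(n) for j in range(n)
            if j != i and dist[i, j] <= radii[i]]
```

Under this sketch, shrinking an atom's radius prunes its incoming edges, which is how optimizing per-atom radii could yield a compact explanatory subgraph.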
Continual Reinforcement Learning by Planning with Online World Models
Accept (spotlight poster)
Summary: In this work, the authors propose a new task-unknown continual reinforcement learning setting, in which the agent needs to learn a sequence of tasks without explicit task boundaries or IDs. To deal with this new setting, the authors propose a new continual RL method, OA, which introduces a sparse world model learned via FTL models to overcome catastrophic forgetting. In addition, the authors propose a new CRL benchmark, Continual Bench, which will be helpful for future work. Claims And Evidence: I think the claims are clear and supported by convincing evidence. Methods And Evaluation Criteria: I have two questions about the algorithm in Section A.2. First, in line 2, the loop starts with a condition: task changes. How does the agent know when the task changes? What exactly does this condition mean? Second, in lines 12 to 13, $A_{ss}^{\left(t\right)}$ is calculated with $A_{ss}^{\left(t+1\right)}$ and $\phi_s$. Will the sparsity of $A_{ss}$ and $B_{s}$ change after these lines? How is the sparsity maintained? Theoretical Claims: I am interested in understanding the extent to which Assumption 1 and Assumption 3 are sustained throughout the experiments. Specifically, I would like to inquire whether the configuration of $\lambda$ in the experimental setup is sufficient to satisfy the conditions stipulated in Assumption 3. Experimental Designs Or Analyses: I would like to know why the horizon $H$ is set to 15. As far as I know, this is a large number for model-based RL, but not a sufficient one if you want to generate whole trajectories. Supplementary Material: N/A Relation To Broader Scientific Literature: Following Wołczyk et al., this work proposes another vision of CRL, which will be helpful for future work. I think this work shows that CRL is a very different problem from continual classification. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: I'm sorry, but I do not understand lines 2 to 3 of the proof of Lemma 3. Why is $\text{Tr}\left(\phi\left(\mathbf{x}_t\right)\phi\left(\mathbf{x}_t\right)^\top\mathbf{W}^{\left(t+1\right)}\mathbf{W}^{\left(t+1\right)\top}\right)-\text{Tr}\left(\phi\left(\mathbf{x}_t\right)\phi\left(\mathbf{x}_t\right)^\top\mathbf{W}^{\left(t\right)}\mathbf{W}^{\left(t\right)\top}\right)$ less than or equal to $\text{Tr}\left(\phi\left(\mathbf{x}_t\right)\phi\left(\mathbf{x}_t\right)^\top\left(\mathbf{W}^{\left(t+1\right)}-\mathbf{W}^{\left(t\right)}\right)\left(\mathbf{W}^{\left(t+1\right)}-\mathbf{W}^{\left(t\right)}\right)^\top\right)$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable review and questions. Below we respond to the comments and raised questions. ***Methods And Evaluation Criteria*** > What does "task changes" in Algorithm A.2 mean and how does the agent know this? Admittedly, the presented algorithm still requires the task boundary information to reset the planner state. We save the planner state (memory, please see line 245) mainly to reuse the previous planning for better initialization, which is an implementation choice. We could also discard the memory and re-plan at every time step, and therefore remove this condition. > Will the sparsity change after lines 12-13 of Algorithm A.2? Yes, the sparsity of the matrix will change after the update. However, each update is guaranteed to be sparse because the sparsity of $\phi_s(x_t)$ is pre-determined, which also bounds the computation per update. ***Theoretical Claims*** > whether the chosen $\lambda$ in the experiments is adequately configured to ensure that Assumptions 1 and 3 hold. Both Assumption 1 and Assumption 3 depend on selecting a sufficiently large $\lambda$ to control the growth of the feature-space covariance and the norms of sub-block inverses. Specifically, Assumption 1 ensures that the empirical average of feature outer products stays close to those from new data, a condition more easily met when $\lambda$ is larger. Meanwhile, Assumption 3 employs $\lambda$ in the regularized inverse $\bigl(\mathbf{A}_{ss}^{(t)} + \tfrac{1}{\lambda}\mathbf{I}\bigr)^{-1}$ to keep $K$ within reasonable bounds. In practice, an appropriate $\lambda$ helps maintain both assumptions throughout the experiments by mitigating outlier effects in the feature space and ensuring that the inverse sub-blocks remain well-conditioned. However, because the ideal value of $\lambda$ can vary by environment or dataset, some tuning is typically required. ***Experimental Designs Or Analyses*** > why the horizon $H$ is set to 15? 
This hyperparameter trades off planning accuracy against computational cost. Setting $H$ too small yields efficient planning, but the planned actions can be short-sighted; a longer $H$ incurs a larger planning cost, but the policy can better optimize long-term rewards. However, since we plan with a learned world model, the model error can compound along the planning horizon, so there is no monotonic benefit from using longer horizons. $H=15$ is an empirical value obtained from our pilot trials. ***Questions For Authors*** Thank you for pointing out the mistake. We missed an extra term in our proof. Below is the corrected derivation:

\begin{align}
{} & \mathrm{Tr}(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\mathbf{W}^{(t+1)}\mathbf{W}^{(t+1)\top}) - \mathrm{Tr}(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\mathbf{W}^{(t)}\mathbf{W}^{(t)\top}) \\
= {} & \mathrm{Tr}\left(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\left(\mathbf{W}^{(t+1)} - \mathbf{W}^{(t)}\right)\left(\mathbf{W}^{(t+1)} - \mathbf{W}^{(t)}\right)^\top\right) \\
& + 2\mathrm{Tr}\left(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\left(\mathbf{W}^{(t+1)} - \mathbf{W}^{(t)}\right)\mathbf{W}^{(t)\top}\right)
\end{align}

Notice that by the definition of $\Delta_t$, we have

\begin{align}
{} & 2\mathrm{Tr}\left(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\left(\mathbf{W}^{(t+1)} - \mathbf{W}^{(t)}\right)\mathbf{W}^{(t)\top}\right) \\
\leq {} & 2\mathrm{Tr}\left(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\Delta_t\mathbf{W}^{(t)\top}\right) \\
= {} & 2\mathrm{Tr}\left(\underbrace{\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\left(\sum_{i=1}^{t}\phi(\mathbf{x}_i)\phi(\mathbf{x}_i)^\top + \frac{1}{\lambda}\mathbf{I}\right)^{-1}}_{\preceq \mathbf{I}}\phi(\mathbf{x}_t)\mathbf{y}_t^\top\mathbf{W}^{(t)\top}\right) \\
\leq {} & 2\mathrm{Tr}\left(\phi(\mathbf{x}_t)\mathbf{y}_t^\top\mathbf{W}^{(t)\top}\right).
\end{align}

So the correct bound should be $f_t(\mathbf{W}^{(t+1)}) - f_t(\mathbf{W}^{(t)}) \leq \mathrm{Tr}\left(\phi(\mathbf{x}_t)\phi(\mathbf{x}_t)^\top\Delta_t\Delta_t^\top\right) + \frac{4}{t}\mathrm{Tr}\left(\phi(\mathbf{x}_t)\mathbf{y}_t^\top\mathbf{W}^{(t)\top}\right)$, which differs from the original bound, where the coefficient of the second term is $\frac{2}{t}$. We will revise the proof in our manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The authors' clarification has satisfactorily addressed my concerns. I have no further questions.
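As a quick standalone sanity check (ours, not part of the rebuttal), the exact trace expansion used in the first step of the corrected derivation above can be verified numerically with random matrices:

```python
import numpy as np

# Check the exact identity:
#   Tr(x x^T W1 W1^T) - Tr(x x^T W0 W0^T)
#     = Tr(x x^T (W1 - W0)(W1 - W0)^T) + 2 Tr(x x^T (W1 - W0) W0^T)
rng = np.random.default_rng(0)
d, k = 5, 3
x = rng.standard_normal((d, 1))    # plays the role of phi(x_t)
W0 = rng.standard_normal((d, k))   # W^(t)
W1 = rng.standard_normal((d, k))   # W^(t+1)

A = x @ x.T
D = W1 - W0
lhs = np.trace(A @ W1 @ W1.T) - np.trace(A @ W0 @ W0.T)
rhs = np.trace(A @ D @ D.T) + 2 * np.trace(A @ D @ W0.T)
assert np.isclose(lhs, rhs)
```

The identity holds exactly because $\mathbf{W}_1\mathbf{W}_1^\top - \mathbf{W}_0\mathbf{W}_0^\top = \mathbf{D}\mathbf{D}^\top + \mathbf{D}\mathbf{W}_0^\top + \mathbf{W}_0\mathbf{D}^\top$ with $\mathbf{D} = \mathbf{W}_1 - \mathbf{W}_0$, and the last two terms have equal trace against the symmetric matrix $\mathbf{x}\mathbf{x}^\top$.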
Summary: The paper presents a Follow-The-Leader-based online world model, implemented as a composition of a learnable linear layer and a fixed-weight non-linear layer, that is used to solve the continual reinforcement learning problem as a part of model predictive control. The world model has a regret bound of $\mathcal{O}(\sqrt{K^2D\log(T)})$ under certain assumptions. The paper also introduces the $\textit{Continual Bench}$ benchmark and demonstrates the superiority of their online world model. ## update after rebuttal I will maintain my score, given that the rebuttal has not significantly changed my opinion of the paper and my rebuttal comment is unaddressed. Claims And Evidence: The paper's main claims are supported as its method outperforms agents built on deep world models on their $\textit{Continual Bench}$ benchmark, and their regret proof is sound given their assumptions. Section 2 claims that "image-based environments demand prohibitive computation resources, and the lack of meaningful overlapping prevents us from evaluating the transfer of resources". This claim isn't necessarily true since Atari environments have been successfully tackled, for offline learning, even back in 2015 with much less compute [1]. It's unclear why online learning should be more challenging w.r.t. computational resources. A minor criticism is that line 128 states that the continual agent may not know the number of tasks, but the paper's algorithm in the appendix makes use of task boundaries to reset $\mu$. [1] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. Methods And Evaluation Criteria: Yes. However, the Continual Bench's number of tasks is only 6, which is considerably smaller than that of Continual World [1], which is 20. 
It would be ideal to test on a larger number of tasks, especially since the paper's learning algorithm OA can run out of capacity due to being a fixed-size linear model. [1] Wołczyk, M., Zając, M., Pascanu, R., Kuciński, Ł., & Miłoś, P. (2021). Continual world: A robotic benchmark for continual reinforcement learning. Advances in Neural Information Processing Systems, 34, 28496-28510. Theoretical Claims: I have read the proof of Theorem 1 presented in the appendix, but not extremely carefully. Experimental Designs Or Analyses: Yes. The evaluation of OA and baselines on the Continual Bench benchmark was insightful. I have one concern, which is that the Continual Bench experiments only train the learning algorithms on 100 episodes per task. To my knowledge, 100 is not a lot of gradient steps for RL algorithms to maximally learn from an environment, and the success rate plot in Figure 5 shows that, when task learning is over, many models' performance curves have not plateaued. While this is not a critical issue for continual learning since the setup does not assume the models can be trained to convergence, repeating the experiment with a larger episode count would yield valuable insights into how the performance of different world models changes when given more of a chance to learn. Supplementary Material: I have read all the supplementary material in detail. Relation To Broader Scientific Literature: The paper adds to the array of recent work that investigates continual reinforcement learning through the lens of model stability and plasticity. To my knowledge, this is the first work that investigates online world model learning applied to continual reinforcement learning. Essential References Not Discussed: To my knowledge, the paper covers all essential references. Other Strengths And Weaknesses: - It is unclear how OA can scale to more complex problems, as it seems both OA and CEM are suited for simple, low-dimensional state and action spaces. 
Other Comments Or Suggestions: - Specify what $g, \delta$ are in Eq 8. Questions For Authors: - How applicable is their Assumption 1 for Theorem 1? Doesn't it assume the observations become more and more similar to past observations? In that case, wouldn't the regret guarantee fail to apply in highly nonstationary environments? - How effective is the sparse online world model learning update for discrete state spaces? It seems to hinge on the L2 loss for the derivations to hold. - How does Continual Bench change the reward exactly? When learning a new task, does the model not receive reward for previous tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable review and questions. Below we respond to the comments and raised questions. ***Claims And Evidence*** > The claim that "image-based environments... We agree that the Atari games were solved even back in 2015. However, it typically takes millions to billions of simulated time steps to learn a single task, which amounts to hours to days of wall-clock time even in a distributed training setup [1,2]. Besides, in the online continual RL setting, we could not parallelize the data collection since the online agent only has a single lifetime. This constraint, together with the fact that we need to learn multiple tasks sequentially in continual RL, could make the computation required by image-based experiments quite expensive, and therefore we opt for the state-based environments. > Line 128 states that continual agent... Admittedly, the presented algorithm still requires the task boundary information to reset the planner state. We save the planner state (memory, please see line 245) mainly to reuse the previous planning for better initialization, which is an implementation choice. We could also discard the memory and re-plan at every time step, and therefore remove this condition. ***Methods And Evaluation Criteria*** We agree that 6 tasks are limited. However, we hope to clarify that Continual World's CW20 is actually CW10 repeated twice. Although we only include 6 sequential tasks, we cover different objects like window, peg, button, door and faucet, which have a similar difficulty level to CW10's tasks. As future work, we plan to extend the Continual Bench to include more tasks, making it more useful for evaluating life-long learning online agents. ***Experimental Designs Or Analyses*** We agree with your points. We set the maximum number of episodes per task to 100 due to the limited computational budget and time constraints. 
We'd love to conduct experiments for longer episodes once the computational resources are available. ***Other Strengths And Weaknesses*** Thanks for raising this important question. We admit that the existing algorithm has difficulty scaling up to high-dimensional problems, as we have discussed in Appendix D. Nevertheless, we hope the online world model learning + planning framework can be useful for developing online agents for more complex problems. We hope to continue this line of research to develop more capable model classes for learning world models online, and design more efficient approximate planning algorithms. ***Other Comments Or Suggestions*** Sorry for missing this. $g$ is the goal state and $\delta$ is a small number to determine whether the current state reaches the goal (typically 0.005 in our experiments). We will add the explanation in our revision. ***Questions For Authors*** > How applicable is their assumption 1 for theorem 1?... We'd like to clarify that the condition $\sup_{\mathbf{x}}\|\phi(\mathbf{x})\phi(\mathbf{x})^{\top} - \frac{1}{t}\sum_{i=1}^t\phi(\mathbf{x}_i)\phi(\mathbf{x}_i)^{\top}\|_2 \leq \frac{1}{\lambda t}$ does not require that new observations must become more and more similar to features that have already been seen. Instead, it ensures that any single feature vector has negligible impact on the overall covariance once enough data is gathered, enabling robust and consistent estimates. We acknowledge that this analysis may be less suited to highly nonstationary environments. However, if the environment evolves smoothly enough that each new "epoch" of data has a bounded impact on the overall covariance after sufficient samples, the same theoretical guarantees should apply. > How effective is the sparse online world model learning update for discrete... We use L2 loss primarily for analytical simplicity. 
Similar guarantees hold with other losses (e.g., cross-entropy for discrete state space cases) by maintaining the same assumptions. Our L2-based derivation does not rely on the state continuity and is not a fundamental constraint. Even with discrete states, one can also embed them in a continuous feature space (e.g., one-hot or learned embeddings) and apply the same incremental updates. > How does continual bench change reward exactly? When the environment switches to a new task, the observation and the reward signal of the **new** task will be provided to the agent, and the agent will **not** receive reward for previous tasks. So the agent only has a single stream of experience, and it is expected to learn from it without forgetting how to solve old tasks. --- [1] Espeholt, Lasse, et al. "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures." International conference on machine learning. PMLR, 2018. [2] Kapturowski, Steven, et al. "Recurrent experience replay in distributed reinforcement learning." International conference on learning representations. 2018. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications. Out of curiosity, do you expect the results to change significantly if the agent continues to receive rewards for previous tasks? One could argue that is the training regime that many continual learning benchmarks operate under, even though they typically cannot revisit old tasks, at least in their training stream without experience replay.
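For illustration of the closed-form FTL-with-regularization solution discussed in this exchange, here is a generic dense, batch-form sketch. The function name is ours, and the paper's sparse incremental update differs; this only shows the regularized least-squares form $\mathbf{W} = \bigl(\sum_i \phi_i\phi_i^\top + \tfrac{1}{\lambda}\mathbf{I}\bigr)^{-1}\sum_i \phi_i\mathbf{y}_i^\top$.

```python
import numpy as np

def ftl_ridge(Phi, Y, lam=10.0):
    """Batch Follow-The-Leader with L2 regularization, in closed form:
    W = (Phi^T Phi + I/lam)^(-1) Phi^T Y  (generic sketch)."""
    d = Phi.shape[1]
    A = Phi.T @ Phi + np.eye(d) / lam   # regularized feature covariance
    B = Phi.T @ Y                       # feature-target cross terms
    return np.linalg.solve(A, B)
```

With a large $\lambda$ (light regularization) and noiseless linear data, the solution recovers the generating weights, consistent with the role of $\lambda$ described above.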
Summary: In this paper, the authors focus on the problem of Continual Reinforcement Learning, solving multiple tasks that are presented in sequence. In practice, this is a difficult problem because conventional methods often lead to Catastrophic Forgetting. To this end, the authors propose a model-based agent that learns via a Follow-The-Leader approach. The authors justify their approach through mathematical reasoning and proof. They then evaluate their approach on a multi-task setup (Continual Bench) based on Meta-World, demonstrating promising results over the state of the art. ## update after rebuttal In light of the other reviewer's comments, I keep my assessment of accept. Claims And Evidence: Yes. The basic claims made in the paper seem to be justified empirically in the evaluations, at least in the specific dataset used (Continual Bench). The quantitative results are strong and show promise. Methods And Evaluation Criteria: Yes, albeit only one dataset was used (Continual Bench), and this dataset was created by the authors. If there were a way to try other Continual Learning benchmarks, that would strengthen the case of the paper. Theoretical Claims: The theoretical claims made in the paper appear to be correct, although it's possible I may have missed details. Experimental Designs Or Analyses: Yes. The experimental design is sound. Supplementary Material: No, I did not thoroughly check through the supplementary material. Relation To Broader Scientific Literature: Compared to prior work, the authors propose a new method for Continual Reinforcement Learning in addition to a new dataset (Continual Bench) Essential References Not Discussed: No. Not to my knowledge. Other Strengths And Weaknesses: The authors focus on an important problem in Reinforcement Learning (Multi-task/Continuous Learning) and propose an approach with strong theoretical justification. Furthermore, they present promising results and a new dataset for future research. 
I believe this paper would be a good contribution to the conference. The paper's argument could be strengthened with evaluation on more datasets/domains; although it may be difficult to find appropriate continual learning domains as the authors note. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your supportive review, as well as the suggestions that we should evaluate the method on additional datasets/benchmarks. The main challenge of doing so is that there is no appropriate continual RL test suite to our knowledge. This is also the motivation for us to discuss the importance of unified world dynamics and to propose a new benchmark (Continual Bench). Nevertheless, we will continue working on this area and test on newly developed benchmarks if any. Thank you again for your feedback!
Summary: Update after rebuttal: I have read the author responses and reviews. I would like to maintain my score recommending acceptance. Summary: The paper proposes an approach to Continual Reinforcement Learning (CRL) through the development of an Online Agent (OA) that leverages online world models. The central idea is to address catastrophic forgetting—a major challenge in CRL—by employing Follow-The-Leader (FTL) shallow models to capture world dynamics, paired with model predictive control (MPC) for planning. The authors introduce Continual Bench, a new benchmark designed to test both forgetting and transfer in CRL environments. Empirical results show that OA outperforms strong baselines built on deep world models with traditional continual learning techniques. Overall impression: I really like the way the paper is written. It is quite modular and clear. Sufficient literature has been cited throughout the text. The core idea of learning a unified world dynamics that can be shared among different tasks is appealing for transfer. I think the community will benefit from this paper. Claims And Evidence: This paper claims that the OA agent can solve CRL tasks incrementally without catastrophic forgetting by planning with online world models. OA surpasses deep world models combined with various continual learning methods. Regret bounds for the sparse online model learning process are formally derived and justified with key assumptions (like feature mappings stabilization and bounded inputs/outputs). Methods And Evaluation Criteria: The evaluation metrics used (average performance, regret, learning curves) seem appropriate. I really like the way the authors evaluate on all previously seen tasks even when learning on a particular new task to track catastrophic forgetting. Theoretical Claims: The paper provides a regret bound for the sparse online model learning process, ensuring that OA's world model updates incrementally without forgetting past tasks. 
Experimental Designs Or Analyses: The setup allows studying both forgetting (by testing how agents retain old skills) and transfer (by evaluating knowledge reuse across similar tasks). Tasks are sequenced to maximize distributional shifts, testing how well models generalize without task-specific identifiers. Supplementary Material: I did not review supplementary. Relation To Broader Scientific Literature: The work builds on model-based RL foundations (Sutton, 1990) and extends them into CRL. It directly engages with recent FTL-based approaches (Liu et al., 2024) and compares against standard CL methods like EWC, SI, and PackNet. The benchmark Continual Bench critiques and refines prior environments like Continual-World (Wołczyk et al., 2021), addressing their limitations by ensuring consistent state spaces for testing transfer and forgetting. Essential References Not Discussed: None Other Strengths And Weaknesses: Covered already. Other Comments Or Suggestions: Typo: Line 292: stat -> state Questions For Authors: What is meaningful overlapping referred to? overlapping of what? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your supportive review and questions. Below we respond to the comments and raised questions. --- ***Q1: What is meaningful overlapping referred to?*** We intended to mean the overlapping of task attributes. For example, two Atari games may lack meaningful overlapping because they have distinct appearance and different underlying game logic. The lack of task attribute overlapping makes the study of transfer difficult. In our proposed environment (Continual Bench), we carefully designed a sequence of tasks that share certain attributes (e.g., moving the robotic arm, gripping objects), ensuring meaningful overlapping of task attributes. Thank you again for your question, and we will make the term clearer in our next revision, as well as fixing the typo you mentioned.
Evaluating VLMs' General Ability on Next Location Prediction
Reject
Summary: This paper introduces a benchmark for evaluating the performance of vision-language models (VLMs) on next-location prediction. The benchmark is created with open-source map data and public taxi trajectory data. They draw the first 12 points of a taxi trajectory on the map and ask the VLMs to predict the location of the 13th point. Using this benchmark, the authors evaluated the performance of 14 VLMs. They found that the VLMs can produce meaningful predictions rather than random guesses. The authors have also set up a platform to evaluate human prediction performance on this benchmark. They found a significant gap between the performance of VLMs and that of humans. ## update after rebuttal No updates. Claims And Evidence: * The benchmark and study results are useful contributions to the community. * However, it is up for debate how much value this new benchmark provides and what types of new research can benefit from it. Methods And Evaluation Criteria: * The evaluation setup is reasonable. Theoretical Claims: * There are no theoretical proofs in the paper. Experimental Designs Or Analyses: * The experimental designs are sound. Supplementary Material: * I did not review the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We appreciate the time and effort you have devoted to reviewing our work. Below, we address your concerns in detail. **How much value does this new benchmark provide?** 1. **From the perspective of large vision-language models (VLMs)**: The proposed benchmark is fundamentally designed to evaluate the *reasoning capabilities* of VLMs. Consider a simple example: suppose the vertices of a square are labeled A, B, C, and D in order. If a driver travels from A to B to C, and all roads are accessible without complex traffic rules, it is highly unlikely that the next point will be D, since a direct route from A to D would have been more efficient. This suggests that the prediction of the next location relies on understanding the *topological structure of the road network*. In more complex real-world maps, this task requires even stronger reasoning skills. Our benchmark provides a framework to quantitatively assess whether VLMs possess such abilities. As shown in our results, current VLMs still fall short of human-level map reasoning and spatial understanding. 2. **From the perspective of next-location prediction**: Most existing approaches rely on learning city-specific trajectory patterns, often limiting their generalizability. In contrast, our work explores the potential of VLMs to leverage visual understanding of road networks to achieve generalizable next-location prediction. Because the visual representation of road networks tends to be domain-agnostic, VLM-based approaches offer the potential to build universal models for trajectory prediction. Our benchmark thus serves as a standardized evaluation framework for such vision-based next-location prediction, not limited to VLMs alone. **What types of new research can benefit from this benchmark?** 1. **Spatial computing:** One promising application area is spatial computing. Real-world navigation often requires path planning over maps. 
Using VLMs to understand spatial layouts is an emerging research direction. Our benchmark allows researchers to quantitatively assess a model’s ability to reason about spatial structures and path planning, helping to avoid potential pitfalls in real-world deployments. 2. **VLM-based reasoning**: As discussed above, the benchmark evaluates the reasoning abilities of VLMs. Since road networks form complex topological graphs, predicting the next step in a trajectory requires non-trivial reasoning over these structures. Our benchmark provides a testbed to evaluate whether VLMs can exhibit human-like map-based reasoning. 3. **General-purpose next-location prediction**: By leveraging the general visual understanding capabilities of VLMs, there is strong potential to build general-purpose next-location predictors that are not bound to a specific city or dataset. As one of the first works to explore vision-based next-location prediction, we hope this study inspires further research into broadly applicable, vision-driven trajectory modeling. Once again, thank you for your thoughtful feedback and the time you’ve dedicated to our work. If you have any further questions or if any concerns remain unaddressed, please feel free to reach out; we would be happy to continue the discussion. Sincerely, The authors of Paper 740 --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Summary: This paper explores the general capability of Vision-Language Models (VLMs) in performing next-location prediction, a key aspect of spatial intelligence that humans often handle through visual estimation. The authors introduce VLMLocPredictor, a novel benchmark designed to evaluate VLMs' predictive capabilities on next-location tasks. The paper makes the following key contributions: 1. Visual Guided Location Search (VGLS) Module – A recursive refinement strategy that leverages visual guidance to iteratively narrow down the search space for next-location prediction. The VGLS module employs a hierarchical question-answering process where the VLM predicts which half of the map is more likely to contain the next location, progressively refining the prediction area through iterative feedback. 2. Comprehensive Vision-Based Dataset – The dataset integrates open-source map data with publicly available taxi trajectory data from the Porto and Chengdu datasets. The dataset is categorized into easy, medium, and hard subsets based on the number of roads and trajectory distances, creating a structured evaluation framework. 3. Human Benchmark – The authors established a human performance benchmark through a large-scale social experiment, where over 100 participants predicted the next trajectory point on the same test set used for the VLMs, generating over 10,000 samples. ## update after rebuttal Thanks for providing rebuttals. Unfortunately, most of my concerns remain unaddressed. I would like to maintain my original rating. Claims And Evidence: In Section 4.3.1, the authors claim that "not all scenarios have sufficient data for training RNNs." However, the reviewer did not find concrete examples or real-world cases in the paper where training data for next-location prediction tasks is insufficient. This weakens the strength of the claim due to the lack of supporting evidence. 
In Section 5, the authors state that methods based on large language models “face challenges related to cross-city transferability." However, no clear supporting evidence, such as cross-city experiments or quantitative analysis, is provided to substantiate this claim. Including such evidence would strengthen the credibility of the conclusion. Methods And Evaluation Criteria: Overall, the proposed methods and evaluation criteria are thoughtfully designed and appropriately matched to the nature of the next-location prediction task. However, additional clarification on the stopping criteria for the VGLS module and the method for generating geographic coordinates from the final selected area need to be elaborated to enhance the methodological rigor. Theoretical Claims: The paper does not present any formal theoretical proofs or rigorous mathematical derivations. The proposed Visual Guided Location Search (VGLS) module is primarily described as a procedural algorithm rather than a theoretical framework supported by formal proofs. While the paper introduces a recursive refinement strategy and discusses its logical foundation, it does not attempt to formally prove the convergence or optimality of the VGLS process. Therefore, there are no theoretical claims that require validation through mathematical proofs. The authors could consider providing a formal analysis of the convergence properties, computational complexity, and potential error bounds of the recursive search process. This would enhance the theoretical robustness of the proposed method. Experimental Designs Or Analyses: The experimental design and analysis in the paper are generally well-constructed and thoughtfully aligned with the research objectives. However, a few aspects require further clarification or improvement: 1. The paper reports that human performance was collected through a large-scale social experiment. 
However, it is unclear whether participants were provided with consistent task instructions or whether there were controls in place to ensure that human performance data is reliable and consistent across different scenarios. Furthermore, the paper compares VLM performance with that of "experts" without clarifying the qualifications or selection criteria for these experts. Providing more details about the expertise level of these participants would strengthen the validity of the comparison. 2. The use of MAE, RMSE, and pass rate (for 100m, 500m, and 2000m thresholds) is appropriate for evaluating prediction accuracy and usability. However, the paper does not clarify the rationale behind selecting these specific distance thresholds for pass rate evaluation. An explanation based on the practical implications of these thresholds would make the evaluation more convincing. Supplementary Material: The reviewer has carefully reviewed the Further Experiments section. The authors conclude that the blue-yellow combination is more effective than other high-contrast combinations when dividing the image regions. However, considering the potential influence of color transparency, background color, and route color would further improve the completeness and robustness of the study. Relation To Broader Scientific Literature: The paper makes meaningful contributions at the intersection of next-location prediction, vision-language modeling, and spatial reasoning. It extends VLMs beyond their traditional use cases, introducing a training-free approach that generalizes across different environments. Models with more parameters exhibit stronger reasoning for next-location prediction, which aligns with the prior conclusion that parameter count is crucial to a model's inference ability. Essential References Not Discussed: The paper provides a relatively comprehensive summary of related works.
The authors have covered key references in the fields of next-location prediction, vision-language models (VLMs), and spatial reasoning. The discussion effectively situates the contributions within the broader scientific context, with no significant gaps in referencing essential prior works. Other Strengths And Weaknesses: Pros: 1. The paper demonstrates strong originality by applying vision-language models (VLMs) to next-location prediction, a novel extension beyond conventional VLM tasks. 2. The proposed Visual Guided Location Search (VGLS) introduces a creative and effective recursive refinement strategy that leverages visual reasoning for spatial prediction, demonstrating potential for training-free generalization across cities. 3. The construction of a large-scale dataset combining real-world taxi trajectories from Porto and Chengdu, along with a well-designed human benchmark based on over 10,000 predictions, provides a robust and meaningful evaluation framework. Cons: 1. The recursive partitioning strategy in VGLS, while effective, may introduce high computational costs as the number of iterations increases, and the lack of a clear stopping criterion creates ambiguity in the refinement process. Moreover, how to obtain the estimated geographic coordinates after the area search remains to be clarified. 2. The comparison with baseline models is somewhat limited, as the paper primarily uses a simple RNN and human performance without benchmarking against more sophisticated trajectory prediction models. 3. The analysis of experiments is relatively shallow. It does not explore why Claude demonstrates superior performance. Investigating the architectural or training differences that give Claude an advantage could provide valuable insights for designing future vision-language models with enhanced spatial reasoning and general intelligence capabilities. 4. 
The paper lacks substantial theoretical innovation, as the proposed recursive partitioning strategy (VGLS) primarily builds on existing hierarchical search and visual reasoning techniques without introducing fundamentally new theoretical insights. Additionally, the practical applicability of the method remains limited, as the current performance of VLMs still falls short of human-level accuracy in most scenarios, indicating that further improvements are needed before the approach can be reliably deployed in real-world applications. Other Comments Or Suggestions: A full stop is missing before the “Prompt Consideration” paragraph. The description of the task should be consistent: it is denoted as next-location prediction in the Introduction. Questions For Authors: 1. What are the underlying factors contributing to Claude 3.5 Sonnet’s superior performance in next-location prediction compared to other VLMs? Understanding why Claude outperforms other models could provide valuable insights for improving spatial reasoning in VLMs. If the authors can identify specific architectural or training differences responsible for Claude’s advantage, it would strengthen the paper’s contribution to the design of future VLMs. 2. What insights can be drawn from the model's failure cases, and how could they inform future improvements? A more detailed analysis of the failure patterns (e.g., trajectory complexity, road network alignment) could reveal structural limitations in the VGLS approach and suggest avenues for refining the model’s decision-making process. Moreover, the improvement of the prompt design is also an interesting issue. 3. Does the method of image partitioning (e.g., left-right splitting versus triangular splitting), the order of partitioning, image scaling and image orientation affect prediction performance?
Clarifying whether different splitting strategies or orders influence the accuracy and consistency of the model’s predictions would provide deeper insights into the robustness of the VGLS approach. Additionally, understanding whether image scaling and orientation adjustments affect model performance could help refine the preprocessing pipeline and improve generalization across different map formats and resolutions. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your comments. Below, we respond to your concerns. Due to space constraints, some points may be addressed briefly; please feel free to raise any questions. **On the Superior Performance of Claude Models** The Claude series consistently achieves SOTA results on spatial reasoning tasks [1]. Researchers speculate that Claude models may possess a well-developed world model, likely due to **optimization for screen control tasks**, which require fine-grained spatial understanding of interface layouts, highlighting their spatial reasoning capabilities. **On Novelty** The novelty of this work lies in introducing the **first benchmark for next-location prediction based on vision-language models**. Since road network structures vary across cities, training city-specific models limits generalization. In contrast, vision encoders provide **universal map representations**, enabling cross-city generalization. Promising future paths include reinforcement learning with reward signals and visual fine-tuning via LoRA. We leave these directions to future work and hope this benchmark inspires broader research in vision-based location prediction. **On the Formal Convergence of Our Method** Our method assumes that VLMs inherently possess an **internal capability** to predict the next location, but this capability may not be directly expressed because the model cannot draw on the image. We therefore reformulate the generative task as a discriminative one. For an image of size $H \times W$, the distance between the model’s selected location and its internal prediction is bounded by $\sqrt{H^2+W^2}/2^i$ after $i$ steps. This distance converges to 0 as the number of steps increases. **Insights from Failure Cases** We categorize the identifiable errors into two types: **Visual hallucinations**: In some cases, the region of interest is too small for the model to differentiate, potentially due to the patching mechanism in the vision encoder.
**Speed-related biases**: As shown in Porto Case 41, an abnormally high-speed segment caused the model to overemphasize that portion, leading to an incorrect prediction. This may reflect a limitation in the model’s attention mechanism. **On the Influence of Other Trajectory Factors** In Appendix B.1, we analyze several additional factors: **Image scaling**. When a unit length represents a shorter real-world distance, performance improves. **Angular change**. Larger angular shifts tend to worsen accuracy. **Path length**. Longer trajectories correlate with higher errors. Experiments related to transparency, background color, and trajectory color will be included in the appendix if the paper is accepted. **Regarding the Statement on Insufficient Data for Training RNNs** A simple motivating example is the prediction of next-location trajectories for **elderly individuals**. GPS data for this demographic is extremely sparse due to the need for specialized data collection equipment. As a result, it is difficult to train RNN-based models effectively in such cases. Leveraging the generalization ability of VLMs offers a way to bypass the need for domain-specific, large-scale trajectory data. **On the Transferability Limitations of Other Models** As mentioned in prior works, models such as LLM-Mob and Agent-Move require training a dedicated token for each region within a specific city. These tokens are tightly coupled with the city they are trained on, significantly limiting the transferability of these models across cities. **On Consistent Instructions for Human Participants** We have stated in Line 167 (right column) of our paper, *“Users are presented with the same input prompts.”* This ensures a fair comparison. **On the Choice of Distance Thresholds** **100m** roughly corresponds to the size of a football field. **500m** typically covers the space between traffic signals. **2000m** approximates the span of a city block.
Each threshold offers meaningful granularity for evaluating real-world spatial accuracy. **On the Number of Iteration Steps** As mentioned previously, increasing the number of iterations reduces prediction error, akin to inference-time scaling in large models. We chose to terminate after 10 rounds, at which point the average distance between the predicted region center and the model's internal prediction was **under 32 meters**, and we consider it acceptable. The final prediction is defined as **the center of the region** selected in the 10th round. **On Comparison with More Sophisticated Trajectory Models** Following your suggestion, we now include results for **four SOTA models**. Due to space constraints, we only report the average MAE across all datasets here: Transformer: 216.73, DeepMove: 215.18, GETNext: 214.33, LLM-Mob: 211.12. We will include full experimental results in the final version of the paper if accepted. Sincerely, The authors of Paper 740 [1] https://mcbench.ai/leaderboard [2] https://www.anthropic.com/news/visible-extended-thinking
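The termination argument in this rebuttal (error bounded by $\sqrt{H^2+W^2}/2^i$ after $i$ halvings, falling under 32 meters by round 10) can be checked numerically. A minimal sketch implementing the bound exactly as the rebuttal states it; the ~23 km map extent below is a hypothetical value chosen for illustration, not a figure from the paper:

```python
import math

def vgls_error_bound(height_m, width_m, steps):
    """Upper bound stated in the rebuttal on the distance between the
    center of the selected region and the model's internal prediction
    after `steps` rounds of halving: sqrt(H^2 + W^2) / 2^steps."""
    return math.sqrt(height_m**2 + width_m**2) / (2 ** steps)

# Hypothetical square map of ~23 km per side, measured in meters:
for i in (1, 5, 10):
    print(f"step {i:2d}: bound = {vgls_error_bound(23000, 23000, i):.1f} m")
```

With these assumed dimensions the bound after 10 rounds is just under 32 m, matching the order of magnitude quoted above; stopping after a fixed number of rounds trades extra VLM queries against residual localization error.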
Summary: This paper introduces a new task, next-location prediction, which leverages map images and historical coordinates to predict the next location. The paper proposes a framework, VLMLocPredictor, which guides VLMs to iteratively refine the next-location prediction. Moreover, the paper compares the performance of multiple VLMs and humans on this task, provides discussions on the results, and offers further analyses of several influential factors. Claims And Evidence: Yes. Methods And Evaluation Criteria: I have questions regarding the benchmark datasets used in this paper. Merely providing images of road networks and historical trajectories seems insufficient to predict the next location on a map scale. Theoretical Claims: The major claims in this paper are theoretically correct. Experimental Designs Or Analyses: I am a bit skeptical of the validity of the experimental designs in this paper. To predict the next location, the input information seems to lack sufficient conditions. Supplementary Material: I have reviewed the entire content of the supplementary material. Relation To Broader Scientific Literature: The contributions are related to the scientific literature on the application of VLMs to map image processing. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This paper is comprehensive in content. From introducing a new task, providing corresponding solutions, benchmarking the performance of multiple VLMs and humans, to final discussions and analyses, it covers a wide range of aspects. 2. The paper conducts extensive experiments and provides various discussions. The analyses include the performance of predicting locations on the map by VLMs and humans, as well as several influential factors, such as prompts. Weaknesses: 1. There may be issues with the experimental design. 
Next locations on the navigation map are highly dependent on the intention or destination of the drivers and are influenced by factors such as dynamic traffic conditions. Merely providing images of the road network and historical trajectories seems insufficient to predict the next location. Providing insufficient information may lead to guesses based on inadequate conditions. 2. The analysis provided in this paper, such as the color choices, appears to be somewhat superficial. The results of ablation studies show that many attempts result in only minor differences. Given sufficient information, it would be better to introduce research challenges with scientific value and provide corresponding solutions, rather than a simple enumeration of conditions. Other Comments Or Suggestions: Please see the weaknesses. Questions For Authors: Could you provide an in-depth summary of the scientific insights for the research community or the application values for real-world scenarios in the paper, to illustrate its broader influence and significance? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We appreciate the time and effort you have devoted to reviewing our work. Below, we respond to your concerns in detail. ### **Regarding Experimental Design** 1. While it is true that taxi drivers' trajectories are influenced by intent, prior work suggests that human mobility remains highly predictable despite this. Notably, the Science paper *"Limits of Predictability in Human Mobility"* (2010), which has been cited over 4,000 times [1], showed that for hourly human mobility data, the theoretical upper bound of next-location prediction accuracy can reach as high as **93%**. This indicates that, although behavior is goal-driven, there are still strong patterns and structural constraints that make the task meaningful and predictable. 2. In addition, the topological structure of road networks imposes natural constraints on plausible movements. Consider a simple example: suppose the vertices of a square are labeled A, B, C, and D in order. If a driver travels from A to B to C, and all roads are open and unconstrained, it is unlikely that the next step will be D—since a more efficient plan would have been a direct route from A to D. This highlights that predicting the next location depends on a model’s ability to reason about spatial efficiency and road network topology. 3. Lastly, larger models consistently achieve better results, which is consistent with scaling laws in deep learning; since no model has yet reached human-level performance, this demonstrates that taxi drivers' behavior can indeed be inferred. For these reasons, we respectfully ask the reviewer to reconsider their concerns about the experimental design. ### Regarding Research Challenges 1. We agree that experiments like *color choice* are relatively simple, which is why they were included only in the appendix. However, they serve to illustrate that our benchmark design is deliberate and well-grounded.
Moreover, in our ablation studies, we show that carefully designed prompts reduce model error by over 22%, indicating a meaningful impact. 2. As for the scientific value of the research challenge: as mentioned earlier, our benchmark directly targets the reasoning capabilities of VLMs. Road networks are complex topological graphs, and predicting trajectory continuations requires non-trivial structural reasoning. Our benchmark provides a first step in evaluating whether models can perform human-like spatial reasoning over such structures. While we do not focus on improving model accuracy in this work, we see this as a **rich direction for future research**, and suggest a few promising directions: - **Reinforcement learning with reward signals**: Since selecting the correct region (e.g., color) is straightforward to verify, one could define reward functions and train models to improve their reasoning through interaction. - **Visual fine-tuning with LoRA**: For example, training a vision-language generation model to take a map as input and output a predicted next-point image. These approaches could significantly improve VLM performance on our benchmark, and we leave their exploration to future work. We hope this clarifies the scientific challenges and potential of the benchmark. ### **Regarding In-Depth Summary of the Scientific Insights** 1. **Vision-based Next-Location Prediction**: Because the visual representation of road networks is domain-agnostic, VLMs offer the potential to build generalizable next-location predictors. Our benchmark establishes a unified framework for evaluating such models, not limited to VLMs alone. 2. **Reasoning over Complex Road Networks**: As emphasized, our benchmark is designed to test whether VLMs can reason over complex topological structures, which is central to understanding spatial intelligence and planning. The experimental results show that large models exhibit this complex reasoning ability to some extent.
Once again, thank you for your thoughtful feedback and the time you’ve dedicated to our work. If you have any further questions or if any concerns remain unaddressed, please feel free to reach out—we would be happy to continue the discussion. Sincerely, The authors of Paper 740 [1] Song, Chaoming, et al. "Limits of predictability in human mobility." *Science* 327.5968 (2010): 1018-1021.
AEQA-NAT : Adaptive End-to-end Quantization Alignment Training Framework for Non-autoregressive Machine Translation
Accept (poster)
Summary: The paper presents AEQA-NAT, a novel Non-Autoregressive Machine Translation (NAT) framework that introduces a Semantic Quantization Space (SQS) inspired by VQ-VAE. The key components include: - Pre-aligned Semantic Quantization Space (SQS) leveraging mBART. - Semantic Quantization Alignment Loss (LSQA) to enforce consistency. - Aligned Reordering (AR) to improve syntactic alignment. These innovations help AEQA-NAT mitigate the training-inference gap, enhancing translation quality while maintaining high decoding efficiency. Claims And Evidence: The paper claims that AEQA-NAT eliminates the training-inference gap and achieves state-of-the-art performance among NAT models. These claims are supported by: - Comprehensive experiments comparing AEQA-NAT against strong NAT baselines. - Demonstrated performance improvements in BLEU scores, particularly on raw data. - Reduced dependency on knowledge distillation, indicating improved generalization. However, AEQA-NAT still lags behind the basic Autoregressive baseline, Transformer, in translation quality. Methods And Evaluation Criteria: The methodology is well-motivated, combining quantization-based semantic alignment with adaptive reordering to address NAT’s dependency modeling issues. The evaluation on standard benchmarks (WMT14, WMT16, etc.) is appropriate, and comparisons with strong NAT baselines validate its performance. One concern I have with the method is its reliance on Aligned Reordering (AR), especially for low-resource languages where alignment might be less reliable. Theoretical Claims: N/A (this is an empirical paper) Experimental Designs Or Analyses: The experiments are extensive across different WMT benchmarks and include ablation studies on SQS size, sampling strategies, and length effects. Supplementary Material: Yes, I skimmed through all the appendix sections.
Relation To Broader Scientific Literature: The paper builds upon previous NAT improvements, particularly knowledge distillation-based approaches and Directed Acyclic Transformers (DAT). Instead of distillation, AEQA-NAT introduces a semantic quantization approach similar to latent variable modeling in VQ-VAE. The work is well-positioned within the broader trend of improving NAT dependency modeling, offering an alternative path to addressing multimodality beyond KD and DAG-based methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: To use the aligned reordering technique, what is the computational cost? Do you compute the alignment with some aligner model on the fly during inference? Previous work address the multimodality problem in NAT by applying knowledge distillation or using DAG (as in DAT) to generate coherent translation. In your work, I guess your semantic quantization and aligned reordering do a similar thing right? Can you provide more intuition behind this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments. Your feedback has been very helpful to us. 1. The cost of aligned reordering is $O(n \cdot m \cdot d)$, and we do not use an additional aligner model during the inference phase. We appreciate your interest in the intuition behind our method. When we recognized the existence of the training-inference gap in MLM-based NAT systems, we naturally hypothesized that this gap would hinder the performance of NAT, and our experimental results confirmed this hypothesis. Inspired by VQ-VAE, we quantize the semantic representation vectors, which forces the model to learn more compact semantic consistency representations. This not only facilitates bridging the training-inference gap but also mitigates the multimodal distribution characteristics in language mapping. Based on the discretized semantic representations, we model the word order correspondence through the aligned reordering mechanism, ensuring the syntactical accuracy of the discrete representations, which further alleviates the multimodal problem in NAT. 2. This work conducts a detailed study on typical datasets widely used in NAT systems (such as WMT ’14, ’16, etc.). We appreciate your mention of low-resource languages in relation to the reliability of our method, and this will be one of the key areas of focus in our future work.
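The $O(n \cdot m \cdot d)$ cost quoted in this rebuttal is what one would expect from scoring every source-target position pair of $d$-dimensional representations. A minimal illustrative sketch of such a pairwise scoring step (the function name and the dot-product scoring rule are assumptions for illustration, not the paper's actual aligned-reordering implementation):

```python
def pairwise_alignment_scores(src_states, tgt_states):
    """Score every (source, target) position pair by dot product.
    src_states: n vectors of length d; tgt_states: m vectors of length d.
    Each of the n*m scores costs O(d), giving O(n*m*d) overall."""
    return [
        [sum(s * t for s, t in zip(src, tgt)) for tgt in tgt_states]
        for src in src_states
    ]

# Two source states, one target state, d = 2:
scores = pairwise_alignment_scores([[1.0, 0.0], [0.0, 1.0]], [[2.0, 3.0]])
print(scores)  # [[2.0], [3.0]]
```

Because the scores are computed directly from the model's own hidden states, no separate aligner model is needed at inference time, consistent with the rebuttal's claim.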
Summary: Non-autoregressive transformers (NATs) are attractive due to computational efficiency for machine translation workloads. However, existing approaches fail to completely close training-inference mismatches for these systems. This work proposes Adaptive End-to-End Quantization Alignment Training for NATs (AEQA-NATs) to reduce this gap, optimizing for a few novel training goals and better leveraging a joint semantic embedding space. When successfully applied, AEQA-NATs achieve the fastest decoding throughput of any NAT model and see improvements in minimizing the BLEU score gap between autoregressive and non-autoregressive transformers. Claims And Evidence: The core claims of this work are clearly outlined in earlier sections and then supported by the methodology and data in Section 3. The ablation results later in this work serve to further support the core claims of this paper. Methods And Evaluation Criteria: Chosen evaluation criteria seem fairly standard for this application space. BLEU, COMET, and similar, typical translation metrics are employed on typical datasets (WMT ’14, ’17, etc.). Theoretical Claims: This is largely an empirical work, no significant theoretical claims are made that need to be substantiated. Experimental Designs Or Analyses: The experimental design appears fairly standard for machine translation works. As mentioned before, typical metrics are employed. A range of similar NAT models are compared against in a seemingly comprehensive manner. Additionally, a thorough and credible ablation study is conducted. Supplementary Material: Reviewed some training details in Appendix E, with especially relevant details including end evaluation results based on varying sampling rates during training. Relation To Broader Scientific Literature: Based on the results observed in this work, the described approach is largely a step forward for machine translation NATs. 
While somewhat inspired by certain VAEs, the significance of this work appears to be constrained only to efficient machine translation. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Regarding strengths, the paper is very well written, which is particularly helpful as it is fairly information-dense. The proposed approach makes intuitive sense and the shared SQS is a sensible improvement. Additionally, the obtained results make a strong case for the viability of the proposed approach. When it comes to weaknesses, it is a bit unclear how stable the provided results are (i.e. do different training runs result in significant differences in end evaluations). Additionally, using the original transformer architecture as an autoregressive baseline is dubious, although this obviously does not impact the significance of demonstrated improvement upon NAT methods. Other Comments Or Suggestions: The series of expressions under Section 2.3 is not well set up and is an area for improvement on revision passes. While clear upon reinspection, it is dense and a little difficult to initially parse. Typos: entend -> extend in Section 3 under Implementation Questions For Authors: Why was the original transformer architecture chosen as an autoregressive baseline? With nearly a decade of architectural improvements, surely a more modern baseline could have been chosen for a more reasonable comparison. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable comments. Your feedback is important to us. 1. In response to your suggestion, we have added further descriptions to the sequence of expressions in Section 2.3 to more clearly explain the data flow process: - The source text $(x_1,x_2, \dots, x_n)$ is input into the encoder $f_{\text{enc}}$ to obtain the hidden representations $(h_{x,1},h_{x,2}, \dots, h_{x,n})$. - The length prediction module and the Softcopy mechanism transform the hidden representations into intermediate representations $(h_1,h_2, \dots, h_m)$. - The semantic consistency representations $(z_q(h_1),z_q(h_2), \dots, z_q(h_m))$ are obtained by querying the Semantic Quantization Space (SQS). - Finally, the semantic consistency representations $(z_q(h_1),z_q(h_2), \dots, z_q(h_m))$ and the encoder's hidden representations $(h_{x,1},h_{x,2}, \dots, h_{x,n})$ are fed into the decoder $f_{\text{dec}}$ for the decoding process. 2. Regarding your suggestion to "choose a more modern and improved autoregressive Transformer architecture as the autoregressive (AT) baseline," although existing NAT systems have made significant progress in translation quality, their overall performance still lags behind that of the vanilla Transformer. Therefore, the vast majority of NAT systems use the original Transformer architecture as the baseline for AT systems for comparison. We have adopted this approach as well. Your suggestion is valuable and will be an important direction for future exploration.
Summary: This paper works on non-autoregressive machine translation. It bridges the gap by introducing the latent variables and applying glancing training over the latent codes. In addition, order alignment of latent code is also introduced. The empirical results are very good. Claims And Evidence: The main claim is that the proposed method bridges the training-inference gap caused by Glat. The improved scores on standard datasets support this claim. The main claim is not well-justified because the solution is a latent variable model, which naturally enhances the sequence modeling capability. It is not certain if glancing is useful at all in the proposed method. My best understanding is that the proposed method is a well-designed latent variable model for NAR machine translation, but I think the motivation and the proposed method are not aligned. Additional note: Glat, a training technique for many NAR systems, relies on the training-inference gap. In fact, the gap is essential to maintaining performance (reducing it to 0 degrades performance). Thus, the gap may not be a drawback but rather a feature of Glat. In fact, there is also a gap between the training and the inference in the proposed method. First, the glancing of the hidden codes is based on the ground-truth tokens, which are not available during the inference. The order alignment also needs ground truth during training. Methods And Evaluation Criteria: This paper uses tokenized BLEU for most datasets and includes chrF, COMET, and BLEURT for evaluating translation quality. It is generally valid and comprehensive. However, using tokenized BLEU scores in NAR systems has been criticized over the years, and the paper should report SacreBLEU when possible (Tables 4 and 5). Theoretical Claims: There are no proofs in this work. Experimental Designs Or Analyses: As mentioned before, the proposed system introduces latent variables and claims to bridge the gap between the training and inference of glat training.
This brings the concern that the main factor of the performance gain is from the latent variables, and has nothing to do with GLAT. There should be at least a baseline indicating that the glancing of latent code is not used (i.e., No SQ(x) in Eqns 11 and 12). Supplementary Material: I checked A.1 and A.2, which explain more about latent variables. Relation To Broader Scientific Literature: This paper relates to non-autoregressive machine translation, which involves methods like Glat, DAT, NPD, and DSLP. In addition, VQ-VAE, which has broader applications, is also related. Essential References Not Discussed: Latent-GLAT is closely related to this work. In fact, many aspects are the same: 1) codebook, 2) glancing over latent codes, 3) non-autoregressive latent variables. Yu Bao, Hao Zhou, Shujian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei Li, latent-GLAT: Glancing at Latent Variables for Parallel Text Generation, ACL 2022 Other Strengths And Weaknesses: The hyper-parameters of glancing training are not explored, which is an issue as improving glancing is a major claim. Other Comments Or Suggestions: The proposed system achieves good performance with relatively low FLOPs compared with models based on DAT or CTC. The authors may consider adding analysis to this regard. The acronym DSLP is not explained nor cited. Questions For Authors: When you compute the latency, what are the specs of the machine, including the GPU and CPU? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your comments. ## 1. Clarification on Motivation-Method Alignment To address your concern regarding the alignment between our motivation and the proposed method, we would like to clarify the following: We highly value the role of GLAT in NAR systems, which is why we chose it as our baseline model. Our intention to *bridge the training-inference gap* is not to reduce Glancing Sampling to zero (and in our work, we do not aim to eliminate Glancing Sampling), but rather to maintain consistent use of it during both training and inference. Specifically, we predict complete translations in both phases, as opposed to the mismatch seen in training ($X+Y_{obs} \rightarrow Y_{mask}$) and inference ($X \rightarrow Y$). Therefore, AEQA-NAT is designed to ensure consistency between training and inference, as detailed in Eq. (11) and Eq. (15). To summarize, our motivation is that the training-inference gap hinders NAT from fully realizing its potential, and we have empirically demonstrated this in Fig. 1. We propose AEQA-NAT, which introduces a semantic consistency space to semantically quantize and align the source and target texts. This approach preserves the MLM-based NAT modeling capability while overcoming the explicit reliance on target words, thereby achieving unified prediction of complete translations in both training and inference stages. Thus, our motivation is aligned with the proposed method. ## 2. Differentiation from Latent-GLAT We appreciate your attention to our methodological innovation and would like to clarify the significant differences between AEQA-NAT and Latent-GLAT (as cited on p.9). The following outlines the three key technical distinctions: - **Latent variable design**: AEQA-NAT introduces a novel *Semantic Quantization Space* (SQS) that jointly models discrete latent variables for both source and target texts during the pre-alignment phase. 
This bilateral alignment mechanism, driven by the collaborative effect of different loss terms, establishes a unified semantic quantification space that maintains cross-lingual consistency between training and inference phases. In contrast, latent-GLAT employs a unilateral approach that only encodes target-side information, requiring an additional network for latent variable prediction during inference. - **Training objective**: Latent-GLAT employs glancing training on both latent variables and explicit tokens during the training phase, optimizing for masked token reconstruction. During inference, it directly predicts complete translations. In contrast, AEQA-NAT directly predicts complete translations in both the training and inference phases, ensuring parity between training and inference. - **Architectural foundation**: As detailed in Appendix A.1 (p.13), AEQA-NAT diverges from VAE-based approaches that learn latent features through auxiliary networks. Instead of introducing additional networks to learn latent variables, our Semantic Quantization Space (SQS) **aligns vectorized discrete representations between source and target languages during the pre-alignment phase.** ## 3. Performance Attribution Analysis In response to your concern that "the main factor of the performance gain is from the latent variables, and has nothing to do with GLAT," our manuscript argues that the training paradigm of MLM-based NAT creates a training-inference gap. Therefore, our baseline primarily compares to MLM-based NAT systems. Thus, we focus on comparing the translation performance under Uniform sampling and Adaptive sampling methods, as shown in Table 6. We also present in Table 3 the impact of the sampling rate of GLAT during the inference phase on performance (with a fixed sampling rate during inference). The results demonstrate the effectiveness of the GLAT sampling method. ## 4. 
Hyperparameters of Glancing Training and Metric Regarding your comment on "the hyperparameters of glancing training not being explored," we have discussed the hyperparameters of glancing training in our settings: - Training-phase sampling rates: See Appendix E.1 - Inference-phase sampling strategies: See Table 3 For fair comparisons with previous work, we follow established practices from prior research and use sacreBLEU for WMT 17 EN-ZH, and tokenized BLEU for other benchmarks. The experimental results reported using sacreBLEU can be found in Table 3. ## 5. Other Comments and Questions Latency comparisons with vanilla Transformer were conducted on identical NVIDIA A40 GPU configurations. DSLP denotes deep supervision and additional layer-wise predictions. We appreciate your comment about the explanation and citation of DSLP, and we have now revised the manuscript to address this issue. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. For the revision: please include more discussion on Latent GLAT in the revision and VQ-VAE. Although AEQA-NAT does not need to predict the codebook, which is shared for all samples and is still used during inference, AEQA-NAT is still a latent-variable model. Therefore, I am not convinced that "AEQA-NAT diverges from VAE-based approaches. ### Regarding "Performance Attribution", I do not disagree with the contribution of GLAT; rather, I am wondering which is the main contributor, using GLAT or being a latent model. 1. According to Table 6, - Having the latent variable (with K=4096) itself improves the performance from 10.84 to 25.14 - Having GLAT only improves from 25.14 to 25.96. (I am not sure if Table 3 uses AR, otherwise, 25.14 could be 26.82) 2. In addition, Table 3 suggests the best performance without GLAT is 26.82 on WMT'14 EN-DE, which is the same as with GLAT (Line 10, DLSP doesn't count unless Table 3 uses DSLP). 
It is clear that being a latent model contributes the most, and adding GLAT only makes a small difference (if any). ## Regarding the "gap" In addition, I am not sure if we can classify the proposed approach as addressing the training-inference gap because "MLM-based NAT creates a training-inference gap." Here is what I thought: 1. CMLM (1 Iteration) < Vanilla NAT < GLAT: This tells us that the gap is not necessarily a bad thing for NAT. 2. Without a gap in the proposed framework, the performance is actually worse (according to Table 3, where it is a clear trend that lambda = 0 is even worse). Overall, I do not doubt that the authors have proposed an efficient and strong NAT, but I don't think the motivation of this paper aligns with its solution. --- Reply to Comment 1.1.1: Comment: Thank you for your responses. We appreciate your comment on the efficiency and modeling capability demonstrated by the method we propose in this work. ## Response to Performance Attribution 1. We would like to clarify your comment that “Table 3 suggests the best performance without GLAT is…”. Please note that **Table 3 actually uses GLAT**, rather than being “**without GLAT**.” We would like to emphasize that this paper includes two GLAT processes: GLAT-Training and GLAT-Inference. In Table 3, GLAT is used in both the training and inference phases. In our setting, Glancing Training is the default, as can be understood from the context, specifically in Section 2.4, and Equations 11 and 12.
To provide further clarification, we have expanded on the performance changes under different sampling ratios during inference, as shown in Table A:

| Sampling ratio λ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
|------------------|-------|-------|-------|-------|-------|-------|-------|------|-------|-------|-------|
| WMT14 EN-DE | 18.10 | 18.34 | 20.71 | 24.53 | **26.82** | 25.64 | 23.87 | 23.08 | 22.42 | 22.57 | 21.63 |

The results in **Table A reveal the contribution of GLAT during inference**. When $\lambda$ = 0, which is similar to existing GLAT-based methods, GLAT is used in the training phase, while the inference phase maintains the “training-inference gap.” The model’s performance on WMT14 EN-DE is 18.10, which is lower than the performance when λ takes other values [18.34, 26.82]. Therefore, the conclusion here should be: “Table 3 suggests that the best performance **with GLAT maintained during inference is 26.82 on WMT14 EN-DE**.”

2. Response to "Which is the Main Contributor (GLAT or Latent Model)"

We have expanded the ablation study (as shown in Table B) to address this question:

| Line | K = 2048 | GLAT-Train | GLAT-Inference | BLEU |
|------|----------|---------------|----------------|-------|
| 1 | | | | 10.84 |
| 2 | ✓ | | | 14.57 |
| 3 | ✓ | ✓ | | 18.02 |
| 4 | ✓ | ✓ | ✓ | 25.96 |

The results in Table B provide an intuitive demonstration of the contribution of GLAT (at different stages). Please note that in our setting, GLAT samples the latent variable, so it cannot be applied without including the latent variable (see Eq. 12 and its explanation on Page 4, “Note that what we…”). According to Table B:
- The latent variable itself (with K=2048) improves performance from 10.84 to 14.57 (not 25.14).
- With the addition of SQS, applying GLAT during training further improves the performance from 14.57 to 18.02.
- Most importantly, after applying GLAT during inference, the performance increases to 25.96, as shown in Line 4.
The core claim of our paper is that, when comparing Line 3 (without GLAT in inference) and Line 4 (with GLAT in inference) in Table B, a significant performance improvement is observed (from 18.02 to 25.96). Based on the above, both the SQS and GLAT during the training and inference phases significantly improve the performance of AEQA-NAT, **with the most notable performance improvement occurring during inference with GLAT, achieving a 7.94 BLEU increase**. 3. We would like to clarify the motivation behind our use of the latent variable. Upon discovering the training-inference gap in MLM-based NAT systems, we naturally hypothesized that this gap would limit the full potential of NAT. Our experimental results confirm this hypothesis. Inspired by VQ-VAE, we adopted semantic vector quantization (latent variable) as a representation of semantic consistency, which satisfies the needs of both the training and inference processes for NAT. Thus, we introduced this method to bridge the training-inference gap. ## Response to Gap Discussion 1. Regarding your comment, “CMLM < Vanilla NAT < GLAT, this tells us that the gap is not necessarily a bad thing for NAT,” if I understand correctly, you are suggesting that both CMLM and GLAT exhibit this gap, but GLAT outperforms Vanilla NAT, hence “the gap is not necessarily a bad thing.” We would like to clarify once again, as we previously mentioned in the rebuttal ("Clarification on Motivation-Method Alignment") and in the paper (Fig. 1, Eq. 12, and Eq. 15): - Our goal is not to reduce glancing sampling to zero. - The training-inference gap we refer to in MLM-based NAT systems pertains to the mismatch between the training process ($X+Y_{obs} \rightarrow Y_{mask}$) and the inference process ($X \rightarrow Y$). 
- Our proposed training-inference consistency refers to maintaining consistency between the training phase ($X+SQ(X) \rightarrow Y$) and inference phase ($X+SQ(X) \rightarrow Y$), as empirically demonstrated in Line 4 of Table B. 2. We appreciate your comment and will include a discussion on latent-GLAT in the revised version of the paper.
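The sampling-ratio discussion above can be grounded with a toy version of the glancing step: reveal a fraction λ of ground-truth target tokens and mask the rest. This sketch assumes uniform position selection and a generic `<mask>` placeholder (both illustrative, not necessarily the paper's exact procedure):

```python
import random

MASK = "<mask>"

def glance(target_tokens, lam, rng=random):
    """Reveal round(lam * n) ground-truth tokens at uniformly sampled
    positions and mask the rest. lam = 0 gives pure X -> Y prediction;
    lam = 1 reveals the full target."""
    n = len(target_tokens)
    revealed = set(rng.sample(range(n), round(lam * n)))
    return [t if i in revealed else MASK for i, t in enumerate(target_tokens)]

y = ["wir", "gehen", "nach", "hause", "."]
masked = glance(y, 0.4)  # round(0.4 * 5) = 2 positions revealed, 3 masked
```

The training-inference mismatch the reviewer and authors debate is visible here: during training the revealed tokens come from the ground truth `y`, which is unavailable at inference time unless (as in the authors' method) the glancing is done over quantized representations of the source instead.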
Summary: This paper argues that there is a training-inference gap in Non-autoregressive Transformers (NATs), where NATs sample target words during training to enhance input but have no access to target information during inference. To address this, they propose an Adaptive End-to-end Quantization Alignment (AEQA) training framework, which introduces a semantic consistency space to eliminate the need for target information during inference. Experimental results demonstrate the effectiveness of AEQA, especially on raw data. Claims And Evidence: No. The claim "Experimental results demonstrate that our method achieves state-of-the-art performance among fully NAT models on major WMT benchmarks" is problematic. The state-of-the-art method FA-DAT [1] is ignored, which outperforms AEQA-NAT on raw data. [1] Fuzzy Alignments in Directed Acyclic Graph for Non-Autoregressive Machine Translation, ICLR 2023. Methods And Evaluation Criteria: Yes. The evaluation covers a wide range of benchmark datasets and metrics. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Currently, there is limited interest on non-autoregressive machine translation, yet the studies conducted in this area have the potential to inspire broader fields of research, such as the LLM community, which primarily relies on the next token prediction mechanism. Essential References Not Discussed: AEQA-NAT employs an aligned reordering mechanism to align the word order of source and target sentences. The concept of aligned reordering is not new in NAT [1,2], but they are not cited. [1] AligNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate, EMNLP 2021. [2] Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information, AAAI 2021. The state-of-the-art fully-NAT method on raw data is ignored [3]. 
[3] Fuzzy Alignments in Directed Acyclic Graph for Non-Autoregressive Machine Translation, ICLR 2023. Other Strengths And Weaknesses: Pros: 1. This article identifies the training-inference gap issue in NAT, which makes sense and has been overlooked in previous research. The proposed Quantization Alignment method cleverly addresses this issue. 2. AEQA-NAT achieves impressive experimental results, especially in enabling vanilla NAT w/o distillation to perform nearly as well as the autoregressive baseline. 3. Both the experiments and analyses are very comprehensive. Cons: I do not find significant weaknesses of this paper. Other Comments Or Suggestions: The abbreviation "AR" for "aligned reordering" can be misleading, as AR usually refers to "autoregressive". I recommend using a different abbreviation or simply labeling it as "w/ reordering" in Table 1. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments. Your suggestions have been very helpful to us. 1. In the revised manuscript, we have added citations to the relevant literature you mentioned and revised the conclusions in both the abstract and the experimental sections accordingly. 2. In response to your suggestion, we have changed "AR" to "w/ reordering" in Table 1 to avoid any potential confusion.
Competing Bandits in Matching Markets via Super Stability
Accept (poster)
Summary: The paper identifies a problem with using the Gale-Shapley algorithm for stable matching with two-sided uncertainty: finding a weakly stable matching based on partial ranking doesn't give guarantees for the full ranking. Instead, they build on a different algorithm: finding a super-stable matching that guarantees consistency between partial and full rankings. Based on that they build an algorithm to minimize the regret and analyze the instance-dependent lower bound and an upper bound. They also propose a decentralized version of the algorithm. Claims And Evidence: Yes, mostly clear Methods And Evaluation Criteria: My main concern with the paper is the notion of optimizing regret in the case of stable matching. From the user perspective, what matters most is that the algorithm converges quickly and not so much that the intermediate arms that are chosen are low-regret. In this case, I don't see why the authors don't present a bound for a pure-exploration version of the stable matching instead of the low-regret version. This limits the impact of the paper as the main metric of interest is mismatched with the motivating problem. Theoretical Claims: Mostly yes but why is N <= K? In how many cases is the number of users smaller than the number of options? For example, one of the motivations for stable matching is matching students to universities. Are the authors saying the interesting case is when there are fewer students than colleges? Fewer users than the number of crowd-sourcing tasks? If their contribution is only interesting in the case of N <= K, then that limits the impact of the paper. Experimental Designs Or Analyses: Theoretical paper, does not apply. Supplementary Material: Did not review supplementary material. Relation To Broader Scientific Literature: The authors identify an interesting problem with stable matching and correctly connect it with the super-stable matching from the Irving 1994 paper.
Essential References Not Discussed: None, that I see. Other Strengths And Weaknesses: Bandits have been well studied over the past. The bar for introducing a new algorithm is high and both novelty and a good motivation for a real-world problem needs to be met. My main concern here is with the motivation of this problem and the regime in which it is proposed. Other Comments Or Suggestions: Line 21: should be learn instead of elarn Questions For Authors: It would be good to talk about the motivation for regret instead of pure-exploration. Also, what are motivating examples where N <= K? The one that I can think about is the organ donor problem potentially. Also: what is the source of uncertainty for these two-sided problems in real life? Again, connect the uncertainty to a real-world problem. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable review. Please find the responses below. **Re motivation for regret instead of pure-exploration:** In online learning (including matching markets), guaranteeing good cumulative performance and minimizing losses is crucial. Regret, unlike pure exploration, balances learning (exploration) with earning (exploitation) to maximize total rewards. Pure exploration focuses solely on identifying the best option, disregarding rewards gained during learning. Consequently, the importance of cumulative reward in matching markets (e.g., e-commerce marketplaces) makes regret minimization a natural objective. **Re motivation of Bandits in Matching Markets:** Bandit learning in matching markets is an active field of research with a long and rich history, starting with Das & Kamenica (2005). This paper is not introducing the problem. This area has seen significant research, with over [100 publications](https://scholar.google.com/scholar?q=%22bandit+learning%22+AND+%22matching+markets%22) since then, and a recent surge in activity beginning with Liu et al. (2020). This paper pushes the frontier of bandit learning for finding any stable matching in matching markets. It proves Extended-Gale Shapley's advantage over the standard version and leverages super-stable matching structure to establish tighter, instance-specific regret lower bounds. **Re why is $N \leq K$:** We present the results with $N \leq K$ for notational convenience, but the results and respective proofs presented in this paper can be easily generalized to $N > K$ (only notational and textual changes required). **Re motivating example for $N \leq K$:** Let us consider the two sided e-commerce marketplace, such as UpWork, where clients submit specific tasks (number of task is $N$) and there are multiple vendors that can be matched with the tasks (number of vendor is $K$). 
In these platforms often the number of tasks is much less compared to the number of vendors (i.e. $N \leq K$). As mentioned above, we do not claim that $N \leq K$ is more interesting than the $N> K$ scenario. The presentation is just for notational convenience. **Re the source of uncertainty for these two-sided problems in real life?** Uncertainty in two-sided matching stems from a-priori unknown payoffs between users and arms, and vice versa. In the e-commerce marketplace example, a client's reward (denoted as $\mu_{i,j}$) depends on the vendor's skillset, but the vendor's execution is randomized, varying due to latent extrinsic factors modeled by reward distributions. Similarly, a vendor's preference for a client depends on factors like payment and trustworthiness, which can also vary due to latent extrinsic factors, leading to the vendor's reward (denoted as $\gamma_{j,i}$) being modeled by random variables. We will expand on this in the revised version of the paper.
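For reference, the textbook Gale-Shapley deferred-acceptance procedure that the discussion above revolves around can be sketched as follows (strict, fully known preferences and N ≤ K; this is the classical algorithm, not Irving's Extended-GS handling ties and incomplete lists):

```python
def gale_shapley(user_prefs, arm_ranks):
    """Users propose in preference order; each arm holds its best proposal.
    user_prefs[i]: list of arms for user i, most preferred first.
    arm_ranks[j][i]: rank of user i at arm j (lower is better).
    Returns a user -> arm stable matching."""
    free = list(range(len(user_prefs)))
    next_choice = [0] * len(user_prefs)  # next arm each user will propose to
    match = {}                           # arm -> currently held user
    while free:
        i = free.pop()
        j = user_prefs[i][next_choice[i]]
        next_choice[i] += 1
        if j not in match:
            match[j] = i                     # arm j was unmatched: accept
        elif arm_ranks[j][i] < arm_ranks[j][match[j]]:
            free.append(match[j])            # arm j trades up; old user freed
            match[j] = i
        else:
            free.append(i)                   # proposal rejected
    return {u: a for a, u in match.items()}

# Two users, three arms; both users like arm 0 best, arm 0 prefers user 1.
matching = gale_shapley([[0, 1, 2], [0, 2, 1]], [[1, 0], [0, 1], [0, 1]])
```

Under fully known strict preferences this always terminates with a stable matching. The paper's setting removes the "fully known" assumption, which is exactly where weak stability of partial-information GS becomes insufficient and super stability is needed.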
Summary: This paper studies bandit learning in two-sided matching markets where both users and arms have unknown preferences and must learn them through bandit feedback. It introduces super-stable matching, using Irving’s (1994) concept to overcome the limitations of standard Gale-Shapley (GS) algorithms, which only guarantee weak stability under uncertainty. The proposed Extended-GS algorithm, combined with UCB-LCB-based rank estimation, enables efficient matching in a centralized setting with logarithmic pessimal stable regret. A decentralized version is developed using a 2-bit communication protocol, incurring only a constant regret increase. The paper also establishes a new instance-dependent lower bound, showing that the admissible gap is a key complexity parameter for stable matching with bandit feedback. Claims And Evidence: The efficiency of the round-robin exploration strategy is theoretically justified, but it may not be the most optimal choice in practice. The lack of alternative exploration strategies leaves room for stronger empirical support. Additionally, while the decentralized algorithm is claimed to be scalable with a 2-bit communication protocol, no empirical evidence is provided to demonstrate its performance in large-scale settings, where factors like network latency and asynchrony could impact its effectiveness. Finally, the claim that the method is applicable to real-world matching markets (e.g., crowdsourcing, ride-sharing, college admissions) is plausible but remains untested on real-world data—all experiments are conducted on synthetic setups, making it unclear how the approach would handle dynamic, highly imbalanced, or adversarial market conditions. Methods And Evaluation Criteria: The experiments are conducted only on synthetic data, with no real-world datasets or practical case studies to validate the approach’s applicability in markets like crowdsourcing, ride-sharing, or college admissions.
While synthetic setups allow for controlled comparisons, real-world dynamics, preference heterogeneity, and strategic behavior could introduce additional challenges not captured in the current evaluation. Furthermore, while the decentralized algorithm’s theoretical regret bound is well-analyzed, its empirical performance is not tested—the experiments focus only on the centralized approach. Given that real-world markets often involve decentralized decision-making with communication constraints, an empirical assessment of the decentralized method’s scalability and robustness would strengthen the paper. Finally, the exploration strategy (round-robin exploration) is simple but potentially suboptimal, and it would be valuable to compare it against adaptive exploration strategies to assess trade-offs in convergence speed. Expanding the evaluation to real-world scenarios, alternative exploration techniques, and decentralized settings would enhance the credibility and impact of the proposed methods. Theoretical Claims: I checked the proofs for Proposition 2.5 & Corollary 2.7, Theorems 3.5 & 4.1, and Theorem 5.2. The regret analysis follows standard UCB-based techniques and introduces the admissible gap, though the reliance on round-robin exploration may be loose compared to adaptive strategies. The instance-dependent lower bound is derived using Combes et al. (2017) and Graves & Lai (1997). Some assumptions are too strong: e.g., the scalability of the decentralized approach assumes perfect phase synchronization, which may not hold in real-world applications, and the impact of preference ties on convergence is not explicitly addressed. Experimental Designs Or Analyses: The experimental design has notable limitations. The evaluation is entirely based on synthetic data, with no real-world datasets or application-driven case studies, making it difficult to assess how well the algorithm generalizes to real-world markets with dynamic and imbalanced preference structures. 
Moreover, while the centralized algorithm is tested, the decentralized version is not empirically evaluated, even though its theoretical regret bound is derived. Given that real-world matching markets often operate in decentralized settings, an empirical study on scalability, robustness, and communication constraints would significantly strengthen the paper. Additionally, the round-robin exploration strategy, while simple, is not compared against more adaptive exploration approaches, leaving open the question of whether a more intelligent exploration mechanism could further improve performance. Supplementary Material: Yes. I reviewed their proofs. Relation To Broader Scientific Literature: First, it connects to the bandit learning literature, particularly structured multi-armed bandits in matching markets. The problem of learning stable matchings under uncertainty was first introduced in Das & Kamenica (2005), which considered single-sided uncertainty, where only users needed to learn preferences while arms had full information. Subsequent works, such as Liu et al. (2020) and Sankararaman et al. (2021), focused on decentralized learning algorithms in matching markets but assumed single-sided uncertainty and relied on the standard Gale-Shapley (GS) algorithm, which only guarantees weak stability. The Explore-then-Gale Shapley (ETGS) framework developed in Kong & Li (2023) improved upon these results by allowing for user-optimal stable regret minimization. This paper differs from these works by introducing two-sided uncertainty and leveraging super-stable matching—a concept from Irving (1994)—to guarantee true stability rather than weak stability. In terms of matching theory, the paper draws from classical stable matching literature, particularly the Gale-Shapley (1962) deferred acceptance algorithm, but expands upon it by incorporating super stability from Spieker (1995). 
Prior work in bandit-based stable matching relied on weakly stable matchings under partial information, leading to suboptimal matching assignments when full rankings were revealed. By integrating Irving’s Extended Gale-Shapley algorithm, the paper provides a more robust stability concept that ensures convergence to fully stable matchings under incomplete preferences. Regarding regret analysis in bandit learning, the paper advances the theoretical understanding of instance-dependent regret bounds in structured matching problems. The instance-dependent regret lower bound derived in Theorem 5.2 builds upon the KL-divergence-based minimization techniques from Combes et al. (2017) and Graves & Lai (1997), but specifically tailors them to two-sided matching problems with bandit feedback. However, while the theoretical contributions are well-situated within the literature, the paper lacks empirical comparisons to recent two-sided matching bandit algorithms (e.g., Pagare & Ghosh (2024), Zhang & Fang), particularly in scalability and real-world performance. A broader experimental evaluation across different matching frameworks could further strengthen the paper’s impact in this domain. Essential References Not Discussed: Missed references: Pagare & Ghosh (2024), “Explore-Then-Commit Algorithms for Decentralized Two-Sided Matching Markets” (ISIT 2024) This paper also provides instance-dependent regret bounds but under different matching assumptions. A comparison between the admissible gap used in the current work and the regret characterizations in Pagare & Ghosh (2024) could strengthen the novelty and justification of the new lower bound in Theorem 5.2. Dai & Jordan (2021), “Learning Stable Matches with Uncertainties in Preferences” (NeurIPS 2021). The omission of Dai & Jordan (2021) NeurIPS is a significant gap in the paper’s literature review. 
Including it would situate the contributions more precisely within the existing body of research on bandit learning in stable matching, particularly regarding the differences between static and dynamic uncertainty settings. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: No. Questions For Authors: 1. How tight is the instance-dependent regret lower bound in practice? 2. How does the super-stable matching framework compare to user-optimal stable regret frameworks? The paper focuses on pessimal stable regret, whereas recent works such as Hosseini et al. (2024) and Kong & Li (2023) optimize for user-optimal stable regret. 3. Why was the decentralized algorithm not evaluated empirically? 4. How sensitive is the performance to different exploration strategies? 5. Why were no real-world datasets used, and how would the method generalize? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback. We first want to clarify the paper's scope. This paper advances bandit learning for finding *any* stable matching in matching markets. It demonstrates Extended Gale-Shapley's effectiveness over standard Gale-Shapley and uses the super-stable matching structure for instance-dependent regret lower bounds. Developing methods for real-world challenges (large-scale, communication constraints, collusion robustness) is outside this paper's scope, consistent with most work in this area, and will be highlighted as future work. Please find individual responses below. **Re Missing References:** Pagare & Ghosh (2024) is cited in the Introduction (page 1, column 2, paragraph 2) and Table 1. We could not locate Dai & Jordan (2021), “Learning Stable Matches with Uncertainties in Preferences” (NeurIPS 2021), and believe the reviewer may refer to "Learning in Multi-Stage Decentralized Matching Markets." This paper addresses preference uncertainty using nonparametric statistics and variational analysis, making it not directly related. We will add it to our citations of loosely connected prior works. **Re lack of empirical comparison:** Our algorithm is compared against the centralized two-sided version of Kong et al.'s Explore-then-Gale-Shapley, which, as stated, is the centralized variant of Pagare & Ghosh (2024) and Zhang & Fang. Therefore, we respectfully disagree that our empirical results do not compare against recent two-sided matching bandit algorithms. Furthermore, no non-adaptive exploration baseline exists for bandits in matching markets with two-sided uncertainty (to our knowledge). **Re How tight is the instance-dependent regret lower bound in practice?** We are unclear about what is meant by `practical tightness of the lower bound`.
If the reviewer asks about the upper and lower bound gap, we reiterate that a tight bound for bandits in two-sided matching markets is beyond the current state-of-the-art. Our work establishes a lower bound for binary stable regret, leaving optimal algorithm design for future work. **Re focus on pessimal stable regret:** This work explores the complexity of bandit learning for finding *any* stable matching, contrasting with finding a specific one. Our binary stable regret quantifies the hardness of this objective, for which we provide regret upper and lower bounds. Note this was the original objective in Liu et al. (2020) as well. Optimizing regret for a specific stable matching (e.g. user-, arm-, or social-optimal) is a related learning objective that can leverage our findings. In particular, using the distributive lattice structure of the set of stable matchings, efficient exploration for this task can be designed (in future work) once *any* stable matching is found. **Re Why was the decentralized algorithm not evaluated empirically?** The centralized and decentralized algorithms are closely linked. We prove the decentralized algorithm's regret is at most $(1+N^2)$ higher (uniformly across instances and time $T$) than the centralized one. We will include decentralized algorithm simulations in the revised version to demonstrate this empirically. Our synthetic experiments focus on highlighting the effectiveness of Extended Gale-Shapley versus Gale-Shapley variants, for which the $(1+N^2)$ regret addition provides no further insight. **Re How sensitive is the performance to different exploration strategies?** Our regret upper bounds hold (up to reward gap-independent constants) as long as the exploration strategy ensures all (user, arm) pairs are explored with a constant fraction in the long run. Round-robin exploration, adopted in prior work for its simplicity, ensures uniform long-run exploration. Finding an optimal non-adaptive exploration strategy is outside this work's scope.
We believe under-exploring "unlikely" (user, arm) pairs is promising, but the two-sided uncertainty and numerous stable matchings complicate non-adaptive exploration. **Re Why were no real-world datasets used, and how would the method generalize?** This work, like most prior work in this area, focuses on establishing theoretical guarantees. Evaluating with real-world datasets is typically outside this line of work's scope. The reviewer's question about generalization is unclear. Scaling to large systems is beyond this paper's scope and is often addressed by contextual reward modeling, as recently considered for one-sided matching markets in Parikh et al. "Competing Bandits in Decentralized Large Contextual Matching Markets" arXiv:2411.11794.
Summary: This paper studies bandit learning in matching markets with two-sided unknown preferences. It investigates the structure of super stability to determine the exploration-exploitation process. Existing works mainly consider LCB-UCB methods before identifying the full ranking or use a known $\Delta$ to decide the exploration budget. Exploiting super stability to adaptively determine the exploration-exploitation trade-off improves the dependence on $\Delta$ in the regret. The paper proposes both centralized and decentralized algorithms with stable regret upper bound guarantees. The lower bound corresponding to the newly defined $\Delta$ is also provided. Experiments validate the convergence. Claims And Evidence: Yes. The claims are supported by the theoretical analysis and empirical validations. Methods And Evaluation Criteria: Yes. The stable regret and binary regret (market instability) are commonly adopted in the literature. Theoretical Claims: I did not check every detail of the proof, but the stable regret order is standard in existing exploration-then-commit algorithms. Experimental Designs Or Analyses: Yes. Some baselines are missing. For example, the following works also consider bandit learning in matching markets with player-optimal stable regret. Zhang Y, Wang S, Fang Z. Matching in Multi-arm Bandit with Collision. Advances in Neural Information Processing Systems (NeurIPS), 2022, pp. 9552-9563. Kong F, Wang Z, Li S. Improved Analysis for Bandit Learning in Matching Markets. Advances in Neural Information Processing Systems (NeurIPS), 2024. Supplementary Material: Yes, I reviewed the supplementary material, specifically Appendix B, which includes: 1. Algorithm details: Pseudocode for the Extended GS Algorithm (Algorithm 2) and decentralized algorithms (Algorithms 3 and 4). 2. Implementation specifics: Clarifications on reward observation, partial rank construction, and synchronization mechanisms for decentralized settings.
This section ensures reproducibility and validates the technical soundness of the proposed methods. Relation To Broader Scientific Literature: Yes. This paper on bandit learning in matching markets is related to the machine learning/learning theory/bandits/multi-player literature. Essential References Not Discussed: Yes. Both the table comparisons and experiments lack some necessary baselines. In Table 1, after the UCB-D3 algorithm, the following works [1][2] studying player-optimal stable regret and the two-sided unknown setting [3] are missing. [1]Zhang Y, Wang S, Fang Z. Matching in Multi-arm Bandit with Collision. Advances in Neural Information Processing Systems (NeurIPS), 2022, pp. 9552-9563. [2]Kong F, Wang Z, Li S. Improved Analysis for Bandit Learning in Matching Markets. Advances in Neural Information Processing Systems (NeurIPS), 2024. [3]Zhang, Y. and Fang, Z. Decentralized two-sided bandit learning in matching market. In The 40th Conference on Uncertainty in Artificial Intelligence. Other Strengths And Weaknesses: Strength: 1. Investigating super stability is a novel and interesting idea to adaptively balance exploration and exploitation in matching markets. 2. The introduction of the admissible gap is a theoretically novel contribution that refines prior instance-dependent analyses (e.g., $\Delta_{\min}$) by incorporating the structure of super-stable matchings. This parameter elegantly captures the interplay between partial rankings and true stability, advancing the literature on problem-dependent regret in matching markets. Weaknesses: 1. Though the work improves the existing dependence on $\Delta_{\min}$ using a newly defined gap, the definition relies on the "admissible partial rank set" (A(Fu,Fa)), which is abstract and heavily tied to the lattice structure of super-stable matchings. This makes it less intuitive compared to $\Delta_{\min}$, and readers may struggle to grasp its relationship to instance hardness without concrete examples.
Can you provide a market example to clearly compare these two gaps? Other Comments Or Suggestions: The logic of Corollary 2.7 is a little difficult to follow; consider modifying it to make the meaning of the sentence clearer. Questions For Authors: In line 12 of Algorithm 1, if each player chooses the arm with the largest UCB estimation value, won't there be a conflict? If not, more explanation is required. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments. **Relationship between $\Delta_{\min}$ and $\Delta_{\mathcal{A}}$:** We first note that for the partial rank where the top $N$ users for each arm and the top $N$ arms for each user are separated, we always have the user-optimal matching as a super-stable matching. Hence, $\Delta_{\min} \leq \Delta_{\mathcal{A}}$ holds for all instances. For general instances, it is not possible to improve this relationship, as there are instances where they are equal. - *We now present a class of instances where $\Delta_{\min} = \varepsilon$, and $\Delta_{\mathcal{A}} = (1 - 2\varepsilon)$ for any $\varepsilon > 0$*. Consider the situation with $N$ users and $N$ arms. Fix any $\varepsilon > 0$. For each user $i$ let $\mu_{i,i} = (1- \varepsilon)$ and $\mu_{i,(i+1) mod\\, N} = (1- 2\varepsilon)$ (top 2 arms), and each of the remaining arms has a mean reward $\mu_{i,j} = \varepsilon$ for all $j \notin \\{ i, (i+1) mod\\, N \\}$. For each arm $j$ let $\gamma_{j, (j - 1) mod\\, N} = \varepsilon$ and $\gamma_{j, i} = (1 - \varepsilon)$ for all $i \neq (j - 1) mod\\, N$. We have $\Delta_{\min} = \varepsilon$ for this instance. Consider the partial ranks $P_{u,i} = \\{ i > j, (i+1) mod\\, N > j: \forall j \notin \\{ i, (i+1) mod\\, N \\} \\}$ for $i \in [N]$, and $P_{a,j} = \\{ i > (j-1) mod\\, N, \forall i \neq (j - 1) mod\\, N \\}$ for $j \in [K]$. We have $\Delta_{\min}(P_u, P_a, \mu, \gamma) = (1 - 2\varepsilon)$ for this partial rank. But this $(P_u, P_a)$ has $\\{ (i,i): \forall i \in [N] \\}$ as a super-stable matching. Therefore, $\Delta_{\mathcal{A}} \geq (1 - 2\varepsilon)$. This shows that there can be *arbitrary* (ratio is unbounded) separation between $\Delta_{\mathcal{A}}$ and $\Delta_{\min}$. **Re Missing references:** We will expand Table 1 to include more references pointed out by reviewers, including [1], [2], and [3] mentioned in this review.
We cite some prior works (including [3]) in the paper, but leave them out of Table 1 due to space constraints. **Re Corollary 2.7:** Thank you for the careful review. We will simplify the double 'for any followed by for all' logic in the final version. In particular, we will replace `for all matching M` by `each super-stable matching M`. **Re line 12 Algorithm 1:** Note that the $M_{stable}$ match returned by the Extended-GS algorithm may return multiple arm candidates for a user $i$, namely $m_i$. The largest UCB is taken only over $m_i$, not over the entire set of arms $[K]$ in line 12. For any two distinct users $i$ and $j$, and their respective arm candidates $m_i, m_j \in M_{stable}$, the candidate sets $m_i$ and $m_j$ are non-overlapping. Therefore, it is guaranteed that there will be no conflict. We will provide this explanation in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The comparison between $\Delta_{\min}$ and $\Delta_{A}$ is insightful. It is a great improvement from $\varepsilon$ to $1-2\varepsilon$. Compared with all existing works that depend on the standard $\Delta_{\min}$, I really appreciate the method investigated to help the area understand the problem-dependent regret in matching markets. I am happy to increase my score.
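As a sanity check, the instance construction described in the rebuttal above can be reproduced in a few lines. The sketch below is not from the paper: it builds the reward means for $N$ users and $N$ arms as in the rebuttal and confirms that the smallest positive gap between distinct mean rewards — one common reading of $\Delta_{\min}$ — equals $\varepsilon$.

```python
import numpy as np

def build_instance(N, eps):
    # Reward means per the rebuttal's construction:
    # user i prefers arm i (1-eps), then arm (i+1) mod N (1-2eps), rest eps;
    # arm j values user (j-1) mod N at eps and everyone else at 1-eps.
    mu = np.full((N, N), eps)          # user-side means mu[i, j]
    gamma = np.full((N, N), 1 - eps)   # arm-side means gamma[j, i]
    for i in range(N):
        mu[i, i] = 1 - eps
        mu[i, (i + 1) % N] = 1 - 2 * eps
        gamma[i, (i - 1) % N] = eps
    return mu, gamma

def min_positive_gap(M):
    # Smallest gap between distinct mean rewards in any row.
    gaps = []
    for row in M:
        gaps.extend(np.diff(np.unique(row)))  # unique() sorts ascending
    return min(gaps)

mu, gamma = build_instance(N=4, eps=0.1)
delta_min = min(min_positive_gap(mu), min_positive_gap(gamma))
# delta_min == eps, matching the rebuttal's claim Delta_min = eps
```

For small `eps` the user-side gap between the top two arms, `eps`, dominates, so `delta_min` recovers the rebuttal's $\Delta_{\min} = \varepsilon$; the matching $\{(i,i)\}$ pairs every user with its top arm.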
Summary: This paper addresses the problem of bandit learning in two-sided matching markets with two-sided reward uncertainty, where both users and arms must learn their preferences through repeated interactions. The authors propose an innovative approach using super-stability from Irving (1994) to enhance traditional Gale-Shapley (GS) algorithms. They adapt the Extended Gale-Shapley (GS) algorithm to find super-stable matchings instead of just weakly stable ones. Super-stable matchings are more robust under uncertainty and ensure true stability under complete preference rankings, under two kinds of incomplete learning models: (i) central and (ii) local. For the central model: The algorithm integrates the Extended GS algorithm with UCB-LCB-based rank estimation. If a super-stable matching exists under the current estimates, the algorithm selects it; otherwise, it explores via a round-robin method. The decentralized algorithm extends the centralized setting with only a constant regret increase, using 2-bit communication between users and arms. Instance-Dependent Regret Lower Bound: A new instance-dependent lower bound for binary stable regret is derived. This bound characterizes the fundamental hardness of the problem using the admissible gap and highlights the pivotal role of super-stable matchings in overcoming informational bottlenecks. Decentralized Approach: Users propose to arms, and arms accept based on their preferences. Two shared binary flags coordinate matching and exploration, ensuring efficient learning without a central authority. Claims And Evidence: Centralized: $O(K \log(T) / \Delta_A^2)$; Decentralized: an additional $O(N^2)$ term over the centralized bound; Lower Bound: $\Omega(K_{\text{eff}} \log(T) / \Delta_A^2)$, where $K_{\text{eff}}$ represents the effective number of competing pairs. The proof ideas for these claims are correct. I did not go through all the details.
Methods And Evaluation Criteria: NA Theoretical Claims: All claims are theoretical and look sound to me. Experimental Designs Or Analyses: Experiments are well done. Perhaps the authors can show how the results vary with N and K (in a single setup). Supplementary Material: I went over the general proofs and their ideas. The proofs are a combination of existing bandit regret bounds as well as ideas from stable matching. Relation To Broader Scientific Literature: This will have a broad impact on the community in general, since this is a very nice problem to address. It will potentially open doors for other such problems. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: 1. How will you make this fully decentralised (i.e., without using the binary signal mechanism)? This is remarked as a comment but never clearly outlined. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. **Re How will you make this fully-decentralised?** The binary flags are used to set *restart* to True and *success* to False by individual users or arms. Note that *restart* is set from the user side, and the arm side only passively acts on the *restart* signal. A user that wants to trigger a *restart* can send a <RESTART> signal to all the arms ($K$ rounds if the user can communicate with only one arm at a time, or $1$ round if it can communicate with all arms). Once an arm receives a <RESTART> signal, it can relay the <RESTART> signal back to all the proposing users. Within $1$ more round all the users will receive the <RESTART> signal, and this will complete the fully-decentralised setting of the restart flag. The users can adopt a similar strategy for the *success* flag as well.
A Comparison of LLM fine-tuning Methods & Evaluation Metrics with Travel Chatbot Use Case
Reject
Summary: This paper compares various fine-tuning methods and evaluation metrics of LLMs, focusing on the travel dataset. With RLHF, this paper claims Mistral 7B achieves better performance than GPT-4. Claims And Evidence: Yes. Methods And Evaluation Criteria: The authors used a Reddit dataset to perform RLHF. However, the RLHF utilizes the upvote ratio from Reddit, which may be biased. Although human evaluators are included in the research, these human evaluators are limited (female, aged 25-34, Asian). Theoretical Claims: Not applicable to this research. Experimental Designs Or Analyses: The experiment design is valid. Supplementary Material: I reviewed the excel file that is provided in the supplementary. Relation To Broader Scientific Literature: This paper explores the different fine-tuning strategies for the travel dataset. As a result, this paper finds that RLHF can improve the Mistral 7B model to outperform the GPT-4 baseline. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Explored the different fine-tuning strategies for LLaMa and Mistral. 2. Used the data from Reddit for the travel data. Weakness: 1. The RLHF utilizes the upvote ratio from Reddit, which may be biased. 2. Although human evaluators are included in the research, these human evaluators are limited (female, aged 25-34, Asian). Other Comments Or Suggestions: The authors need to check the typos in the manuscript. For example, the Table number is not checked in: "The four models were trained with the data split and parameters outlined in Table ??". Questions For Authors: The authors may consider including more data sources, which should not be limited to Reddit. Also, the authors may consider using human-labeled scores for the RLHF. Code Of Conduct: Affirmed. Overall Recommendation: 2
Summary: This paper compares various LLMs' fine-tuning methods and evaluation metrics in the context of a travel chatbot. The study evaluates three fine-tuning approaches: QLoRA, RAFT, and RLHF, applied to two 7B-parameter LLMs, LLaMa 2 and Mistral. The dataset, sourced from Reddit travel-related subreddits, was augmented into structured formats (e.g., Q&A pairs, RAFT triplets) to support domain-specific training. Claims And Evidence: The authors summarize six claims in the conclusion section, with the first three focusing on human evaluation (e.g., "human evaluation is critical," "GPT-4 metrics align with human judgment"). However, as acknowledged in the "Limitations" section, the human evaluation relied on only three individuals of similar background (female, aged 25–34, Asian). Expanding evaluator diversity (e.g., varying ages, cultural backgrounds, expertise) would strengthen the validity of these claims, as the current limited sample risks biasing results, particularly for culturally sensitive or geographically nuanced travel advice. Additionally, while the authors argue that "traditional metrics are insufficient for LLM evaluation," they do not contextualize this claim within travel-specific challenges. For example: Travel queries often involve dynamic information (e.g., real-time policies and seasonal events). How well do metrics like Ragas' "context relevancy" capture such requirements? Such discussions would clarify domain-specific limitations of metrics, moving beyond generic conclusions. Regarding claims 4–6 (e.g., "RAFT outperforms QLoRA," "RLHF significantly improves performance"), the authors validate method efficacy through metric comparisons but lack travel-domain-specific analysis. Concrete insights like these would help practitioners prioritize methods for travel use cases, rather than relying on broad assertions like "RLHF works." Methods And Evaluation Criteria: They compare three well-known methods to fine-tune the LLMs.
The comparison includes diverse evaluation criteria, including traditional NLP metrics, LLM scores, and human evaluation. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental designs and analyses in the paper demonstrate a structured approach but suffer from critical validity issues that weaken confidence in the conclusions. 1. Human evaluation relied on only three evaluators with homogeneous demographics. 2. Most quantitative metrics (e.g., ROUGE, BLEU) showed zero variance across models, but the evaluation used only 37 curated questions. A small dataset may fail to capture the complexity of travel-related queries. 3. The authors' analysis is simple and does not revolve around the difficulties in the travel domain. What are the main challenges in deploying fine-tuning methods in travel QA? The authors did not provide sufficient insight on this matter. Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: NA Essential References Not Discussed: This work is more like a technical report that reproduces some well-known methods in the travel domain. For this purpose, I don't think there are any references that need to be cited. Other Strengths And Weaknesses: The conclusions lack robust validation, as the authors' analysis remains superficial and fails to offer actionable insights into the domain-specific challenges of travel chatbots. For instance, claims about metric-human alignment or RLHF efficacy are inadequately substantiated by empirical evidence, and the limited scope of analysis (e.g., relying on a small, homogeneous evaluation cohort) undermines the generalizability of findings. To strengthen credibility, deeper engagement with travel-specific properties would be necessary. Other Comments Or Suggestions: Typos: missing reference in line 334. Questions For Authors: 1. Will you release the training dataset and the final model weights? 2.
The human evaluation involved only three evaluators of similar demographics (female, 25–34 years, Asian). Could the authors clarify whether they tested for potential cultural or demographic bias in the ratings (e.g., by comparing responses tailored to Western vs. Asian travel preferences)? If not, how do they justify the generalizability of conclusions like "GPT-4 aligns with human judgment"? 3. The authors argue traditional metrics fail to capture LLM complexity but do not propose travel-specific alternatives. Have they explored augmenting metrics with domain-aware criteria (e.g., real-time fact-checking, multi-option support)? 4. The evaluation used only 37 curated questions. How were these questions selected, and what steps were taken to ensure coverage of diverse travel scenarios (e.g., planning, crisis handling, cultural nuances)? Code Of Conduct: Affirmed. Overall Recommendation: 1
Summary: The paper mainly focuses on the comparison of various fine-tuning methods (QLoRA, RAFT, RLHF). Two pre-trained LLMs (LLaMA 7B & Mistral 7B) were fine-tuned and the performance was evaluated against various metrics. Besides, the authors collect a travel dataset from travel-related subreddits and find that Mistral RAFT, further fine-tuned with RLHF, outperformed other models. Advantages: 1. The practical application to a travel chatbot is an interesting case study. 2. The research employs a wide range of evaluation metrics for comparison. Disadvantages: 1. The article seems to contribute less in terms of comparing LLM fine-tuning methods that are specific to the travel domain, and it is unclear how well the fine-tuning methods and evaluation metrics would generalize to other domains or tasks. 2. Regarding the data acquisition costs for different comparisons, including the time required for fine-tuning (not just inference time), additional information is needed to better assist in selecting the appropriate LLM fine-tuning methods. 3. All the images and tables in the article are too rough and difficult to read clearly. Some table references appear to be incorrect. Claims And Evidence: see weakness Methods And Evaluation Criteria: see weakness Theoretical Claims: see weakness Experimental Designs Or Analyses: see weakness Supplementary Material: yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: see weakness Code Of Conduct: Affirmed. Overall Recommendation: 1
Summary: This paper compares LLM fine-tuning methods (QLoRA, RAFT, RLHF) and evaluation methods (E2E benchmarks, NLP metrics, Ragas, GPT-4 metrics, and human evaluation) using a travel chatbot case. Data was sourced from Reddit and augmented for each fine-tuning method. QLoRA and RAFT were applied to LLaMA2-7B and Mistral-7B, with Mistral-RAFT performing best in human and GPT-4 evaluations. RLHF further improved it, making it the top model. Claims And Evidence: For the QLoRA and RAFT comparison, the experimental results show that RAFT outperforms QLoRA with Mistral on most metrics, but with Llama the advantage of RAFT over QLoRA does not appear to be significant. For metrics alignment, although the correlation differences between various metrics and human evaluation are shown, categorizing quantitative metrics, Ragas metrics, and GPT-4 evaluation metrics, and then quantitatively comparing their correlation differences with human evaluation, would be more convincing. Methods And Evaluation Criteria: The fine-tuning methods this work applied (not proposed) make sense for the problem, and the dataset constructed in this paper appears to be highly effective and holds significant potential value for research on travel chat LLMs. Theoretical Claims: No theoretical claims or proofs have been provided in this work. Experimental Designs Or Analyses: The experiments on models with fine-tuning methods show each model is well trained (shown in Figure 4). The experiments on inference times with different models (shown in Figure 5) seem not very related to the topic or claims. The experiments on baseline models and fine-tuned models to prove RAFT outperforms QLoRA do not look valid enough, since the pretrained models and GPT-4 never saw the travel data; to show RAFT is better, it may be necessary to compare the fine-tuning methods on more comparable models (e.g., Llama2 7B, Mistral 7B, Gemma 7B, Qwen 7B). Supplementary Material: No supplementary material was provided.
Relation To Broader Scientific Literature: The metrics alignment with human preference study is related to the LLM metric literature, where researchers aim to find better metrics for model evaluation to get better LLM performance (e.g., [1] found that traditional evaluation metrics based on the similarity between outputs and reference answers are also ineffective for such questions). [1] Zheng L, Chiang W L, Sheng Y, et al. Judging LLM-as-a-judge with MT-bench and Chatbot Arena[J]. Advances in Neural Information Processing Systems, 2023, 36: 46595-46623. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: This work constructs a broadly themed dataset on travel chat and conducts manual data annotation to support RLHF. These efforts will contribute to the development of LLMs focused on travel chat. Weakness: The experiment part of this article is not well organized. In the "Model Training & Performance" section, a large portion of the content and images showcase the model's loss curves and inference efficiency. However, this doesn't seem to be closely related to the contributions or claims of this paper. Other Comments Or Suggestions: From the perspective of motivation, the authors believe that travel chat LLMs will help address the challenges in the travel industry. However, why did the authors choose to explore the path of fine-tuning smaller LLMs rather than utilizing large-scale parameter LLMs in combination with methods like RAG or AI agents? Can the former achieve performance comparable to the latter while significantly reducing costs? This work finds that RLHF significantly improves model performance to outperform benchmark models. Could it be that the dataset or knowledge built in this work has never been exposed to these benchmark models? Questions For Authors: See my comments above Code Of Conduct: Affirmed. Overall Recommendation: 2
Boosting Adversarial Robustness with CLAT: Criticality Leveraged Adversarial Training
Accept (poster)
Summary: The paper presents CLAT, a layer-aware adversarial training algorithm designed to mitigate adversarial overfitting by identifying and fine-tuning layers that learn non-robust features. The approach leverages layer criticality, a metric that quantifies a layer's functional importance to subsequent features, to pinpoint critical layers for targeted fine-tuning. Experimental results demonstrate that CLAT enhances the adversarial robustness of existing methods, suggesting its effectiveness in improving model resilience against adversarial attacks. Claims And Evidence: The paper proposes a way to identify important layers for adversarial robustness. However, the actual method includes a first stage of "pretraining" using prior methods, presumably on all the layers. It is unclear to me: 1. If these layers are more critical, why not only fine-tune them from the beginning? 2. Why use a different objective in stage 2, where the criticality is minimized instead of the adversarial loss? I think that the paper can benefit from more experiments to support the claim that the identified layers are indeed important. Methods And Evaluation Criteria: 1. The definition of layer criticality is not clear to me. The paper would benefit if the authors could ground this concept in prior literature or use some theoretical results to back it up. Theoretical Claims: No theoretical content Experimental Designs Or Analyses: The evaluation focuses on existing adversarial training methods + CLAT. However, since CLAT is mainly for identifying important layers for training, it should make more sense to use these layers for adversarial training too. Supplementary Material: None Relation To Broader Scientific Literature: The paper takes an architecture-centric approach by identifying critical layers within the network, which is orthogonal to most adversarial training methods that primarily focus on designing new loss functions.
Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The baselines are comprehensive, including AWP and SWA, which also aim to mitigate robust overfitting. 2. This method seems to be compatible with most robust training methods. Other Comments Or Suggestions: None Questions For Authors: Can you fine-tune the model using your identified layers from scratch, using traditional robust training methods (TRADES, AT)? Will the results become better than fine-tuning on all layers? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful questions and suggestions. We address each point below. **Finetuning from beginning / CLAT for training:** CLAT can indeed be applied from the beginning, without any prior training, as well as after standard adversarial training. This directly addresses your suggestion of using the identified layers during the main adversarial training process. As shown in Figure 1 (see the “CLAT for 100 epochs” curve), training only the identified critical layers from scratch eventually surpasses baseline adversarial training methods—demonstrating that CLAT is not limited to post hoc fine-tuning. While convergence is slower, final performance is higher than PGD-AT, reinforcing the utility of these layers during training itself. Lines 315–325 in the main text and Figure 5 in the appendix further support this observation with full training trajectories. Using TRADES instead of PGD-AT mirrors these trends, with CLAT consistently improving performance when applied from scratch or as a fine-tuning step (see finetuning results in Table 1 and Table 2). In summary, CLAT is not merely an add-on to existing methods, but a flexible and general mechanism for improving adversarial robustness, whether applied during or after training. **Different objective:** Continuing to optimize the standard adversarial loss after the model has already been trained often leads to degradation in generalization on clean data—a well-documented issue in adversarial training. To address this, we introduce a joint objective in Stage 2 that combines cross-entropy loss with a criticality term (Equation 6). This encourages the model to reduce reliance on non-robust features within the most sensitive layers, while still preserving performance on clean inputs. This insight and formulation are discussed in lines 158–174 of the paper. The balance between the two terms is controlled by the hyperparameter λ, which we find to be stable across datasets and architectures. 
In practice, λ enables a smooth trade-off between robustness-enhancing updates and accuracy-preserving behavior during fine-tuning. **More experiments to show identified layers are important:** We believe our current experiments already provide strong support for the importance of the identified layers and **reviewers eBwd and 9V5S** both agree on the comprehensiveness of our evaluation. We conduct multiple ablations to highlight this, including: (1) comparing performance when fine-tuning critical versus random layers, (2) analyzing the effect of selecting layers with the lowest versus highest CIDX values, and (3) evaluating the consistency of selected critical layers across batch sizes and datasets for a fixed architecture—amongst several other results in the main paper and appendix. Additionally, we provide evidence (see response on “consistency” to **Reviewer 9V5S**) that the identified critical layers remain consistent across different adversarial attacks used to compute them. We are happy to incorporate this into the appendix for completeness. Together, these experiments consistently demonstrate that the identified layers are not only meaningful, but central to CLAT’s improvements in both clean and adversarial accuracy. **Critical layer definition:** Layer criticality refers to the extent to which individual layers contribute to adversarial vulnerability. We define this empirically by measuring how much each layer’s output changes when the input is perturbed adversarially—assigning higher scores to layers that exhibit larger activation shifts (Equation 5). While the term “criticality” is new in this context, the idea relates to prior work on layerwise sensitivity, pruning, and fine-tuning for robustness [1]. However, such methods typically focus on redundancy reduction or apply global training updates. 
In contrast, CLAT introduces a novel, lightweight, and scalable mechanism to identify and leverage structurally important layers for improving adversarial robustness. We present theoretical motivation and justification for this formulation in the Methods section and Appendix F, and **Reviewer 8QH8** specifically noted the thoroughness of this explanation. We agree that a deeper theoretical understanding of certain phenomena—such as why specific layers consistently emerge as more critical—would be valuable, and we view this as an exciting direction for future work. Nonetheless, our empirical results—including comparisons with random and non-critical layers, consistency across datasets, attacks, and batch sizes, and consistent performance gains—provide strong support for the validity and utility of our criticality measure.
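The forward-pass criticality measurement described in this rebuttal (scoring layers by how much their activations shift under input perturbation) can be illustrated with a toy, dependency-free sketch; the scalar activations and the fixed additive perturbation standing in for an adversarial attack are assumptions of this sketch, not the paper's actual CIDX computation:

```python
import math

def layer_criticality(layers, inputs, perturb, eps=0.1):
    """Toy sketch: score each layer by the mean change in its output
    when the network input is perturbed (forward passes only)."""
    scores = [0.0] * len(layers)
    for x in inputs:
        h_clean, h_adv = x, perturb(x, eps)
        for i, f in enumerate(layers):
            h_clean, h_adv = f(h_clean), f(h_adv)
            scores[i] += abs(h_adv - h_clean)  # |delta activation| in this 1-D toy
    return [s / len(inputs) for s in scores]

# toy 1-D "network": a gain layer, a saturating layer, a damping layer
layers = [lambda h: 3.0 * h, math.tanh, lambda h: 0.5 * h]
perturb = lambda x, eps: x + eps  # stand-in for a worst-case 1-D step
scores = layer_criticality(layers, [0.0, 0.2, -0.1], perturb)
critical = max(range(len(scores)), key=scores.__getitem__)  # most sensitive layer
```

Here the first (gain) layer amplifies the perturbation most, so it would be the one selected for fine-tuning, while the saturating tanh partially absorbs the shift; the actual method ranks real network layers this way and fine-tunes only the top-scoring few.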
Summary: The paper aims to make adversarial training more efficient by selectively training only the most critical layers based on a criticality factor. This factor is determined using the local Lipschitz constant, calculated as the average difference in a layer's features with and without adversarial perturbations added to the input. The identified "critical" layers are then trained by incorporating a regularization term into the objective function. Results on multiple datasets demonstrate performance improvements. Claims And Evidence: The paper makes two major claims: (1) layer-wise adversarial training can be more efficient, and (2) it can improve both robustness and clean accuracy. These claims are sufficiently supported by experimental evidence. However, they are not adequately compared with existing works that also utilize layer-wise training. Methods And Evaluation Criteria: The benchmarks used for evaluation are reasonable for the method. However, the novelty and originality of the approach are limited given prior work. Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: I have examined the experimental design, including the robustness of CLAT against various adversarial attacks across different datasets and models, and found it to be reasonable. However, the paper lacks a strong analysis. For instance, each model has different critical layers, and it would be interesting to see a deeper analysis of this phenomenon. Supplementary Material: I have skimmed through the supplementary materials and examined Section D.4 more closely. Relation To Broader Scientific Literature: This paper aims to reduce the cost of adversarial robustness by training selective layers. A significant body of work has explored this topic from different angles. Essential References Not Discussed: There is a significant body of work on adversarial attacks and robustness at the layer and architectural level, dating back to 2017. 
However, these papers are not properly discussed in this paper. A few example papers are given in the following:
- Regularizing Deep Networks Using Efficient Layerwise Adversarial Training (https://arxiv.org/pdf/1705.07819)
- Free Adversarial Training with Layerwise Heuristic Learning
- Layer-wise Adversarial Defense: An ODE Perspective (https://openreview.net/forum?id=Ef1nNHQHZ20)
- Intriguing properties of adversarial training at scale (https://arxiv.org/abs/1906.03787)
- Adversarial Attacks and Batch Normalization: A Batch Statistics Perspective (https://ieeexplore.ieee.org/abstract/document/10056932)
- Smooth Adversarial Training (https://arxiv.org/abs/2006.14536)
- Reliably fast adversarial training via latent adversarial perturbation (https://openaccess.thecvf.com/content/ICCV2021/html/Park_Reliably_Fast_Adversarial_Training_via_Latent_Adversarial_Perturbation_ICCV_2021_paper.html)

Other Strengths And Weaknesses: The writing needs improvement. Additionally, it would be useful to relate the work to previous papers that are closely aligned with the proposed approach. Other Comments Or Suggestions: I would advise the authors to avoid using overly complex phrases when describing their method, such as "a paradigm shift" (L45-46). Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and for recognizing the experimental rigor of our work. Below, we respond to each of your concerns and questions. **Novelty:** Please see the “Novelty” section in our response to Reviewer **8QH8**. For a concrete comparison, consider RiFT [1], a fine-tuning method that also updates a subset of layers. While effective, RiFT identifies redundant layers using weight perturbations, whereas CLAT uses input perturbations to identify layers most responsible for adversarial vulnerability. CLAT consistently outperforms RiFT and other baselines across datasets and architectures (see Tables 1 and 2). If there are other methods the reviewer believes are directly comparable, we would be happy to clarify distinctions. **Further analysis:** Thank you for highlighting this point. We analyze layer selection across models, datasets, and training settings—examining the number of layers selected and when fine-tuning should begin. We also include multiple ablation studies verifying the impact of selected layers. Our response to Reviewer **9V5S** under “Cidx consistency” shows that critical indices remain stable regardless of perturbation type. We agree that deeper insight into why certain layers emerge as more important—perhaps due to architectural roles or training dynamics—would be valuable. As our focus is on practical improvements to robustness, we limit scope accordingly. **Additional baselines:** Our submission prioritized SOTA methods, strong threat models (e.g., PGD, AutoAttack), and widely adopted training protocols. Below, we compare against omitted fine-tuning methods using their best-performing models and original settings. As shown, CLAT outperforms these methods—even under stronger attacks.

> **Note:** FGSM is substantially weaker than PGD due to its single-step nature.

### Comparison with Additional Baselines

| **Method** | **Model** | **Attack** | **Adv. Accuracy** |
|----------------|--------------|---------------------------|-------------------|
| [3] | VGG-19 | FGSM, ε = 0.1 | 68.37 |
| **[3] + CLAT** | VGG-19 | FGSM, ε = 0.1 | **78.55** |
| [5] | ResNet-18 | iFGSM, ε = 8/255 | 46.29 |
| **[5] + CLAT** | ResNet-18 | iFGSM, ε = 8/255 | **59.95** |
| [6] | WRN28-10 | PGD-50-10, ε = 8/255 | 47.06 |
| **[6] + CLAT** | WRN28-10 | PGD-50-10, ε = 8/255 | **58.62** |
| [8] | ResNet-20 | PGD-20, ε = 8/255 | 51.07 |
| **[8] + CLAT** | ResNet-20 | PGD-20, ε = 8/255 | **54.20** |
| [9] | ResNet-50 | PGD-10, ε = 8/255 | 43.70 |
| **[9] + CLAT** | ResNet-50 | PGD-10, ε = 8/255 | **53.67** |

**Clarifications:**
- [3] Uses FGSM and perturbs all layers, with high overhead and limited robustness.
- [4] Superseded by stronger methods like *Fast is Better than Free* [10], which we include.
- [5] Underperforms under PGD.
- [6] Perturbs three fixed layers; does not adapt to model structure or input sensitivity.
- [7] Targets large-scale training; less relevant to models like ResNet-18, WRN-34-10.
- [8] Modifies BatchNorm; orthogonal to CLAT’s selective fine-tuning objective.
- [9] No longer SOTA, but CLAT improves its performance when combined.

**Writing:** Noted on writing style and complex phrasing. We will incorporate these refinements in the camera-ready version.

**References**

[3] Regularizing Deep Networks Using Efficient Layerwise Adversarial Training: Sankaranarayanan, S., Jain, A., Chellappa, R., & Lim, S. N. (2018). Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
[4] Free Adversarial Training with Heuristic Layerwise Perturbation: Zheng, H., Zhang, M., & Huang, H. (2020). arXiv preprint arXiv:2010.03131.
[5] Yang, Z., Liu, Y., Bao, C., & Shi, Z. (2020). Layer-wise Adversarial Defense: An ODE Perspective. International Conference on Learning Representations (ICLR).
[6] Reliably Fast Adversarial Training via Latent Adversarial Perturbation: Park, G. Y., & Lee, S. W. (2021). Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 7758-7767.
[7] Intriguing Properties of Adversarial Training at Scale: Xie, C., & Yuille, A. (2019). arXiv preprint arXiv:1906.03787. https://arxiv.org/abs/1906.03787
[8] Adversarial Attacks and Batch Normalization: Muhammad, A., Shamshad, F., & Bae, S.-H. (2023). IEEE Access, 11, 96449-96459. https://doi.org/10.1109/ACCESS.2023.3250661
[9] Smooth Adversarial Training: Xie, C., Tan, M., Gong, B., Yuille, A., & Le, Q. V. (2021). arXiv preprint arXiv:2006.14536. https://arxiv.org/abs/2006.14536
[10] Wong, E., Rice, L., & Kolter, J. Z. (2020). Fast is better than free: Revisiting adversarial training. arXiv. https://arxiv.org/abs/2001.03994.
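The note in the rebuttal above that FGSM is substantially weaker than PGD because of its single-step nature can be made concrete with a toy 1-D sketch; the hand-made `grad_sign` function, step sizes, and loss peak inside the ε-ball are all illustrative assumptions, not either paper's attack implementation:

```python
def fgsm(x0, grad_sign, eps):
    # single step of size eps in the gradient-sign direction
    return x0 + eps * grad_sign(x0)

def pgd(x0, grad_sign, eps, alpha, steps):
    # iterated small steps, each projected back into the eps-ball around x0
    x = x0
    for _ in range(steps):
        x = x + alpha * grad_sign(x)
        x = max(x0 - eps, min(x0 + eps, x))
    return x

# toy loss whose maximum sits at x = 0.5, inside the eps-ball around x0
grad_sign = lambda x: 1.0 if x < 0.5 else -1.0
x0, eps = 0.45, 0.2
x_fgsm = fgsm(x0, grad_sign, eps)          # one big step past the peak
x_pgd = pgd(x0, grad_sign, eps, 0.05, 10)  # stays within one small step of the peak
```

The single FGSM step overshoots the worst-case point, while PGD's repeated projected steps home in on it, which is why evaluations against PGD (and AutoAttack) are the stronger robustness tests cited throughout this thread.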
Summary: The paper introduces CLAT (Criticality-Leveraged Adversarial Training), which aims to enhance the adversarial robustness of neural networks by identifying and fine-tuning critical layers that are most vulnerable to adversarial attacks. The main contributions include the development of a criticality index for layer selection and a fine-tuning objective to reduce the non-robust features of these layers. The authors claim that CLAT improves both clean accuracy and adversarial robustness while mitigating overfitting. Claims And Evidence: The claims made in the submission are not sufficiently supported by novel or convincing evidence. The method proposed is more akin to a fine-tuning approach rather than a novel adversarial training method. The primary contribution lies in the selection of critical layers using the criticality index (Equation 3). However, the adversarial training loss (Equation 5) is not innovative and has been commonly used in other regularization techniques such as logits pairing [1] and some other follow-up papers. Despite the authors' claims that the feature weakness defined in Equation (2) approximates the local curvature value, the overall method lacks novelty and significance in the context of adversarial training, to the reviewer's knowledge. [1] Adversarial logit pairing, 2018 Methods And Evaluation Criteria: The proposed methods, while relevant to the problem of adversarial robustness, do not introduce substantial advancements over existing techniques. The criticality index and the fine-tuning objective are conceptually interesting but do not represent a significant departure from current practices. Theoretical Claims: The theoretical claims regarding the criticality index and its relationship to feature weakness are adequately presented but do not provide a strong basis for novelty. 
The connection between the proposed feature weakness metric and local curvature is not sufficiently explored or validated to support the claims of innovation. Experimental Designs Or Analyses: The experimental designs appear insufficient to validate the significance of the proposed method. 1. The authors tested CLAT on several datasets (CIFAR-10, CIFAR-100, Imagenette, and ImageNet) and network architectures, but the improvements over baseline adversarial training methods are marginal, as shown in Tables 1, 2, 3, and 4. 2. The experiments lack breadth, as they do not include more recent backbone architectures (e.g., Vision Transformers). 3. Furthermore, the comparisons with other state-of-the-art adversarial training methods are inadequate, limiting the strength of the conclusions drawn. Supplementary Material: I reviewed the supplementary material, including additional experimental results and ablation studies. While these provide further context, they do not address the fundamental limitations of the proposed method. Relation To Broader Scientific Literature: The key contributions of the paper are related to existing work in adversarial training and layer-specific fine-tuning. However, the proposed method does not significantly advance the field beyond current practices. The paper cites relevant literature but fails to distinguish itself from prior work in terms of innovation or impact. Essential References Not Discussed: The paper does not adequately discuss recent advancements in adversarial training that focus on layer-specific interventions or architectural modifications. The methods that explore the use of specialized layers or modules for enhancing robustness should be included to provide a more comprehensive context. Other Strengths And Weaknesses: 1. The method proposed by the author is more like a fine-tuning method than an adversarial training method. (1) The main contribution of this paper is to propose a method for selecting critical layers in Eq. 3.
(2) The adversarial training loss in Eq. 5 is a commonly used objective; for example, its design shares the interpretation and definition of logit pairing [1]. Although the authors claim that the feature weakness defined in Equation (2) is also an effective approximation to the local curvature value, the proposed method is still lacking in innovation and importance from the perspective of adversarial training. [1] Adversarial logit pairing, 2018 2. The experimental designs are insufficient to validate the significance of the proposed method. (1) The authors tested CLAT on several datasets (CIFAR-10, CIFAR-100, Imagenette, and ImageNet) and network architectures, but the improvements over baseline adversarial training methods are marginal, as shown in Tables 1, 2, 3, and 4. (2) The experiments lack breadth, as they do not include more recent backbone architectures (e.g., Vision Transformers). (3) Furthermore, the comparisons with other state-of-the-art adversarial training methods are inadequate, limiting the strength of the conclusions drawn. Other Comments Or Suggestions: No Questions For Authors: see my comments about strength and weakness. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you so much for your time! Below, we’ve provided detailed responses to each of your concerns and criticisms. We hope this helps clarify everything. **Experimental support and breadth of evaluation:** Respectfully, we disagree with the concern regarding insufficient experimental support. As also noted by **Reviewers 9V5S and eBwd**, our submission presents a comprehensive and thoughtfully designed evaluation. We test CLAT across four diverse datasets (CIFAR-10, CIFAR-100, Imagenette, and ImageNet), multiple CNN architectures, and in conjunction with several adversarial training methods, including state-of-the-art approaches from RobustBench. These include both models trained from scratch and partially pretrained models. To our knowledge, CLAT is the first fine-tuning method that can also be applied from scratch, demonstrating flexibility across training regimes. Despite this broad applicability and strong performance, CLAT introduces negligible overhead, requiring no backward passes or gradient computations for layer selection. We further support our claims with targeted ablation studies that isolate the contribution of each component. While we cannot exhaustively include every prior method, we carefully prioritized baselines that reflect challenging, high-performing adversarial training settings. Several well-regarded SOTA methods accepted at top-tier venues conduct fewer evaluations in terms of dataset and model diversity. If there are specific baselines or comparisons the reviewer would like us to include, we are happy to provide additional results or clarify their exclusion. **Finetuning method:** Yes, CLAT is a fine-tuning method by design—and this is a core strength. It introduces a layer-selective strategy that mitigates overfitting and consistently improves both clean and adversarial performance. The proposed criticality index identifies layers still learning non-robust features that benefit from continued optimization. 
As a fine-tuning method, CLAT is modular, integrates easily into existing adversarial training pipelines, and adds minimal overhead. Its ability to be applied from scratch further highlights the relevance of the selected layers and the generality of the approach. More broadly, fine-tuning and adversarial training are not mutually exclusive. Fine-tuning has emerged as a valuable direction for improving robustness and has been recognized at top-tier venues [1]. **Novelty and ALP:** CLAT is, to our knowledge, the only method that achieves state-of-the-art robustness across diverse training setups, by fine-tuning fewer than 5% of parameters with minimal overhead and no backward pass required for layer selection. CLAT also introduces a distinct training objective compared to prior work such as Adversarial Logit Pairing (ALP) [2]. While ALP adds a global logit-level regularizer to align clean and adversarial outputs, CLAT uses a forward-pass-only feature sensitivity metric to select a small subset of layers and applies a layerwise regularizer to penalize their vulnerability. This objective targets internal robustness rather than output-level alignment and constrains optimization structurally and locally, in contrast to ALP’s full-model training. This combination of non-gradient-based sensitivity analysis, feature-level regularization, and sparse fine-tuning defines a lightweight and general framework that is fundamentally distinct from prior approaches. **Marginal improvements:** Please see our response to **Reviewer 9V5S** regarding this concern. **Transformer-based models:** Vision Transformers represent a fundamentally different architectural paradigm with structural properties that diverge significantly from CNNs. While this is an important direction, it is outside the scope of this work, which focuses on the structural role of layer selection in CNN-based adversarial robustness. 
Preliminary results of CLAT on TinyVIT showcase an increase in both clean and adversarial accuracy by approximately 3%. --- **References** [1] Zhu, K., Wang, J., Hu, X., Xie, X., & Yang, G. (2023). *Improving generalization of adversarial training via robust critical fine-tuning*. arXiv preprint [arXiv:2308.02533](https://arxiv.org/abs/2308.02533). [2] Kannan, H., Kurakin, A., & Goodfellow, I. (2018). *Adversarial logit pairing*. arXiv preprint [arXiv:1803.06373](https://arxiv.org/abs/1803.06373).
Summary: This paper proposes a criticality index to identify critical layers that are more prone to perturbation and then apply CLAT to fine-tune these layers for better clean and adversarial accuracy. Results are evaluated on various models, methods and datasets, proving the effectiveness of CLAT. Claims And Evidence: My major concern is that despite the soundness of finding critical layers for emphasized AT, the entire theory is based on existing findings without much advancement. The improvement, although consistent across methods, models and datasets, is relatively limited (~2%). This may also suggest that the identification of critical layers may not contribute as significantly as expected to adversarial robustness. Methods And Evaluation Criteria: Methods are intuitive and easy to understand. Evaluations are primarily comprehensive. Theoretical Claims: There are no outstanding theoretical claims or analysis in this paper. Experimental Designs Or Analyses: Experiments have been persuasive as the paper includes multiple methods and models. It also includes results from ImageNet and randomly selected layers as a comparison with CLAT to demonstrate the soundness of the method, which is satisfactory. However, considering the significance of the former experiments, I would recommend including these results into the main paper. Supplementary Material: Part D.4 to E.2. Relation To Broader Scientific Literature: The idea is consistent with existing research and aligns with scientific consensus within AT and robustness overfitting. Methods are consistent across models and datasets, despite the relatively limited improvement. Overall, the contribution would be moderate. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. Questions For Authors: 1. 
Apart from using randomly selected layers as a comparison group to show the effectiveness of criticality, is there a more straightforward way to show that the chosen layers are the desired vulnerable layers? For example, visualizing the output variation of a number of layers including the critical and non-critical ones. 2. Will the critical layers vary given different perturbations? For example, CLAT uses untargeted perturbation. Will a shift in perturbation result in a different selection of critical layers? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We’re grateful for your insightful comments and for appreciating the care we put into our experiments. Below, we offer detailed responses to each of your points. **Marginal improvement:** We respectfully disagree. Gains of ~2% in adversarial robustness—particularly through fine-tuning—are considered meaningful in recent work (e.g., RiFT [1] reports average improvements of ~1.4%). In addition, CLAT introduces a distinct and lightweight mechanism for identifying structurally important layers, based on their sensitivity to input perturbations. CLAT’s ability to improve robustness from scratch—across training settings and learning rates—while updating under 5% of parameters suggests these layers are inherently robust-relevant, not artifacts. **Critical Layers:** We also validate our layer selection by comparing against low-criticality layers (Appendix E.3). Additionally, thanks for the suggestion—we now include a visualization of the criticality index for RN50 at the start of fine-tuning (post 70 epochs of AT), showing clear separation between high- and low-criticality layers (e.g., 34, 41, 48). Similar patterns hold across architectures and throughout fine-tuning. Link for image: https://imgur.com/a/iNKlttr **Cidx consistency:** The identified critical layers remain stable across perturbation types. For instance, indices computed using AutoAttack closely match those from untargeted PGD for DN121, RN50, and RN18 on CIFAR-10.

| **Network** | **PGD CIDX** | **AA CIDX** |
|-------------|------------------------|------------------------|
| DN121 | [39, 14, 1, 3, 88] | [39, 14, 1, 3, 88] |
| RN50 | [34, 41, 48, 3, 36] | [34, 41, 48, 3, 36] |
| RN18 | [11, 10, 4, 2, 12] | [11, 10, 4, 2, 12] |

**Additional Results:** We agree and are happy to move the ImageNet and random ablations into the main paper, space permitting. They were placed in the appendix due to the broader use of CIFAR datasets for baseline comparison.
[1] Zhu, K., Wang, J., Hu, X., Xie, X., & Yang, G. (2023). *Improving generalization of adversarial training via robust critical fine-tuning*. arXiv preprint arXiv:2308.02533. --- Rebuttal Comment 1.1: Comment: I am grateful for the authors' response. All my concerns are well-addressed.
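The CIDX consistency reported in the table above reduces to comparing top-k index selections computed under two different attacks; a minimal sketch of that check (the score values below are hypothetical, invented only to illustrate the comparison, not taken from the paper):

```python
def top_k_critical(scores, k):
    # indices of the k layers with the largest criticality scores
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# hypothetical per-layer criticality profiles under two attacks
pgd_scores = [0.10, 0.92, 0.31, 0.85, 0.22]
aa_scores = [0.12, 0.88, 0.30, 0.79, 0.18]

# stability check: do both attacks pick the same critical layers?
stable = top_k_critical(pgd_scores, 3) == top_k_critical(aa_scores, 3)
```

Because only the ranking of scores matters, two attacks can produce different raw magnitudes yet still select identical layer sets, which is what the DN121/RN50/RN18 rows above show.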
Reasoning-as-Logic-Units: Scaling Test-Time Reasoning in Large Language Models Through Logic Unit Alignment
Accept (poster)
Summary: This paper introduces Reasoning-as-Logic-Units (RaLU), a novel test-time reasoning framework designed to address hallucinations in LLM reasoning and enhance their performance in mathematical and coding reasoning tasks. Specifically, RaLU consists of three parts. Logic Unit Extraction: begins by generating an initial program and constructing a Control Flow Graph (CFG) using static code analysis. The CFG is then decomposed into discrete logic units, each representing a self-contained computational intent. This decomposition allows for structured refinement of the program's logic. Logic Unit Alignment: engages in iterative dialogue with the LLM to judge, explain, and correct each logic unit. If a unit is incorrect, it is refined and re-evaluated. This process continues until all units are validated or a predefined threshold is reached. Solution Synthesis: synthesizes the validated units into a coherent reasoning path and generates the final solution. This ensures that the final solution inherits the logical rigor of the program while retaining the interpretability of natural language. The authors conduct experiments on several mathematical reasoning and code generation datasets. Claims And Evidence: RaLU significantly reduces reasoning hallucinations - RaLU outperforms existing baseline methods - The authors give an error analysis where the proposed method identifies and corrects the errors. - It also gives a theoretical explanation. RaLU enhances the accuracy and interpretability - It achieves good performance on each benchmark. - It can generate readable explanations, improving transparency and interpretability. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I have checked Experimental Designs Or Analyses: The experimental design is reasonable. Supplementary Material: Yes, I have read the appendix Relation To Broader Scientific Literature: The main focus of this paper is to organize and plan the sub-steps of multi-step reasoning through prompt engineering.
From this perspective, the proposed method is relatively similar to previous approaches. The difference lies in the fact that this paper uses logic units as interpretable, verifiable units for reasoning, thereby introducing new elements to the control flow of reasoning. Essential References Not Discussed: No Other Strengths And Weaknesses: See above Other Comments Or Suggestions: See above Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
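As a rough illustration of the decomposition idea reviewed above: the paper builds a full control flow graph via static analysis, whereas this simplified sketch only groups a Python program's top-level statements, giving each control-flow construct its own "logic unit" (an approximation for illustration, not the paper's actual extraction procedure; `ast.unparse` requires Python 3.9+):

```python
import ast

def extract_logic_units(source):
    """Split a program into coarse 'logic units': contiguous simple
    statements are grouped together, and each control-flow construct
    (if/for/while/try/with) becomes its own unit."""
    control = (ast.If, ast.For, ast.While, ast.Try, ast.With)
    units, buffer = [], []
    for node in ast.parse(source).body:
        if isinstance(node, control):
            if buffer:
                units.append(buffer)
                buffer = []
            units.append([node])
        else:
            buffer.append(node)
    if buffer:
        units.append(buffer)
    return [[ast.unparse(n) for n in unit] for unit in units]

program = """
total = 0
n = 5
for i in range(1, n + 1):
    total += i
print(total)
"""
units = extract_logic_units(program)  # 3 units: setup, loop, output
```

Each resulting unit is a self-contained fragment that could then be judged and refined in isolation, which is the role the alignment stage plays in the framework described above.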
Rebuttal 1: Rebuttal: Thank you for the review. To ensure we address your concerns with precision, could you kindly clarify your suggestions? We would be delighted to address any questions you may have and refine our paper accordingly. We look forward to your feedback and reassessment.
Summary: This paper presents a novel test-time scaling framework, Reasoning-as-Logic-Units (RaLU), which consists of three main steps: Logic Unit Extraction (directly generating a program to address the given problem, and using static analysis tools to create a control flow graph to decompose the program into logic units), Logic Unit Alignment (iteratively judging and refining each unit), and Solution Synthesis (generating the final answer according to the conversation history). The authors compare their RaLU with other previous methods on several benchmarks in mathematical reasoning (GSM8K, MATH) and algorithmic reasoning (HumanEval+, MBPP+) across different open-sourced base models (DeepSeekV3, Llama3.3-70B-Instruct and Qwen2.5-72B-Instruct), and RaLU consistently achieves the best accuracy. They also conducted ablation studies, comparing with line-by-line and NL-step, to demonstrate the effectiveness of decomposing the program into logic units and using program-aided techniques. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: I checked the details in the main text, which look sound to me. Supplementary Material: No Relation To Broader Scientific Literature: The paper presents a pioneering test-time scaling framework designed to tackle reasoning hallucinations and enhance the reasoning capabilities of LLMs. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. RaLU achieves great performance on several benchmarks such as GSM8K, MATH, HumanEval+, MBPP+. 2. The ideas of decomposing complex reasoning into unit steps, using both program and natural languages to reduce reasoning hallucinations, enforcing more rigorous and interpretable reasoning steps, and using external tools (creating a CFG) are all great and effective. 3. The paper is well written and structured, and the ideas and methods are clearly presented. Weaknesses: 1.
Since RaLU is a test-time scaling method, it would be better to also compare the inference cost (e.g., total time cost, token consumption) with previous methods to see the computation-accuracy tradeoff. Other Comments Or Suggestions: 1. The format of equation (4) is wrong. 2. In (6), should $1-\beta$ be replaced with $\beta$? Questions For Authors: Why, in (4) and (5), are the probabilities of each token added instead of multiplied (or, equivalently, the log probabilities added)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are sincerely grateful for your recognition of RaLU’s contributions. Many thanks for your constructive comments, which help us enhance this work.

# Questions

We appreciate the reviewer's insightful question. Let’s use perplexity = exp(−1 × mean(token log-probabilities)) as another metric based on “multiplying”. We have supplemented an ablation study, which revealed that the impact of the selection strategy is marginal. Specifically, we conducted experiments using Qwen-72B-Instruct on the MATH dataset because the MATH dataset is complex enough, and there are enough cases (89/700) where the branch reaches the threshold, so different selection strategies may have a relatively significant impact on the final result. We used three comparison strategies: random selection, choosing the candidate with the minimum perplexity, and choosing the last one. The accuracy results on the 89 cases are as follows.

| Confidence (original) | Random | Perplexity | Last |
|-|-|-|-|
| 42/89=0.472 | 40/89=0.449 | 45/89=**0.506** | 38/89=0.427 |

This can be attributed to two key factors:
- During self-correction iterations, the LLM tends to produce tokens with consistently high probabilities (the average probability >0.9 for most responses in our analysis). This results in minimal variance between the average probability (our confidence score) and perplexity (geometric mean equivalent) metrics - their Pearson correlation reaches 0.93. Essentially, both metrics reflect similar confidence patterns.
- Through qualitative analysis, we observed that many candidate units generated within budget limits are functionally equivalent variants differing only in implementation details. This semantic equivalence explains why random selection only causes minor performance degradation.

While perplexity-based selection slightly outperforms our original strategy (+0.9%), the limited gain suggests that the verification-revision loop in Stage 2 already filters out most critical errors before selection occurs.
It also indicates that multiplying can be an insightful metric and deserves consideration. In the revised appendix, we will add relevant discussions and provide different candidate selection strategies in the code to accommodate various scenarios. Thank you again for helping us enhance this work. # Strengths And Weaknesses **Inference cost**: We appreciate the reviewer's insightful comment. We list the average token consumption of RaLU and the baselines on MATH-np using Qwen-72B-Instruct. RaLU consumes 15x the tokens of CoT, while saving 10x tokens compared to multi-path reasoning baselines such as Self-Check and ToT. We will include the information about token cost in the Appendix.

| Token | Direct | CoT | ToT | PoT | SC | SCal | SCheck | RaLU |
| - | - | - | - | - | - | - | - | - |
| Input | $8 \times 10^{4}$ | $8 \times 10^{4}$ | $7.5 \times 10^{7}$ | $2 \times 10^{5}$ | $1.3 \times 10^{6}$ | $1.5 \times 10^{5}$ | $2 \times 10^{7}$ | $4.8 \times 10^{6}$ |
| Output | $7 \times 10^{4}$ | $1.5 \times 10^{5}$ | $1.2 \times 10^{7}$ | $9 \times 10^{4}$ | $3 \times 10^{6}$ | $8 \times 10^{4}$ | $1.4 \times 10^{6}$ | $7 \times 10^{5}$ |

# Other Comments
1. Thanks for pointing out the error in equation (4). We correct it as $\text{Conf}(\tilde{\mathcal{U}}) = \frac{1}{n}\sum_{j=1}^{n} \sigma(lp_j)$.
2. Thank you very much for pointing out our clerical error. In the Appendix, $1-\beta$ also needs to be replaced with $\beta$; the rest of the derivation process and conclusions remain unchanged. Thank you very much for your meticulous review.
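As a concrete illustration (our own sketch with made-up token probabilities, not the authors' code), the two scoring strategies compared in this exchange, averaging token probabilities versus multiplying them, can be computed as follows:

```python
import math

# Toy per-token probabilities of one candidate response (illustrative values).
probs = [0.95, 0.92, 0.97, 0.90, 0.94]

# Confidence: arithmetic mean of the token probabilities ("adding").
confidence = sum(probs) / len(probs)

# Perplexity: exp of the negative mean log-probability ("multiplying");
# its reciprocal is the geometric mean of the token probabilities.
perplexity = math.exp(-sum(math.log(p) for p in probs) / len(probs))
geometric_mean = 1.0 / perplexity

# For uniformly high probabilities the two scores are close, which matches
# the high Pearson correlation reported in the rebuttal.
print(round(confidence, 4), round(geometric_mean, 4))
```

When most tokens have probability above 0.9, the arithmetic and geometric means nearly coincide, which is why the choice between the two metrics has only a marginal effect.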
Summary: This paper introduces a novel prompt-engineering-based approach, Reasoning-as-Logic-Units (RaLU), which consists of three key components: 1) Logic Unit Extraction, 2) Logic Unit Alignment, and 3) Solution Synthesis, to enhance the reasoning capability of LLMs. RaLU decomposes the task into multiple logic units; by aligning the code statements within each logic unit with the natural-language task specification, it mitigates reasoning hallucinations in the model, and it then synthesizes all the logic units to generate the final solution. Experimental results demonstrate that RaLU outperforms existing baselines on benchmarks for both mathematical and code generation tasks. Claims And Evidence: 1. Most of the claims are well evaluated. However, there is a small concern in Section 4.2, RaLU vs. Self-Correction Methods: “Many existing self-correction-based methods (e.g., Self-Refine and SelfDebug), often degrade performance by introducing errors into initially correct responses–a flaw exacerbated by their assumption of imperfection existence in the initial response attempt.” I am not sure whether this assumption is correct; more evidence should be given. Additionally, does the baseline (SCheck/SD⋆) used in this paper also adopt this assumption? 2. The initially generated program largely determines the quality of the final result, as all logic units are extracted from it. Therefore, experiments should be conducted multiple times to reduce the impact of randomness. 3. In the process of judging the correctness of each logic unit, if the budget limit is reached, the method chooses the candidate with the highest confidence score as the correct one. There should be an experiment comparing this with another strategy, e.g., choosing a candidate randomly. Methods And Evaluation Criteria: Yes. The methods make sense for enhancing the reasoning ability of LLMs to perform better on both mathematical and code generation tasks.
Theoretical Claims: The theoretical claims in this paper are reasonable. Experimental Designs Or Analyses: Overall, the experimental design is reasonable. However, some issues remain, such as the design of the ablation study, the validation of the effectiveness of certain procedures, and the impact of randomness. Supplementary Material: The examples provided in the supplementary materials are detailed and effectively aid in understanding the methods presented in the paper. Relation To Broader Scientific Literature: This paper presents an effective approach to enhance model reasoning capabilities by decomposing a task into multiple parts or several sub-tasks and solving them sequentially. This approach can be considered a general framework applicable to a wide range of code-related tasks. Essential References Not Discussed: The paper ‘ChatRepair’ has been referred to in this paper, but the discussion of it is insufficient. The method proposed in this paper relies entirely on the model's intrinsic reasoning ability, whereas ChatRepair serves as a representative approach that leverages external feedback to guide reasoning. On the benchmark of code generation, it would be beneficial to include ChatRepair as a baseline for comparative evaluation. Otherwise, a more detailed discussion should be provided. Other Strengths And Weaknesses: The approach is novel in integrating the CFG and extracting logic units from it to align the code statements within each logic unit with the natural-language task specification. Experimental results have demonstrated that RaLU outperforms existing baselines on the chosen benchmarks. Potential Weakness: 1. Experiments should be conducted with multiple random seeds. 2. No specific experiments demonstrate the effectiveness of selecting the candidate with the highest confidence score. Other Comments Or Suggestions: N.A. Questions For Authors: 1.
Can you report the performance over multiple runs to remove the potential risk of randomness? 2. Can you conduct specific experiments to demonstrate the effectiveness of selecting the candidate with the highest confidence score, e.g., by comparing with random selection? 3. How does ChatRepair perform on the code generation benchmarks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer KWVa for the constructive feedback on our work. We have carefully considered all comments and hope our point-by-point response can address your questions. ## Questions 1. **Multiple runs of experiments**: Thank you for your feedback! Given the inherent stochastic nature of LLMs and the absence of seed support in the official API, we additionally conducted two independent trials with Qwen-72B-Instruct on the MATH and Mbpp/Mbpp+ benchmarks (temperature=0.7, identical prompting strategies) to further validate the robustness of RaLU under different model configurations. While these preliminary results (reported as mean ± std in Table 2) align with our prior findings, we will finalize three full replications across all models and datasets before publication to ensure statistical rigor. This iterative process strengthens our confidence in RaLU's consistent performance across varying model architectures and task domains.

| Benchmark | Direct | CoT | ToT | PoT/SR\* | SC | SCal | SCheck/SD | RaLU |
| - | - | - | - | - | - | - | - | - |
| Mbpp | 0.923±0.0070 | 0.895±0.0014 | 0.905±0.0045 | 0.860±0.0046 | 0.922±0.0033 | 0.924±0.0038 | 0.905±0.0038 | **0.957±0.0012** |
| Mbpp+ | 0.788±0.0021 | 0.761±0.0052 | 0.772±0.0046 | 0.725±0.0038 | 0.779±0.0061 | 0.787±0.0122 | 0.750±0.0349 | **0.856±0.0017** |
| Math-np | 0.705±0.0065 | 0.701±0.0085 | 0.695±0.0045 | 0.743±0.0223 | 0.764±0.014 | 0.725±0.0065 | 0.685±0.0088 | **0.803±0.0085** |

2. **Ablation Study of Candidate Unit Selection**: Thank you for your insightful suggestion. We conducted ablation experiments using Qwen-72B-Instruct on MATH, which is sufficiently complex and has enough cases (89/700) where the branch reaches the threshold. We used three comparison strategies: random selection, choosing the candidate with the minimum perplexity, and choosing the last one. The accuracy results are as follows.
| Confidence (original) | Random | Perplexity | Last |
| - | - | - | - |
| 42/89=0.472 | 40/89=0.449 | 45/89=**0.506** | 38/89=0.427 |

The results revealed that the impact of the selection strategy is marginal. During self-correction iterations, LLMs tend to produce tokens with high probabilities. This results in minimal variance between the average probability (confidence) and perplexity (the geometric-mean equivalent). Through qualitative analysis, we observed that many candidate units generated within budget limits are functionally equivalent variants differing only in implementation details. This semantic equivalence explains why random selection only causes minor performance degradation. Due to limited space, we provide a more detailed description in our response to Reviewer jMQo, Questions part, for your reference. 3. **ChatRepair Comparison**: Thank you for your suggestion. We will discuss the differences between our method and ChatRepair-like approaches more comprehensively. As you mentioned, the core design concepts of the two are distinct. Our approach relies entirely on the model's intrinsic reasoning capabilities, whereas ChatRepair depends on external feedback for corrections. In practice, it is challenging to obtain comprehensive test cases. ChatRepair also struggles with complex mathematical reasoning and may cause overfitting and data leakage by using test data. As a feedback-based correction method, ChatRepair does not sufficiently align with our method in terms of problem assumptions, input constraints, and evaluation metrics. Our method has been comprehensively compared with relevant baselines for test-time reasoning enhancement; the baseline papers also did not compare against ChatRepair. Also, ChatRepair is an Automated Program Repair tool that aims to generate patches for buggy programs rather than to perform code generation or reasoning. ## Claims 1. **RaLU vs.
Self-Correction Methods**: We appreciate the reviewer's insightful feedback regarding our analysis of self-correction methods. We will make the expression more rigorous by changing it to: “Many existing self-correction-based methods (e.g., Self-Refine, Self-Debug) implicitly encourage differences between self-corrected responses and initial responses, which can potentially introduce errors into initially correct responses.” According to their original papers, Self-Refine and Self-Debug prompt the LLM to fix the response based on self-generated or external feedback and adopt the newly generated response as the final result. On the other hand, Self-Check generates multiple candidate steps after information extraction and other procedures. It then decides on each step through voting, maintaining the possibility of retaining the original response. In Self-Check, different candidate responses are treated equally. Our method also allows retaining the original response to mitigate the issue of introducing errors into a correct response. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in addressing my comments, including the new results and clarifications. I would like to increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for your kind words and willingness to increase the score! We greatly value your constructive feedback, which has been instrumental in improving our work.
Summary: The paper proposes a novel prompting/structured-reasoning method (RaLU) that mitigates reasoning inconsistencies within the generated LLM output by introducing an alignment module (alignment between the task and the generated code) and a self-refinement module (decomposing code into logical units and iteratively refining them in context with LLM judges) for structured and decomposable reasoning. The results of the paper show SOTA performance, with gains on 4 math and algorithmic reasoning tasks. Further empirical evidence shows that structuring the reasoning process with initial code and decomposing the code with a CFG is essential for achieving strong performance. Claims And Evidence: All of the empirical claims are well supported by evaluating the method against diverse baselines that include CoT reasoning and its variations (ToT), structured reasoning (PoT), and self-refinement. The method shows performance gains across 3 different models compared to all of the tested techniques. Further ablations are also consistent, showing that the key components of the method (decomposition with a CFG and writing code) are necessary for the reasoning paradigm. With this said, the theoretical explanation and claims seem either to overstate the contribution of the method's components or to be handwavy (mainly Section 3): 1) While the identified three types of reasoning inconsistencies seem correct, how can we be sure that those modes are exhaustive or cover the majority of reasoning inconsistency types? Is there any analysis w.r.t. the identified/presented error types? 2) Across the method explanation (both in the intro and Section 3), the authors repeatedly mention that the solution synthesis (final step) results in "verified" (each node is verified) reasoning paths. However, as all of the units are judged and refined with an LLM and are not deterministically confirmed to be true, the statement seems a tad strong.
(Further questions and concerns can be explored in the Theoretical claims section) 3) While the LLM-judge approach has shown some performance yields, it has also been shown [1,2,3] that there is a self-preference bias within LLMs, which is amplified during the process of self-refinement. How do the self-refinement and self-judging modules of RaLU stack up against this phenomenon? Has it been tested? 4) After judging and refining all of the segmented units, they are recombined (final synthesis) into a new program. However, after individual refinement, the units are aligned with the task, yet are they aligned and consistent with respect to each other? Can we just concatenate the refined chunks of code and obtain an executable program? Does the code after the final synthesis step explicitly and verifiably include all of the chunks from the refined units? 5) How does the written CFG heuristic segment deeply nested loops and branches? How can varying levels of nesting be recombined after refinement? Are those decisions left to the LLM?

[1] Panickssery, A., Bowman, S.R., Feng, S. (2024). LLM Evaluators Recognize and Favor Their Own Generations. arXiv preprint arXiv:2404.13076.
[2] Xu, W., Zhu, G., Zhao, X., Pan, L., Li, L., Wang, W.Y. (2024). Pride and Prejudice: LLM Amplifies Self-Bias in Self-Refinement. arXiv preprint arXiv:2402.11436.
[3] Wataoka, K., Takahashi, T., Ri, R. (2024). Self-Preference Bias in LLM-as-a-Judge. arXiv preprint arXiv:2410.21819.

Methods And Evaluation Criteria: The chosen datasets, benchmarked methods, and evaluations are relevant to the task explored in the paper, although the choice of the models and the datasets is not explicitly systematic or well justified. (2 additional questions w.r.t.
this are in the questions section) Theoretical Claims: The main theoretical claim of the paper is a "Bayesian" argument that the self-refined (self-repaired) unit (and subsequently a solution) is more likely to be correct than its unrefined counterpart. Both of these arguments (Sections 3.2-3.3) rest heavily on assumptions about the optimality of the LLM judge and the LLM refinement process. The Bayesian argument only works if the expected performance of the Judge is better than random and indeed much better than the judgment/generation of the model that outputted the initial program. Given that the Judge and the initial generator are the same LLM with different contexts, I cannot really consider the Bayesian-lens argument to have full mathematical rigour. Experimental Designs Or Analyses: The ablations and experimental designs are sound. Supplementary Material: I had to review all of Appendix A to understand the theoretical/Bayesian argument about applying self-repair. Relation To Broader Scientific Literature: The paper contributes a novel prompting/structured reasoning method for tackling complex reasoning tasks with controllable modules for decomposition, task alignment, and self-referential improvement. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The Bayesian explanation/argument (both in the main paper and the appendix) seems rather handwavy and makes very strong assumptions w.r.t. the self-judging and self-refinement processes (questions and concerns above). Please either add more mathematical rigour and mention the assumptions explicitly, or replace the section. Questions For Authors: 1) Why were these particular models chosen? How does the idea scale for models <=30B and 100B+ (outside of DeepSeek)? 2) Can the authors mention the average token (amount) difference between CoT, PoT, and RaLU? How efficient is it to use RaLU?
3) Can the authors provide additional benchmarks that would show the consistency of the method? GSM8k, MATH, and HumanEval are well-saturated benchmarks; benchmarks such as AQUA, ProofWriter, AR-LSAT, etc. might be better suited for testing the method. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your valuable comments and hope this response can address your questions. # Questions 1. We selected the latest models from three renowned open-source families. The effectiveness of RaLU is not directly tied to the model's size but rather depends on the model's reasoning capabilities and its ability to follow detailed instructions. We supplemented experiments with Qwen-14B on MATH and Mbpp/Mbpp+. As shown in the table, RaLU still provides significant improvements for smaller yet capable models.

|Benchmark|Direct|CoT|ToT|PoT/SR\*|SC|SCal|SCheck/SD|RaLU|
|-|-|-|-|-|-|-|-|-|
|Mbpp|0.840|0.860|0.831|0.804|0.868|0.852|0.852|**0.902**|
|Mbpp+|0.725|0.733|0.720|0.698|0.754|0.706|0.714|**0.839**|
|MATH-np|0.603|0.691|0.651|0.731|0.751|0.710|0.593|**0.784**|

2. Due to limited space, please refer to our response to **Reviewer jMQo, Strengths And Weaknesses**.
3. We have added AQUA on Qwen-72B-Instruct. Impressively, our RaLU framework continues to achieve the best performance. We agree that more benchmarks would further validate RaLU's robustness and plan to include AQUA in our revised version.

|Benchmark|Direct|CoT|ToT|PoT|SC|SCal|SCheck|RaLU|
|-|-|-|-|-|-|-|-|-|
|AQUA|0.764|0.799|0.791|0.807|0.779|0.811|0.772|**0.846**|

# Claims
1. Thank you for your insights on "reasoning hallucinations." Here, they primarily refer to the logic mismatch between code- and NL-based reasoning, the key challenge RaLU addresses. We do not cover traditional logical inconsistencies (e.g., a reasoning-final answer mismatch) and will clarify this in the revised paper. Given the difficulty of exhaustively categorizing reasoning errors, we adopt a top-down approach: abstracting reasoning hallucinations as disruptions to one-to-one sequence mappings (e.g., "12345"$\leftrightarrow$"abcde"). These fall into three core types:
- Element errors
- Missing/redundant elements
- Sequence misordering

Other errors can be viewed as combinations of these.
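As a toy illustration of these three disruption types (our own construction, not code from the paper or rebuttal), consider how a candidate sequence can deviate from the reference mapping "12345" ↔ "abcde":

```python
# Coarsely classify how a candidate mapping deviates from the gold sequence.
# Purely illustrative; not the paper's actual error taxonomy.
def classify(candidate, gold="abcde"):
    if candidate == gold:
        return "correct"
    if sorted(candidate) == sorted(gold):
        return "sequence misordering"        # same elements, wrong order
    if len(candidate) != len(gold):
        return "missing/redundant elements"  # an element dropped or duplicated
    return "element error"                   # same length, an element replaced

print(classify("abxde"))  # element error
print(classify("abde"))   # missing/redundant elements
print(classify("abdce"))  # sequence misordering
```

Compound failures would trigger whichever check fires first, matching the rebuttal's point that other errors are combinations of these three basic types.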
(Note: We’ll add "or vice versa" to "1) accurate NL step…".) We acknowledge the theoretical challenge of covering all error types due to real-world complexity. If there are hallucinations independent of the three types, we'll supplement experiments to further analyze whether RaLU can mitigate the new type. Our classification aims to guide technical improvements: even if marginal types exist, solving these three significantly boosts reliability, and the methodology is extensible. 2. We will replace "verified" with "self-verified" to enhance the rigor of the text, emphasizing that this is an internal validation process relying only on the LLM itself, to prevent such ambiguity. 3. Although the LLM in RaLU may misjudge a unit, this is not a self-preference bias. Self-preference bias occurs when the LLM favors answers aligned with its training distribution, deviating from human preferences [1]. In RaLU, self-judgment only assesses correctness; misclassifying an incorrect unit as correct is an error (not a bias), affecting final accuracy rather than human-aligned processes. A more relevant issue might be confidently incorrect predictions, where the LLM overestimates its answers' correctness. To mitigate this, RaLU separates program generation and judgment/refinement into different dialogues, obscuring the units' source. Additionally, units are extracted from generated programs with modified indicators, reducing overconfidence in familiar distributions. [1] S. Dhuliawala, et al., "A Diachronic Perspective on User Trust in AI under Uncertainty" 4. The questions revolve around alignment: cross-unit alignment, inter-statement alignment, and unit-to-program alignment. RaLU employs conditional generation. Though it doesn’t guarantee absolute consistency, its strength lies in dynamic context modeling and flexible dependency capture, avoiding the rigidity of hard constraints.
- Cross-Unit: Units are constrained by prior ones during refinement, aided by the Transformer’s self-attention for implicit semantic linking. For example, variable-name changes in earlier units propagate to later ones. Hard constraints risk inconsistencies if dependencies are incomplete.
- Code: Theoretically feasible, but RaLU prioritizes logic over syntax. Forced concatenation imposes rigid boundaries, whereas LLM regeneration dynamically optimizes interfaces (e.g., auto-completing variables) and adheres to syntax rules via pre-trained knowledge.
- Synthesis: Conditional generation may introduce minor misalignment, but RaLU’s verification units steer the LLM toward valid solutions. Hard constraints risk overfitting to rules at the expense of semantics (e.g., redundant type conversions). Key logic (e.g., boundary handling) is preserved via attention weights, while non-critical parts are optimized, mimicking human programming cognition.

5. The decision is not made by the LLM. Each node in the CFG has at most 2 children (the if- and else-branches); we traverse the CFG using depth-first search, organizing the nodes along the way into units in the order they are entered.
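A minimal sketch of this traversal (our reading of the description above, not the released code; the toy CFG is hypothetical):

```python
# Depth-first traversal of a CFG where each node has at most two children
# (the if- and else-branches); nodes are collected into units in visit order.
def dfs_units(cfg, root):
    order, stack, seen = [], [root], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # Push the else-branch first so the if-branch is visited first.
        for child in reversed(cfg.get(node, [])):
            stack.append(child)
    return order

# Toy CFG: node 0 branches to 1 (if) and 2 (else); both rejoin at node 3.
cfg = {0: [1, 2], 1: [3], 2: [3]}
print(dfs_units(cfg, 0))  # [0, 1, 3, 2]
```

Because traversal order is deterministic once the CFG is fixed, the resulting unit ordering requires no decision from the LLM, as the rebuttal states.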
DEALing with Image Reconstruction: Deep Attentive Least Squares
Accept (poster)
Summary: This paper introduces Deep Attentive Least Squares (DEAL), a novel data-driven image reconstruction method that bridges traditional regularization techniques with modern deep learning. The authors propose an alternative to complex, highly parameterized deep architectures by leveraging the principles of classic Tikhonov regularization. The key idea is to iteratively refine intermediate reconstructions by solving a sequence of quadratic optimization problems. The method achieves state-of-the-art performance comparable to leading plug-and-play and learned regularizer approaches, while offering additional benefits such as interpretability, robustness, and provable convergence behavior. ## Update After Rebuttal I would like to thank the authors for their detailed response and their efforts to address the issues raised in the initial review. I will maintain my recommendation to weak accept. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence, including theoretical proofs, extensive experiments, and comparisons with state-of-the-art methods. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-aligned with the problem of image reconstruction. The use of standard datasets, metrics, and comparisons with state-of-the-art methods ensures that the evaluation is rigorous and meaningful. Theoretical Claims: The theoretical claims are well-supported by correct proofs, the authors provide a rigorous theoretical foundation for their method. The theoretical claims like Proposition 4.1 (uniqueness of updates), Lemma 4.2 (Lipschitz continuity), Theorem 4.3 (existence of fixed points), and Theorem 4.4 (exponential convergence) are supported by standard mathematical tools such as positive definiteness, Lipschitz continuity, and the Banach fixed-point theorem. These claims are thoroughly explained in both the main text and the supplementary material. 
Some assumptions, like ker(H)∩ker(W)=0 in the manuscript, are reasonable and empirically supported too. Experimental Designs Or Analyses: The experimental design and analyses are generally sound, but additional ablation studies and scalability tests could further strengthen the paper: 1. A more detailed analysis of the impact of the attention mechanism and spline parameterization 2. More results to show the scalability of DEAL Supplementary Material: The supplementary material is comprehensive and effectively reinforces the claims made in the main paper. It provides additional detailed experimental results, further validating the method's effectiveness. Relation To Broader Scientific Literature: DEAL's contributions are well-grounded in the broader literature, building on and extending prior work in iterative refinement, learned regularization, and theoretical analysis. The novel idea of iterative refinement with learned filters and an attention mechanism is particularly noteworthy and has the potential to inspire future research in this area. Essential References Not Discussed: While the paper provides a solid foundation, it could benefit from a more detailed discussion of attention mechanisms in image restoration in the related works section. For instance, citing seminal works such as "Attention-Guided CNN for Image Denoising" would help contextualize the proposed method within the broader landscape of attention-based approaches in image restoration. Other Strengths And Weaknesses: Strengths: DEAL combines traditional regularization with deep learning in a novel way; this is a creative and original approach. The paper is well-written and clearly presents the method, theoretical analysis, and experimental results. The visualizations and supplementary material enhance the clarity of the presentation. Weakness: Some details could be further refined and supplemented. Other Comments Or Suggestions: 1.
The font size in Figures 1 and 2 is somewhat small, which may hinder readability. Enlarging the font or providing higher-resolution versions of these figures would improve the reader's experience. 2. While the paper includes some ablation studies (e.g., filter size, number of filters), a more comprehensive analysis of the impact of different components (e.g., attention mechanism, spline parameterization) could further strengthen the claims. 3. Although DEAL demonstrates competitive performance, there is still a noticeable gap compared to methods like DRUNet (2022). To better highlight DEAL's advantages, the authors could include a table comparing the number of parameters, computational efficiency, and performance metrics (e.g., PSNR, SSIM) between DEAL and state-of-the-art methods. This would provide a clearer picture of DEAL's strengths, particularly in terms of efficiency and scalability. 4. Given the close connection between DEAL and attention mechanisms, it would be beneficial to add a dedicated section on attention mechanisms in image restoration in the related works. This would help situate DEAL within the broader context of attention-based approaches and highlight its unique contributions. Questions For Authors: While DEAL is shown to be efficient for moderate-sized problems, I’m curious about its performance on very large-scale datasets (e.g., high-resolution medical images). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and feedback and are glad that DEAL is recognized as creative and original. To summarize our responses:
- We added a paragraph to the related work focusing on attention mechanisms.
- We performed a new experiment to survey the scalability of DEAL for large medical images.
- We provided more ablation on the impact of the attention mechanism.

> 1. The font size in Figures 1 and 2 is somewhat small...

We will enlarge the font sizes and improve the resolution of the figures.

> 2. While the paper includes some ablation studies ...

To highlight the impact of the attention mechanism and the spline parameterization, we added the following ablation:
- The attention mechanism is crucial for DEAL. Without it (M = Identity), the filter bank collapses to a single high-order Laplacian, leading to a huge performance drop, e.g., around 4 dB in denoising for $\sigma_n = 25$ on our grayscale test set.
- Since we use a shallow CNN for mask generation, we cannot achieve good performance with conventional nonlinearities. Therefore, we use the deep spline framework of [1]. We initially learned the splines with zero initialization and no constraints. There, we observed that $\varphi_1$ and $\varphi_2$ tended toward symmetric potentials, while the last $\{\phi_c\}$ resembled cut-off functions. These observations led to our proposed setup for these splines as described in the paper. The total number of parameters for all the learnable splines is less than 2000, and adding more spline knots (parameters) does not improve performance.

> 3. Although DEAL demonstrates competitive performance ...

We augmented our tables and added more metrics, including the number of parameters, to further clarify our comparisons with SOTA methods (see the updated table in point 2 for reviewer dGc6). We also provide a new large-scale experiment for the MRI setup to showcase the scalability of DEAL (see our response under [Questions For Authors]).

> 4.
Given the close connection between DEAL and attention mechanisms... We will add this paragraph to the related work section: **Attention Mechanism**: Originally popular in natural language processing, attention mechanisms are now widely used in image processing. Two main approaches exist: (i) patch-based attention, where images are divided into patches and processed using scaled dot-product attention, as in SwinIR [2] and Restormer [3]; and (ii) point-wise multiplicative attention, which integrates attention directly into the architecture, as in attention-guided CNNs [4]. While patch-based methods effectively capture long-range dependencies, they introduce significant computational overhead due to the need to compute inner products across all patches in a latent space. In contrast, point-wise attention avoids this, making it a more efficient alternative. Our approach to implementing attention is more aligned with methods that rely on point-wise multiplication, rather than the key-query parameterization of transformers. In particular, the way attention is incorporated in DEAL is highly interpretable (see Section 6). We will add more references to the paper's revised version and adapt the writing of Section 6 to place more emphasis on the interpretability of our attention mechanism. > [Questions For Authors]: While DEAL is shown to be efficient for moderate-sized problems... We designed a new MRI experiment to address the question. We use four images of sizes 256x256, 512x512, 1024x1024, and 2048x2048. The largest one is extracted from a high-resolution MRI brain image [5]. The smaller ones are obtained by bicubic downsampling. For the MRI setup, we use 8-fold Cartesian masks of the size of the image, where only around 12 percent of the mask entries are non-zero. We perform our experiments on a Tesla V100-SXM2-32GB GPU. There, DEAL can successfully handle the reconstruction of an image of size 2048x2048. In contrast, Prox-DRUNet fails at this size.
To provide more insights, we add the following table to the paper, where we report the time and memory usage for each of the images in the aforementioned setup. DEAL is consistently faster than Prox-DRUNet with less memory usage.

| Method | 256x256 (Time s) | 256x256 (Mem. GB) | 512x512 (Time s) | 512x512 (Mem. GB) | 1024x1024 (Time s) | 1024x1024 (Mem. GB) | 2048x2048 (Time s) | 2048x2048 (Mem. GB) |
|-|-|-|-|-|-|-|-|-|
| DEAL (Ours) | 6.4 | 0.38 | 36.0 | 1.51 | 173.0 | 6.03 | 1800.0 | 24.11 |
| Prox-DRUNet | 18.5 | 0.87 | 62.1 | 3.38 | 240.0 | 13.35 | NA | NA |

Refs.
[1] Bohra et al., "Learning activation functions in deep (spline) neural networks," 2020.
[2] Liang et al., "SwinIR: Image restoration using Swin Transformer," 2021.
[3] Zamir et al., "Restormer: Efficient transformer for high-resolution image restoration," 2022.
[4] Tian et al., "Attention-guided CNN for image denoising," 2020.
[5] Martinez et al., "BigBrain-MR: a new digital phantom with anatomically-realistic magnetic resonance properties at 100-µm resolution for magnetic resonance methods development," 2023.
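For reference, an 8-fold Cartesian undersampling mask of the kind described in this MRI experiment can be sketched as follows (a hedged construction of ours; the rebuttal does not specify how the sampled lines are chosen):

```python
import random

# Keep 1/8 of the phase-encoding lines (columns) of an n x n k-space grid,
# giving a Cartesian mask where roughly 12% of the entries are non-zero.
random.seed(0)
n = 256
kept = set(random.sample(range(n), n // 8))          # sampled line indices
mask = [[1 if col in kept else 0 for col in range(n)] for _ in range(n)]

density = sum(map(sum, mask)) / (n * n)
print(density)  # 0.125
```

In practice such masks often additionally keep a fully sampled band of low-frequency lines at the center of k-space, which this simple uniform sketch omits.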
Summary: This paper presents a least-squares-type image reconstruction method that is solved with a conjugate gradient algorithm. It consists of an iterative refinement process with two main components: one that estimates the reconstructed image and another that generates a mask, which modulates the response of the prior filter in a spatially adaptive manner for subsequent iterations. Similar to plug-and-play methods, the proposed model is trained on the denoising task and can be universally applied to other image reconstruction tasks, given the forward operator and a validation set on the target task to set its hyperparameters, namely the standard deviation of the noise and the weight assigned to the prior term. The strength of the proposed method is twofold: it achieves near-SOTA performance, comparable to similarly universally pretrained methods, at a relatively low computational complexity, while also ensuring robustness and convergence to a unique solution. ## update after rebuttal I thank the authors for responding to my main concerns. I will maintain my recommendation to accept. Claims And Evidence: - Proofs for the theoretical claims are provided in the supplementary material. - Convergence: The authors refer to Theorem 4.3 and illustrate examples of how convergence to a fixed point occurs for both the relative error and the resulting PSNR of the reconstruction. - Robustness: An example of robustness to various forms of initialization for the 2x super-resolution task is provided. - Interpretability: The authors present a couple of examples of learned filters for the denoising task and offer reasonable interpretations of how the associated masks adapt to the structural features captured by those filters. - Performance: Near-SOTA results are demonstrated on multiple tasks, including denoising, super-resolution, and MRI. Methods And Evaluation Criteria: Yes, they are commonly considered evaluation criteria in the literature.
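The two-component iteration described in the summary (a quadratic image update solved by conjugate gradient, followed by a mask update) can be sketched in a toy 1-D setting. Everything here is illustrative: `H` is an identity (denoising) forward operator, `W` a finite-difference filter, and the mask formula is an invented stand-in for the paper's learned attention CNN, not the authors' model.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxiter=500):
    # Plain conjugate gradient for a symmetric positive-definite A.
    x, r = x0.copy(), b - A @ x0
    p, rs = r.copy(), r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 32
H = np.eye(n)                      # denoising: identity forward operator
W = np.diff(np.eye(n), axis=0)     # finite-difference "filter" as regularizer
lam = 0.5
y = np.sin(np.linspace(0, 3, n)) + 0.1 * rng.standard_normal(n)

x = y.copy()
for _ in range(5):                 # outer refinement loop
    # Toy "attention" mask (invented): damp regularization where gradients
    # are large, i.e., keep edges sharp and smooth flat regions.
    m = 1.0 / (1.0 + 10.0 * np.abs(W @ x))
    A = H.T @ H + lam * W.T @ (m[:, None] ** 2 * W)
    x = cg(A, H.T @ y, x)
print(float(np.mean((x - y) ** 2)))
```

Each outer step solves a least-squares problem whose regularizer is re-weighted by the mask from the previous iterate, which is the spatially adaptive behavior the review describes.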
Theoretical Claims: I did not go through all the proofs provided in the supplementary material. Experimental Designs Or Analyses: Regarding the super-resolution task, would it be fair to expect the compared methods to account for the considered addition of noise? Supplementary Material: I reviewed the initialization settings and the figures as they were discussed in the main text. Relation To Broader Scientific Literature: The proposed least-squares-type image reconstruction method builds on classical regularizers that focus on sparsity in image gradients and parametric models. The method utilizes implicit regularization techniques using plug-and-play denoisers and spatial adaptivity, which allow it to achieve improved performance. Iterative refinement methods and nonlocal Laplacians contribute to its robustness and convergence. Presenting near state-of-the-art performance with low computational complexity makes it well-positioned among efficient neural network architectures for image reconstruction. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: - Originality: The particular approach to formulating the problem and solving it with confidence in convergence to a unique solution seems original. - Clarity: The provided illustrations for the experiments clarify the claims, at least for the limited cases considered. - Significance: Based on the identified related works, the proposed method is a significant improvement in terms of the trade-off between performance and efficiency, while maintaining theoretical convergence properties. Other Comments Or Suggestions: No other comments. Questions For Authors: 1. The authors have chosen only a handful of baselines for the comparisons. Why those specific methods and why only those and not others? 2.
How does the performance compare to SOTA deep learning based methods without theoretical guarantees, such as transformers and diffusion-based methods that claim to solve multiple tasks including those considered in the experiments? 3. Is it reasonable to also test the performance without searching for the considered hyperparameters? In other words, as an additional baseline, how bad does the performance get without access to a validation set? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and feedback and are glad that DEAL is recognized as a significant improvement in the performance-efficiency trade-off within image reconstruction techniques. In summary, - We clarified our choice of comparison methods. - We added diffusion models and transformers as additional comparisons for super-resolution. - We provided insights into hyperparameter sensitivity. > [Experimental Designs Or Analyses] Regarding the super-resolution task, would it be fair to expect the compared methods to account for the considered addition of noise? For super-resolution, we use the exact same setup as ProxDRUNet and use their given optimal hyperparameters, which differ for each noise level. For IRCNN and DPIR, formulas for the optimal hyperparameters are given within their codes based on the noise level for the super-resolution task. For MRI, we follow the WCRR paper's hyperparameter tuning setup with ten validation images to adapt all methods. > 1. Why those specific methods and why only those and not others? The competing methods are selected based on code availability and similar experimental setups. Due to their flexibility, and since our method is also universal, we mainly focused on PnP approaches. Super-resolution: Here, DPIR is the SOTA among PnP methods. DPIR is tied to a fixed number of steps, and its performance will degrade if iterated further. By constraining the underlying DRUNet, ProxDRUNet addresses this problem and obtains convergence guarantees. Instead of a DRUNet, IRCNN deploys a lightweight CNN. We have now added the end-to-end trained transformer SwinIR [1] and the diffusion model DiffPIR [2]. MRI: For medical grayscale images, we also compare with classical TV and the SOTA explicit learned regularization methods WCRR, SARR, and SAFI. > 2. How does the performance compare to SOTA deep learning-based methods without theoretical guarantees...
We added comparisons to the transformer-based SwinIR and diffusion models for super-resolution. End-to-end models like SwinIR are highly sensitive to their training tasks and degrade significantly when conditions change. Diffusion models excel in image generation, but current diffusion-based reconstruction methods often perform worse than DPIR for image super-resolution in terms of the PSNR metric (see Table 2 of the DiffPIR paper [3]). Our super-resolution results confirm these findings (see response point 2 of reviewer dGc6 for the updated table). > 3. Is it reasonable to also test the performance without searching for the considered hyperparameters? In other words, as an additional baseline, how bad does the performance get without access to a validation set? Regarding the two hyperparameters, we can always set the model noise level $ \sigma = 15 $ based on our observations in different setups. In contrast, $\lambda$ must be adapted to the data noise level. This is typical for variational regularization, where $\lambda$ scales inversely with the data noise level $\sigma_n^2$. We investigate this more closely for multi-coil MRI reconstruction at data noise level $\sigma_n = 0.002$. Indeed, the performance depends primarily on $\lambda$. Interestingly, even a 10-fold change in $\lambda$ maintains reasonable reconstruction quality. In general, higher $\lambda$ leads to blurred images, while lower $\lambda$ does not remove the artifacts. In the following table, for the mentioned MRI setup, the reconstruction PSNR is given for different choices of the model noise level $\sigma$ and regularization strength $\lambda$.

| σ \ λ|0.01|0.1|1|10|
|-|-|-|-|-|
| 5|27.00|33.77|32.25|29.28|
|15|32.50|33.88|31.80|28.31|
|25|33.05|33.61|31.42|27.75|
|50|32.96|33.05|30.96|26.75|

Refs.
[1] Liang et al. "SwinIR: Image restoration using swin transformer.", 2021.
[2] Zhu et al. "Denoising diffusion models for plug-and-play image restoration.", 2023.
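The sweep behind such a sensitivity table amounts to picking the (σ, λ) pair that maximizes PSNR on a small validation set. Below is a minimal sketch of that procedure; `reconstruct` is a hypothetical stand-in for the model (here `sigma` is accepted but unused, and `lam` simply shrinks toward the mean), and PSNR follows the standard 10·log10(range²/MSE) definition.

```python
import numpy as np

def psnr(x, ref, data_range=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((x - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def reconstruct(y, sigma, lam):
    # Hypothetical stand-in for the reconstruction model; `sigma` is
    # unused in this toy, while `lam` shrinks the input toward its mean.
    return (y + lam * y.mean()) / (1.0 + lam)

rng = np.random.default_rng(1)
clean = rng.random((16, 16))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

# Grid search over the two hyperparameters on a validation image.
grid = [(psnr(reconstruct(noisy, s, l), clean), s, l)
        for s in (5, 15, 25, 50)
        for l in (0.01, 0.1, 1.0, 10.0)]
best_psnr, best_sigma, best_lam = max(grid)
print(f"best PSNR {best_psnr:.2f} dB at sigma={best_sigma}, lambda={best_lam}")
```

With a real reconstructor, `clean`/`noisy` would be replaced by a handful of held-out validation images, as in the ten-image WCRR tuning setup mentioned in the rebuttal.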
Summary: This paper presents Deep Attentive Least Squares (DEAL), a novel image reconstruction method that bridges traditional signal processing and modern deep learning. DEAL formulates reconstruction as an iterative least squares problem with spatially adaptive regularization, where an attention mechanism dynamically modulates the regularization weight based on the local image structure. The approach efficiently solves a sequence of quadratic problems using a learned multi-convolution filter and a conjugate gradient solver, ensuring both interpretability and computational efficiency. The authors provide rigorous theoretical analysis, proving the uniqueness of solutions, convergence guarantees, and robustness to initialization. Experimentally, DEAL is evaluated on denoising, super-resolution, and MRI reconstruction, achieving competitive results while using significantly fewer parameters than state-of-the-art deep learning models. The study highlights DEAL’s interpretability, universality, and theoretical guarantees, positioning it as a promising alternative to heavily parameterized deep models for inverse imaging problems. Claims And Evidence: 1. Claim: DEAL provides a principled approach to image reconstruction, combining classic signal processing techniques with deep learning insights. Evidence: DEAL is formulated as an iterative least-squares problem with spatially adaptive regularization. The authors draw connections to traditional Tikhonov regularization and modern plug-and-play (PnP) methods, demonstrating how DEAL balances interpretability and performance. 2. Claim: DEAL achieves convergence to a unique fixed point, ensuring stability and robustness. Evidence: The authors provide rigorous theoretical guarantees (Propositions 4.1–4.4), proving uniqueness under mild conditions and showing that the iterative updates lead to a contraction mapping under certain assumptions. Empirically, DEAL consistently converges in experiments, regardless of initialization. 
3. Claim: DEAL outperforms traditional spatially adaptive regularization methods and achieves performance close to deep learning-based approaches while using significantly fewer parameters. Evidence: Extensive experiments on denoising, super-resolution, and MRI reconstruction show that DEAL surpasses non-adaptive regularization methods like WCRR and SARR. Furthermore, Table 1 and Table 2 demonstrate that DEAL’s PSNR scores approach those of DRUNet-based models, despite having 30× fewer parameters. 4. Claim: The learned attention mechanism effectively suppresses regularization in structured image regions, preserving details. Evidence: The authors provide visualizations (Figures 7 and 8), showing how learned masks adapt to image structures. The masks reduce regularization in smooth areas while enforcing it in noisy regions, preventing excessive smoothing of important features. 5. Claim: DEAL is robust to hyperparameters and initialization, making it a reliable reconstruction method. Evidence: Experiments in Figure 5 and Appendix C.3 show that DEAL consistently converges to the same solution regardless of initialization. The model is also shown to be less sensitive to hyperparameter tuning compared to deep learning-based methods. 6. Claim: DEAL generalizes well across different inverse problems without task-specific retraining. Evidence: Unlike deep neural networks that require retraining for each task, DEAL is directly applied to denoising, super-resolution, and MRI reconstruction by simply adjusting two hyperparameters (σandλ). The results in Table 3 and Table 4 confirm its versatility. The claims in this paper are well-supported by both mathematical theory and empirical validation. 
The combination of theoretical convergence guarantees, extensive benchmarking, and qualitative visualizations strongly substantiates DEAL’s effectiveness and robustness for inverse imaging problems Methods And Evaluation Criteria: Methods: The authors of this paper propose the Deep Attentive Least Squares (DEAL) method to address the image reconstruction problem, integrating classical regularization techniques with modern deep learning approaches: 1. Formulation as an Iterative Least Squares Optimization with Adaptive Regularization DEAL formulates the image reconstruction problem as an iterative least squares optimization incorporating a spatially adaptive regularization term. In each iteration, a conjugate gradient (CG) algorithm is employed to solve the quadratic optimization problem efficiently. The regularization term is dynamically adjusted via learned attention weights, allowing the model to adapt to different image regions and enhance reconstruction quality. 2. Spatially Adaptive Regularization via an Attention Mechanism A shallow convolutional neural network (CNN) is used to compute attention weights, which modulate the local impact of the regularization term. The CNN extracts multi-scale features and applies pointwise nonlinear transformations to learn an adaptive weight distribution. This learned attention mechanism allows DEAL to reduce regularization in structured regions while enhancing regularization in noisy areas, thereby preserving critical image details. 3. Multi-Convolution Module for Efficient Feature Extraction DEAL employs a multi-layer convolutional structure to extract spatial features efficiently. These convolutional layers do not incorporate nonlinear activation functions, ensuring a large receptive field while maintaining interpretability. This architecture is used in both the reconstruction process and the mask generation module, allowing for shared feature representation. 4. 
Training on a Denoising Task to Improve Generalization To enhance generalization, DEAL is trained on an additive white Gaussian noise (AWGN) denoising task, where the noise level is set as σ_n ∈ [0, 50]. The training data includes both grayscale and color images, utilizing the BSD68 and CBSD68 datasets. The loss function consists of three components: ① Mean Squared Error (MSE): Ensures consistency between the reconstructed output and the ground truth image. ② Stability Penalty on Attention Weights: Suppresses unstable variations in the regularization adjustment. ③ Total Variation Regularization on Learned Activation Functions: Prevents excessive complexity in the nonlinear transformations. 5. Applicability to Various Inverse Problems Once trained, DEAL can be directly applied to a range of inverse problems, including denoising, super-resolution, and MRI reconstruction, without requiring additional training. The model only requires tuning of two hyperparameters, the regularization strength λ and the noise level σ, to adapt to different tasks. Evaluation Criteria The authors evaluate DEAL using both quantitative and qualitative metrics, comparing its performance across multiple inverse imaging tasks. 1. PSNR PSNR is the primary metric used to assess image quality and is widely applied in inverse imaging tasks. A higher PSNR value indicates a reconstruction closer to the ground truth image. Tables 1, 2, and 3 report the PSNR results of DEAL for denoising, super-resolution, and MRI reconstruction. 2. SSIM SSIM measures the structural similarity between the reconstructed and ground truth images, evaluating perceptual quality. Figures 3 and 5 demonstrate that DEAL preserves fine details better than competing methods. 3.
Convergence and Stability Theoretical analysis (Theorems 4.3 and 4.4) proves that DEAL has a unique solution and guarantees convergence. Experimental results in Figure 4 show that DEAL consistently converges under different initialization conditions. Unlike Plug-and-Play (PnP) approaches (e.g., DPIR), where performance may degrade with excessive iterations, DEAL steadily converges to a fixed point, ensuring stable performance. 4. Comparison with State-of-the-Art Methods DEAL is compared against the following methods: ① Classic methods: BM3D, Total Variation (TV) regularization. ② Learned regularization approaches: WCRR, SARR, SAFI. ③ Deep learning models: DnCNN, DRUNet, Prox-DRUNet. Tables 1, 2, and 3 show that DEAL achieves performance close to DRUNet-based methods while using only 1/30 of the parameters, highlighting its advantage in computational efficiency. 5. Computational Efficiency Table 4 reports the computational time for MRI reconstruction, showing that DEAL is significantly faster than the iterative refinement method SAFI and comparable to the non-adaptive WCRR baseline. Figure 5 further illustrates DEAL's efficiency in super-resolution tasks, demonstrating faster and more stable convergence compared to PnP-based methods. Theoretical Claims: The authors provide rigorous theoretical guarantees for Deep Attentive Least Squares (DEAL), demonstrating its uniqueness, convergence properties, and stability under certain conditions. The theoretical foundation is based on iterative least squares optimization with spatially adaptive regularization.
1. Uniqueness of the Solution Claim: DEAL ensures a unique solution at each iteration, provided that the problem satisfies specific conditions. Supporting Theory: Proposition 4.1 states that if the intersection of the null spaces of H and M(x_k)W contains only the zero vector, i.e., ker(H) ∩ ker(M(x_k)W) = {0}, then the system matrix A_k is positive definite, ensuring the uniqueness of the solution in equation (8). 2. Lipschitz Continuity and Stability of the Iterative Process Claim: The iterative update operator T(x, y) is Lipschitz continuous, ensuring stability across iterations. Supporting Theory: Lemma 4.2 proves that if ker(H) ∩ ker(W) = {0} and $M(x)^2 \succeq \epsilon_M I$, then T(x, y) is Lipschitz continuous with a bounded Lipschitz constant. 3. Existence of a Fixed Point Claim: DEAL converges to a fixed point, ensuring stability and reliability in practical applications. Supporting Theory: Theorem 4.3 establishes that T(x, y) maps into a bounded region and has at least one fixed point, provided that M(x)^2 is Lipschitz continuous with a constant L. 4. Convergence Guarantee and Rate of Convergence Claim: The iterative process exhibits exponential convergence under contraction mapping conditions. Supporting Theory: Theorem 4.4 states that if T(x, y) is a contraction mapping, meaning that for two inputs $x_1$ and $x_2$ the update operator satisfies $\|T(x_1, y) - T(x_2, y)\| \le q \|x_1 - x_2\|$ with $q < 1$, then DEAL converges exponentially to a unique fixed point $\hat{x}$, with a decay rate proportional to $q^{k-1}$. 5. Stability with Respect to Input Perturbations Claim: DEAL is stable under variations in the input measurements y, meaning that small perturbations in the input lead to bounded changes in the reconstruction.
Supporting Theory: Equation (18) states that for two different inputs $y_1$ and $y_2$, the difference between their corresponding reconstructions is bounded as $\|\hat{x} - \hat{z}\| \le \frac{1}{1-q} \frac{\|H\|}{\lambda_\epsilon} \|y_1 - y_2\|$. This result implies that DEAL produces consistent and stable reconstructions even when faced with measurement noise or variations in input data. Experimental Designs Or Analyses: The experimental design in this paper is well-structured and aligned with the research objectives, ensuring a comprehensive evaluation of DEAL across multiple inverse imaging tasks. Below is an assessment of the key experimental components: 1. The study evaluates DEAL on denoising, super-resolution, and MRI reconstruction, aligning well with its iterative least squares formulation and spatially adaptive regularization framework. 2. The authors use BSD68, CBSD68, and fastMRI datasets and compare DEAL against BM3D, TV, WCRR, SARR, SAFI, DnCNN, DRUNet, and Prox-DRUNet, ensuring a fair and comprehensive benchmark. 3. DEAL is trained on an AWGN denoising task with σ_n ∈ [0, 50], using Adam optimization and cosine annealing, but lacks detailed sensitivity analysis on hyperparameters. 4. The results are quantitatively evaluated using PSNR (Tables 1-3), SSIM (Figures 3 & 5), and convergence analysis (Figure 4), but statistical significance tests (e.g., confidence intervals) are not reported. 5. The study provides clear training details and evaluation protocols, demonstrating robustness across different conditions, but code availability is not explicitly mentioned. Supplementary Material: I have reviewed the supplementary material, focusing mainly on Appendices A, B, C, and D. In Appendix A, the authors provide detailed descriptions of the hyperparameter settings and ablation studies for the DEAL method.
Appendix A not only gives specific initialization and configuration of hyperparameters but also experimentally validates the impact of different hyperparameter setups on denoising performance. For example, the authors verify the importance of Nc = 128 filters and show the effect of filter size on performance. These experimental details offer deep insights into tuning the DEAL algorithm and demonstrate its robustness under different configurations. In Appendix B, the authors provide rigorous theoretical proofs including detailed derivations for uniqueness, convergence, and fixed-point existence. These theoretical analyses demonstrate the mathematical robustness and reliability of the DEAL method, providing strong theoretical support for the algorithm's performance presented in the main paper. Specifically, Theorem 4.3 and Theorem 4.4 prove the convergence and exponential convergence rate of DEAL, further enhancing the theoretical credibility of the method. Appendix C provides additional experimental results, including grayscale deblurring tasks, MRI reconstruction, and super-resolution task convergence analysis. In Section C.1, the authors demonstrate the performance of DEAL in the grayscale deblurring task by comparing it with other methods like DPIR, EPLL, and FDN, showing that DEAL performs excellently in this task. The experimental results in Appendix C further validate the DEAL method’s effectiveness in real-world tasks and showcase its robust performance. Appendix D offers visualizations of the DEAL model components, including the learned filters, convolution kernels, and splines. These visualizations help in understanding how DEAL adapts to different image structures and adjusts the image reconstruction process via attention mechanisms and convolution structures under varying noise levels. The figures illustrate how DEAL reduces the contribution of image structures to regularization, maintaining sharpness in the reconstructed images. 
Overall, the supplementary material provides additional experimental results, theoretical derivations, and visualizations of model components, which greatly enhance the credibility of the paper. Relation To Broader Scientific Literature: The contributions of this paper are closely connected to existing research in the field of image reconstruction. The authors propose the DEAL method, which combines the fields-of-experts model (Roth & Black, 2009) and PnP methods (Venkatakrishnan et al., 2013), integrating traditional signal processing and modern deep learning techniques to introduce a novel image reconstruction method. Compared to previous learning-based regularization methods, the proposed DEAL method offers significant advantages in terms of interpretability, computational efficiency, and convergence. The paper correctly cites relevant literature, covering important studies in image reconstruction and regularization, such as the learning paradigms discussed by Chen et al., 2014 and Effland et al., 2020, as well as the parameterization strategies addressed by Goujon et al., 2023 and Zach et al., 2024. The authors also reference research on autoencoders (Li et al., 2020) and algorithm unrolling (Kobler et al., 2022), accurately highlighting the relationship and distinctions between DEAL and these methods. In terms of innovation and impact, the paper contributes by combining classical signal processing techniques with deep learning methods, proposing a new approach that integrates iterative refinement and attention mechanisms. Unlike deep learning methods like DRUNet, DEAL provides convergence and robustness guarantees, with clear advantages in computational efficiency and interpretability. The innovative combination in DEAL positions it with broad potential applications and influence in the field of image reconstruction. Overall, the paper correctly cites relevant literature and proposes an innovative and promising solution in the field of image reconstruction. 
Essential References Not Discussed: The paper does not overlook any critical related research, and all core references relevant to the DEAL method are appropriately cited. The authors correctly reference key works in the field of image reconstruction and regularization, such as the fields-of-experts model and PnP methods, and do not neglect any prior research crucial to this work. So the paper properly attributes existing methods and cites relevant literature without significant omissions. Other Strengths And Weaknesses: Strengths: The paper excels in originality by proposing DEAL, a novel approach that effectively combines traditional signal processing with modern deep learning techniques, providing a clear solution to challenges in image reconstruction. The method’s theoretical guarantees for convergence and robustness add significant value, making it stand out among other methods in the field. The contributions are highly significant, offering improvements in interpretability, computational efficiency, and performance. The paper’s clarity is strong, with well-defined methodology, clear explanations, and comprehensive experimental results that demonstrate the efficacy of the approach. Weaknesses: While the contributions are important, there is a slight limitation in the generalization of the method to other tasks beyond the presented inverse problems, such as super-resolution or more complex imaging scenarios. Additionally, although the paper is well-written, some aspects could benefit from further elaboration, such as the details of the learned attention mechanism and its practical applications in different domains. Despite this, the overall structure and writing are clear and accessible, making the paper easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
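The contraction-mapping convergence summarized under Theoretical Claims above can be checked numerically on a toy affine map: with an orthogonal matrix Q, the operator T(x, y) = q·Qx + (1-q)·y has contraction factor exactly q, so successive-iterate differences shrink geometrically, as Theorem 4.4 predicts. This is purely illustrative; T here is invented, not DEAL's update operator.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 8, 0.5
# QR of a random matrix yields an orthogonal Q, so ||Q v|| = ||v||.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

def T(x, y):
    # Affine contraction with Lipschitz constant exactly q < 1.
    return q * (Q @ x) + (1.0 - q) * y

y = rng.standard_normal(n)
x = np.zeros(n)
errs = []
for _ in range(30):                 # Banach fixed-point iteration
    x_new = T(x, y)
    errs.append(np.linalg.norm(x_new - x))
    x = x_new

# Ratios of successive differences equal the contraction factor q.
ratios = [errs[k + 1] / errs[k] for k in range(10)]
print(min(ratios), max(ratios))
```

The same experiment with a q close to 1 would show the slow geometric decay one expects near the boundary of the contraction condition.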
Rebuttal 1: Rebuttal: We appreciate the reviewer’s feedback. We added more experiments and comparisons to highlight DEAL’s generalization and scalability (see response point 2 to reviewer dGc6 and response point [Questions For Authors] to reviewer Q6VC) and provided additional details on the attention mechanism (see responses point 2 and 4 to reviewer Q6VC). Our code repository will be released with the final paper.
Summary: This paper proposes a novel maximum a posteriori (MAP) method for solving linear inverse problems with Gaussian noise. The proposed method is based on Fields-of-Experts (FoE) regularization [1]. The authors suggest using quadratic potentials and learning the remaining parameters of the FoE regularization. The proposed algorithm is implemented as a neural network, and the regularization parameters are learned through end-to-end training. Main findings: - Using quadratic potentials significantly accelerates the algorithm, enabling the effective learning of the regularization parameters. - The regularization parameters learned from a denoising task can be directly applied to other linear inverse problems, such as super-resolution and MRI reconstruction, with minimal hyperparameter tuning. - The proposed approach achieves reconstruction performance comparable to plug-and-play (PnP) methods. [1] Roth, Stefan, and Michael J. Black. "Fields of experts: A framework for learning image priors." 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 2. IEEE, 2005. ## Update After Rebuttal I want to thank the authors for their detailed response and for the effort they have put into addressing the concerns raised in the initial review. In light of the additional results and explanations provided by the authors, I have updated my score to **"4: Accept."** Claims And Evidence: Yes, I think the claims made in the submission are supported by evidence. Methods And Evaluation Criteria: I have some concerns regarding the evaluation. 1. The key novelty of the proposed method seems to be its ability to (almost) directly apply a model trained on denoising to other inverse problems. If this is the case, evaluating the model on denoising tasks may not fully capture its main contribution. It is useful to see how the proposed denoiser compares to state-of-the-art alternatives.
However, these results could be presented as baseline comparisons rather than performance evaluations. 2. Since the proposed method requires fine-tuning of the hyperparameters $\lambda$ and $\sigma$ for each task, comparing solely to plug-and-play methods may not provide a comprehensive perspective. I suggest including comparisons with recent state-of-the-art neural architectures specifically trained for super-resolution and MRI reconstruction. This would offer a more balanced view of the method's performance. 3. There appears to be a discrepancy in the reported PSNR values for DRUNet on the CBSD68 dataset. The reported PSNR values (33.85 and 31.21 for CBSD68 with sigma 15 and 25, respectively) are approximately 0.45 dB lower than those reported in the original DRUNet paper [1], which lists values of 34.30 and 31.69 under the same conditions. It would be helpful to clarify this inconsistency. 4. The reported results for DPIR in the color super-resolution experiment appear significantly lower than those presented in the original DPIR paper [1] (as shown in Table 8). Additional clarifications on the experimental setup and any differences in implementation would help understand this gap. [1] Zhang, Kai, et al. "Plug-and-play image restoration with deep denoiser prior." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.10 (2021): 6360-6376.
Other Comments Or Suggestions: I have no further comments. Questions For Authors: I have no further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable comments. To summarize our response: - We addressed the discrepancy of our numbers with the DPIR paper (due to cropping); we also added denoising results for the DPIR setup to underline the validity of our results. - We compare with a state-of-the-art neural network for super-resolution to offer a balanced view of DEAL's performance. - We adapt the writing of our paper to highlight the key points of the comments. If our responses have addressed your concerns, we would greatly appreciate it if you could reconsider your rating. > 1. The key novelty of the proposed method... Indeed, denoising is not the main performance benchmark, and we will adjust the writing to emphasize this more. In particular, our results for super-resolution and MRI are far more competitive and on par with SOTA *universal* (trained on denoising and applicable to other tasks) methods. Regarding novelty, beyond its easy extension to various tasks, DEAL bridges classic regularization with deep learning, offering interpretability, theoretical guarantees, and fewer parameters, all key advantages over other ML models. For this, our novel attention mechanism embedded into the FoE model is crucial; learning only the FoE components substantially deteriorates the performance (see response point 2 to reviewer Q6VC). > 2. Since the proposed method requires... Initially, we compared our method with SOTA PnP approaches, ensuring fair hyperparameter tuning across all evaluation tasks (see response point [Experimental Designs Or Analyses] of reviewer on6j). To showcase the benefits of universality, we adapt the super-resolution experiment to include an end-to-end trained transformer SwinIR [3], designed for a bicubic blur kernel. As expected, it is by far the best method within its training regime. However, its performance decreases substantially for other blur kernels and under the addition of noise.
Likewise, for MRI, the literature shows that such models perform poorly outside of their training regime (e.g., DuDoRNet in Table 1 of [5]). Unfortunately, we could not find a well-performing end-to-end model for our specific setup (multi-coil, downsampling factor, noise). Generally, this lack of generalization for end-to-end methods is a major concern, and we highlight this with the new results in our paper.

In the following table, #θ denotes the number of parameters in millions, and the triplets after PSNR are (s, $\sigma_n$, used kernels), where avg4 is the average over four different kernels. The two best numbers are bolded for each column.

|Method|Category|#θ|PSNR (2,0,bicubic)|PSNR (2,2.55,avg4)|PSNR (2,7.65,avg4)|PSNR (2,12.75,avg4)|PSNR (3,0,bicubic)|PSNR (3,2.55,avg4)|PSNR (3,7.65,avg4)|PSNR (3,12.75,avg4)|
|-|-|-|-|-|-|-|-|-|-|-|
|DEAL|Explicit Reg|**0.85**|**29.91**|**27.99**|**26.58**|25.75|**26.83**|**26.20**|**25.27**|24.59|
|Prox-PnP|Conv. PnP|32.64|-|**27.93**|**26.61**|**25.79**|-|**26.13**|**25.29**|**24.67**|
|IRCNN|PnP|**0.19**|29.84|26.97|25.86|25.45|26.74|25.60|25.72|24.38|
|DPIR|PnP|32.64|29.63|27.79|**26.58**|**25.83**|26.70|26.05|**25.27**|**24.66**|
|DiffPIR[4]|Diff. Model|93.56|29.73|27.84|26.48|25.63|-|-|-|-|
|SwinIR[3]|End to End|11.94|**30.88**|24.56|22.84|20.73|**27.76**|22.41|21.24|19.53|

> 3. There appears to be a discrepancy...

Let us clarify the confusion. There are two different evaluation setups: we use 256x256 center-cropped images, as performed in Prox-DRUNet [2], while DPIR [1] uses full-size images. Our results match Table 1 of Prox-DRUNet [2]. To cross-check the performance, we reran the evaluations within the setup of DPIR. Here, our results match exactly Table 2 of the DPIR paper [1]. Our chosen evaluation is detailed in Section 5.1, and we will revise the paper to avoid future confusion caused by the different setups. The two best numbers are bolded for each column.
|Method|σₙ=5|σₙ=15|σₙ=25|
|-|-|-|-|
|BM3D|40.19|33.52|30.71|
|DEAL(Ours)|40.33|**33.95**|**31.31**|
|Prox-DRUNet|**40.40**|33.91|31.14|
|DNCNN|-|33.90|31.24|
|DRUNet|**40.59**|**34.30**|**31.69**|

> 4. The reported results for DPIR...

We use the setup of [2] and our results follow their Table 4. There are three differences with Table 8 of DPIR: (i) we report the metrics on 256x256 center-cropped CBSD68 images; (ii) the noise levels are different; (iii) we report the average over the 4 kernels (a-d). The configuration is described in Section 5.2, and we will adapt the writing to prevent future confusion caused by the different setups.

Refs.
[1] Zhang et al. "PnP image restoration with deep denoiser prior.", 2021.
[2] Hurault et al. "Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization.", 2022.
[3] Liang et al. "SwinIR: Image restoration using Swin transformer.", 2021.
[4] Zhu et al. "Denoising diffusion models for PnP image restoration.", 2023.
[5] Song et al. "Solving inverse problems in medical imaging with score-based generative models.", 2021.

---

Rebuttal Comment 1.1: Comment: As highlighted in the authors' rebuttal, SwinIR trained for bicubic blur performs significantly better than DEAL when the inference task settings match those used during training. However, its performance declines when inference conditions, such as the blur kernel or added noise, differ from the training setup. It suggests that SwinIR requires retraining or adaptation to maintain optimal reconstruction performance across different settings. I am curious whether DEAL can be applied consistently across all super-resolution tasks using the same hyperparameters or if, like SwinIR, it requires parameter tuning for different scenarios. Additionally, regarding DEAL's key advantages over other ML methods: DEAL solves a full-size optimization problem for each input sample during inference. How does this impact the computation complexity of the inference?
Specifically, how does the runtime scale as input image size increases?

---

Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your response.

> I am curious whether DEAL can be applied consistently across all super-resolution tasks using the same hyperparameters or if, like SwinIR, it requires parameter tuning for different scenarios.

In DEAL's evaluation setup, we only need to adjust the **scalar** hyperparameter $\lambda$ to the data noise level $\sigma_n$. Then, we use the same $\lambda$ for all blur kernels and downsampling ratios. We follow the same principle for the other universal methods when fine-tuning their hyperparameters (in accordance with their papers and implementations). We compare against two SwinIR models from their codebase, which are trained in a noiseless setup for the rates s=2 and s=3 separately. No weights for the noisy super-resolution tasks are provided, and SwinIR does not offer interpretable hyperparameters for noise adaptation. Retraining the models requires significantly more data and computational resources compared to the minimal hyperparameter fine-tuning of the universal approaches. To provide further evidence for the good generalization of DEAL to new kernels, we propose the following experiment: a noiseless super-resolution task with a downsampling rate of s=2, where SwinIR is trained on the bicubic kernel and DEAL's $\lambda$ is fine-tuned on the bicubic task. Then, we apply SwinIR and DEAL to new kernels (A-D, different Gaussian kernels) with no further change for either model. Here, we explicitly see that SwinIR needs retraining for new kernels, while DEAL still performs well with the hyperparameters tuned on the bicubic kernel. The following table contains the reconstruction PSNR for the noiseless super-resolution task with s=2 for different blur kernels on center-cropped CBSD68 data.
|Method|Kernel Bicubic|Kernel A|Kernel B|Kernel C|Kernel D|
|-|-|-|-|-|-|
|DEAL|29.91|29.59|29.76|28.57|27.21|
|SwinIR|30.88|25.72|25.85|24.50|23.66|

Regarding the adaptation of DEAL's $\lambda$ to the noise level $\sigma_n$, we can also use a theoretically motivated closed-form formula for $\lambda$. The performance drop is marginal compared to the fine-tuned case. The following table presents the average reconstruction PSNR over 4 kernels for the super-resolution task with s=2 on center-cropped CBSD68 data.

|Method|$\sigma_n=2.55$|$\sigma_n=7.65$|$\sigma_n=12.75$|
|-|-|-|-|
|DEAL (fine-tuned)|27.99|26.58|25.75|
|DEAL ($\lambda = 0.1 + 0.035\sigma_n^2$)|27.97|26.57|25.75|

Additionally, one could retrain DEAL in an end-to-end manner by incorporating the forward operator, which would close the gap to SwinIR within its training regime. However, this is outside the scope of our paper, which positions DEAL among universal approaches. Furthermore, adapting end-to-end models to different tasks (e.g., MRI and super-resolution) often necessitates architectural changes; this is not the case for DEAL or other universal approaches.

> Additionally, regarding DEAL's key advantages over other ML methods. DEAL solves a full-size optimization problem for each input sample during inference. How does this impact the computation complexity of the inference? Specifically, how does the runtime scale as input image size increases?

A similar question regarding the scalability of DEAL was raised by Reviewer Q6VC. To address this, we ran experiments on images ranging from 256x256 to 2048x2048, reporting memory and time usage in our rebuttal (please see the response [Questions for Authors] in the rebuttal to Reviewer Q6VC). Our results demonstrate that DEAL consistently outperforms Prox-DRUNet, the most comparable method in terms of universality, performance, and convergence properties.
In particular, the optimization process for DEAL is efficient: (i) we rely on CG, which is an optimal solver given the structure of our $A_k$; (ii) each update in the refinement process is warm-started with the previous solution; therefore, the later CG runs in the pipeline need very few iterations to converge.

Finally, we would like to emphasize the **interpretability** of DEAL. In Section 6 of our paper (Figures 7 and 8), we provide two interpretations of DEAL and its backbone attention mechanism. Figure 8 is particularly striking, as it demonstrates the exact relationship between the input measurements and DEAL's output. Specifically, each pixel of this output is just a weighted average of the measurements, where the weights are produced by DEAL's refinement. These weights adapt well to the structure of the image. We believe that this interpretability is an elegant qualitative feature of DEAL, which goes beyond numerical metrics and opens the door to more well-performing, explainable image reconstruction methods.

We sincerely appreciate your consideration and look forward to your final evaluation.
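The warm-started CG refinement described in this reply can be illustrated with a minimal sketch. This is not the authors' implementation: a generic symmetric positive-definite system stands in for one refinement step, and all names and values are hypothetical.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=500):
    """Solve A x = b for a symmetric positive-definite A, starting from x0."""
    x = x0.copy()
    r = b - A @ x                      # initial residual
    p = r.copy()
    rs = r @ r
    for it in range(max_iter):
        if np.sqrt(rs) < tol:          # converged
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, it

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)          # well-conditioned SPD toy system
b = rng.standard_normal(50)

# Cold start vs. warm start from a nearby solution: the warm start needs
# fewer iterations, mirroring the later refinement steps in the pipeline.
x_cold, it_cold = conjugate_gradient(A, b, np.zeros(50))
x_warm, it_warm = conjugate_gradient(A, b, x_cold + 1e-6 * rng.standard_normal(50))
```

Since each refinement step only perturbs the previous system slightly, the warm-started runs converge in a fraction of the iterations of a cold start.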
Boosting Masked ECG-Text Auto-Encoders as Discriminative Learners
Accept (poster)
Summary: This paper introduces D-BETA, a cross-modal pre-training framework designed for self-supervised learning of ECG signals and textual reports. D-BETA combines the strengths of generative and contrastive learning by leveraging masked language modeling and masked ECG reconstruction to recover missing data. Additionally, it employs contrastive loss functions alongside a nearest-neighbor negative sampling strategy to improve alignment between the two modalities. Extensive experiments conducted on multiple public datasets demonstrate that D-BETA significantly outperforms existing methods, particularly in downstream tasks such as zero-shot learning and linear probing. These findings underscore its strong generalization capabilities and highlight its potential for advancing diagnostic applications.

Claims And Evidence: yes

Methods And Evaluation Criteria: yes

Theoretical Claims: this is not theory work

Experimental Designs Or Analyses: Yes, the authors implement comprehensive experiments.

Supplementary Material: Yes, I read the experiment implementation details and dataset visualisation.

Relation To Broader Scientific Literature: ECG analysis is an important task for medical applications [1].

[1] AI-enabled electrocardiography alert intervention and all-cause mortality: a pragmatic randomized clinical trial. Nature Med.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

#### **Strengths**
- **Comprehensive Downstream Task Evaluation**: The authors evaluate their model's performance across a wide range of downstream tasks, providing a thorough analysis of its capabilities in practical applications.

#### **Weaknesses**
1. **Lack of Originality in Loss Design** - The framework combines multiple loss functions (ETS, ETM, MLM, MEM), none of which are original contributions by the authors. Furthermore, the coefficients used to balance these losses are not systematically explored or justified.
This makes the work appear more like an engineering effort rather than a novel machine learning contribution, suggesting it may be better suited for a clinical application journal rather than ICML.

2. **Heavy Reliance on CLIP Loss** - In Table 6, removing the ETS loss (which essentially reduces the model without using vanilla CLIP loss) results in a significant drop in performance. This indicates that the model heavily depends on the CLIP loss, while other components contribute relatively little. Similarly, replacing T5 with Med-CPT in Table 7 also leads to a performance decline. These observations raise concerns about whether the improvements stem primarily from leveraging more powerful off-the-shelf text encoders rather than from the proposed framework itself.

3. **Inconsistent Ablation Studies** - In Appendix Tables 12 and 13, the authors ablate ETM and MLM losses but fail to provide a consistent and systematic comparison of each individual loss component. This lack of clarity makes it difficult to discern which parts of the framework actually contribute to performance improvements. Additionally, the absence of detailed ablation studies undermines the ability to assess the incremental value of each loss function.

4. **Limited Originality** - The work lacks significant novelty, as all the loss functions are borrowed from prior studies, and the authors merely combine them without introducing substantial innovation. This diminishes the originality of the contribution and raises questions about the framework's suitability for a high-impact venue like ICML.

Other Comments Or Suggestions: please see the weaknesses.

Questions For Authors: please see the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal:

> [R4-1]: Lack of originality / Limited originality: The framework combines multiple loss functions (ETS, ETM, MLM, MEM), none of which are original contributions by the authors. Furthermore, the coefficients used to balance these losses are not systematically explored or justified. This makes the work appear more like an engineering effort rather than a novel machine learning contribution, suggesting it may be better suited for a clinical application journal rather than ICML … The novelty of the contributions raises questions about the framework's suitability for a high-impact venue like ICML.

We appreciate your feedback and would like to kindly refer to the key contributions of our work as summarized in R2-1. While the individual loss components are inspired by prior works, our contribution lies in how they are specifically adapted and integrated for the underexplored masked ECG-text multimodal autoencoder setting. We acknowledge that we did not systematically explore the loss weighting coefficients, which can be a valuable direction for future work. However, **given that we investigate ECG-text multimodal pre-training by proposing a novel generalized contrastive masked autoencoder framework that has already been demonstrated to surpass existing benchmarks, we kindly believe that concluding the work is "an engineering effort" based only on the unexplored loss coefficients may overlook our approach**. **We also emphasize the growing importance of ECG-based modeling in both machine learning and clinical applications. As you pointed out, "ECG analysis is an important task for medical application": ECGs are non-invasive, widely used physiological signals, and clinical reports provide contextual information that complements the waveform, yet their combination remains underexplored in dynamic SSL approaches.
Therefore, we respectfully consider ICML (Primary Area: Applications -> Health / Medicine) a suitable venue to foster this impactful direction.**

> [R4-2]: Heavy reliance on CLIP-style ETS loss and Flan-T5: In Table 6, removing the ETS loss (which essentially reduces the model without using vanilla CLIP loss) results in a significant drop in performance. This indicates that the model heavily depends on the CLIP loss, while other components contribute relatively little. Similarly, replacing T5 with Med-CPT in Table 7 also leads to a performance decline. These observations raise concerns about whether the improvements stem primarily from leveraging more powerful off-the-shelf text encoders rather than from the proposed framework itself.

We would like to respectfully correct a potential misinterpretation in your examples. First, in ablation studies it is expected that removing our proposed components degrades performance. **In Table 6, while removing the ETS loss leads to a performance drop, removing other components (e.g., Flan-T5 or N3S) also results in comparable degradation. In Table 7, Flan-T5 is the encoder we proposed in our framework, and it is slightly better than Med-CPT (the best in MERL). We can also see that even with a weaker encoder (e.g., BERT), the remaining architecture still outperforms the last row in Table 6 (same BERT usage)**. Therefore, we sincerely believe the improvements reflect the synergistic design of the overall framework, rather than over-dependence on a specific component.

> [R4-3]: Inconsistent ablation studies on individual losses: In Appendix Tables 12 and 13, the authors ablate ETM and MLM losses but fail to provide a consistent and systematic comparison of each individual loss component. This lack of clarity makes it difficult to discern which parts of the framework actually contribute to performance improvements.
> Additionally, the absence of detailed ablation studies undermines the ability to assess the incremental value of each loss function.

**We kindly note that our Table 12 evaluates the effect of ETM, while Table 13 purposely evaluates the joint effect of MLM and MEM, which together form our overall generative reconstruction objective.** Furthermore, rather than ablating each loss individually, we believe it is more important to focus on our main ablation studies, which examine the incorporation of the core components of our framework: the effects of N3S, ETS, and Flan-T5 (Table 6), variations of text encoders (Table 7), and the scalability of the ECG encoder (Table 8).

---

We hope these clarifications address your concerns. We thank you for your review and hope that you will consider raising our score, as we believe our work offers valuable contributions to the ICML community.
Summary: This paper presents a self-supervised pretraining method for jointly learning from electrocardiograms (ECGs) and text. Their method, D-BETA, combines modality-specific masked modeling, a sigmoid matching loss, and a nearest neighbor negative sampling strategy to enhance performance. Results demonstrate superiority over state-of-the-art ECG pretraining strategies on a variety of downstream datasets and tasks.

## Update after rebuttal

I have read the authors' review and will maintain my original recommendation. I thank the authors for clarifying my questions/concerns, but the rebuttal does not change my overall stance on the submission.

Claims And Evidence: Yes, claims appear to be sound and supported by evidence.

Methods And Evaluation Criteria: Yes. This study uses large-scale, standard ECG datasets and evaluation metrics that are consistent with prior work.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Experimental design appears sound. I am curious how baseline results were derived, however. Were baselines pretrained from scratch on the same data as D-BETA, were their model weights taken as is and used for fine-tuning, or were results taken directly from their respective papers?

Supplementary Material: Yes – all of it.

Relation To Broader Scientific Literature: This study represents an addition to the growing collection of vision-language models for multimodal ECG-text representation learning. This method adds a few previously existing techniques to boost performance beyond the current published state-of-the-art.

Essential References Not Discussed: C-MELT [1] is another ECG-text pretraining method that combines contrastive and generative objectives; this is an important omission to me. A few additional citations to ECG-text foundation models could be included [2-4].
I wouldn’t say these are “essential” in that their omission/inclusion changes my stance on the paper, but it is helpful to inform the reader that this is a growing space with many relevant approaches.

[1] "C-MELT: Contrastive Enhanced Masked Auto-Encoders for ECG-Language Pre-Training." arXiv preprint arXiv:2410.02131 (2024).
[2] Han, Yu, et al. "Foundation Models in Electrocardiogram: A Review." arXiv preprint arXiv:2410.19877 (2024).
[3] Tian, Yuanyuan, et al. "Foundation model of ECG diagnosis: Diagnostics and explanations of any form and rhythm on ECG." Cell Reports Medicine 5.12 (2024).
[4] Jin, Jiarui, et al. "Reading your heart: Learning ECG words and sentences via pre-training ECG language model." arXiv preprint arXiv:2502.10707 (2025).

Other Strengths And Weaknesses:

*Strengths*:
- The paper is generally well-written with clear presentation
- Experiments are thorough and show clear improvement over existing state-of-the-art

*Weaknesses*:
- Some methodological details surrounding baseline implementation could be clarified
- Technical novelty is limited (though the additions are clearly helpful): the sigmoid loss is borrowed from SigLIP – per the authors’ admission – and the combination of contrastive and generative approaches has been seen in C-MELT [1].

Other Comments Or Suggestions:
- L11 on RHS: Change “e.g.” -> “e.g.,”
- L86: Remove extra comma after “D-BETA”
- L374 on RHS: Remove extra space before footnote in “ECG examples ”

Questions For Authors:
1. Were baselines pretrained from scratch on the same data as D-BETA, were their model weights taken as is and used for fine-tuning, or were results taken directly from their respective papers?
2. What is the authors’ justification for using a language model pretrained on natural text (that is then fine-tuned)? Could the approach potentially be improved by using a text encoder pretrained on cardiovascular reports specifically, perhaps negating the need to fine-tune the text encoder?
3.
Can the authors provide an ablation study on the negative sampling? I see a discussion of why N3S might be helpful in Section A.2, but do not see numerical results backing this up.
4. Do the authors intend to release code and model weights? This will be important to ensure reproducibility.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> [R3-1]: "Were baselines pretrained from scratch on the same data as D-BETA, were their model weights taken as is and used for fine-tuning, or were results taken directly from their respective papers?"

Regarding baseline comparison, we used the results reported in the original baseline papers. The baselines may use **different datasets for the pre-training stage, but we keep the comparisons fair by strictly following the baselines' released data splits, preprocessing, and downstream configurations**.

> [R3-2]: "What is the authors’ justification for using a language model pretrained on natural text (that is then fine-tuned)? Could the approach potentially be improved by using a text encoder pretrained on cardiovascular reports specifically, perhaps negating the need to fine-tune the text encoder?"

Thank you for your thoughtful question. **Flan-T5 is pre-trained on large amounts of natural language that likely capture rich medical context from domain-specific websites (e.g., cardiovascular disease related forums)**, which indicates it is helpful. Additionally, having been trained on a huge corpus, Flan-T5 shows **strong adaptability across unseen tasks. When further fine-tuned within D-BETA, it can perform even better on specific cardiovascular reports**. As shown in Table 7, Flan-T5 fine-tuned during the D-BETA pre-training stage is nearly 1% better than fine-tuned Med-CPT, a biomedical-pretrained model used in MERL (a SOTA approach). That said, we agree that a Flan-T5 text encoder first pre-trained specifically on cardiovascular reports might also offer additional benefits.

> [R3-3]: "Can the authors provide an ablation study on the negative sampling? I see a discussion of why N3S might be helpful in Section A.2, but do not see numerical results backing this up."

We kindly note that the **effectiveness of N3S is already reported in Table 6**, where removing N3S causes a 2% drop in zero-shot performance.
Additionally, in Appendix A.2, we report that the ETM accuracy without N3S stagnates at ~75%, while with N3S it exceeds 96%. We believe these results support the impact of N3S's semantically-aware negative sampling.

> [R3-4]: "Do the authors intend to release code and model weights? This will be important to ensure reproducibility."

**Yes, as noted in the Impact Statement (Lines 442-451), we will publicly release the pretrained models and code upon acceptance. We are fully committed to ensuring reproducibility and enabling future research**.

> [R3-5]: "Technical novelty is limited (though the additions are clearly helpful): the sigmoid loss is borrowed from SigLIP – per the authors’ admission - and the combination of contrastive and generative approaches has been seen in C-MELT [1]."

We respectfully understand the reviewer's perspective and would like to refer to R2-3 and R2-1 for further discussion of our related justifications and contributions. **We do not directly follow the presentation in their paper but make suitable adjustments for our modeling context, and our core contributions go beyond using the ETS loss**. We kindly note that [1] is not discussed here due to the conference policy.

> [R3-6]: "A few additional citations to ECG-text foundation models could be included [2-4]. I wouldn’t say these are “essential” in that their omission/inclusion changes my stance on the paper, but it is helpful to inform the reader that this is a growing space with many relevant approaches."

Thank you for your kind suggestions. [2] is a comprehensive review paper that synthesizes various aspects of ECG foundation models, including background, existing datasets, common modeling approaches, and various practical applications. [3] also focuses on an ECG foundation model but proposes to use LLMs to enrich ECG report text with medical knowledge and then perform signal-text-label contrastive learning.
Meanwhile, [4] does not pretrain on ECG-text pairs directly but interestingly views ECG signals as a language, with QRS complexes as words and rhythms as sentences. **Alongside the works discussed in R1-2, we believe these reflect the growing momentum in this space, as you kindly noted**.

---

We appreciate the time you took to review our work. We hope we have addressed your comments, and we would be grateful if you are satisfied with our responses and could acknowledge our rebuttal.
Summary: This paper introduces the D-BETA framework for joint pre-training of ECG signals and their corresponding clinical text reports, aiming to learn cross-modal self-supervised representations. The method integrates generative tasks, specifically masked language modeling (MLM) and masked ECG reconstruction (MEM), with a discriminative task based on ECG-text matching via contrastive learning. Additionally, the paper proposes the ETS loss to enhance the model’s discriminative capability and employs a nearest-neighbor negative sampling strategy (N3S) to effectively select negative samples. Extensive experiments on several public datasets (e.g., PhysioNet 2021, PTB-XL, CSN, CPSC2018, and CODE-test) demonstrate that D-BETA achieves significant performance improvements under full fine-tuning, linear probing (even when only 1% of the training data is used), and zero-shot scenarios. Ablation studies further validate the contributions of key components such as the ETS loss, N3S strategy, and the Flan-T5 text encoder.

Claims And Evidence:

Claims: The authors claim that combining the advantages of generative and discriminative learning can significantly improve the alignment and representation of cross-modal features from ECG signals and text reports, thereby enhancing downstream tasks such as cardiac disease diagnosis and zero-shot inference.

Evidence: The paper provides extensive experimental results, reporting superior performance (e.g., improvements in AUC, and notable gains in fine-tuning and linear probing scenarios) compared to state-of-the-art methods. In addition, ablation experiments corroborate the positive impact of the ETS loss and the N3S strategy on the overall model performance.

Methods And Evaluation Criteria:

Methods: The paper employs a Transformer-based ECG encoder and a pre-trained Flan-T5 text encoder, using a cross-attention module to fuse features from both modalities.
Three task branches are set up corresponding to masked language modeling (MLM), masked ECG reconstruction (MEM), and ECG-text matching (ETM).

Evaluation Criteria: The experiments are conducted on multiple datasets under full fine-tuning, linear probing, and zero-shot settings, with evaluation metrics mainly including classification accuracy and AUC. Overall, the chosen methods and evaluation criteria effectively capture the characteristics of the problem and the capabilities of the proposed model.

Theoretical Claims:

Fusion of Generative and Discriminative Learning: The authors argue that combining generative tasks (e.g., MLM and MEM) with a discriminative task (ECG-text matching) in a unified self-supervised framework can complement each task’s strengths. While the generative tasks help capture the fine-grained structure of the data, the discriminative task reinforces the separation of cross-modal features, leading to more robust and discriminative representations.

Sigmoid-based ETS Loss: The paper introduces a novel ETS loss that computes the matching probability for each ECG-text pair independently using the Sigmoid function and optimizes the distance between positive and negative pairs via binary cross-entropy loss. The authors contend that this approach is more efficient than traditional softmax-based contrastive losses, which require global normalization, and thus provides a lightweight, memory-friendly alternative.

Rationale Behind the N3S Strategy: The authors provide theoretical justification for using the nearest-neighbor negative sampling (N3S) strategy instead of random negative sampling. By selecting negative samples that are semantically distant from the positive samples in the pre-trained text embedding space, the strategy effectively enhances the contrastive learning process. This claim is based on the recognition of inherent semantic similarities in medical text data and underscores the importance of semantic divergence in negative sampling.
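The sigmoid-based matching objective described in the Theoretical Claims above can be sketched as follows. This is an illustrative reconstruction in the spirit of SigLIP, not the authors' code; the function name and the scale/bias values are hypothetical.

```python
import numpy as np

def sigmoid_matching_loss(ecg_emb, txt_emb, scale=10.0, bias=-5.0):
    """Pairwise sigmoid matching loss: every (ECG, text) pair is scored
    independently, matched pairs (the diagonal) get label +1, all other
    pairs get label -1, and a binary cross-entropy-style term
    log(1 + exp(-y * z)) is averaged, with no softmax over the batch."""
    e = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = scale * (e @ t.T) + bias      # B x B cosine-similarity logits
    labels = 2.0 * np.eye(len(e)) - 1.0    # +1 on the diagonal, -1 elsewhere
    return np.mean(np.logaddexp(0.0, -labels * logits))  # stable log(1 + e^z)

rng = np.random.default_rng(0)
aligned = rng.standard_normal((8, 32))
loss_aligned = sigmoid_matching_loss(aligned, aligned)                      # matched pairs
loss_random = sigmoid_matching_loss(aligned, rng.standard_normal((8, 32)))  # mismatched
```

Because each of the B x B pairs is scored on its own, the loss avoids the batch-wide normalization that a softmax-based contrastive loss would require, which is the efficiency argument summarized above.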
Experimental Designs Or Analyses: The experimental design is comprehensive, covering full fine-tuning, low-resource linear probing (using 1% of training data), and zero-shot settings, and validating the model across multiple public ECG datasets. Ablation experiments assess the contributions of critical components (e.g., various text encoders, different numbers of Transformer layers, and the presence or absence of the ETS loss and N3S strategy). However, it is noted that the experimental results for zero-shot ECG classification are consistent with those reported in “Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement” and “Lead-agnostic Self-supervised Learning for Local and Global Representations of Electrocardiogram.” It remains unclear whether the authors have reproduced these works under identical experimental conditions for a fair comparison. Moreover, the paper does not directly compare the ETS loss against a traditional softmax-based loss; instead, it only shows the effect of removing ETS. This approach demonstrates the contribution of the discriminative task but does not conclusively prove the superiority of the sigmoid-based ETS loss over conventional softmax formulations.

Supplementary Material:

Appendix A.1: Data and Training Details
Includes representative examples of ECG-text pairs, data preprocessing procedures, and training hyperparameter settings, providing detailed information necessary for experiment reproduction.

Appendix A.2: Detailed Description of the N3S Negative Sampling Strategy
Describes how pre-trained Flan-T5 is used to generate text embeddings and how FAISS is employed to perform nearest-neighbor negative sampling to ensure that selected negative samples are semantically distinct from the positive ones.
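One plausible reading of the Appendix A.2 description above can be sketched as follows, using plain NumPy in place of FAISS for illustration; the function name, the neighborhood size, and the sampling details are hypothetical, not taken from the paper.

```python
import numpy as np

def n3s_negatives(text_emb, k_exclude=5, seed=0):
    """For each anchor report, ban its k most similar reports (likely
    near-duplicates in redundant clinical text) and sample a negative
    from the remaining, semantically distant candidates."""
    rng = np.random.default_rng(seed)
    e = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sim = e @ e.T                                # anchors x candidates cosine similarity
    negatives = []
    for i in range(len(e)):
        order = np.argsort(-sim[i])              # most similar first (self comes first)
        banned = set(order[: k_exclude + 1])     # self + k nearest neighbors
        pool = [j for j in range(len(e)) if j not in banned]
        negatives.append(int(rng.choice(pool)))
    return np.array(negatives), sim

rng = np.random.default_rng(1)
emb = rng.standard_normal((32, 16))              # stand-in for report embeddings
neg_idx, sim = n3s_negatives(emb, k_exclude=5)
```

In a real corpus the exact nearest-neighbor search would be replaced by an approximate FAISS index for scalability; the point of the sketch is only that negatives are drawn outside each anchor's semantic neighborhood.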
Appendix A.3: Zero-Shot Evaluation and Additional Experimental Analyses
Covers the configuration for zero-shot experiments, additional ablation results, and t-SNE visualizations, which further elucidate the model’s performance under different evaluation scenarios.

Relation To Broader Scientific Literature: This work is closely related to previous self-supervised ECG learning methods (e.g., CMSC, 3KG, ST-MEM) and multimodal fusion approaches (e.g., MERL), while also drawing inspiration from visual-language pre-training frameworks like CLIP. The innovative incorporation of the N3S strategy using FAISS for efficient vector retrieval and the integration of generative and discriminative tasks (e.g., ETS loss) provide a novel perspective on cross-modal feature learning.

Essential References Not Discussed: Although the paper cites a substantial number of related works, it might benefit from discussing more recent advances in multimodal fusion or self-supervised learning in the context of ECG data to further enrich the literature background and offer a more comprehensive comparison.

Other Strengths And Weaknesses:

Strengths:
- The paper innovatively combines generative and discriminative self-supervised learning.
- The proposed method demonstrates particularly strong performance in low-resource and zero-shot settings, showing promising potential for practical applications.

Weaknesses:
- The overall task design seems to largely build upon existing multimodal joint modeling methods and appears more like an aggregation of several tasks, lacking sufficient novelty.
- Regarding the ETS loss, while the use of a Sigmoid function to avoid the computational burden of global normalization is an interesting idea, the experimental validation and theoretical justification for its effectiveness in enhancing discriminative power remain insufficient.
Other Comments Or Suggestions: none Questions For Authors: The paper does not include an experiment comparing ETS directly against a conventional softmax-based loss—only an ablation where ETS is removed is provided. How can the authors further demonstrate the efficiency and superiority of the ETS loss over traditional methods? In the loss module, the authors emphasize the efficiency advantages of the proposed approach. Could the authors elaborate on how the N3S strategy impacts the overall computational burden? Does it affect the model’s efficiency and scalability? Have the authors reproduced and fairly compared related methods under a unified experimental setting to ensure a fair comparison of performance? Code Of Conduct: Affirmed. Overall Recommendation: 2
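For context on the ETS-vs-softmax question raised above: per the authors' rebuttal, ETS builds on a SigLIP-style pairwise sigmoid formulation. The following is a hypothetical numpy sketch of that loss family (with t=1, b=0), purely illustrative and not the paper's actual ETS implementation:

```python
import numpy as np

def sigmoid_pairwise_loss(ecg_emb, txt_emb, t=1.0, b=0.0):
    """SigLIP-style pairwise sigmoid loss: every ECG-text pair in the
    batch is an independent binary match/non-match classification, so
    no global softmax normalization over the batch is required."""
    logits = t * (ecg_emb @ txt_emb.T) + b      # (n, n) pairwise similarities
    labels = 2.0 * np.eye(len(ecg_emb)) - 1.0   # +1 on the diagonal (matched), -1 off it
    z = labels * logits
    return np.mean(np.log1p(np.exp(-z)))        # = -mean(log sigmoid(z))
```

Because each pair contributes an independent binary term, the loss avoids batch-wide normalization, which is the efficiency argument typically made for sigmoid-based contrastive losses over softmax ones.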
Rebuttal 1: Rebuttal: > [R2-1]: "The overall task design seems to largely build upon existing multimodal joint modeling methods and appears more like an aggregation of several tasks, lacking sufficient novelty." While our framework builds upon several established tasks, to the best of our knowledge, **we are the first to design and investigate contrastive masked autoencoder ECG-text pre-training**. As novelty is subjective, we focus on the contribution and significance of our work to the ICML (especially with the primary area of Applications->Health/Medicine) community: - We propose D-BETA, which specifically uses a transformer-based ECG encoder and the Flan-T5 model (overlooked in the ECG-clinical community), together with attention-based fusion modules and decoders. - We propose and investigate the insight of a discriminative ETS loss in masked ECG-text autoencoder implementations (with ETM, MLM, MEM, and arbitrary lead augmentation), enabling robust multimodal representation learning (unexplored in the literature). - We are the first to introduce the N3S technique to address previously unexplored data redundancy in the MIMIC-IV ECG dataset, improving the quality of negative samples and boosting model performance. - We conduct extensive experiments across zero-shot, linear probing, and fully fine-tuned settings, demonstrating that our approach consistently outperforms strong baselines across more than 100 cardiac conditions. We also verify the effectiveness of the proposed components through diverse ablation studies and the necessary appendices, as acknowledged by the reviewers NfLP ("Experimental designs are generally sound"), ffzJ ("Experiments are thorough and show clear improvement over existing state-of-the-art") and TuyF ("providing a thorough analysis of its capabilities in practical applications"). 
Finally, we emphasize the practical implications for the ECG research field and sincerely believe that our work reflects a principled design and empirical effort rather than simple aggregation. > [R2-2]: "Although the paper cites a substantial number of related works, it might benefit from discussing more recent advances in multimodal fusion or self-supervised learning in the context of ECG data." Thank you for the suggestion. This is closely related to R1-2, in which we discuss more recent works regarding multimodal ECG-text fusion. > [R2-3]: "The paper does not include an experiment comparing ETS directly against a conventional softmax-based loss … can the authors further demonstrate the efficiency and superiority of the ETS loss over traditional methods?" Thank you for the thoughtful point. Our ETS loss is inspired by the sigmoid contrastive loss proposed in SigLIP [1], which has already **demonstrated strong theoretical and empirical advantages over softmax-based losses. These sigmoid benefits in efficiency and superiority carry over to our implementation, since we introduce ETS heads (a dense layer with a Tanh activation) that leave these properties unaffected, while simply setting t=1 and b=0 (as we design positive-negative balanced batches during training)**. Building on this foundation, we focus our main ablations on showing ETS's benefit in our specific multimodal ECG-text setup. [1] Zhai, Xiaohua, et al. "Sigmoid loss for language image pre-training." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. > [R2-4]: "In the loss module, the authors emphasize the efficiency advantages of the proposed approach. Could the authors elaborate on how the N3S strategy impacts the overall computational burden? Does it affect the model's efficiency and scalability?" We appreciate this point and acknowledge the importance of discussing the computational aspect of using N3S. 
First, the FAISS index is constructed once before training using precomputed text embeddings in the small Flan-T5 space. During training, we only perform efficient nearest-neighbor retrieval using this index, which introduces only a small trade-off. Importantly, FAISS is known for its impressive speed and scalability in large-scale vector search tasks, and in our case it produces only modest runtime overhead that does not noticeably affect efficiency or scalability. **Empirically, on 1× NVIDIA A100-40GB, the one-time loading of the model and FAISS index takes ~1.2 seconds, while the average (over 1000 samples) retrieval time using FAISS was approximately 0.00219 seconds (compared to 0.00002 seconds without it)**. > [R2-5]: "Have the authors reproduced and fairly compared related methods under a unified experimental setting to ensure a fair comparison of performance?" Regarding baseline comparison, we used the results reported in the original baseline papers. In D-BETA, we always aim for fair comparisons by strictly following the baselines' released data splits, preprocessing, and downstream configurations. --- We sincerely appreciate your valuable feedback and hope our responses meet your expectations.
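To make the retrieval step in [R2-4] concrete, here is a brute-force numpy sketch of nearest-neighbor negative sampling in the spirit of N3S; in D-BETA the search is done with a prebuilt FAISS index, and the function name, parameters, and the exclusion heuristic here are illustrative assumptions rather than the actual implementation:

```python
import numpy as np

def n3s_negatives(text_emb, pos_idx, n_neg, rng, k_exclude=5):
    """Retrieve the positive report's nearest neighbours (near-duplicate
    reports are common in MIMIC-IV-ECG) and exclude them from the
    negative pool, so sampled negatives stay semantically distinct
    from the positive."""
    x = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = x @ x[pos_idx]                        # cosine similarity to the positive
    near = set(np.argsort(-sims)[:k_exclude])    # positive + its near-duplicates
    pool = [i for i in range(len(text_emb)) if i not in near]
    return rng.choice(pool, size=n_neg, replace=False)
```

The index construction (normalization here) happens once before training; only the cheap similarity lookup runs per step, which matches the rebuttal's claim of sub-millisecond retrieval overhead.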
Summary: This paper proposes D-BETA, a novel contrastive masked transformer-based architecture to pre-train on ECG signals and corresponding texts. The key components of the proposed approach include self-supervised learning for both ECG and medical texts, as well as a fusion mechanism for both to enhance cross-learning. A nearest-neighbor negative sampling strategy was also used to support contrastive learning. Experiment results and ablation studies were included. Claims And Evidence: Generally well supported claims. Methods And Evaluation Criteria: Evaluation criteria are generally sound. One comment/question is whether, in evaluating and comparing the proposed method to other baselines, all other baselines incorporate the text information in training. The text information itself is rich and sometimes contains even more information than the ECG itself, so if some of the other methods do not have the text info available, the comparison may not be a fair one. Theoretical Claims: NA Experimental Designs Or Analyses: Experimental designs are generally sound. It would be helpful to also discuss the computational efficiency of the proposed approach, when comparing to similar approaches that can incorporate both ECG and texts. Supplementary Material: All. Relation To Broader Scientific Literature: In discussing literature related to using both ECG and text in self-supervised or contrastive learning, the only paper cited was MERL. It would be helpful to expand the literature review here, as this is the key contribution of this paper. Essential References Not Discussed: NA Other Strengths And Weaknesses: None Other Comments Or Suggestions: Please explain, when applying MAE to the ECG (multi-channel), how the different channels are masked (randomly, same time window masked simultaneously across channels, etc.). Please explain explicitly the formation of the latent representation $z$ as a fusion of the outputs from the ECG-specific and text-specific encoders. 
This seems to be the crucial step to enhance cross-learning of the two components. Overall, the paper would benefit from some theoretical analysis/heuristic on how cross-learning and fusion of the ECG and text components improve model performance. In some scenarios, the text info, especially from doctors, may be seen as a ground truth or label, rather than training data. Some discussion on this would be helpful, too. Questions For Authors: NA Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback on our submission. We would like to address your comments below: > [R1-1]: "In some scenarios, the text info, especially from doctors, may be seen as a ground truth or label, rather than training data. Some discussion on this would be helpful, too." … "In evaluating and comparing the proposed method to other baselines, whether all other baselines incorporate the text information in training … if some of the other methods do not have the text info available, the comparison may not be a fair one." Firstly, our model is not explicitly exposed to ground-truth labels during pre-training. **The textual reports in the MIMIC-IV-ECG dataset are machine-generated and serve as varied descriptive inputs, not as discrete predefined label annotations**. Furthermore, additional text descriptions can be beneficial, but importantly, we presented an efficient way to leverage them in an underexplored multimodal setting, optimizing performance in **the same downstream experiments** as the baselines: (1) Comparing with strong ECG-only SSL methods to show multimodal modeling power. (2) Comparing with MERL (an ICML publication) in the same multimodal aspect. Moreover, the evaluated **downstream tasks, such as fine-tuned classification and identification, use unseen ECG recordings with no text involved, or are complete zero-shot tasks** (consistent with the latest close work like MERL). Therefore, we believe these comparisons are both fair and meaningful. > [R1-2]: "It would be helpful to expand the literature review here, as this is the key contribution of this paper." **We acknowledge that discussing more details of recent related ECG-text works would be helpful to our work. 
Therefore, we synthesize more recent works here, and in doing so, we also want to highlight the rapidly growing developments and attention in this medical domain**: Firstly, [1] pretrains on a large private dataset using standard contrastive SSL with ResNet-like ECG and BERT-based text encoders. Their evaluation is also limited to a few downstream tasks and diseases. [2] builds on a similar contrastive modeling design but introduces prompt-based zero-shot inference. While insightful, their evaluation remains relatively narrow. ESI [3] opens another interesting angle by using a RAG pipeline to produce auto-generated detailed report data for the pretraining stage using multiple datasets (including MIMIC-IV, PTB-XL, and Chapman). However, the contrastive modeling approach is still relatively similar (BERT-like text encoder, fully convolutional ConvNeXt model, no MEM, ETM, or lead augmentation), and the evaluation is relatively modest. Recently, as ECG representation learning continues to evolve, a few works have begun to explore more practical, user-end applications. For example, [4,5] leverage instructed multimodal LLMs to deal with ECG report generation and clinical question answering. These methods largely benefit from a well-pretrained ECG encoder, to which D-BETA can also potentially be adapted. [1] Lalam, Sravan Kumar, et al. "Ecg representation learning with multi-modal ehr data." TMLR (2023). [2] Liu, Che, et al. "Etp: Learning transferable ecg representations via ecg-text pre-training." ICASSP (2024). [3] Yu, Han, et al. "Ecg semantic integrator (esi): A foundation ecg model pretrained with llm-enhanced cardiological text." (2024). [4] Zhao, Yubao, et al. "ECG-Chat: A Large ECG-Language Model for Cardiac Disease Diagnosis." (2024). [5] Yang, Kai, et al. "ECG-LM: Understanding Electrocardiogram with a Large Language Model." HDS (2025). 
> [R1-3]: “Please explain when applying MAE to the ECG (multi-channel), how are the different channels masked” As described in Section 3.1 (Lines 139-143), we apply random lead masking, where entire ECG channels are independently masked with a probability of 0.5. This lead-wise masking encourages the model to learn robust representations across varying input configurations. > [R1-4]: “Please explain explicitly the formation of the latent representation z as a fusion of the outputs from the ECG-specific and text-specific encoders. Overall, the paper would benefit from some theoretical analysis/heuristic on how cross-learning and fusion improve model performance.” Our fusion module is designed as stacked cross-attention blocks, which allows interaction between multi-lead ECG signals and semantic text embeddings (Lines 175-180) before decoding. This produces essential joint embeddings for our learning objectives: for example, the ETM task requires a unified representation to determine whether an ECG-text pair is matched, which naturally depends on information from both modalities. Similarly, MLM and MEM benefit as one modality provides useful context to reconstruct the other (e.g., ECG noise or text ambiguity compensation). This fusion also facilitates future downstream tasks such as ECG report generation, where signal-based text generation is critical.
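The lead-wise masking described in [R1-3] can be sketched as follows; this is a numpy illustration under the stated masking probability of 0.5, not the authors' actual code:

```python
import numpy as np

def random_lead_mask(ecg, p=0.5, rng=None):
    """Mask entire ECG leads (channels) independently with probability p;
    masked leads are zeroed, and the keep-mask is returned so a decoder
    knows which leads it must reconstruct."""
    rng = rng or np.random.default_rng()
    keep = rng.random(ecg.shape[0]) >= p        # one Bernoulli draw per lead
    return ecg * keep[:, None], keep
```

Masking whole leads (rather than time windows) forces the model to cope with varying input lead configurations, which is the robustness argument made in the rebuttal.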
Optimization for Neural Operators can Benefit from Width
Accept (poster)
Summary: This paper proposes a unified optimization framework using Restricted Strong Convexity (RSC) and smoothness to establish gradient descent convergence guarantees for Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs). The key contributions are as follows: 1. A theoretical proof that the empirical losses for both operators satisfy RSC and smoothness under over-parameterization (Theorems 2-5). 2. A demonstration that wider networks improve optimization convergence, supported by both theoretical analysis and experiments. 3. Empirical validation on three operator learning tasks: antiderivative, diffusion-reaction, and Burgers’ equation. Claims And Evidence: The authors demonstrate that increasing network width benefits optimization, supported by Hessian/gradient bounds (Theorems 2-3 for DONs and 4-5 for FNOs) and loss reduction trends (Figures 1-2). However, the local loss descent established in Theorems 2 and 4 does not necessarily imply reduced optimization difficulty with increasing network width. Additionally, there are technical concerns in the RSC proof, particularly in the handling of interaction terms. Methods And Evaluation Criteria: This study proposes a unified theoretical and experimental framework to explore the impact of network width on optimization. While empirical results validate the benefits of width expansion, the theoretical analysis relies on overly restrictive assumptions that limit its generalizability. Furthermore, the practical feasibility of these assumptions is insufficiently validated. Theoretical Claims: Upon reviewing the proofs related to the DON section, we identified issues in the construction of the RSC proof. Specifically, certain steps in the derivation lack sufficient justification, raising concerns about the validity of the argument. Additional clarification or further details are necessary to ensure the correctness of the proof. 
Experimental Designs Or Analyses: The experimental results clearly show that increasing width improves optimization performance. However, experiments should also be used to validate the key assumptions (Assumptions 4 and 7) to ensure their practical relevance and applicability. Supplementary Material: Our review primarily focused on the supplementary material related to the DON (Deep Operator Network) section. Relation To Broader Scientific Literature: The paper connects to NTK theory (Jacot et al., 2018) and RSC analysis (Banerjee et al., 2023a). Essential References Not Discussed: This work lacks a discussion of concurrent work on operator-specific optimization (e.g., Qin et al., 2024 on Fourier spectral improvements). Other Strengths And Weaknesses: Strengths Originality: While the paper provides an RSC-based convergence proof for neural operators, much of the theoretical framework closely follows prior work, particularly Banerjee et al. (2023a), with limited novel extensions specific to neural operators. Significance: Offers practical insights into how network width impacts optimization, which could guide applications in scientific computing. Weaknesses Clarity: Proof sketches in Appendix C.2 lack intuitive explanations of cross-network interactions, making the derivations difficult to follow. Experiment diversity: All tasks rely on L2 loss, and the absence of adversarial or uncertainty-aware metrics limits the scope of the evaluation. Theoretical novelty: The theoretical contributions are incremental, as the RSC proof heavily builds on Banerjee et al. (2023a) with minimal adaptation to the specific challenges of neural operators. Other Comments Or Suggestions: No other comments. Just see "Strengths And Weaknesses" and "Questions". Questions For Authors: 1. 
There are some issues in the proof of Theorem 2, where the authors aim to demonstrate that $Q^t_\kappa$ is non-empty, characterized by the following conditions: $| \cos(\theta' - \theta_t, \nabla_\theta \tilde{G}_{\theta_t}) | \geq \kappa$ (cosine similarity condition), $(\theta_f' - \theta_{f,t})^\top \left(\frac{1}{n} \sum_{i=1}^n \frac{1}{q_i} \sum_{j=1}^{q_i} \ell_{i,j}' \sum_{k=1}^K \nabla_{\theta_f} f_k^{(i)} \nabla_{\theta_g} g_{k,j}^{(i)\top}\right) (\theta_g' - \theta_{g,t}) \geq 0.$ $(\theta_f' - \theta_{f,t})^\top \left( \sum_{k=1}^K \nabla_{\theta_f} f_k^{(i)} \nabla_{\theta_g} g_{k,j}^{(i)\top} \right) (\theta_g' - \theta_{g,t}) \leq 0, \quad \forall i \in [n], \forall j \in [q_i]. $ To simplify the analysis, the authors set $\theta_g' = \theta_{g,t}$ and claim that belonging to the $Q^t_\kappa$ set conveniently reduces to the feasibility of the cosine similarity condition as follows: $\left| \cos(\theta_f' - \theta_{f,t}, \bar{g}_f) \right| \geq \kappa. $ However, in this case $\left| \cos(\theta' - \theta_t, \nabla_\theta \tilde{G}_{\theta_t}) \right| = \frac{\langle \theta_f' - \theta_{f,t}, \bar{g}_f \rangle}{\| \theta_f' - \theta_{f,t} \| \| \nabla_\theta \tilde{G}_{\theta_t} \|}$ which is not equivalent to $\left| \cos(\theta_f' - \theta_{f,t}, \bar{g}_f) \right|$ without additional control over $\bar{g}_f$. 2. Assumptions 2 and 4 appear to be overly strong. In the convergence analysis of NTK, such assumptions are not required. As this work aims to extend NTK theory, it would be valuable to theoretically or empirically validate these assumptions. One feasible approach could be to examine the norm of the training trajectory during the neural network training process. This would provide insights into whether these assumptions hold in practice and help justify their necessity in the proposed framework. Ethical Review Concerns: No ethical concerns identified. This work focuses on the theoretical analysis of existing methods. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for taking the time to review our paper. --- We start by responding to the reviewer's concerns. - We respectfully state that our work is not a "limited novel extension" of (Banerjee et al., 2023a): for a detailed justification we refer to our response to Reviewer bY3J, where we justify our claim on three facts: 1. we present a more general optimization framework that is used by both (Banerjee et al., 2023a) and our work; 2. several works in the literature share the same underlying mathematical approach and are not considered "limited novel extensions" of each other due to their different models; and 3. our models have differences that make their analysis complex and non-trivial (please see the details in Appendices D (for DONs) and E (for FNOs)), without automatically following from (Banerjee et al., 2023a). - Regarding our "experimental diversity", we stated in our paper that the objective of Section 8 "is to show the effect of over-parameterization on the neural operator training and not to present any kind of comparison between the two neural operators." We consider that our experiments already achieve the goal of illustrating and complementing the theory of our paper, and thus, there is no need to include additional performance metrics such as the ones mentioned by the reviewer. Indeed, since our analysis is about gradient descent over the **empirical loss**, which is an **L2 loss**, it is sufficient to study the behavior of this L2 loss to prove the benefits of width during training. Finally, since our theoretical results are the centerpiece of our work, having straightforward yet concrete empirical findings that support the theory helps to ensure that the focus remains firmly on the paper's core theoretical contributions. - We are grateful to the reviewer for suggesting more discussion on concurrent works including (Qin et al., 2024); we will include them in Appendix A. 
--- We now proceed to respond to the questions about our paper. **Question 1**: We are grateful to the reviewer for pointing out this issue. Indeed, as pointed out by the reviewer, the formula in equation (28) of Appendix D.2 (proof of Theorem 2) should be $|\cos(\theta' - \theta_t, \nabla_\theta \tilde{G}_{\theta_t})| \geq \kappa$ with the understanding that $\theta' = [{\theta_f'}^{\top} \; {\theta_{g,t}}^{\top}]^{\top}$. Then, the rest of the proof is **still correct** since it becomes virtually equivalent to the proof of non-emptiness of the $Q^t_\kappa$ set for FNOs (Appendix E.2). This follows from the fact that the $Q^t_\kappa$ set for FNOs only depends on the *cosine similarity condition*, which is similar to the reduction (28) we obtained for DONs by taking $\theta'$ as above. In conclusion, our proof **still holds** after correcting equation (28) and appropriately changing the notation, following the same proof as the one used for FNOs. Again, thanks to the reviewer for spotting this issue. **Question 2**: We are grateful for the suggested ways to strengthen the practical validation of our assumptions. We start by respectfully clarifying three things: 1. Our work's aim is not to "extend NTK theory" since we use a different mathematical approach (namely, RSC theory). Indeed, Section 2 proposes an alternative optimization framework to the NTK one. 2. We must emphasize that Assumption 2 and part of Assumption 4 **are actually found** in the NTK-based paper (Liu et al., 2021a). 3. We respectfully believe that Assumption 2 is not "overly strong" for *two* reasons. First, it is satisfied for *commonly used* smooth activation functions such as sigmoid, hyperbolic tangent, and Gaussian Error Linear Unit (GELU). Second, besides being used by the NTK-based paper mentioned above, it is used by all RSC-based papers mentioned in Section 2. 
We now discuss the empirical validation of Assumption 4 (Assumption 2 is not to be verified empirically since it is satisfied by the design choice of the activation functions). Assumption 4 requires *all* iterations to be within a neighborhood of the initialization point. This could certainly be validated in the way suggested by the reviewer—the task would be to find the $\rho$ and $\rho_1$ radii, which we suspect will depend on both the initialization point and the training data (training is a non-convex problem and its optimization landscape depends on the training data). Nonetheless, Assumption 4 is *only* required if we want our results to hold for *all* iterations of gradient descent—indeed, we could state a weaker version of Assumption 4 where it only holds for some *finite* set of iterations, and still our guarantees would hold for such iterations. Thus, even if we find that Assumption 4 is not satisfied, it is still possible that our theoretical conditions were met for some iterations along the training procedure. --- We hope that this rebuttal leads to a more positive assessment of our paper.
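The empirical check discussed above (tracking how far the gradient-descent iterates drift from initialization) could look like the following sketch on a toy two-layer tanh network; the architecture and all names are illustrative assumptions, not the paper's actual DON or FNO models:

```python
import numpy as np

def track_trajectory(X, y, width=64, steps=200, lr=1e-2, seed=0):
    """Train a small two-layer tanh network with gradient descent on an
    L2 loss and record ||theta_t - theta_0|| at every step, i.e. how far
    the iterates drift from their initialization."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(X.shape[1]), (X.shape[1], width))
    a = rng.normal(0.0, 1.0 / np.sqrt(width), width)
    W0, a0 = W.copy(), a.copy()
    drifts = []
    for _ in range(steps):
        H = np.tanh(X @ W)                               # hidden activations
        err = H @ a - y                                  # residuals of the L2 loss
        grad_a = H.T @ err / len(y)
        grad_W = X.T @ ((err[:, None] * a) * (1.0 - H**2)) / len(y)
        a -= lr * grad_a
        W -= lr * grad_W
        drifts.append(np.sqrt(np.linalg.norm(W - W0)**2
                              + np.linalg.norm(a - a0)**2))
    return np.array(drifts)
```

If the recorded drifts plateau inside some radius, that is consistent with a finite-iteration version of Assumption 4 holding along the training trajectory; the plateau level would stand in for the $\rho$, $\rho_1$ radii mentioned in the rebuttal.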
Summary: This paper addresses the problem of optimization convergence guarantees for neural operators, specifically Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs), when trained using gradient descent (GD). The authors propose a unified optimization framework based on two key conditions: restricted strong convexity (RSC) and smoothness of the loss function. They demonstrate that these conditions are satisfied for both DONs and FNOs, particularly when the networks are wide. The paper provides theoretical guarantees for the convergence of GD in training these neural operators and supports the theory with empirical results on canonical operator learning problems. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The contributions of the paper are related to scientific machine learning and optimization. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper provides the first formal optimization convergence guarantees for DONs and FNOs, addressing a significant gap in the literature. 2. The authors complement their theoretical results with empirical evaluations, demonstrating that wider networks indeed lead to lower training losses and faster convergence for both DONs and FNOs. Weaknesses: 1. While the empirical results are promising, the experiments are limited to three canonical problems. It would be beneficial to see how the theory holds up on more complex or real-world operator learning tasks. 2. The paper does not discuss the practical implications of the theoretical results in detail. For instance, how does the width of the network affect the generalization performance, and what are the trade-offs between width and computational cost? Other Comments Or Suggestions: No Questions For Authors: 1. 
The paper suggests that wider networks lead to better optimization convergence. However, wider networks also increase computational cost. How do the authors suggest balancing width and computational efficiency in practice? Why not deeper networks? 2. The paper assumes smooth activation functions. How sensitive are the results to the choice of activation function? Would non-smooth activations (e.g., ReLU) affect the optimization guarantees? 3. Have the authors considered comparing the performance of gradient descent with other optimization methods, such as SGD? 4. The paper focuses on optimization convergence, but how does the width of the network affect the generalization performance of DONs and FNOs? Are there any theoretical or empirical insights on this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for taking the time to review our paper and for the list of questions which we now address. - **Question 1**: **First subquestion:** Our theoretical work establishes sufficient conditions for optimization and shows that the convergence rate may benefit from width, i.e., *fewer* optimization steps may be needed to achieve a lower loss value as the network increases in width. Nevertheless, increasing width also means that the computational cost increases *per* optimization step. We believe that the tradeoff between width and computational cost should be determined empirically depending on both (i) the user's tolerance for error and (ii) the available computational resources for training. Assume a user can tolerate a certain $\epsilon$ value of error on the empirical loss. If the user has limited computational resources and uses a wide network, it could happen that the *time* it takes to do a single optimization step is long, even though there are only a few steps needed to reach the tolerance level $\epsilon$. In such case, the user should opt for using networks with less width, because, even though there may be more optimization steps to reach the tolerance $\epsilon$, the *total* amount of time spent during training could be less. If, on the other hand, there are abundant and better computational resources, the user can now afford to increase the width more and even decrease the tolerance level. Now, it could also happen that increasing the width will not help too much, depending on the application. Thus, our recommendation would be to not start training with very large widths. For example, if we look at the simulations in Section 8, it is clear that how much help is attained by incrementing width is largely application driven—jumping from a width of 10 to 50 helps the Diffusion-Reaction problem much more than it does the Burgers' equation. 
**Second subquestion:** We believe that the question of choosing between increasing depth and increasing width is delicate. Though this goes beyond the scope of our work, we can mention a few things in response to the question by the reviewer: - (i) justifying the benefits of depth versus width in optimization requires a different theoretical framework than ours; - (ii) adding more layers (depth) also adds more weights and thus more matrix-matrix multiplications in the gradient computations, so the computational benefit (if any) of increasing depth versus width has to be carefully considered; and - (iii) deeper networks may need extra architectural changes to avoid *vanishing* gradient effects (such as adding residual connections, normalization layers, etc.), which needs to be taken into account and whose effect on the training of neural operators needs to be studied. - **Question 2**: Our results are based on the calculation of the RSC condition and smoothness of the empirical loss, which requires calculating the Hessian of the empirical loss—and for this, we ultimately make use of the differentiability of the smooth activation functions. Thus, due to their lack of (global) differentiability, using non-smooth activation functions such as ReLUs would require a different analysis approach—for example, we may need to formulate an alternative notion to the RSC condition for non-smooth functions, as well as an alternative to the smoothness requirement (perhaps using *semi-smoothness* as in the work (Allen-Zhu et al., 2019) which uses ReLUs). We believe this is an interesting future direction. Finally, we would like to point out that, even though our paper only covers smooth activation functions, it encompasses commonly used activations such as sigmoid, hyperbolic tangent, and Gaussian Error Linear Unit (GELU). 
- **Question 3**: This is something we hope could be the topic of future theoretical work—to the best of our knowledge, no work has explored SGD with restricted strong convexity. In order to study such a problem, we would need to start by adapting our general optimization framework in Section 2 to the stochastic gradient descent setting. - **Question 4**: This is beyond the scope of our paper, however, we hypothesize that generalization can also benefit from width for both DONs and FNOs. It is relevant to mention that (Kontolati et al., 2022) has empirically shown that over-parameterization benefits generalization for DONs, as mentioned in Section 2. --- Finally, we would like to mention that, although the problems we study in our experiments may be regarded as "toy problems", they were chosen because they are **representative** of problems typically found in the operator learning literature. Indeed, the seminal papers on both DONs (Lu et al., 2021) and FNOs (Li et al., 2021) report results on Burgers' equation, and the paper on DONs also reports results on the Antiderivative operator and the Diffusion-Reaction equation. --- We hope that this rebuttal leads to a more positive assessment of our paper.
Summary: The main results of this paper are to derive optimization convergence results for both Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs), under gradient descent (GD). The main technique in this paper is to show that the empirical loss functions for these two kinds of networks satisfy restricted strong convexity (RSC) and smoothness conditions under certain assumptions. Claims And Evidence: The results in this paper are more on the theoretical side. Yes, the claims made in the submission are supported by clear and convincing evidence, such as theoretical derivation. Methods And Evaluation Criteria: The results in this paper are more on the theoretical side. Yes, the proposed methods make sense. Theoretical Claims: Yes, I checked and thus believe the proofs for theoretical claims in this paper should be correct. Experimental Designs Or Analyses: The results in this paper are more on the theoretical side. Yes, the experimental designs and analyses are sound. Supplementary Material: Yes, I reviewed the supplementary material, especially on the theoretical side. Relation To Broader Scientific Literature: This paper first derived the optimization convergence result of loss functions using GD for Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs), which extends the previous work in neural operators. Essential References Not Discussed: No. The references are enough to understand the key contributions of the paper. Other Strengths And Weaknesses: Strengths This paper derives optimization convergence results of loss functions using GD for Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs), which are also verified by empirical results. Weaknesses The main results in this paper, Theorems 2, 3, 4, 5, are about step t of GD, and are not deterministic results. 
Thus, although in each step the loss decreases with high probability (which requires the width to be very large), this does not guarantee that the loss will decrease after a large number of iterations of GD. Therefore, it will be interesting if the authors can derive some convergence results for the whole training procedure. Other Comments Or Suggestions: No. Questions For Authors: It will be interesting if the authors can derive some convergence results for the whole training procedure. Ethical Review Concerns: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for taking the time to review our paper. We now address the points raised by the reviewer. We are grateful for the question of guaranteeing convergence for the whole training procedure. If we understood correctly (and we kindly ask the reviewer to correct us if we have not), the reviewer is concerned that because our results (i) hold with high probability and (ii) require larger widths, they may not hold for the whole training procedure. Although this is a valid concern, we respectfully mention that conditions (i) and (ii) can be found in the existing literature on optimization guarantees for deep models—for example, among the works cited in Section 2 for both neural tangent kernel (NTK) and restricted strong convexity (RSC) approaches. We also point out that the reason we end up with condition (i) is that we have to bound the norms of the gradients of the neural operators, which leads to the appearance of weight norms that can only be upper bounded by some constant *with* high probability. In contrast to our work, the work (Cisneros-Velarde et al., 2025) does not need such upper bounds since it studies feedforward neural networks with weight normalization, and so all of its results are deterministic (though condition (ii) is still needed). Finally, we point out that condition (ii) is expressed differently in each optimization approach: in terms of a gradient norm for the RSC approach and in terms of sample size for the NTK approach. Having shown that the nature of the conditions that ensure our optimization guarantees is not foreign to the existing literature and is intrinsic to our mathematical approach, we hope the reviewer finds our conditions well justified. Having said that, we believe that deriving deterministic convergence results for the whole training procedure is an important problem—a problem which may require further assumptions and even a different mathematical approach.
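For readers less familiar with the condition at the center of this exchange, restricted strong convexity is commonly stated as follows (a standard textbook formulation, not necessarily the paper's exact variant): a differentiable loss $\mathcal{L}$ satisfies RSC with parameter $\alpha > 0$ over a restricted set of directions $Q$ if

$$ \mathcal{L}(w') \;\ge\; \mathcal{L}(w) + \langle \nabla \mathcal{L}(w),\, w' - w \rangle + \frac{\alpha}{2}\, \lVert w' - w \rVert_2^2 \qquad \text{whenever } w' - w \in Q, $$

i.e., ordinary strong convexity is relaxed by requiring the quadratic lower bound only along the restricted directions reached during training.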
We hope that all of our arguments presented in this rebuttal lead to a more positive assessment of our paper.
Summary: The paper provides convergence guarantees for neural operator learning, which are valid under assumptions of restricted strong convexity and smoothness of the loss function. The authors demonstrate that two learning operators (DON and FNO) satisfy these conditions. Both theoretical and experimental findings show that networks with greater width achieve superior optimization performance. ## update after rebuttal The authors have partially addressed my concerns, so I'm leaning toward acceptance. Claims And Evidence: the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: -- Theoretical Claims: Theoretical claims are convincing. Experimental Designs Or Analyses: good Supplementary Material: the code is not provided Relation To Broader Scientific Literature: This paper appears incremental compared to https://arxiv.org/pdf/2209.15106 Banerjee et al. 2022. The main difference is that they extend the results to neural operator learning instead of feedforward models with smooth activations. Many of the proofs follow a similar structure to the work of Banerjee et al 2022. The paper's primary contribution seems to be verifying that DONs and FNOs satisfy the RSC and smoothness conditions (in Theorems 2-5), which then allows them to directly apply the convergence framework established by Banerjee et al. 2022. Essential References Not Discussed: -- Other Strengths And Weaknesses: **strengths** - clear, well-written - interesting topic **weaknesses** - incremental compared to Banerjee et al 2022 - the code is not provided Other Comments Or Suggestions: "Nevertheless, optimization guarantees for DONs is also an open problem". line 076 typo (not DON) Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful to the reviewer for taking the time to review our paper. We now respond to the reviewer's concerns. --- We respectfully argue that our work **is not incremental** to (Banerjee et al., 2023a) [using the reference as cited in our work] for three reasons: 1. Our work **generalizes** the framework by (Banerjee et al., 2023a), as shown in Section 4. Indeed, we show that the results for feedforward neural networks (FFNs) studied by (Banerjee et al., 2023a) and for the neural operators we study (DONs and FNOs) are **particular** instances of our framework presented in Section 4—despite the architectural differences among FFNs, DONs, and FNOs. 2. Just the fact that our work and (Banerjee et al., 2023a) use a *similar* approach to obtain optimization guarantees (namely, the use of restricted strong convexity (RSC)) **does not imply** that our work is an "extension" of the other. If that were the case, this would be similar to saying that *every* work that uses the Neural Tangent Kernel (NTK) approach to obtain optimization guarantees is an extension of the seminal paper (Jacot et al., 2018). We remark that Section 2 lists several works that are based on the NTK approach, yet they are not considered a mere extension of each other: they study networks with different architectures or activation functions. Therefore, we feel that this is an unfair criticism of our work, and ignores the extensive set of new results and analysis we have presented for FNOs and DONs (see details in Appendices D and E). 3. Finally, as carefully explained in Section 1 (lines 063 to 081 of the first column of page 2) and Section 7, the **structural differences** between the neural operators (DONs and FNOs) and FFNs lead to a series of **challenges** in their analysis compared to FFNs. 
These challenges are reflected in our **involved and non-trivial** analysis: in the case of DONs, they stem from the empirical loss Hessian structure being substantially more complex due to the interactions between two neural networks; in the case of FNOs, they stem from the operator's Hessian structure being substantially more complex due to the Fourier transformations inside the activations. Proving that the RSC method still applies to these neural operators despite the architectural differences **does not** automatically follow from any prior work in the literature, including (Banerjee et al., 2023a). Indeed, before our paper, there was no evident reason to anticipate that the RSC method would be general enough to provide optimization guarantees to neural operators. For all the presented reasons, we respectfully reiterate that our paper is not incremental to (Banerjee et al., 2023a)—please see the details in our Appendices D (for DONs) and E (for FNOs). We remark that, to the best of our knowledge, our work is the first one showing optimization guarantees for operator learning. We are grateful to the reviewer for considering the justification we just provided. We hope our arguments will lead to a more positive assessment of our paper. --- Finally, the code is currently sitting in a private repository and we will provide the code used in our experiments through a public repository if our paper is accepted. We will also address the typo raised by the reviewer.
TTFSFormer: A TTFS-based Lossless Conversion of Spiking Transformer
Accept (poster)
Summary: The work presents a strategy to convert trained ANN transformer models into time-to-first-spike coded SNNs. Specifically, a neuron dynamics model with two flexible kernel functions is used to accurately represent all transformer model operands. It is shown that crucial operations such as SiLU/GELU activation functions, softmax, and LayerNorm can all be expressed with suitable kernels, yielding a conversion algorithm for the entire transformer architecture. The conversion is put to the test empirically, demonstrating reduced loss of precision compared to a wide range of baselines on the ImageNet-1k vision benchmark. Finally, the method is shown to be robust to imprecise spike times and theoretically more energy efficient. **Update after rebuttal** The authors' response has answered my remaining questions. My rating still stands, good paper! Claims And Evidence: The proposed conversion strategy unlocks significant performance gains over prior baselines as it manages to transform all relevant operators into spiking dynamics without significant loss of accuracy. TTFS coding is a promising strategy as it typically requires fewer energy-consuming spikes than the more commonly explored rate encoding. However, the advertised "perfect energy efficiency" (L434) may not straightforwardly translate into a real-world implementation. Methods And Evaluation Criteria: ViT on ImageNet-1k represents a reasonable proxy for a vision task performance that could be of interest for neuromorphic acceleration. Theoretical Claims: I have not carefully checked the conversion proofs. Experimental Designs Or Analyses: The accuracy comparison against the baselines is sound. For the timing robustness in Figure 4, it is unclear why the plot evaluates for powers of 2 but stops at p=384. Furthermore, it is hard to interpret p in terms of actual inference latency. Is there a way to quantify the minimum processing latency of a TTFSFormer neuron block? 
Supplementary Material: I have reviewed the pseudo-algorithm in the appendix but have not checked the proofs. Relation To Broader Scientific Literature: Various methods for spiking conversions of transformer architectures have been proposed; this work goes beyond existing strategies by focusing on time-to-first-spike encoding for transformer models. In particular, it addresses lacking support for non-ReLU activation functions, the non-linear softmax operator of the attention mechanisms, as well as missing TTFS-implementations of Layer Norm. Essential References Not Discussed: None Other Strengths And Weaknesses: The paper lacks implementation details, and no source code was provided. I encourage the authors to provide more implementation and hardware details of the experiments and consider releasing source code with the publication. Other Comments Or Suggestions: To aid the reader, it's worth expanding on the energy estimation strategy described in Section 5.2. Specifically, how are the OPs counted for a given transformer architecture? It may be worth including the ANN baselines before conversion in Table 1. L429 "few" -> little? L430 "Making" -> Taking? Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your suggestions. We would like to address your concerns and questions in the following. ### Code implementation Thank you for your question. Our code will be released with the publication. ### Is there a way to quantify the minimum processing latency of a TTFSFormer neuron block? Thank you for your question. The precision $p$ represents the precision of time in hardware implementation. While the accuracy of our converted model drops with lower precision, the problem can be solved with fine-tuning, as explored in previous works ([Stanojevic et al. 2024]). The precision does not reflect the latency of the inference, but a shorter latency leads to a lower precision on a given hardware. However, there are still other techniques that can reduce the hardware latency. In order to further reduce the hardware latency, we can follow the method proposed in [Park et al. 2020], in which the emitting stage and receiving stage can be overlapped in order to reduce the latency. This requires a balance between the conversion loss and latency. Since our work mainly focuses on a lossless conversion method of Transformer architecture, we leave the topic for future works. ### How are the OPs counted for a given transformer architecture? Thank you for your question. We will include a more detailed analysis in future versions of our paper. We briefly sketch the computing process in the following. Take one block in ViT as an example. - In the converted SNN, we can divide the operations into two categories: non-linear operations and linear operations. **Linear operations** take place in neurons with dynamics described in Corollary 4.4 and 4.6, which is exactly what most previous works on TTFS-based SNN use. **Non-linear operations** are new in our work, and the neurons may require more energy, depending on the hardware implementation. - In the original ANN, the energy consumption is measured by the number of multiplication and addition operations. 
All operations are shown below (d = 384 or 768 or 1024 in ViT-S/B/L respectively). We categorize layers according to their type. | Type | Notation of Parameters | Shapes | SNN Ops | ANN Ops | | :---: | :---: | :---: | :---: | :---: | | LayerNorm | (M, N): Matrix $X \in \mathbb{R}^{M \times N}$ | (197,d), (197,d) | $2 \cdot 197 \cdot (9d+1)$ | $2\cdot 197\cdot d$ | | MatMul | (M, K, N): Matrix $A \in \mathbb{R}^{M \times K},B \in \mathbb{R}^{K \times N}$ | (197,d,197), (197,197,d) | $197 \cdot [197 \cdot (3d + 2) + d \cdot (3\cdot 197+2)]$ | $2\cdot 197^{2}\cdot d$ | | SoftMax | (M, N): Matrix $X \in \mathbb{R}^{M \times N}$ | (197,197) | $6 \cdot 197^{2}$ | $197^{2}$ | | GeLU | (M, N): Matrix $X \in \mathbb{R}^{M \times N}$ | (197,4d) | $197 \cdot 4d$ | $197 \cdot 4d$ | | Linear | (M, K, N): Matrix $X \in \mathbb{R}^{M \times K}$ and weight $W \in \mathbb{R}^{K \times N}$ | (197,d,3d),(197,d,4d),(197,4d,d) | $11\cdot 197\cdot d^{2}$ | $11\cdot 197\cdot d^{2}$ | (The number of operations in each SNN layer is analyzed in section 4.3 and appendix C.) By summing up operations in each type of layer, we can finally get the estimated OPs in ANN and SNN. Thus, we have $197 \cdot (1206d+1578)$ non-linear operations and $197 \cdot 11d^{2}$ linear operations in one block. Now we can estimate the energy efficiency by (we divide the energy by 197 simultaneously) $$ \eta = \frac{\text{Energy of SNN}}{\text{Energy of ANN}} = \frac{(1206d+1578)E_{nl} + 11d^{2}E_{l}}{(11d^{2}+400d+197)E_\text{MAC}} \approx \frac{1206 \cdot E_{nl} + 11d \cdot E_{l}}{(11d + 400)E_\text{MAC}} $$ where $E_\text{MAC}$ is the energy consumption of one multiply-add operation in ANN, $E_{l}, E_{nl}$ is the energy consumption of one linear and non-linear operation in SNN. We use the settings in [Horowitz, 2014], namely $E_{l} = 0.9 \text{pJ},E_\text{MAC}=4.6\text{pJ}$. We assume that $E_{nl} = kE_{l}$, where $k$ is a constant depending on the hardware implementation. 
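As a quick sanity check of the ratio above, the following sketch (our own illustrative code, not the authors' implementation; the constants are the ones quoted from [Horowitz, 2014], and $k = E_{nl}/E_{l}$ is the assumed hardware-dependent factor) evaluates $\eta$ for a ViT block:

```python
# Illustrative sketch of the SNN/ANN energy ratio derived above.
# Constants (pJ) follow Horowitz (2014); k = E_nl / E_l is the assumed,
# hardware-dependent relative cost of a non-linear spiking operation.
E_MAC = 4.6  # pJ per ANN multiply-accumulate
E_L = 0.9    # pJ per linear SNN operation

def energy_ratio(d: int, k: float) -> float:
    """SNN/ANN energy ratio for one ViT block with hidden size d."""
    snn = (1206 * d + 1578) * k * E_L + 11 * d**2 * E_L
    ann = (11 * d**2 + 400 * d + 197) * E_MAC
    return snn / ann

# ViT-L (d = 1024): eta = 0.189 + 0.020*k, i.e. roughly 20-30% of ANN energy
for k in (1, 2, 3):
    print(f"d=1024, k={k}: eta = {energy_ratio(1024, k):.3f}")
```

For $k$ between 1 and 3 this reproduces the quoted 20\%–30\% range.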
Take ViT-L as an example: letting $d=1024$, we have $$ \eta = 0.189 + 0.020k $$ In our paper, we assume that $k=1$, i.e. the non-linear neuron has the same cost as the linear neuron. We think that a reasonable estimate would be $E_{l} < E_{nl} < E_\text{MAC}$, for example $k=2$ or $k=3$. Although $k$ may vary according to the hardware implementation, we can conclude that the energy efficiency is between 20\% and 30\%. [Stanojevic et al. 2024] Stanojevic, A., Woźniak, S., Bellec, G. et al. High-performance deep spiking neural networks with 0.3 spikes per neuron. Nat Commun 15, 6793 (2024). [Park et al. 2020] S. Park, S. Kim, B. Na and S. Yoon, "T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding," 2020 57th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 2020, pp. 1-6 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses! **Code release**: I am pleased you plan to release the code. **Minimum processing latency:** Thank you for clarifying; I've re-read the section and it was indeed a misunderstanding on my part. **OPs count**: I appreciate the detailed analysis, which is very helpful in building some intuition about potential hardware efficiency.
Summary: This paper proposes TTFSFormer, a novel method for converting Transformer architectures into Spiking Neural Networks (SNNs) using Time-to-First-Spike (TTFS) coding. The key innovation lies in designing generalized spiking neurons that address the limitations of prior TTFS-based approaches, particularly their inability to handle nonlinear operations in Transformers (e.g., softmax, LayerNorm). By introducing flexible input/output kernel transformations and a zero-reference time mechanism, TTFSFormer enables lossless conversion of pre-trained Transformers (e.g., ViT, EVA) into SNNs with minimal accuracy loss (<0.1%) and significant energy savings (~20% of ANN energy). Experiments on ImageNet-1K demonstrate state-of-the-art performance among SNN Transformers, outperforming both rate-coding and direct-training methods. Claims And Evidence: The assertion of lossless conversion (<0.1% accuracy drop) is validated by Table 1 (e.g., ViT-L/16: 85.8% vs. ANN’s 86.0%). Energy efficiency claims are backed by Table 2 (20% ANN energy). Methods And Evaluation Criteria: The method is well-suited for ANN-to-SNN conversion. TTFS coding and kernel transformations directly address nonlinearity challenges in Transformers. ImageNet-1K is a standard benchmark, Theoretical Claims: Theorems 4.1–4.3 and proofs in Appendix B are mathematically sound. Corollaries logically extend the theorems to identity transforms. Experimental Designs Or Analyses: Soundness: Training protocols (pre-trained ANN weights, SGD) align with ANN-to-SNN literature. Energy estimation (Eq. 28) follows established metrics (Horowitz, 2014). Limitation: Robustness tests (Fig. 4) use simulated precision; real-world hardware noise is not considered. Supplementary Material: There is no Supplementary Material. Relation To Broader Scientific Literature: TTFSFormer bridges two critical gaps: TTFS Coding: Extends Stanojevic et al. (2023) to Transformers, enabling nonlinear operations. 
SNN Transformers: Outperforms rate-coding methods (e.g., STA) in energy efficiency while matching ANN accuracy. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. This is the first work to successfully convert Transformer architectures into SNNs using TTFS coding, addressing a critical gap in temporal coding methods. 2. The proposed neurons with input/output kernel transformations (Theorems 4.1, 4.3) enable precise representation of nonlinear operations (e.g., softmax, LayerNorm), a major advancement over prior TTFS methods limited to linear mappings. 3. TTFSFormer achieves ~80% energy reduction compared to ANNs while maintaining near-identical accuracy (e.g., 85.8% for ViT-L/16 vs. ANN’s 86.0%). 4. Rigorous proofs (e.g., Theorem 4.2, Corollary 4.4) validate the equivalence between ANN activations and TTFS-based spike timing, ensuring lossless conversion. Weaknesses: 1. Table 1 lacks comparisons to very recent TTFS-based methods or advanced rate-coding SNN Transformers. 2. Discussing hardware implementation challenges and latency trade-offs of the proposed model could enhance the future work. 3. TTFS offers an advantage in terms of low firing rates, hence the paper could be supplemented with a firing-rate comparison and analysis. Other Comments Or Suggestions: None. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We are encouraged that you found our work novel with good results. We would like to address your concerns and questions in the following. ## 1. Table 1 lacks comparisons to very recent TTFS-based methods or advanced rate-coding SNN Transformers. Thanks for your suggestion. We add the comparison with more recent works as follows. | Work | Method | ANN Model | Accuracy | | ---- | ------ | --------- | -------- | | Adaptive Calibration [Wang et al. 2025] | burst, conversion | ViT | 77.09\% | | E-Spikeformer [Yao et al. 2025] | rate, direct training | - | 86.2\% | | QKFormer [Zhou et al. 2024] | rate, direct training | - | 85.65\% | | Spike-driven V2 [Yao et al. 2024] | rate, direct training | - | 80.0\% | | **Ours** | TTFS, conversion | ViT-L/EVA02-L | 85.8\%/90.0\% | In **Table 2** of our paper and the table above, direct training refers to training the SNN directly through the surrogate gradient method. In **Table 2** of our paper, work not marked with "TTFS-based" is a rate-based method by default. [Wang et al. 2025] Ziqing Wang and Yuetong Fang and Jiahang Cao and Hongwei Ren and Renjing Xu, "Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Network", AAAI 2025. [Yao et al. 2025] Man Yao, et al. "Scaling spike-driven transformer with efficient spike firing approximation training." IEEE Transactions on Pattern Analysis and Machine Intelligence (2025). [Zhou et al. 2024] Chenlin Zhou et al. "QKFormer: Hierarchical Spiking Transformer using Q-K Attention", NeurIPS 2024. [Yao et al. 2024] Man Yao, et al. "Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips.", ICLR 2024. ## 2. Discussing hardware implementation challenges and latency trade-offs of the proposed model could enhance the future work. 
### Hardware Implementation Challenges We think that potential challenges in hardware implementation include: - Designing special hardware that is compatible with the non-linear neurons proposed in our method. With dynamics different from traditional LIF models, the hardware needs a special design in order to fully demonstrate the energy efficiency of our proposed methods. - Dealing with hardware-related accuracy loss. Although we have proved that our conversion method is theoretically lossless, there will inevitably exist hardware-specific loss, including noise in membrane potential and time bias of TTFS spikes. We have discussed the robustness of our proposed method in Section 5.3, and more analysis is needed for hardware implementation. Fine-tuning is probably needed if the hardware loss is too large. ### Latency Trade-offs In our proposed method, there are two separate stages for one layer, namely, the emitting stage and the receiving stage. As proposed in [Park et al. 2020], the two stages can be overlapped in order to reduce the latency, requiring a balance between the conversion loss and latency reduction. Since our work mainly focuses on a lossless conversion method for the Transformer architecture, we leave the topic for future work. We will add more discussion in the revised version. ## 3. TTFS could provide an advantage in terms of low firing rates, hence a firing-rate comparison and analysis could be supplemented. Thank you for your inspiring question. Compared with previous work on TTFS, each non-linear neuron emits exactly one spike every forward pass. The firing rate is relatively lower than in rate-based methods, since multiple time steps are needed in a rate-based SNN, leading to multiple spikes being emitted per neuron. However, the firing rate is higher than in previous work on TTFS-based converted CNNs, since we cannot simply ignore negative values in networks with non-ReLU activation functions and attention mechanisms. 
The firing rate can be further cut down. For example, for the SiLU activation function, we can regard $\mathrm{SiLU}(x) \approx 0$ for $x \le -16$ and simply emit no spike, since $\mathrm{SiLU}(-16) \approx -1.8 \times 10^{-6}$ is close to 0. In this way, we can still keep the information carried by values around zero while enhancing energy efficiency. [Park et al. 2020] S. Park, S. Kim, B. Na and S. Yoon, "T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding," 2020 57th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 2020, pp. 1-6
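The SiLU tail approximation mentioned above is easy to verify numerically; the snippet below (our own sketch, not the authors' code) checks the quoted value of $\mathrm{SiLU}(-16)$:

```python
import math

def silu(x: float) -> float:
    """SiLU(x) = x * sigmoid(x), the activation discussed above."""
    return x / (1.0 + math.exp(-x))

# For x <= -16 the output magnitude is below ~1.8e-6, so emitting no spike
# for such inputs changes the layer output only negligibly.
print(silu(-16.0))  # ~ -1.8e-6, the value quoted in the rebuttal
```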
Summary: The paper introduces a novel approach, TTFSFormer, for converting Transformer architectures into Spiking Neural Networks (SNNs) with Time-to-First-Spike (TTFS) coding. The method addresses the challenge of preserving high accuracy while significantly reducing energy consumption. The authors propose new neuron models and detailed conversion mechanisms to accommodate non-linear activations and complex components like the attention mechanism, which is a significant step in adapting SNNs to Transformer models. Experimental results on the ImageNet-1k dataset show that the method performs comparably to the original ANN architecture with minimal accuracy loss and lower energy costs. Claims And Evidence: no Methods And Evaluation Criteria: Novelty: This is the first work to focus on converting Transformer architectures to SNNs using TTFS encoding, which addresses both energy efficiency and accuracy preservation, two key challenges in SNNs. Theoretical Claims: The paper makes an important theoretical contribution by introducing generalized nonlinear neurons for the conversion process, significantly expanding the applicability of TTFS coding to complex architectures like Transformers. Experimental Designs Or Analyses: The authors provide strong experimental results, showing that their method performs well on multiple Transformer architectures, including ViT and EVA, with minimal accuracy loss (below 0.1%). Supplementary Material: Energy Efficiency: The method demonstrates substantial improvements in energy consumption compared to traditional methods, which is a crucial factor for practical deployment of SNNs. Relation To Broader Scientific Literature: While the paper compares TTFSFormer to other SNN methods, there is insufficient comparison with other Transformer-to-SNN methods that leverage different spiking encodings (e.g., rate coding or surrogate gradient methods). 
A more thorough comparative analysis with state-of-the-art techniques would enhance the paper’s contribution. Essential References Not Discussed: no Other Strengths And Weaknesses: Hardware Realism: The paper focuses primarily on the theoretical model and simulation results but lacks in-depth analysis of hardware implementation or real-world constraints, such as precision or latency in hardware. While the robustness of the model is mentioned, more detailed hardware evaluation would improve the practical impact of the work. Scalability Concerns: While the method performs well on the tested architectures, the scalability of TTFSFormer to larger, more complex Transformers with more layers and parameters remains unclear. An exploration of how the method scales with model size and complexity would be valuable. Other Comments Or Suggestions: Limited Novelty in SNN Design: The paper introduces some interesting neurons and neuron dynamics, but the fundamental idea of TTFS-based SNNs is not entirely new, as it has been explored in other contexts. More emphasis on how the Transformer-specific components are handled would add more value to the novelty claim. Questions For Authors: no Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments. We would like to address your concerns in the following. ### Hardware Implementation Thanks for your question. Since rate-based SNN is currently the most popular coding method, existing neuromorphic chips are specially designed for rate-based algorithms. With existing hardware circuits, one possible workaround is to store the mapping of non-linear functions in hardware storage, and simulate the change of potential. This is easy to implement since the non-linear function is shared within one layer. We hope that there will be neuromorphic chips suitable for TTFS-based SNN models, which will better demonstrate the energy efficiency of temporal coding methods. We will add more discussion in the revised manuscript. ### Scalability Concerns Different from other tasks such as NLP, the size of vision models typically ranges from 80M (such as ViT-B) to 1000M. In order to better demonstrate the scalability of our proposed method, we test our method on **EVA-G** (Giant) with 1.01B parameters in total, which is one of the largest popular pre-trained vision transformers. The experimental results are shown in the table below. | | EVA-G (ANN) | EVA-G (SNN) | EVA-G (SNN, with precision=1024) | | --- | --- | --- | --- | | Top-1 accuracy | 88.882\% | 88.898\% | 88.160\% | | Top-5 accuracy | 98.678\% | 98.684\% | 98.440\% | The result shows that our conversion method is still virtually lossless when converting a large vision model, indicating that our method has excellent scalability in vision tasks. ### Limited Novelty in SNN Design We agree that the idea of time-to-first-spike (TTFS) coding has been put forward and explored in previous works. Many SNN algorithms, either direct training or conversion, have successfully created convolutional SNNs with much lower energy consumption and performance comparable to CNNs. However, we would like to clarify that our work is the first to focus on the transformer model with TTFS coding. 
Our work mainly focuses on analyzing and solving the challenges encountered in introducing transformer architecture into TTFS-based SNN, which has not been addressed in previous works. - Challenge 1: Transformer architecture involves plenty of non-linear operations, such as non-ReLU activations, attention, and LayerNorm. However, previous TTFS neuron dynamics are only able to represent piecewise linear functions (Section 3.3). - Our Solution 1: We proposed a non-linear neuron model, which introduces non-linearity in SNN. We have theoretically proved that our neuron model has a strong representation ability (Section 4.2). With the proposed neuron model, we can construct Transformer-specific components (in Section 4.3). - Challenge 2: Previous TTFS neurons have a small representation range. - Our Solution 2: We expand the representation range of TTFS neurons depending on the actual distribution of the input and output values. - Challenge 3: Accuracy of converted models. - Our Solution 3: We have theoretically proved that our conversion method is lossless. Moreover, we conduct experiments on various vision transformer models (ViT, EVA) with different sizes (from 80M to 1B), showing that our method can achieve SOTA performance in vision tasks. ### Insufficient Comparison Thanks for your suggestion. We add the comparison with more recent works as follows. | Work | Method | ANN Model | Accuracy | | ---- | ------ | --------- | -------- | | Adaptive Calibration [Wang et al. 2025] | burst, conversion | ViT | 77.09\% | | E-Spikeformer [Yao et al. 2025] | rate, direct training | - | 86.2\% | | QKFormer [Zhou et al. 2024] | rate, direct training | - | 85.65\% | | Spike-driven V2 [Yao et al. 2024] | rate, direct training | - | 80.0\% | | **Ours** | TTFS, conversion | ViT-L/EVA02-L | 85.8\%/90.0\% | In **Table 2** of our paper and the table above, direct training refers to training SNN directly through the surrogate gradient method. 
In **Table 2** of our paper, work not marked with "TTFS-based" is a rate-based method by default. [Wang et al. 2025] Ziqing Wang and Yuetong Fang and Jiahang Cao and Hongwei Ren and Renjing Xu, "Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Network", AAAI 2025. [Yao et al. 2025] Man Yao, et al. "Scaling spike-driven transformer with efficient spike firing approximation training." IEEE Transactions on Pattern Analysis and Machine Intelligence (2025). [Zhou et al. 2024] Chenlin Zhou et al. "QKFormer: Hierarchical Spiking Transformer using Q-K Attention", NeurIPS 2024. [Yao et al. 2024] Man Yao, et al. "Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips.", ICLR 2024.
Summary: This paper proposes an ANN-SNN conversion method for spiking Transformers based on the time-to-first-spike (TTFS) method. The authors first analyze the limitations of previous TTFS methods, and then propose a generalized TTFS neuron, which makes it easier to relate the Transformer to its SNN version. Experimental results on ViT and EVA models demonstrate the SOTA performance of the proposed method. Claims And Evidence: Yes, it is supported by rigorous theorem derivation and experimental analysis. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked the theorem and the proof. Experimental Designs Or Analyses: I have checked Section 5.1-5.3. Supplementary Material: I cannot find the Supplementary Material. Relation To Broader Scientific Literature: The contribution of this paper is related to spiking neural networks and low-power artificial intelligence. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The idea of using TTFS to implement a spiking Transformer is impressive, which can reduce energy consumption in neuromorphic chips. 2. The proposed method achieves SOTA performance with ViT and EVA models. 3. The authors provide rigorous theoretical analysis. Weaknesses: 1. The writing needs improvement. The article does not read very coherently, e.g., paragraphs 1-2 of the introduction. 2. The authors need to analyze the computational cost of the generalized TTFS neuron, especially compared to the LIF neuron. 3. This paper focuses on improving the classification performance of SNNs, which, in my view, is not the main strength of SNNs. The authors need to analyze other potential advantages. Actually, EVA can be used for other visual tasks. Can the proposed method be applied to other visual tasks? Other Comments Or Suggestions: See weaknesses. Questions For Authors: 1. Please clarify how to compute the energy consumption in Table 2. 2. Could the proposed method be used for other models, like CNN and ResNet? 
3. Can the proposed method be generalized to language tasks? I would like to see some discussion in this direction. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive and thoughtful comments. We are encouraged that you find our idea impressive. We are glad you agree that our method achieves SOTA performance with low energy consumption. We would like to address your concerns and questions in the following. ### 1. Please clarify how to compute the energy consumption in Table 2. Thanks for your question. Our energy estimation follows the methodology proposed by [Nitin Rathi and Kaushik Roy, 2020]. For clarity, we analyze operations in a ViT block, categorizing them into **linear** (standard TTFS neuron dynamics, as in prior work) and **non-linear** (novel to our method, with hardware-dependent energy costs). Below is the list of operations (where d=384 or 768 or 1024 for ViT-S/B/L respectively): | Type | Notation of Parameters | Shapes | SNN Ops | ANN Ops | | :---: | :---: | :---: | :---: | :---: | | LayerNorm | (M, N): Matrix $X \in \mathbb{R}^{M \times N}$ | (197,d), (197,d) | $2 \cdot 197 \cdot (9d+1)$ | $2\cdot 197\cdot d$ | | MatMul | (M, K, N): Matrix $A \in \mathbb{R}^{M \times K},B \in \mathbb{R}^{K \times N}$ | (197,d,197), (197,197,d) | $197 \cdot [197 \cdot (3d + 2) + d \cdot (3\cdot 197+2)]$ | $2\cdot 197^{2}\cdot d$ | | SoftMax | (M, N): Matrix $X \in \mathbb{R}^{M \times N}$ | (197,197) | $6 \cdot 197^{2}$ | $197^{2}$ | | GeLU | (M, N): Matrix $X \in \mathbb{R}^{M \times N}$ | (197,4d) | $197 \cdot 4d$ | $197 \cdot 4d$ | | Linear | (M, K, N): Matrix $X \in \mathbb{R}^{M \times K}$ and weight $W \in \mathbb{R}^{K \times N}$ | (197,d,3d),(197,d,4d),(197,4d,d) | $11\cdot 197\cdot d^{2}$ | $11\cdot 197\cdot d^{2}$ | (The number of operations in each SNN layer is analyzed in section 4.3 and appendix C.) 
Now we can estimate the energy efficiency: $$ \eta = \frac{\text{Energy of SNN}}{\text{Energy of ANN}} = \frac{(1206d+1578)E_{nl} + 11d^{2}E_{l}}{(11d^{2}+400d+197)E_\text{MAC}} \approx \frac{1206 \cdot E_{nl} + 11d \cdot E_{l}}{(11d + 400)E_\text{MAC}} $$ where $E_\text{MAC}$ is the energy consumption of one multiply-add operation in the ANN, and $E_{l}, E_{nl}$ are the energy consumptions of one linear and one non-linear operation in the SNN, respectively. We use the settings in [Horowitz, 2014], namely $E_{l} = 0.9 \text{pJ},E_\text{MAC}=4.6\text{pJ}$. We assume that $E_{nl} = kE_{l}$, where $k$ is a constant depending on the hardware implementation. Taking ViT-L as an example, by letting $d=1024$, we have $\eta = 0.189 + 0.020k$. In our paper, we assume that $k=1$, i.e., the non-linear neuron has the same cost as the linear neuron. We think a reasonable estimate would be $E_{l} < E_{nl} < E_\text{MAC}$, for example $k=2$ or $k=3$. Although $k$ may vary with the hardware implementation, we can conclude that the energy efficiency is between 20\% and 30\%. ### 2. Could the proposed method be applied to other models, like CNNs and ResNets? Thanks for your question. Our work generalizes prior TTFS-based methods from CNNs (consisting of linear and convolutional layers and ReLU) to Transformers. For CNN/ResNet architectures, our model reduces to the linear neuron dynamics described in **Corollary 4.4** and **4.6**, which is essentially what existing TTFS approaches (e.g., [Stanojevic et al. 2024]) implement. While our method is compatible with CNN architectures, it would not introduce novel improvements for CNNs. ### 3. Can the proposed method be generalized to language tasks? To other visual tasks? Thanks for your inspiring discussion. While our experiments focus on vision tasks, the method is theoretically compatible with standard Transformer architectures in NLP. However, the generalization ability requires further experiments.
For example, LLMs are typically larger than vision models, which poses challenges to the scalability of our method. We view this as a promising direction that deserves future work. Besides, our method can be applied to regression-based vision tasks like object detection. We will add more discussion. ### 4. Computational cost of the generalized TTFS neuron The energy consumption of our non-linear neuron will be similar to or slightly higher than that of the LIF neuron. LIF neurons can be regarded as a special case of the non-linear model in which the potential changes $\eta,\psi$ are exponential. When the potential change is not exponential, a special hardware design is required, which is expected to have a similar energy cost to LIF neurons. [Nitin Rathi and Kaushik Roy, 2020] Nitin Rathi and Kaushik Roy. 2020. Diet-SNN: Direct Input Encoding with Leakage and Threshold Optimization in Deep Spiking Neural Networks. [Horowitz, 2014] M. Horowitz, "1.1 Computing's energy problem (and what we can do about it)," 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), San Francisco, CA, USA, pp. 10-14. [Stanojevic et al. 2024] Stanojevic, A., Woźniak, S., Bellec, G. et al. High-performance deep spiking neural networks with 0.3 spikes per neuron. Nat Commun 15, 6793 (2024).
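As a numeric sanity check on the estimate in point 1 above, the per-block operation counts and the resulting efficiency ratio can be reproduced in a few lines. This is a sketch under the rebuttal's stated assumptions (197 tokens per ViT block, $E_l = 0.9$ pJ, $E_\text{MAC} = 4.6$ pJ, $E_{nl} = kE_l$); the function name is ours:

```python
def energy_ratio(d, k, E_l=0.9, E_mac=4.6):
    """eta = SNN energy / ANN energy for one ViT block with hidden size d.

    Op counts follow the table in the rebuttal; E_nl = k * E_l.
    """
    N = 197  # number of tokens
    # Non-linear SNN ops: LayerNorm + MatMul + SoftMax + GeLU
    nl_ops = (2 * N * (9 * d + 1)
              + N * (N * (3 * d + 2) + d * (3 * N + 2))
              + 6 * N * N
              + N * 4 * d)
    lin_ops = 11 * N * d * d          # linear-layer SNN ops
    ann_ops = (2 * N * d              # LayerNorm
               + 2 * N * N * d        # MatMul
               + N * N                # SoftMax
               + N * 4 * d           # GeLU
               + 11 * N * d * d)      # Linear
    return (nl_ops * k * E_l + lin_ops * E_l) / (ann_ops * E_mac)

# ViT-L (d = 1024): recovers eta ~ 0.189 + 0.020 * k as stated above.
for k in (1, 2, 3):
    print(f"k={k}: eta = {energy_ratio(1024, k):.3f}")
```

With $k$ between 1 and 3 this lands in the 20-30% range quoted in the rebuttal.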
Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning
Accept (oral)
Summary: The paper builds upon previous work on scaling model size in RL [1], and extends its limit with simple layer-wise random pruning at initialization [2]. This is primarily verified in state-based RL (DMC), where a pruned large network greatly surpasses a dense network with the same number of trainable parameters. This is further extended to vision-based RL and the streaming RL setting, which all show similar tendencies. Finally, they provide an extensive analysis of the benefits of one-shot pruning, specifically in the regimes of representation capacity (s-rank), plasticity (dormant ratio, gradient norm, parameter norm), and gradient interference. [1] SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning., ICLR'25. [2] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training., ICLR'22. Claims And Evidence: The effectiveness of one-shot pruning has been shown by prior work: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training (ICLR'22). Methods And Evaluation Criteria: The proposed method (one-shot pruning) is well verified to 'unlock parameter scaling in RL', as shown by the consistent improvement in challenging DMC Hard tasks, which prior work (SimBa) fails to follow. Theoretical Claims: No theoretical claims were made. Experimental Designs Or Analyses: Authors clearly show the benefits of one-shot pruning with metrics commonly used in diverse fields: 1. Higher s-rank (larger representation capacity) 2. Lower dormant ratio (better plasticity preservation [1]) 3. Larger gradient norm (better plasticity preservation) 4. Smaller parameter norm (better plasticity preservation [2]) 5. Reset doesn't help (proof of plasticity preservation). 6. Higher simplicity bias score [3] 7. Gradients closer to orthogonal (less gradient interference [4]) [1] The Dormant Neuron Phenomenon in Deep Reinforcement Learning., ICML'23.
[2] Normalization and effective learning rates in reinforcement learning., NeurIPS'24. [3] SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning., ICLR'25. [4] Understanding Plasticity in Neural Networks., ICML'23. Supplementary Material: Yes. Reviewed all sections. Relation To Broader Scientific Literature: Unlocking parameter scaling in RL is a contribution towards a large foundational model in RL. Essential References Not Discussed: I think most references in mind were present. However, the works on iterative pruning could be included: 1. In deep reinforcement learning, a pruned network is a good network., ICML’24. 2. Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective., NeurIPS’21. Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: Overall, I think this paper is well-written, well-analysed with interesting results. Questions For Authors: 1. It is unclear to me how ‘large’ gradient norm could be a proof of better plasticity. I think it’s more persuasive to claim that a ‘stable’ gradient norm rather than the norm itself. If there’s a literature around the connection between gradient norm and plasticity, it’d be nice it they’re included in the Gradient Norm section. 2. On that note, what do you think about the gradient norms consistently increasing in Figure 5? It seems like they will keep increasing when trained beyond 1M steps, and I don’t think that’s something we’d want in terms of stability. 3. Slightly off topic: are there any intuition on why Erdos-Renyi ratio is superior to uniform or other layer-wise ratios? Why is it better to have the ratios linear to the number of input/output neurons rather than quadratic? Code Of Conduct: Affirmed. Overall Recommendation: 4
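For reference, the dormant-neuron ratio in item 2 of the list above has a simple operational definition. The sketch below assumes the thresholded normalized-score formulation from Sokar et al. (ICML'23); the toy activations and the threshold value are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def dormant_ratio(activations, tau=0.025):
    """Fraction of dormant neurons in one layer.

    `activations`: (batch, n_neurons) post-activation values.
    A neuron is dormant when its mean |activation|, normalized by the
    layer-average mean |activation|, falls at or below tau
    (definition assumed from Sokar et al., ICML'23).
    """
    scores = np.abs(activations).mean(axis=0)
    norm = scores / scores.mean()
    return float((norm <= tau).mean())

# Toy check: a layer where a quarter of the units are ReLU-dead (always 0).
rng = np.random.default_rng(0)
acts = rng.standard_normal((256, 100))
acts[:, :25] = 0.0
print(dormant_ratio(acts))  # the 25 silenced units out of 100 are dormant
```

A rising dormant ratio over training is the plasticity-loss signal the review refers to; sparse networks keeping it low is evidence item 2.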
Rebuttal 1: Rebuttal: Thank you for your positive review. We address your questions below. > Q1: Related work on iterative pruning We have already discussed relevant works on dynamic sparse training (DST) in RL within our Introduction and Related Work sections. In the next version, we plan to expand our coverage of more general topology evolution methods, including the papers you mentioned. Thank you for pointing out the valuable references. > Q2: The connection between gradient norm and plasticity Your question about distinguishing between 'large' and 'stable' gradient norms is insightful and helps us clarify an important relationship that wasn't fully explained in our manuscript. In Figure 5, we observe that large dense networks experience a rapid collapse in gradient norm early in training while performance remains poor. This premature descent into a low gradient state reflects the agent's inability to effectively learn from new experiences, directly corresponding to the slow performance improvement in these networks. By contrast, sparse networks maintain more consistent gradient signals throughout training. Therefore, it's this early declining pattern, rather than absolute magnitude, that indicates plasticity loss. Several studies have explored the relationship between gradient norm collapse and plasticity loss, including: - Figure 7 in *The Primacy Bias in Deep Reinforcement Learning, ICML 2022* - Section 4.2 in *Loss of Plasticity in Continual Deep Reinforcement Learning, CoLLAs 2023* - Figure 10 in *Weight Clipping for Deep Continual and Reinforcement Learning, RLC 2024* - Figure 5 in *Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning, ICLR 2024* How to precisely measure a network's plasticity loss level remains an open question, which is why current work tends to employ multiple different metrics simultaneously for a more comprehensive assessment. 
The lower dormant ratios, higher SRank values, and Reset diagnostic experiments in Section 4 collectively demonstrate how sparsity helps preserve network plasticity. Your question has prompted us to reconsider our presentation. Since gradient norm has confounding relationships with training stability, and our other metrics already establish the plasticity benefits of sparse networks, we plan to move this discussion to the appendix with more detailed explanations in our revised manuscript. > Q3: Further trend of gradient norms beyond 1M steps Your concern about continuously increasing gradient norms prompted us to extend our experiments to 2M steps on Dog Trot and Run tasks, shown in [`figure (anonymous link)`](https://anonymous.4open.science/r/ICML_2025_3388/GN.jpg), revealing two interesting patterns: In Dog Trot, where performance plateaus around 1M steps, gradient norms in sparse networks eventually peak and then gradually decrease, demonstrating natural stabilization. In Dog Run, where learning continues beyond 1M steps, gradient norms maintain moderate activity, corresponding directly with ongoing performance improvements. These patterns suggest sparse networks achieve an effective balance: they maintain sufficient gradient activity for learning without collapsing (unlike dense networks), while also not becoming unstable over time. We'll include this extended analysis in our revised manuscript's appendix to better illustrate the long-term benefits of network sparsity. > Q4: Comparison between Erdos-Renyi ratio and other layer-wise ratios Based on sparse training research in the broader deep learning field, the advantage of ER initialization likely stems from providing more balanced information flow across network layers. By scaling sparsity proportionally to the geometric mean of input/output dimensions (rather than uniformly), ER maintains approximately equal fan-in/fan-out ratios across layers, preventing bottlenecks in both forward and backward passes. 
The core purpose of our paper is to demonstrate how network sparsity as a fundamental property helps scale up DRL model size, rather than comparing different sparsification methods. We chose ER initialization because previous sparse training studies (in both DRL and supervised learning) have established its superiority over uniform initialization. To directly compare these approaches in our setting, we conducted additional experiments shown in the [`figure (anonymous link)`](https://anonymous.4open.science/r/ICML_2025_3388/ER_Uniform_Ratios.jpg). The results reveal that at lower sparsity levels (≤0.6), both initialization methods perform comparably. However, at higher sparsities (≥0.8), uniform layer-wise ratios exhibit a dramatic performance drop. These findings suggest that: 1. Sparsity itself benefits DRL scalability regardless of the specific layer-wise ratio configuration. 2. ER initialization provides greater robustness, especially at high sparsity levels, by creating a more balanced network topology that better maintains information flow.
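One-shot random pruning with ER layer-wise ratios, as compared against uniform ratios above, is simple enough to sketch. The density rule density ∝ (d_in + d_out)/(d_in · d_out) is the common convention from the sparse-training literature (e.g., Mocanu et al., 2018); the paper's exact implementation may differ, and the layer shapes below are toy values:

```python
import numpy as np

def er_masks(layer_shapes, target_density, rng=None):
    """One-shot random pruning at initialization with Erdos-Renyi ratios.

    Each (d_in, d_out) layer gets density proportional to
    (d_in + d_out) / (d_in * d_out), rescaled so the overall density
    equals `target_density`. Masks are sampled once and stay fixed.
    """
    rng = np.random.default_rng(rng)
    raw = np.array([(i + o) / (i * o) for i, o in layer_shapes])
    sizes = np.array([i * o for i, o in layer_shapes])
    # Scale factor so the size-weighted average density hits the target.
    eps = target_density * sizes.sum() / (raw * sizes).sum()
    densities = np.clip(eps * raw, 0.0, 1.0)
    return [rng.random((i, o)) < d
            for (i, o), d in zip(layer_shapes, densities)]

shapes = [(64, 256), (256, 256), (256, 64)]   # toy MLP shapes
masks = er_masks(shapes, target_density=0.2, rng=0)
overall = sum(m.sum() for m in masks) / sum(m.size for m in masks)
```

Note how the wide middle layer ends up sparser than the input/output layers, which is the balanced fan-in/fan-out property discussed above; a uniform ratio would prune all layers equally.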
Summary: This paper explores the scalability benefits of incorporating static network sparsity in deep reinforcement learning models. It introduces one-shot random pruning, where a fixed proportion of network weights are removed before training, leading to improved parameter efficiency compared to scaling up dense architectures. The analysis highlights that sparse networks enhance expressivity while alleviating optimization challenges such as plasticity loss and gradient interference. Furthermore, experiments on visual and streaming RL tasks demonstrate the robustness of sparsity, showcasing its consistent advantages across diverse reinforcement learning scenarios. ## update after rebuttal The authors provided additional results of DER on Atari, which also show trends consistent with the proposed findings. I'll keep my positive evaluation. Claims And Evidence: I'm not familiar with the analysis in Sections 4.3 and 4.4, but overall, I believe the corresponding experiments effectively support the claims presented in each section. Methods And Evaluation Criteria: The primary experiments are conducted on the four most challenging DMC tasks using two RL algorithms, SAC and DDPG. Additionally, the authors extend their evaluation to visual and streaming RL. Overall, the evaluation appears solid and well-founded. Theoretical Claims: N/A Experimental Designs Or Analyses: I appreciate that Appendix B provides detailed settings for the major experiments. While using eight random seeds per experiment is slightly lower than expected, it is still a reasonable choice. Supplementary Material: The supplementary material includes a code repository for reproducing the experimental results. However, due to computational constraints, I was unable to verify the code. The appendix contains detailed experiment settings and an extended related work. Relation To Broader Scientific Literature: This paper presents insightful and valuable findings on scaling deep reinforcement learning networks.
It demonstrates that incorporating static network sparsity through simple one-shot random pruning can enhance scalability and outperform dense counterparts. These findings are not only relevant to DRL but may also have broader implications for other learning paradigms, such as self-supervised learning and language model post-training. Essential References Not Discussed: No Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I appreciate the authors' detailed analysis of RL methods for continuous action control problems. It would be interesting to see these observations extended to value-based methods in discrete action spaces, such as DQN on Atari. Nonetheless, the current findings are valuable and worth sharing with the community. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review and positive evaluation of our work. > It would be interesting to see these observations extended to value-based methods in discrete action spaces, such as DQN on Atari. Nonetheless, the current findings are valuable and worth sharing with the community. We agree that extending our findings to value-based methods with discrete action spaces is crucial for demonstrating the broader applicability of network sparsity benefits. To address this, we've conducted new experiments on the Atari-100k benchmark using Data Efficient Rainbow DQN (DER) [1] as our baseline algorithm. The experiments compare performance improvements when scaling network width to 3x the default size with varying sparsity levels (0.0, 0.4, and 0.8). Results are shown in [`figure (anonymous link)`](https://anonymous.4open.science/r/ICML_2025_3388/atari_sparsity_results.pdf). *(Note: Due to time constraints before the first round response deadline, we've only completed experiments on 13/26 Atari tasks. We expect to update the figure with complete results within one more day.)* These results show that introducing network sparsity while scaling up model size produces similar benefits in discrete action tasks as observed in our continuous control experiments. This further validates the general effectiveness of network sparsity for scaling DRL models across different domains and algorithms. We note that the Atari-100k low-data regime may not fully demonstrate the benefits of scaling, and more comprehensive studies with longer training (e.g., 10M environment steps) would be valuable for future work [2]. Nevertheless, these preliminary results provide additional evidence supporting our main findings about network sparsity's role in unlocking DRL scaling potential. 
[1] When to use parametric models in reinforcement learning?, NeurIPS 2019 [2] In value-based deep reinforcement learning, a pruned network is a good network, ICML 2024 --- Rebuttal Comment 1.1: Comment: Thank you for the additional experimental results of DER. I appreciate your effort and will maintain my positive evaluation. Great work!
Summary: This paper uncovers an interesting finding: instead of pursuing more complex modifications, introducing static network sparsity alone can unlock further scaling potential beyond dense counterparts with state-of-the-art architectures. In experiments, they show that using only one-shot random pruning can achieve great performance on several commonly used benchmarks. Claims And Evidence: Yes Methods And Evaluation Criteria: Their method makes sense for the problem. Theoretical Claims: The theory in the paper is solid. Experimental Designs Or Analyses: I checked the experimental designs, and I think they are valid. Supplementary Material: All Relation To Broader Scientific Literature: I think this paper makes a general contribution to the study of scaling laws on the path toward AGI. Essential References Not Discussed: I think the related works analyzed in this paper are wide-ranging. Other Strengths And Weaknesses: 1) Well-written, clear summary 2) complete experimental analysis 3) rich and solid visualization metrics testing 4) makes an important contribution to the understanding of scaling laws in the deep RL community Other Comments Or Suggestions: One thing I did not find in the paper is which part of the actor-critic (AC) architecture is more important -- the actor or the critic. What would happen to the learning ability if you only pruned the critic or the actor? Does a sparse actor play a key role? Questions For Authors: Please see above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive evaluation of our work. > Q: What would happen to the learning ability when you only prune the critic or actor? Does a sparse actor play a key role? This is an excellent question about the relative importance of sparsity in different components of actor-critic architectures. We conducted additional experiments comparing four configurations: dense networks, sparse actor only, sparse critic only, and sparse for both actor and critic. Our results shown in [`figure (anonymous link)`](https://anonymous.4open.science/r/ICML_2025_3388/Actor%20vs%20Critic.jpg) clearly show that applying sparsity to both actor and critic networks yields substantially better performance than either partial approach. Interestingly, when sparsity is applied to only one component (either actor or critic), the scaling curves closely resemble those of the dense baseline. This suggests that scaling benefits emerge from a balanced application of sparsity across the entire architecture rather than from any single component. We believe this occurs for several reasons: 1. Critics require sparsity to avoid optimization pathologies during online TD learning. As demonstrated in Section 4, scaling up dense DRL models leads to severe plasticity loss in critics and diminished value representation capacity. 2. The need for sparse actors aligns with previous findings [1,2] showing that actors are particularly sensitive to network scaling. In actor-critic methods, since actor learning depends on critic outputs, unnecessarily complex actor networks can impede performance. The specific mechanisms behind sparse actor benefits warrant further investigation. 3. The balance between actor and critic parameters is crucial. Our experiments use the SimBa architecture, which carefully established optimal component sizing [1] - in the default 4.51M parameter configuration, the critic accounts for 4.34M parameters. 
When scaling our networks, we proportionally increased both actor and critic sizes, already accounting for their different representational capacity requirements. By maintaining this ratio during scaling, applying sparsity to both components becomes necessary to preserve the architectural balance that makes SimBa effective. [1] SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning, ICLR 2025 [2] Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control, NeurIPS 2024
Summary: This paper shows that current deep RL architectures suffer performance decreases when scaling the network in width or depth. Introducing sparsity, in the form of a fixed mask over the weights, is able to resolve this issue. The source of the benefits of sparsity is investigated in terms of representational capacity, plasticity, and gradient interference. Further experiments demonstrate the utility of the method in visual RL and streaming RL. Claims And Evidence: This paper has very comprehensive experiments and does an excellent job at demonstrating the usefulness of introducing sparsity. The variety of ablations and analyses help support the main points and also give some nice insights into why this method may be working. Methods And Evaluation Criteria: The benchmarks chosen are standard and appropriate for this setting. Theoretical Claims: N/A Experimental Designs Or Analyses: Extensive analyses are done on various aspects of the method (sparsity ratio, effect of scale) with appropriate experimental choices. Deeper looks into the effect on metrics such as representational capacity, plasticity (dormant neurons, gradient norms), gradient interference, and simplicity bias give a more comprehensive view of the effects of sparsity. Supplementary Material: I read the related work and some of the details of the experimental setups. Relation To Broader Scientific Literature: This paper makes a surprising observation: static sparsity can result in large performance gains and scaling. This finding could make RL researchers rethink how to design new network architectures, and I think these results would be of great interest to the RL community. Essential References Not Discussed: None Other Strengths And Weaknesses: The presentation is very clear and visually pleasing, with nice use of "callout blocks" of different colors to highlight certain takeaways or important points.
The writing and organization are also excellent, and the paper is information-dense but still easy to follow. A weakness of the paper is that most of the experiments are conducted on four environments from DMC, which may impact the generalizability of the analyses. There are other experiments with visual and streaming RL, though, which can support the idea that sparsity would still be useful in other settings. Overall, I think the current experiments are adequate. Other Comments Or Suggestions: None Questions For Authors: Some clarification questions: - Did you try the more naive approach of a fixed sparsity ratio for every layer rather than the Erdos-Renyi ratio? Is this ineffective or only less effective? - In the Lottery Ticket Hypothesis paper, the authors do not observe any performance benefits when introducing sparsity with random weights (rather than the winning ticket). How would you reconcile their result with this paper's results, where large benefits seem to be observed? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: > **Q1**: Limited DMC environments affecting generalizability We acknowledge the reviewer's concern that most experiments are conducted on DMC, which might impact the generalizability of our analyses. To address this, we've conducted new experiments on the Atari-100k benchmark using Data Efficient Rainbow DQN (DER) [1] as our baseline algorithm. The experiments compare performance improvements when scaling network width to 3x the default size with varying sparsity levels (0.0, 0.4, and 0.8). Results are shown in [`figure (anonymous link)`](https://anonymous.4open.science/r/ICML_2025_3388/atari_sparsity_results.pdf). These results show that introducing network sparsity while scaling up model size produces similar benefits in discrete action tasks as observed in our continuous control experiments. This further validates the general effectiveness of network sparsity for scaling DRL models across different domains and algorithms. We note that the Atari-100k low-data regime may not fully demonstrate the benefits of scaling, and more comprehensive studies with longer training would be valuable for future work. Nevertheless, these preliminary results provide additional evidence supporting our main findings about network sparsity's role in unlocking DRL scaling potential. [1] When to use parametric models in reinforcement learning?, NeurIPS 2019 > **Q2**: Fixed uniform sparsity ratios vs. Erdos-Renyi ratios Based on the reviewer's question, we conducted new experiments comparing the naive approach of fixed/uniform sparsity ratios across all layers with the Erdos-Renyi (ER) ratio. The results shown in the [`figure (anonymous link)`](https://anonymous.4open.science/r/ICML_2025_3388/ER_Uniform_Ratios.jpg) reveal that at lower sparsity levels (≤0.6), both initialization methods perform comparably, while at higher sparsities (≥0.8), uniform layer-wise ratios exhibit a dramatic performance drop. 
The advantage of ER initialization stems from its ability to provide more balanced information flow across network layers. By scaling sparsity proportionally to the geometric mean of input/output dimensions, ER maintains approximately equal fan-in/fan-out ratios across layers, preventing bottlenecks that emerge with uniform sparsity at high sparsity levels where network connectivity becomes critical for information propagation. We want to emphasize that our paper's focus is on demonstrating how network sparsity as a fundamental property enables DRL model scaling, rather than identifying the optimal sparse topology. The fact that uniform sparsity at appropriate levels also improves performance further supports our main claim that sparsity itself (regardless of specific implementation) is a key enabler for unlocking the scaling potential of DRL networks. > **Q3**: Random sparsity benefits vs. Lottery Ticket Hypothesis findings The reviewer raises an important question about reconciling our findings with the Lottery Ticket Hypothesis (LTH). The key difference lies in the fundamentally different problem settings and objectives: First, we must distinguish between previous sparse training studies and our work. Earlier studies on sparse training (including LTH) were motivated by model compression to reduce computational costs while maintaining performance. Such approaches operate under the assumption that in supervised or unsupervised learning, larger dense networks generally yield better performance—the core premise behind modern scaling laws. However, this assumption is violated in online DRL settings. As demonstrated in Figure 1 of our manuscript, online DRL networks face severe scaling barriers where increasing model size not only fails to improve performance but often leads to catastrophic collapse. 
Our analysis in Section 4 further reveals that these scaling barriers emerge because larger dense networks are more susceptible to optimization pathologies during online RL training, preventing them from leveraging their theoretical capacity. Our work aims to demonstrate that network sparsity itself, as a fundamental network property, can mitigate these pathologies and unlock the scaling potential of DRL networks. This represents a fundamentally different objective than LTH's focus on finding efficient subnetworks within larger models. Therefore, our findings don't contradict the Lottery Ticket Hypothesis but rather highlight the unique utility of network sparsity in addressing online DRL's specific pathologies and lack of scalability. For a given sparsity level, we believe that better topologies (such as "winning tickets") could outperform one-shot random pruning. However, finding tickets requires significant computational resources, and discovering "lottery tickets" without training remains challenging. Efficiently identifying winning tickets as trainable networks represents valuable future work, but identifying optimal sparse topologies was not the focus of our current study.
Varying Manifolds in Diffusion: From Time-varying Geometries to Visual Saliency
Reject
Summary: This paper analyses the variance of pixel intensity over generation timesteps for salient versus non-salient regions of the image. They call the rate of change the generation rate. Specifically, they find that salient regions of the image generally have higher variance than non-salient regions (86% of the time). They use this finding to perform image manipulation tasks, optimizing a particular x_t so that the generation rate of one specified region of the image matches that of another specified region. Using these techniques they show qualitative examples of the edits they make. Claims And Evidence: The claims made in the paper are not very well substantiated. On a dataset of 100 images, they show that 86% of the images have higher variance in the salient regions. This single small dataset is not enough to say conclusively that what they are claiming is indeed the case, especially since, even on that one dataset, the result is more of a general trend than a definitive demonstration that salient regions have higher windowed variance. Beyond this, they have no quantitative analysis of their work, such as on the edits they propose. The examples provided show that it is possible to make the edits they propose, but without quantitative results, it is difficult to say whether this is consistently the case. Methods And Evaluation Criteria: ## Visual saliency analysis I would expect to either see these results on a dataset larger than 100 images or on a set of different datasets. ## Baselines for qualitative results for proposed edits I am not entirely familiar with the current state of the art in the tasks shown; however, I find it concerning that the only method compared to for image blending is from 2019. ## Quantitative results for proposed edits It seems this is an important section missing in the paper.
Without quantitative results on a set of varied and large datasets, it is impossible to know if the proposed edits lead to consistent and generalizable improvements in image editing. Theoretical Claims: The proposed method theoretically makes sense given the tasks at hand. If the variance in the generation rate is indeed correlated with the saliency of the region in the image it is reasonable to see how it would be able to make the proposed edits. Experimental Designs Or Analyses: See `Methods And Evaluation Criteria` Supplementary Material: I thank the authors for providing their code; however, I did not install/run it. Relation To Broader Scientific Literature: A strength of this paper is the originality of what is being proposed. I have not seen an analysis like this, and if it is indeed the case, then I could see it inspiring future work analyzing the correlation between saliency and variance in the generation rate. Essential References Not Discussed: I would like to see comparisons in image blending with more recent methods. A quick search leads me to a WACV 2020 paper with code available by Zhang et al. If The authors find more recent work to compare to I would accept that as well. `L. Zhang, T. Wen, and J. Shi, “Deep Image Blending,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 231–240.` Other Strengths And Weaknesses: See `Relation To Broader Scientific Literature ` Other Comments Or Suggestions: I do not understand what the five different plots in Figure 5 show. I assume they are for specific regions of an image, but I am unclear about that. Also, I assume the tasks shown involve image inversion before editing; however, that was never made clear. Questions For Authors: If the authors show improvements in more recent methods with quantitative results on a set of datasets, I would happily raise my rating. 
Even if the authors can show correlations on larger datasets between the saliency of a region and the windowed variance, this would be enough for me to improve my rating. For the time being, given a lack of quantitative analysis, I am unsure of if the proposed method is indeed responsible for the improvements shown. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review! Below we address the questions.

**Q1. Larger Dataset for Visual Saliency Evaluation.** We have expanded our visual saliency evaluation to include the full MIT saliency benchmark CAT2000, consisting of 2000 diverse-category images. Our results indicate that 81% of randomly chosen salient points exhibit higher curve variance compared to non-salient points. Furthermore, we conducted rigorous statistical analyses on the above data, as suggested by Reviewer xDCq:
- Point-Biserial Correlation: $r = 0.297$, $p = 1.36 \times 10^{-69}$
- Independent Samples t-test: $t = 18.050$, $p = 1.36 \times 10^{-69}$

The statistical tests further confirm a significant positive correlation between generation curve fluctuations and visual saliency, and statistically significant differences between salient and non-salient curves. Visual results are presented in Figure 2 of https://limewire.com/d/3h4XH#SSSDAgZ0rL.

**Q2. Quantitative Evaluation on Applications.** We have included quantitative evaluations for all four proposed applications in Table 1, Appendix D (starting at line 648). Following related works, we employ large vision-language models such as CLIP, alongside user studies, to quantitatively assess the performance. Results demonstrate our method outperforms or is competitive with current state-of-the-art approaches across evaluation metrics. Detailed discussions of quantitative comparisons can be found in the final paragraph of each application subsection within Appendix D.

**Q3. Comparison with Recent Image Blending Baselines.** According to our investigation, recent relevant works include [Wu2019, Zhang2020, Zhang2021, Xing2022], summarized in a comprehensive survey [Niu2025] (pages 8-10). Unfortunately, [Zhang2021] and [Xing2022] do not provide publicly available implementations. Therefore, our comparisons primarily focus on [Wu2019] and [Zhang2020], which are contemporaneous works.
Upon evaluation, [Zhang2020] (Deep Image Blending) frequently introduces undesirable background color leakage into composite objects, leading to noticeable color distortion in natural images. We illustrate these issues with specific examples in Figure 4 of https://limewire.com/d/3h4XH#SSSDAgZ0rL. Consequently, we select [Wu2019] as the primary baseline due to its consistently better performance in maintaining visual coherence in blended images.

[Wu2019] Huikai Wu, Shuai Zheng, Junge Zhang, and Kaiqi Huang. GP-GAN: Towards realistic high-resolution image blending. ACM MM, 2019.
[Zhang2020] L. Zhang, T. Wen, and J. Shi. Deep Image Blending. WACV, 2020.
[Zhang2021] He Zhang, Jianming Zhang, Federico Perazzi, Zhe Lin, and Vishal M Patel. Deep image compositing. WACV, 2021.
[Xing2022] Yazhou Xing, Yu Li, Xintao Wang, Ye Zhu, and Qifeng Chen. Composite photograph harmonization with complete background cues. ACM MM, 2022.
[Niu2025] Li Niu, Wenyan Cong, Liu Liu, Yan Hong, Bo Zhang, Jing Liang, and Liqing Zhang. Making Images Real Again: A Comprehensive Survey on Deep Image Composition. arXiv preprint arXiv:2106.14490 (v6), 2025.

**Q4. Clarification of Curves in Figure 5.** We clarify that the curves presented in Figure 5 are generated at the pixel level, intended to illustrate the approximation of curve shapes discussed in Section 4.1. Since the objective is solely to compare shape approximations rather than interpret specific semantic content, the pixel-level representation sufficiently serves this purpose. We will state this explicitly in the figure caption to avoid potential confusion.

**Q5. Image Inversion for Real Images.** Yes, image inversion is required and employed for all real-image analyses and subsequent editing tasks discussed. To enhance clarity, we will explicitly mention the diffusion-model inversion at the conclusion of Section 3.1 (around line 134).
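The two statistics reported in Q1 can be computed with plain NumPy. Below is a minimal sketch on synthetic stand-in data (the group sizes and effect size are assumptions, not the actual CAT2000 measurements); it also checks the exact algebraic identity linking the point-biserial correlation and the pooled two-sample t statistic, which is why the two tests report the same p-value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: windowed-variance scores for salient (label 1) and
# non-salient (label 0) points; the effect size here is illustrative only.
salient = rng.normal(1.0, 0.5, 1000)
non_salient = rng.normal(0.7, 0.5, 1000)
scores = np.concatenate([salient, non_salient])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

def point_biserial(labels, scores):
    """Point-biserial correlation: Pearson r between a binary label and a score."""
    return np.corrcoef(labels, scores)[0, 1]

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance assumption)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

r = point_biserial(labels, scores)
t = pooled_t(salient, non_salient)
df = len(scores) - 2

# The two tests are equivalent: r = t / sqrt(t^2 + df), hence identical p-values.
assert r > 0 and t > 0
assert abs(r - t / np.sqrt(t**2 + df)) < 1e-8
```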
Summary: This work analyzes the correlation between image features relevant to visual saliency and the local deformation of the data manifold induced during the reverse diffusion process, referred to as the generation rate. Empirically, the authors find that the generation curve—the ordered sequence of generation rates computed for each pixel throughout the reverse diffusion process—is strongly associated with visually salient features. The authors propose an image editing technique that exploits the generation curve. The effectiveness of the proposed method is demonstrated across various image manipulation tasks, including semantic transfer, object removal, and image blending.

Claims And Evidence: While the idea of geometrically analyzing data manifold behavior during the reverse diffusion process using differential maps is interesting, I think the experiments lack sufficient baselines to effectively compare and assess the performance of the proposed method. The detailed comments regarding the experiment, particularly comparisons, are listed in the "Experimental Designs or Analyses" section.

Methods And Evaluation Criteria: The description of the proposed method is generally clear but requires some clarification (detailed in the "Questions for Authors" section). Regarding the evaluation criteria, the choice of metrics appears appropriate for the considered applications.

Theoretical Claims: This paper does not include any theoretical claims or proofs.

Experimental Designs Or Analyses: The experimental section could be further improved by incorporating additional baselines to better evaluate the effectiveness of the proposed technique. Specifically, I am curious why existing training-free methods that leverage pre-trained diffusion models for image editing and inverse problems were not included in the comparison, such as:
1. (Object Removal) SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng *et al.*, ICLR 2022
2. (Object Removal) RePaint: Inpainting using Denoising Diffusion Probabilistic Models, Lugmayr *et al.*, CVPR 2022
3. (Object Removal) Improving Diffusion Models for Inverse Problems using Manifold Constraints, Chung *et al.*, NeurIPS 2022
4. (Object Removal) Diffusion Posterior Sampling for General Noisy Inverse Problems, Chung *et al.*, ICLR 2023
5. (Image Blending) Toward Realistic Image Compositing with Adversarial Learning, Chen *et al.*, CVPR 2019

Additionally, providing more qualitative results that demonstrate the proposed method's superiority over the baselines would make the assessment more concrete.

Supplementary Material: Yes, I checked the implementation attached as the supplementary material.

Relation To Broader Scientific Literature: This paper has the potential to deepen our understanding of diffusion-based generative models through the lens of the manifold hypothesis. The authors' findings can be applied to various image editing tasks, including object removal and image blending.

Essential References Not Discussed: Given the target applications, it would be valuable for the authors to discuss the following work in the revised version:
1. Toward Realistic Image Compositing with Adversarial Learning, Chen *et al.*, CVPR 2019
2. ObjectStitch: Object Compositing with Diffusion Model, Song *et al.*, CVPR 2023
3. Resolution-robust Large Mask Inpainting with Fourier Convolutions, Suvorov *et al.*, WACV 2022
4. SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, Meng *et al.*, ICLR 2022
5. RePaint: Inpainting using Denoising Diffusion Probabilistic Models, Lugmayr *et al.*, CVPR 2022
6. Improving Diffusion Models for Inverse Problems using Manifold Constraints, Chung *et al.*, NeurIPS 2022
7. Diffusion Posterior Sampling for General Noisy Inverse Problems, Chung *et al.*, ICLR 2023

Other Strengths And Weaknesses: I have no further comments on the strengths and weaknesses.

Other Comments Or Suggestions: I have no further comments.

Questions For Authors: To summarize, I would like to ask the following questions to the authors:
1. Could you elaborate on how Equation 8 can be derived from Equation 4?
2. In line 211, the text states that perceptual metrics such as LPIPS, computed using the estimated $\hat{X}_0$ and their time derivatives, can be interpreted as the generation rate defined in Equation 5. I find this statement somewhat confusing, as my understanding is that the generation rate quantifies how the generative mapping from a prior distribution (e.g., Gaussian) to a data distribution deforms the data manifold over time. However, metrics like LPIPS are independent of the data distribution.
3. According to Section 4.2, a reference generation curve of an image patch is required for editing the source patch. However, how can this curve be obtained for real images that are not synthesized through the reverse diffusion process? Are techniques such as inversion incorporated to compute the generation rate and curve?
4. Although I may have misunderstood, could you clarify how visual saliency relates to the downstream applications discussed in the experiment section? While there is an observed empirical relationship between salient pixels in images and their rate of change during the reverse diffusion process, is there any insight into how this can be leveraged for image editing? Additionally, I wonder whether this approach can be applied to edit arbitrary regions in the input image that are not inherently salient.
5. Could you discuss the reasons why the works mentioned in the "Experimental Designs or Analyses" section were not considered in the evaluation? How does the proposed method compare to these works in terms of performance?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review! Below we address the questions.

**Q1-1. Expanded Discussion on Baselines - Object Removal.** Methods for object removal generally fall into two categories: image inpainting and instruction-based editing (as detailed in the recent survey [Huang2025]). We selected effective representatives from both ([Podell2024, Li2024]). Regarding the specific baselines:
- **RePaint [Lugmayr2022]**: Our SDXL-inpainting baseline [Podell2024] is fundamentally built on RePaint's principles, with a more advanced base model (SDXL) and specialized post-training on inpainting datasets. The theoretical justification for RePaint-based inpainting ([Rout2023]) is discussed around line 300.
- **Inverse-Problem-Based Methods [Chung2022, Chung2023]**: We quantitatively compared with the more recent [Chung2023] (see Table 1 and Figure 3 in https://limewire.com/d/3h4XH#SSSDAgZ0rL). Our method performs better in editing direction (likely due to guidance from our reference curves), while theirs demonstrates greater visual changes.
- **SDEdit [Meng2022]:** As an early diffusion-based editing method that incorporates guiding instructions, SDEdit is often regarded as a form of instruction-based editing. Subsequent methods, such as InstructPix2Pix, have reported significant improvements over SDEdit. Since our selected SOTA baseline, Zone [Li2024], demonstrated clear superiority over these methods, we consider an explicit comparison to SDEdit unnecessary.

**Q1-2. Expanded Discussion on Baselines - Image Blending.** [Chen2019] focuses on replacing an object with a semantically similar but visually distinct one (closer to image-to-image translation), whereas our method emphasizes seamless pixel-level blending of an inserted object with its surroundings. We provide an extensive comparison in our response to Reviewer Zb6M's Q3, identifying our baseline [Wu2019] as the most relevant and strongest open-source baseline. In the original paper, additional qualitative results for the four tasks are provided in Appendix Figures 14-21.

[Rout2023] Rout et al. A theoretical justification for image inpainting using denoising diffusion probabilistic models. 2023.
[Huang2025] Huang et al. Diffusion Model-Based Image Editing: A Survey. TPAMI 2025.

**Q2. Essential References Not Discussed.** We will integrate the discussion in Q1 and the following into our Appendix D.
- **LaMa [Suvorov2022]**: Although effective, the GAN-based LaMa frequently introduces grid-like artifacts when compared to diffusion-based inpainting, as also noted in [Lugmayr2022].
- **ObjectStitch [Song2023]**: They leverage guided diffusion models for object compositing, focusing on semantic consistency rather than boundary smoothing. Compared to our method, they provide more diverse outputs at the expense of precise pixel alignment.

**Q3. Derivation of Eq. 8 from Eq. 4.** Eq. 8 can be explicitly derived from Eq. 4 by treating $X_{t-\Delta t}$ as a function $g(X_t)$: $X_{t-\Delta t} = g(X_t) = \sqrt{\frac{\alpha_{t-\Delta t}}{\alpha_t}} X_t + (\sqrt{1-\alpha_{t-\Delta t}} + \sqrt{\frac{\alpha_{t-\Delta t}(1-\alpha_t)}{\alpha_t}}) \epsilon_\theta^t(X_t)$. Taking the Jacobian of $g$ with respect to $X_t$ yields: $D_{g}(X_t) = \sqrt{\frac{\alpha_{t-\Delta t}}{\alpha_t}} I + (\sqrt{1-\alpha_{t-\Delta t}} + \sqrt{\frac{\alpha_{t-\Delta t}(1-\alpha_t)}{\alpha_t}}) D_{\epsilon_\theta^t}(X_t)$, where $I$ is the identity matrix. Thus, applying $D_g(X_t)$ to a vector $v$ produces Eq. 8.

**Q4. Clarification on Perceptual Metrics and Generation Rate.** The generation rate at the data-point level $X_t$ is meaningful since it can also be interpreted as the rate at which noise is removed as $X_t$ moves closer to the original image $X_0$. In practice, this is effectively captured by the change in perceptual distance, computed through metrics such as LPIPS, between the estimated $\hat X_0(X_t)$ and the real image $X_0$.

**Q5. Obtaining Reference Generation Curves via Image Inversion.** Yes, we use diffusion ODE inversion to obtain generation curves for real images. We will clarify this in Section 3.

**Q6. Leveraging Generation Curves for Image Editing Applications.** The positive correlation of saliency with curve fluctuation is leveraged as follows:
- **Saliency Manipulation and Blending:** By enhancing or suppressing the curve fluctuation of pixels, we directly control their saliency; disharmonious object boundaries, identified as inherently salient regions, are smoothed by reducing their curve fluctuations.
- **Semantic Transfer and Object Removal:** The overall shape of the curves encodes broader visual semantics, as discussed in lines 204-210. We transfer these semantics by matching the entire shape of the generation curves; object removal is treated as a special case where semantics are shifted from the object to its background.

These methods can be applied to arbitrary image regions, which do not necessarily have to be salient.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors for addressing my questions. Since the concerns raised in my original review have been resolved, I will increase my rating.

---

Reply to Comment 1.1.1: Comment: Thank you for your positive feedback. We are glad that our revisions addressed your concerns, and we appreciate your updated rating.
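The chain-rule step in the Q3 derivation above can be sanity-checked numerically. The sketch below uses a toy `tanh` layer as a stand-in for the noise-prediction network $\epsilon_\theta^t$ and arbitrary illustrative schedule values (both are assumptions, not the paper's model), comparing the analytic Jacobian-vector product of one reverse step against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n)) / np.sqrt(n)

def eps(x):
    # Stand-in for the noise-prediction network eps_theta^t (a toy tanh layer).
    return np.tanh(W @ x)

def eps_jvp(x, v):
    # Analytic Jacobian-vector product of the toy network.
    return (1 - np.tanh(W @ x) ** 2) * (W @ v)

# Illustrative schedule values (assumptions, not the paper's schedule);
# alpha_s plays the role of alpha_{t - delta t}.
alpha_t, alpha_s = 0.5, 0.6
a = np.sqrt(alpha_s / alpha_t)
b = np.sqrt(1 - alpha_s) + np.sqrt(alpha_s * (1 - alpha_t) / alpha_t)

def g(x):
    # One reverse step, in the form written in the rebuttal's Eq. 4.
    return a * x + b * eps(x)

def Dg(x, v):
    # Eq. 8 applied to v: D_g(x)[v] = a*v + b*D_eps(x)[v].
    return a * v + b * eps_jvp(x, v)

x, v = rng.normal(size=n), rng.normal(size=n)
h = 1e-6
fd = (g(x + h * v) - g(x - h * v)) / (2 * h)   # central finite difference
assert np.allclose(fd, Dg(x, v), atol=1e-6)
```

The same check works with any differentiable stand-in for `eps`; only the chain-rule structure of `Dg` is being verified, not the diffusion model itself.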
Summary: This work researches the visual properties of images during the diffusion process. By employing the manifold hypothesis, the authors propose a new metric called the generation rate. They experimentally show correlation of this metric with the visual properties of image generation. Furthermore, the authors design an algorithm that can manipulate the visual properties of image generation by matching the generation rate during the diffusion process.

Claims And Evidence:
- In Lines 154-157, are there any theoretical results showing that $D_{f_t}$ is contractive?
- In Line 242, what do you mean by "... dominates the overall shape ..."? Also, in Figure 5, I cannot find the yellow curve.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes.

Supplementary Material: I have checked the appendix, except for the implementation details.

Relation To Broader Scientific Literature: This work proposes to study certain metrics during the diffusion process, which may have strong connections with the visual properties of images. Although it lacks rigorous mathematical explanations, the experimental results show the potential of this view, which can have wide applications.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

### Strength:
- They define a new metric, the generation rate, for measuring the rate of change of the data manifold in the diffusion process. They empirically show that this new rate has strong connections with the rate of information removal during the diffusion process.
- By manipulating the generation rate, they propose a matching algorithm that can modify the visual properties of the generated image, which is an interesting technique. They show this technique can be applied to different image-manipulation tasks.

### Weakness:
- The motivation for defining the generation rate as the norm of the differential map seems insufficient, apart from the experimental results.
- Some explanations are too empirical, such as replacing $r_t$ by $\left\Vert D_{\epsilon_\theta^t}\left(X_t\right)[\operatorname{Proj}(v)]\right\Vert$ and replacing $\left\Vert D_{\epsilon_\theta^t}\left(X_t\right)[\operatorname{Proj}(v)]\right\Vert$ by $\left\Vert D_{h_t}\left(X_t\right)[v]\right\Vert$. It would be better to provide more rigorous analyses.

Other Comments Or Suggestions:
- I think it would be better to provide more preliminaries and notation for manifolds in Appendix A. For example, the ODE $dx = f(x,t)dt$ with initial value $x(0) = x_0$ induces the flow $\{\Phi_t\}$ with $\Phi_0(x_0)=x_0$ and $\Phi_t(x_0) = x(t)$. So in this paper, $M_t = \Phi_t(M_0)$, where $M_0$ is the target data manifold, and $f_t = \Phi_{\Delta t} \colon M_{t-\Delta t} \rightarrow M_t$ is a diffeomorphism. I think this may be a better way to understand the concept of time-varying manifolds.
- Based on the notation in Appendix A, in the main part of this paper, I think $D_{f_t}$ and $D_{f_t^{-1}}$ should be replaced by $D_xf_t$ and $D_xf_t^{-1}$ respectively, and so $D_{f_t^{-1}}(X_t)[v]$ should be $D_{X_t}f_t^{-1}(v)$.
- For Section 2.2, because the power method and $h_t$ are widely used in the following contents, I think it would be better to provide more details of these techniques in the appendix.

Questions For Authors:
- In Line 121, about the projection operator $\text{Proj}(v)$: can you explicitly provide the formula for calculating $\text{Proj}(v)$ given $v \in \mathbb{R}^d$, and is this method computationally efficient?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review! Below we address the questions.

**Q1. Theoretical Justification of Contractive $D_{f_t}$.** The observation of contractivity is primarily empirical, demonstrated through experiments where tangent vectors $\{v_i\}$ are scaled under the forward differential $D_{f_t}[v]$. This empirical contractivity serves to intuitively illustrate our idea rather than being a necessary condition for defining the generation rate. Even without it, the scaling factor can still meaningfully quantify generation rates.

**Q2. Clarification of 'Dominates Overall Shape' and Visibility of the Yellow Curve in Fig. 5.** By 'dominates the overall shape' we mean that the shape of $\|D_{\epsilon_\theta^t}(X_t)[\text{Proj}(v)]\|$ closely resembles the shape of the original $\|D_{f_t}(X_t)[\text{Proj}(v)]\|$. Regarding the visibility of the yellow curve in Figure 5, it overlaps significantly with the red curve in the first plot due to their high similarity, which is particularly visible upon closer inspection in the timestep range $t\in [0,40]$.

**Q3. Expanded Preliminaries and Manifold Notation.** Following your suggestion, we will revise Appendix A to explicitly introduce the notation for the diffeomorphism induced by the diffusion ODE. We agree to replace $D_{f_t}$ consistently with the clearer notation $D_xf_t$ throughout the paper.

**Q4. Additional Details of the Power Method in the Appendix.** The power method and the identification of $h_t$ as a contractive map are discussed extensively in [Park2023], specifically in their Appendix F. We will expand our Appendix to briefly summarize these foundational details.

**Q5. Formula and Computational Efficiency of $\text{Proj}(v)$.** Given an orthonormal tangent basis $\{u_i\}_{i=1}^k$ and an arbitrary ambient-space vector $v \in \mathbb{R}^n$, the projection operator can be explicitly expressed as $\text{Proj}(v) = UU^Tv$, where $U=[u_1, \dots, u_k]$ is an $n \times k$ matrix of tangent vectors. The above matrix multiplication is efficient, with the primary complexity arising from calculating the tangent basis. Using the power method, this procedure typically requires tens of seconds on a GPU. We note this complexity is inherently necessary due to the high dimensionality of the ambient space.

[Park2023] Park et al. Understanding the latent space of diffusion models through the lens of Riemannian geometry. NeurIPS, 2023.
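The formula in Q5 is straightforward to implement; a minimal NumPy sketch, using a random orthonormal basis as a stand-in for the power-method tangent basis (the dimensions are toy values, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 64, 5   # ambient dimension and tangent-space dimension (toy sizes)

# Orthonormal tangent basis U; here obtained via QR on a random matrix,
# whereas in the paper it would come from the power method on the UNet.
U, _ = np.linalg.qr(rng.normal(size=(n, k)))

def proj(v):
    # Proj(v) = U U^T v: orthogonal projection onto span{u_1, ..., u_k}.
    return U @ (U.T @ v)

v = rng.normal(size=n)
p = proj(v)
assert np.allclose(proj(p), p)                    # projection is idempotent
assert np.abs(U.T @ (v - p)).max() < 1e-10        # residual is orthogonal to the tangent space
```

Grouping the product as `U @ (U.T @ v)` costs $O(nk)$ rather than the $O(n^2)$ of forming $UU^T$ explicitly, which matters when $n$ is the pixel dimension.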
Summary: Motivated by the differential geometry of manifolds, the authors define a metric called the (projected) generation rate. It signifies how much a direction in the tangent space of the manifold at time t is amplified or diminished through the reverse diffusion mapping. They manage to compute it locally for a patch of pixels. Then, they empirically find that the generation curve fluctuation perceptually correlates with the visual saliency of the patch. They develop a method to optimize/control the generation curve, with several applications, e.g., semantic transfer, object removal, saliency manipulation, and image blending. All in all, they discover an interesting geometric aspect of the diffusion generation process and find interesting image-control uses for it.

## Update after rebuttal

The authors clarified the theoretical links to a few previous papers around diffusion sampling and showed the consistency with their observation. They also showed stronger statistical tests of their observation. The reviewer is happy to retain the score.

Claims And Evidence: Mostly yes! See below.

Methods And Evaluation Criteria:
- **Minor point about the design of the current method:** It seems from the current setup (Eq. 6) that the projected generation rate is computed from the discretized sampling equation 4. As we know from the EDM line of work, the time parametrization is somewhat arbitrary and can change for different schedulers. Even changing the sampling step size will change it. So would it be more proper to build the method on the continuous-time formulation (Eq. 3), which quantifies the instantaneous generation rate? Then it would be time-scheduling independent and easier to compare across models.
- I feel a more general / continuous-time version of this work would be to compute the Jacobian of the end state as a function of the initial state, through the integration of the ODE.
- **Major point about the method: manifold projection.** "*To mitigate this flaw, we define the projection operator Proj(v) as the projection of v onto the tangent space spanning by the leading singular vectors.*" I love the idea of finding on-manifold directions, though I'm not sure how critical the choice is.
- Theoretically speaking, early in diffusion, as plotted in Figure 1, the noised data distribution is very close to Gaussian, so the directions should be isotropic, and I'm not sure there is any special "on-manifold" direction. So the search for a projection might not add value to the metric.
- Empirically speaking, could there be an ablation study about the choice of the projection operation? How important is it to use the encoder layer of the UNet? It seems to make the method very dependent on a UNet-based architecture. I guess some other choices, e.g., dataset PCA / PCA of the trajectory, could provide a principled way of projecting the perturbations. Cf. [WV2023] Fig. 18. The current method of using a "power law based derivation of the tangent space" seems expensive, and I am not sure how crucial it is.
- From the description in Sec. 4.2, the optimization procedure of Eq. 10 is not totally clear to me: "*We update $X_t$ using the gradient descent step $X_t \leftarrow X_t - \eta \nabla_{X_t} |c(t_s|X, e_{ij}) - c^\star(t_s)|$ with $\eta$ being the learning rate of the SGD optimizer. After optimization, we recover $X_0$ from the optimized $X_t$ by Equation 4.*" Since the sampling trajectory is obtained by discretizing the ODE, the states have sequential dependency. Do you optimize one time point at a time? With randomly chosen time points? Do you need to traverse the sampling procedure multiple times to perform multiple gradient steps? The authors could consider making this clearer during revision (e.g., with an algorithm box).

[WV2023] Wang, B., & Vastola, J. J. (2023). Diffusion models generate images like painters: an analytical theory of outline first, details later. *arXiv preprint arXiv:2303.02490*.
Theoretical Claims: N.A.
- I feel the paper might benefit from some theoretical and conceptual framing of what the generation curve/rate represents and what the expansion rate means. (I know it correlates with visual saliency, but I would like a more principled explanation of their relation.)
- I feel the authors might refer to [WV2023]. In their case (e.g., approximating the image manifold with a Gaussian), the on-manifold directions are well defined and the generation rate can be computed analytically. Basically, on-manifold perturbations would be amplified at varying rates, and off-manifold perturbations would shrink through reverse diffusion. [WV2023] Fig. 22 showed that empirically.
- In [WV2023] the authors also observed that different kinds of elements are specified at different points of the generation process, e.g., layout / low-frequency / high-variance elements are specified first, object details later. So I guess there is some connection to the shapes and the temporal positions of the peaks in the current work (Fig. 2 right).

[WV2023] Wang, B., & Vastola, J. J. (2023). Diffusion models generate images like painters: an analytical theory of outline first, details later. *arXiv preprint arXiv:2303.02490*.
[WV2024] Wang, B., & Vastola, J. J. (2024). The Unreasonable Effectiveness of Gaussian Score Approximation for Diffusion Models and its Applications. TMLR.

Experimental Designs Or Analyses:
- **Relation of saliency and generation curve fluctuation.** In Figure 2 the caption mentions "*The generation curves fluctuate significantly at the pixels with high visual saliency, such as the wing tip of the bird.*" Though the visual looks really cool, it is not visually clear how the example on the left side reflects this claim. Is there any quantification showing the correlation between them?
- For visual saliency there are some computational surrogates, i.e., DNN models trained to predict saliency in perception.
- **Quantification of the relation between saliency and curve fluctuation.** "*For 86% of the images, higher visual saliency leads to higher fluctuation, validating the high consistency between curve fluctuation and visual saliency.*" For this result in Figure 3 left, can the results be plotted in a more salient way? Since the image samples do not have an order, you could even plot the salient-point variance against the non-salient-point variance. Further, saliency is continuous and does not need to be thresholded here. Could you report the correlation between the saliency value and the windowed variance of the curve? Generally, I like the results in 3.2 connecting generation curves to visual saliency, but I think better statistical quantification of their correlation could be useful and more convincing.

Supplementary Material: Sections A, C.2, and D.

Relation To Broader Scientific Literature:
- I think the direction of leveraging generative models to study data geometry is super interesting, and can possibly reveal aspects of natural image manifolds that were hidden before. At the time of GANs and VAEs, quite a few groups tried to understand the image space / latent space geometry with similar motivations, i.e., understanding the Riemannian metric tensor of the latent space. [AHH2017] [SKT2018] [WP2021] [CA2022]
- One paper that shares notable conceptual similarity to the current one is [WP2021], which also traced the change of the manifold, throughout the layers of the GAN network, using differential-geometry language. Basically, they defined a similar quantity, like the rate of image change when traveling along different directions locally. Luckily, for GANs the tangent space is defined by the latent space, so there is no need to search for it.

[AHH2017] Arvanitidis, G., Hansen, L. K., & Hauberg, S. (2017). Latent space oddity: on the curvature of deep generative models. ICLR.
[SKT2018] Shao, H., Kumar, A., & Thomas Fletcher, P. (2018). The Riemannian geometry of deep generative models. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops* (pp. 315-323).
[WP2021] Wang, B., & Ponce, C. R. (2021). The geometry of deep generative image models and its applications. ICLR.
[CA2022] Chadebec, C., & Allassonnière, S. (2022). A geometric perspective on variational autoencoders. NeurIPS.

Essential References Not Discussed: See above.

Other Strengths And Weaknesses:

**Strength**
- I think the approach and theoretical framework presented in the paper are very novel and creative.
- The comparison with the other generation rate/curve estimation method (Choi et al. 2022) is interesting, showing they may be estimating similar underlying quantities. The previous method seems more intuitive and designed to be perceptually aligned.

Other Comments Or Suggestions: Suggestions for style:
- For many line plots showing the generation rate/curve, the legend and x/y axis annotations are too small and could be made larger for easier reading.
- Missing link around L873.
- Misstatement: "*using only a pre-trained, unconditional diffusion model for image generation.*" The authors used SD2.1-base, which is a conditional diffusion model, i.e., image sampling conditioned on text.

Questions For Authors:
- Is there some intuitive explanation of why the generation rate should connect with curve fluctuation? I'm very curious!

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review! Below we address the questions. **Q1. Continuous-Time Formulation of Generation Rate** We agree that leveraging a continuous-time formulation provides a scheduler-independent definition. For the diffusion ODE $\frac d {dt}X_t=h(X_t,t)$, the variation of the norm of a unit tangent vector $v_t$ can be expressed as $\frac d {dt}\|v_t\|= \left \langle v_t, D_{h(X_t,t)}[v_t]\right \rangle$. We will clearly present and discuss this formulation in Section 3. **Q2-1. Necessity of Projection onto Tangent Space** In the diffusion ODE, the data manifold $M_0$ is continuously transformed into $M_t=\phi_t(M_0)$ by the induced diffeomorphism $\phi_t$, preserving a low-dimensional manifold structure at all times. Therefore, the projection onto $T_XM_t$ remains necessary. Although ideally one could track $T_XM_t$ by pushing forward $T_XM_0$ via the diffusion ODE, under computational constraints we approximate the tangent space at each timestep. This is reliable at moderate to low noise levels, despite some loss of precision at very high noise levels. **Q2-2. Alternative Projection Methods** Dataset PCA is impractical because it requires full dataset access at each noise level {$X_t^i$}$_{i=1}^N$, and PCA on single trajectories (as in [WV2023]) yields overly restricted subspaces. The adopted power-based approach, while dependent on the UNet architecture, is a practical solution. If there are other viable methods for high-dimensional natural image analysis, we will incorporate ablation studies with them. **Q3. Clarification on Optimization Algorithm** At each iteration, for the variable $X_t$, we sample a random timestep $t_s$ and traverse the diffusion ODE from $X_t$ to $X_{t_s}$ without gradients. We then calculate gradients at $X_{t_s}$ and propagate them back to $X_t$ using the adjoint method as implemented in PyTorch. We will clarify these details further in Section 4.2 and in the algorithm box. **Q4. 
Theoretical Explanation Connecting Generation Curve and Visual Saliency** - **Critical Period and Peak Positions** According to [WV2024], the estimated $\hat{X_0}$ undergoes a critical period of rapid change. Since the rate of change of $\hat{X_0}$ parallels our generation rate (as discussed in Section 3.2), this period directly corresponds to the dominant peak in our generation curves. The relation between the critical period and feature variance [WV2024] also explains how peak positions encode detailed visual semantics essential for our semantic transfer task. - **Non-salient Curves with Peaks Near $t=0$**: [WV2024] notes that high-frequency features tend to have lower variance, causing their critical periods to occur later in the diffusion process. For non-salient backgrounds (like a plain wall), low-frequency components are largely captured by the distribution mean approximated by $\hat{X_0}(X_T)$. Consequently, only the high-frequency details, such as subtle textures, require explicit generation, which consistently results in peaks near $t=0$. Moreover, [WV2024] Figure 3.D analytically demonstrates that when the critical period occurs near zero, the corresponding change rate surges sharply, a result that aligns with our empirical findings (lines 191–194) for non-salient curves. - **Curvature and saliency**: Considering the continuous diffusion ODE expressed by the score function $dX_t = -g(t)s(X_t,t)dt$, the generation rate can be related to the Hessian of the log probability density: $r(X_t,v) = |g(t) \left \langle D_{s(X_t,t)}[v], v \right \rangle | = |g(t) \left \langle D_{\nabla \log P(X_t)}[v], v \right \rangle |= |g(t)\, \text{Hessian}_{\log P}(v,v) |$. Intuitively, the Hessian measures curvature, i.e. the sensitivity to perturbations in a given direction. Visually salient areas, rich in semantic details, may exhibit larger curvature: for example, changing boundary pixels can disrupt the outline of an object, resulting in a deviation from the image manifold. 
This geometric perspective explains why salient regions have greater fluctuations. **Q5. Saliency and Curve Fluctuation Experiments** - **Visual examples (Figure 2 left):** We provide saliency maps (via EML-NET) in https://limewire.com/d/3h4XH#SSSDAgZ0rL Figure 1; Figure 2 left highlights salient areas such as the mantis and the center of the lotus leaf, with additional curve fluctuations from leaf veins. - **Statistical correlation:** We first converted the discrete salient points in the dataset to continuous maps via Gaussian blurring, as suggested. Statistical tests conducted on the dataset yield: - Point-biserial: $(r = 0.297,\ p = 1.36\times 10^{-69})$ - t-test: $(t = 18.050,\ p = 1.36\times 10^{-69})$ These confirm a strong positive correlation between fluctuation and saliency. This evaluation was conducted on an expanded dataset of 2000 images. **Q6. Suggestions for Style.** We will adjust the plots and update the missing link as suggested. We refer to our model as 'unconditional' because we exclusively use its unconditional mode (with the text prompt 'None'). --- Rebuttal Comment 1.1: Comment: We appreciate the authors’ efforts to connect their current method with other theoretical frameworks and previous results on the diffusion sampling process (e.g., [WV2024]). This integration significantly enhances the overall presentation of their new perspective. Furthermore, we commend the authors for providing additional clarifications on the method and for the more robust statistical quantification of the results. Accordingly, we will maintain our score as is! --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive and thoughtful feedback. We are pleased that the integration with [WV2024] and additional clarifications have enhanced the presentation of our work. Your supportive comments are greatly appreciated.
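The curvature identity discussed in Q4 can be sanity-checked numerically for a Gaussian density, where the score is linear and $\mathrm{Hessian}_{\log P} = -\Sigma^{-1}$. The sketch below is our illustration (not part of the rebuttal); it estimates the Jacobian-vector product of the score by finite differences and checks that low-variance ("sharper") directions give a larger generation rate, up to the $g(t)$ factor:

```python
import numpy as np

# Toy check of the curvature identity: for P = N(mu, Sigma) the score is
# s(x) = -Sigma^{-1} (x - mu) and Hessian_{log P} = -Sigma^{-1}, so the
# rate r(x, v) is proportional to |v^T Sigma^{-1} v|.
Sigma = np.diag([4.0, 0.25])        # large variance along axis 0, small along axis 1
Sigma_inv = np.linalg.inv(Sigma)
mu = np.zeros(2)

def score(x):
    return -Sigma_inv @ (x - mu)

def jvp_score(x, v, eps=1e-6):
    # finite-difference Jacobian-vector product D_s[v]
    return (score(x + eps * v) - score(x - eps * v)) / (2 * eps)

x = np.array([0.5, -0.2])
for v in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    rate = abs(v @ jvp_score(x, v))  # generation rate up to the g(t) factor
    assert np.isclose(rate, v @ Sigma_inv @ v, atol=1e-4)
# The low-variance direction (variance 0.25) has rate 4.0; the
# high-variance direction (variance 4.0) has rate 0.25.
```

For an image model the score is a network rather than a linear map, but the same finite-difference (or autograd) JVP gives the directional rate.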
Synthetic Text Generation for Training Large Language Models via Gradient Matching
Accept (poster)
Summary: This paper introduces GRADMM (Gradient Matching with ADMM), a novel method for generating synthetic, human-readable text to train large language models (LLMs) efficiently while preserving privacy. The approach leverages gradient matching to ensure that synthetic text replicates the training dynamics of real data, using the ADMM to optimize embeddings in a continuous space and project them into discrete, readable text sequences with low perplexity via top-k decoding. A key conceptual contribution is the theoretical guarantee that fine-tuning an LLM on this synthetic text converges to a solution close to that of real data, overcoming limitations of prior methods like heuristic LLM-generated text or unreadable embeddings from dataset distillation. The main findings show GRADMM’s effectiveness in two scenarios: generating substantial synthetic training data from a small set of real examples, and replacing larger real datasets with a compact synthetic set that maintains privacy. Claims And Evidence: C1: GRADMM-generated synthetic text guarantees convergence to a close neighborhood of the real data fine-tuning solution Theoretical analysis is provided in Section 4.5 and Appendix A, including Lemma 4.1, Theorem 4.2, and Corollary 4.3 Experimental results (Section 5.2, Table 1) show that on datasets like SST-2, fine-tuning with synthetic data achieves accuracy (e.g., 90.0%) close to or exceeding real data baselines (e.g., Random 1K at 91.2%). C2: GRADMM-generated synthetic text is human-readable The method uses top-k projection (Section 4.2) to map embeddings to vocabulary token sequences, with perplexity (ppl) as a readability metric. Table 1 shows synthetic data ppl (e.g., 5.2-5.8) close to real data (6.6-7.7), compared to 13.3 without top-k (Table 3). Qualitative results (Figure 2 and Appendix C) provide examples like “Great movie review is a must see experience...” (positive) and “Terribly bad and boring to me...” (negative). 
C3: GRADMM outperforms existing methods in performance. I'm not satisfied with the selected baselines, but including the random gold data makes this acceptable. Methods And Evaluation Criteria: Its evaluation criteria—classification accuracy, perplexity, comparisons with baselines (zero-shot/few-shot LLM generation, coreset methods), and benchmark datasets (SST-2, Tweet Emotions, Rotten Tomatoes)—tested across Phi, Llama-3.2-1B, and OPT-1.3B, appropriately measure performance, readability, and competitiveness. These methods and criteria align well with the problem. Theoretical Claims: The proof for Theorem 4.2 is mostly correct mathematically but hinges on an unverified assumption $\xi \leq \|\mathbf{g}_t\|$. Experimental Designs Or Analyses: I think the experiments are very thorough. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the novelty of our work and our thorough experiments. ---- 1. Unverified assumption $\xi \leq |g_t|$ in theory. The following [figures](https://anonymous.4open.science/r/gradmm/grad_diff/) confirm the validity of our theoretical assumption, by showing that the gradient error, i.e. $\| \nabla \mathcal{L}(\theta_t) - \nabla \mathcal{L}^s(\theta_t) \| \leq \| \nabla \mathcal{L}(\theta_t) \| = \| g_t \|$ at the pretrained parameters and this relation indeed holds during fine-tuning. Crucially, the data generated by GRADMM has a much smaller gradient error compared to the zero-shot baseline during fine-tuning, which is the reason for its superior performance.
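The gradient-matching idea defended in this rebuttal can be illustrated with a hedged toy sketch: a continuous relaxation for logistic regression, optimized with plain finite-difference descent. This is NOT the paper's ADMM / top-k text pipeline, and all names and data below are illustrative:

```python
import numpy as np

# Toy gradient matching: optimize a tiny synthetic batch so its loss
# gradient matches the gradient of a larger "real" batch at fixed weights.
rng = np.random.default_rng(0)
w = rng.normal(size=3)                       # fixed "pretrained" weights

def grad(X, y):
    # gradient of the average logistic loss with respect to w
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

X_real = rng.normal(size=(32, 3))
y_real = (X_real @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
g_real = grad(X_real, y_real)

X_syn = rng.normal(size=(4, 3))              # 4 synthetic points stand in for 32 real ones
y_syn = np.array([0.0, 1.0, 0.0, 1.0])
lr, eps = 0.2, 1e-5

def match_loss(X):
    return np.linalg.norm(grad(X, y_syn) - g_real) ** 2

err0 = np.linalg.norm(grad(X_syn, y_syn) - g_real)
for _ in range(300):
    G = np.zeros_like(X_syn)                 # numerical gradient of the matching loss
    for i in range(X_syn.shape[0]):
        for j in range(X_syn.shape[1]):
            Xp = X_syn.copy(); Xp[i, j] += eps
            Xm = X_syn.copy(); Xm[i, j] -= eps
            G[i, j] = (match_loss(Xp) - match_loss(Xm)) / (2 * eps)
    X_syn = X_syn - lr * G
err1 = np.linalg.norm(grad(X_syn, y_syn) - g_real)
assert err1 < err0   # the synthetic batch now better matches the real-batch gradient
```

The text setting additionally requires projecting the optimized embeddings back onto discrete, readable token sequences, which is where the ADMM splitting and top-k decoding of the paper come in.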
Summary: This paper presents a novel approach for generating synthetic human-readable text to train Large Language Models (LLMs) via gradient matching. The authors propose a method called GRADMM (GRADient matching with ADMM) that leverages the Alternating Direction Method of Multipliers (ADMM) to iteratively optimize synthetic text embeddings to match the gradient of real training data. The goal is to generate synthetic text that not only preserves the privacy of real data but also ensures similar training dynamics and performance when used to fine-tune LLMs. The key contributions of this work include: 1. A theoretically rigorous framework for generating synthetic text that guarantees convergence and performance comparable to fine-tuning on real data. 2. The use of gradient matching in the embedding space to ensure that the synthetic text has similar training dynamics to real data. 3. A method to project optimized embeddings into human-readable text while maintaining low perplexity. 4. Experimental validation showing that GRADMM-generated synthetic text outperforms existing methods in terms of training efficiency and privacy preservation. The authors demonstrate the effectiveness of GRADMM through extensive experiments on various text classification tasks, including SST-2, Tweet emotions, and Rotten tomatoes. The results indicate that GRADMM can generate high-quality synthetic data even with limited real examples, achieving significant performance improvements over baseline methods. Additionally, the generated synthetic text is shown to be transferable to other LLMs, further validating the method's practicality and versatility. Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. The authors provide a comprehensive theoretical framework and extensive experimental validation to back up their assertions. Below is a detailed evaluation of the key claims and the evidence provided: 1. 
GRADMM generates synthetic text that guarantees convergence and performance comparable to fine-tuning on real data. The authors provide a rigorous theoretical analysis, including Lemma 4.1 and Theorem 4.2, which bound the gradient difference and prove the convergence of fine-tuning on synthetic data generated by GRADMM. This theoretical foundation supports the claim that GRADMM can produce synthetic text with similar training dynamics to real data. The experiments on various text classification tasks (SST-2, Tweet emotions, and Rotten tomatoes) demonstrate that GRADMM-generated synthetic text consistently outperforms or matches the performance of fine-tuning on real data. 2. GRADMM preserves the privacy of real training data. The synthetic text generated by GRADMM is guaranteed to be different from real data, ensuring privacy. The authors emphasize that GRADMM does not directly use real data samples but rather matches the gradient of real data, which inherently preserves privacy. But I think the evidence is weak, and there should be more experiments to demonstrate it. 3. GRADMM-generated synthetic text is human-readable and semantically meaningful. The authors use a top-k projection in GRADMM to map embeddings to readable text; the generated examples are shown to be meaningful and semantically consistent with the target labels, for instance the synthetic movie reviews and tweets. 4. GRADMM is computationally efficient. Gradient Matching: The authors argue that matching the gradient of the last layer of the model significantly reduces the computational cost compared to matching the full gradient. This approach allows for faster and more memory-efficient generation of synthetic data. Experimental Validation: The paper reports that GRADMM reduces the generation time by 2.3x and memory usage by 2.6x compared to matching the full gradient, demonstrating its computational efficiency. Methods And Evaluation Criteria: The GRADMM method contains three important parts: 1. 
Alternating Direction Method of Multipliers (ADMM): The use of ADMM to iteratively optimize synthetic text embeddings to match the gradient of real data is an appropriate choice for this problem. ADMM is well-suited for solving constrained optimization problems and provides a theoretical characterization for evaluating model training on synthetic data. 2. Gradient Matching in the Embedding Space: Matching the gradients of synthetic and real data in the embedding space is a clever way to ensure that the synthetic text captures the essential training dynamics of real data. 3. Top-k Projection for Readability: The method of projecting optimized embeddings into human-readable text using top-k decoding ensures that the generated text is both meaningful and semantically aligned with the target categories. This technique balances the need for readability with the constraints of the vocabulary and perplexity. Theoretical Claims: Yes, I reviewed the proofs of Lemma 4.1 and Theorem 4.2. In Lemma 4.1, there is a small error in the transition from equation (19) to (20): specifically, \(d = \theta_t - \theta_0\). Experimental Designs Or Analyses: Yes. I checked Sections 5.2 and 5.3. Supplementary Material: No. Relation To Broader Scientific Literature: Yes. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths + The paper introduces a novel approach to synthetic text generation using gradient matching, which is a creative and effective way to ensure that the synthetic data captures the essential training dynamics of real data. This method is particularly innovative in its application to text data, which is inherently discrete and challenging to optimize directly. + The authors provide a strong theoretical foundation for their method, including convergence guarantees and bounds on the gradient difference. This theoretical analysis adds credibility to the proposed approach and differentiates it from heuristic methods that lack formal guarantees. 
+ The paper demonstrates the practical applicability of GRADMM by showing its effectiveness in generating synthetic text for fine-tuning LLMs. The results on various text classification tasks highlight the potential of this method for real-world applications, especially in scenarios where real data is scarce or privacy is a concern. Weaknesses 1. Insufficient Experimental Validation: - Inconsistent Problem Formulation: The experiments in Sections 5.2.1 and 5.2.2 address different aspects of the problem (data scarcity vs. data distillation) but are compared using the same metrics (accuracy and perplexity). This approach may not fully capture the nuances of each scenario. - Lack of Specific Metrics: The paper could benefit from more specific metrics tailored to each experimental setting. For example, in the data distillation scenario, metrics such as the similarity between synthetic and real data distributions (e.g., using divergence measures) could better illustrate the effectiveness of GRADMM. 2. Theoretical Bounds vs. Practical Insights: - Lack of Empirical Validation for Theoretical Bounds: While the theoretical bounds provided in the paper are valuable, they could be complemented with empirical evidence to demonstrate their practical relevance. For example, visualizing the gradient difference over training iterations or comparing the theoretical bounds with actual performance metrics could provide more intuitive insights. 3. Privacy Importance and Metrics: - Importance of Privacy: The paper does not sufficiently emphasize the importance of preserving training data privacy. A detailed discussion on the potential risks of data leakage and the benefits of using synthetic data in this context would strengthen the paper's motivation. - Lack of Privacy Metrics: The paper claims that GRADMM preserves privacy but does not provide specific metrics or experiments to validate this claim. 
Including metrics such as membership inference attack success rates or differential privacy guarantees could provide a more concrete assessment of the method's privacy-preserving capabilities. Overall, the paper presents a novel and theoretically sound approach to generating synthetic text for training LLMs. The strengths of the paper lie in its innovative use of gradient matching, rigorous theoretical analysis, and practical applicability. However, the weaknesses identified suggest areas for improvement, particularly in experimental validation, privacy assessment, and clarity of presentation. Addressing these issues could significantly enhance the paper's impact and applicability to real-world problems. Other Comments Or Suggestions: No. Questions For Authors: 1. How to evaluate that GRADMM preserves privacy? 2. Why did you generate only 100 synthetic data points based on 5, 10, 20, and 50 real data points? Why not generate a larger dataset comparable to the last column in Table 1? 3. In practice, are there experiments that can demonstrate whether the parameters of a model trained on synthetic data are indeed within a certain neighborhood of those trained on real data? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the novelty of our method, our theoretically-rigorous framework, our well-supported claims and extensive experiments. ---- **Experimental validation**: - Problem formulation: gradient matching in Eq 3 applies to both the data-scarce regime (Fig 1) and Dataset Distillation (DD) (Table 1). In both, the synthetic data is generated by matching the average gradient of the few available examples (Fig 1) or of a larger training dataset (Table 1). - Evaluation: while accuracy is an appropriate metric to measure the performance in both cases, we include the following [table](http://anonymous.4open.science/r/gradmm/fid_score.png) for Fig 1a (SST2), which shows the divergence (FID) between the (i) training data distribution, (ii) distribution of the few available real examples, (iii) distribution of the 100 GRADMM synthetic data, (iv) distribution of 100 zero-shot synthetic data. Our synthetic data has a smaller FID, confirming its more similar distribution to that of real training data, compared to the baselines. This corroborates the superior performance of GRADMM. - FID is not generally used to evaluate DD (Table 1), which aims at generating a **small** subset of synthetic data with similar dynamics to a large training dataset. This is because: (i) FID requires both distributions to have a large sample size such that they resemble Gaussian distributions, and is otherwise less accurate. (ii) The distribution of the small generated data is not comparable to the large real training data (in particular, their variances are not comparable due to their very different sizes). Hence, FID is not commonly used for DD (c.f. the DD related work in Sec 2.1), but is more common when generating large data with diffusion models. **Theoretical bounds**: - The following [figures](https://anonymous.4open.science/r/gradmm/grad_diff/) confirm the validity of our theoretical assumption, by showing that the gradient error, i.e. 
$\| \nabla \mathcal{L}(\theta_t) - \nabla \mathcal{L}^s(\theta_t) \| \leq \| \nabla \mathcal{L}(\theta_t) \|$ at the pretrained parameters, and this relation holds during fine-tuning. Crucially, the GRADMM-generated data has a much smaller gradient error compared to the zero-shot baseline during fine-tuning, corroborating its superior performance. - We generated 200 synthetic examples in Figure 1a. The accuracy of the model fine-tuned on this larger subset is 90.1 $\pm$ 0.1, which is higher than the 89.8 $\pm$ 0.4 of training on the original 100 synthetic examples, and is closer to the last column in Table 1. - We compared the $L_2$ norm of the difference in model parameters when trained on the real training data vs the 100 synthetic examples generated for SST2. This difference is 1.99 for GRADMM, which is smaller than the 2.27 for zero-shot. This further confirms the validity of our theory. **Privacy**: - Thanks! We will add discussion of the importance of privacy, data leakage and benefits of synthetic data to our revised version. - GRADMM is the first *dataset distillation* method able to generate human readable text. DD methods provide differential privacy guarantees [1]. Specifically, for two datasets that differ in only 1 example, the parameters of the models trained on the distilled version of the two datasets are provably highly similar [1]. Intuitively, as dataset distillation methods generate data by techniques such as matching the mean of the data distribution or the average gradient of real training data, as long as the synthetic data is not initialized with real training examples, there is no information leakage about individual training examples [1]. GRADMM does not initialize synthetic data with real training examples, and hence effectively preserves the privacy of the training data. - We conducted the loss-based MIA on the model trained on GRADMM synthetic data for SST2. Specifically, we select N=100 member samples from the training subset and N non-member samples. 
Then, we find the optimal threshold that maximizes the advantage (2 x (acc - 50%)) on these 2N samples. Finally, we test the loss-based MIA with the optimal threshold on another 2N samples consisting of N members and N non-members, and report the advantage score. We repeated the whole process 10 times and the advantage score (%) is only 1.75 $\pm$ 1.4, demonstrating the effectiveness of GRADMM against loss-based MIA. - The following [figures](https://anonymous.4open.science/r/gradmm/embedding_distance/) show the histogram of the distances of synthetic examples to their closest real training example. None of the synthetic examples generated by GRADMM are very similar to the real training examples, further confirming that our synthetic data is not identical to real examples. We will add the above discussion and results for all the datasets to our revised version. We hope that our rebuttal has addressed the reviewer's concerns and that they can consider further supporting our work for acceptance.
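The loss-based MIA protocol described in this rebuttal can be sketched as follows. This is our hedged toy illustration: the per-example losses are simulated Gaussian draws rather than outputs of a real model, and `mu_gap` is a made-up parameter controlling how much member losses differ from non-member losses:

```python
import numpy as np

# Sketch of the loss-based MIA protocol: tune a loss threshold on one
# member/non-member split, then report the advantage on a held-out split.
rng = np.random.default_rng(0)
N = 100

def advantage(member_loss, nonmember_loss, thr):
    # predict "member" when loss < thr; advantage = 2 x (accuracy - 50%)
    acc = 0.5 * ((member_loss < thr).mean() + (nonmember_loss >= thr).mean())
    return 2 * (acc - 0.5)

def run_attack(mu_gap):
    # members have lower average loss when mu_gap > 0 (memorization signal)
    m_tune = rng.normal(0.9 - mu_gap, 0.3, N)
    n_tune = rng.normal(0.9, 0.3, N)
    thr = max(np.concatenate([m_tune, n_tune]),
              key=lambda t: advantage(m_tune, n_tune, t))
    m_test = rng.normal(0.9 - mu_gap, 0.3, N)   # fresh 2N held-out samples
    n_test = rng.normal(0.9, 0.3, N)
    return advantage(m_test, n_test, thr)

adv_leaky = np.mean([run_attack(0.5) for _ in range(10)])    # strong member signal
adv_private = np.mean([run_attack(0.0) for _ in range(10)])  # no member signal
assert adv_leaky > adv_private   # the tuned threshold transfers only when losses differ
```

A near-zero advantage, as the rebuttal reports for GRADMM, corresponds to the `mu_gap = 0` case where member and non-member losses are indistinguishable.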
Summary: This paper improves on the SOTA synthetic-data-for-LLMs methods by imposing a readability constraint in (2). This makes the alternation between text and embedding spaces of Section 4.2 necessary. The experiments are convincing. Claims And Evidence: They are good Methods And Evaluation Criteria: They are good Theoretical Claims: They are good Experimental Designs Or Analyses: They are good Supplementary Material: No, I didn't read carefully at all Relation To Broader Scientific Literature: I didn't review this aspect carefully Essential References Not Discussed: I didn't review this aspect carefully Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: Can this scheme be used to generate math, logic, and code? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for supporting our work and acknowledging our convincing experiments. ---- 1. Can this scheme be used to generate math, logic, and code? The idea of our work (generating synthetic text via gradient matching using ADMM) can be applied to math, logic or code. However, this requires incorporating additional structure into the generated text. Controllable text generation, which requires the text to follow a particular structure or particular syntactic or semantic properties, has been the topic of several recent works, including [Li et al’22, Gong et al’22, Zhou et al’24 (references are in Line 133 of the paper)], which generate text in the form of tables, code, etc. Such techniques can be incorporated into our method to generate structured synthetic text. Considering that our method is the first to show the applicability of gradient matching to generating readable synthetic text, it lays the ground for much exciting future work, including controllable text generation via gradient matching.
Summary: This paper discussed a method for generating synthetic data to train LLMs, aiming to create a synthetic dataset that induces training dynamics similar to the real data. Theories and experiments are provided to justify the effectiveness. Claims And Evidence: I think the evidence is not very convincing, especially regarding the theory. Here is the argument: the authors try to create synthetic data that can mimic the dynamics of real data **samples** regarding training dynamics. But is this really useful? I am highly skeptical. The purposes of generating synthetic data are mainly the following two: 1. to help mitigate the issue of limited real data; 2. privacy concerns. But the proposed method cannot solve either of them. In the case where real data samples are scarce, generating synthetic data cannot fix the gap in generalization (notice that the ultimate goal is to train to optimize the **expected loss**). In the case of privacy, the synthetic data generation directly uses real data, so privacy is also violated. I really doubt the usefulness of the proposed method. The experimental results are also quite weak and not comprehensive. Section 5.2.1 only uses 3 datasets and the evaluation metrics are not comprehensive. Effective sample size and ideally confidence intervals for multiple runs should be considered. Methods And Evaluation Criteria: Not comprehensive enough. See Claims And Evidence. Theoretical Claims: I highly doubt the theoretical results. Besides the statements mentioned above in Claims And Evidence, Lemma 4.1 relies on the pretraining errors of the models trained on synthetic and real data. I think these are not guaranteed to be small, and I do not think the statement makes much sense given this presumption. Experimental Designs Or Analyses: The evaluation is not comprehensive. For instance, Figure 1 only includes single trials without confidence intervals. Supplementary Material: I only quickly glanced at the supplementary material. 
Relation To Broader Scientific Literature: Synthetic data is quite important to scientific discovery, but this paper is not particularly related to that. Essential References Not Discussed: NA. Other Strengths And Weaknesses: As mentioned above, I really doubt the usefulness of the proposed method. The experimental results are also quite weak and not comprehensive. Section 5.2.1 only uses 3 datasets and the evaluation metrics are not comprehensive. Effective sample size and ideally confidence intervals for multiple runs should be considered. Other Comments Or Suggestions: NA. Questions For Authors: NA. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. However, we disagree with their evaluation of our work, as we discuss below. ---- **Data scarce regime**: Fig 1 provides strong evidence for the applicability of our method to the data scarce regimes. The 100 synthetic data generated by GRADMM based on as few as 5 to 20 *randomly selected real examples* already reach 98%, 89%, 96% the performance of training on the *full training data* (last col in Tab 1), and outperform training on the 5-20 real examples with a big margin. The following [table](http://anonymous.4open.science/r/gradmm/fid_score.png) compares (for SST2, Fig 1a) the divergence (in terms of FID) between the (i) training data distribution, (ii) the distribution of the few available real examples, (iii) the distribution of the 100 synthetic data generated by GRADMM and (iv) the distribution of 100 synthetic data generated using the zero-shot approach. Our synthetic data has a smaller FID, confirming that it has a more similar distribution to that of real training data, compared to baselines. This corroborates the superior performance of GRADMM. While the effectiveness of GRADMM depends on the diversity of the available real examples, our empirical results show that a small number of randomly selected examples can be leveraged to effectively reduce the *expected loss*. We do not claim that GRADMM *perfectly minimizes* the expected loss with limited information (a few available real examples), but it is considerably more effective than other baselines in the data scarce regime. **Privacy concerns**: Using real examples as a guide to generate synthetic data **does not compromise the privacy of real training examples**. As discussed in Sec 2, GRADMM is the first *dataset distillation (DD)* method able to generate human readable text. DD methods provide differential privacy guarantees [1]. 
Specifically, for two datasets that differ in only 1 example, the parameters of the models trained on the distilled version of the two datasets are provably highly similar [1]. Intuitively, as dataset distillation methods generate data by techniques such as matching the mean of the data distribution or average gradient of a real training data, as long as synthetic data is not initialized by real training examples, there is no information leakage about individual training examples [1]. GRADMM does not initialize synthetic data with real training examples, and thus effectively preserves the privacy of the training data. To confirm, we conducted the loss-based Membership Inference Attack (MIA) to the model trained on synthetic SST2 data generated by GRADMM. Specifically, we select N=100 member samples from the training subset and N non-member samples. Then, we find the optimal threshold that maximizes the advantages (2 x (acc - 50%)) on these 2N samples. Finally, we test the loss-based MIA with optimal threshold on another 2N samples consisting of N members and N non-members and report the advantage score. We repeated the whole process 10 times and the advantage score (%) is only 1.75 $\pm$ 1.4, demonstrating the effectiveness of GRADMM against loss-based MIA. Finally, the following [figures](https://anonymous.4open.science/r/gradmm/embedding_distance/) show the histogram of the distances of synthetic examples to their closest real training example. Indeed, none of the GRADMM synthetic examples are very similar to the real training examples, further confirming that our synthetic data is not very similar to real examples. **Theoretical assumption**: Our gradient matching approach ensures a high similarity to the target gradient. The following [figures](https://anonymous.4open.science/r/gradmm/grad_diff/) confirm the validity of our theoretical assumption, by showing that the gradient error, i.e. 
$\| \nabla \mathcal{L}(\theta_t) - \nabla \mathcal{L}^s(\theta_t) \| \leq \| \nabla \mathcal{L}(\theta_t) \|$ at the pretrained parameters, and this relation holds during fine-tuning. Crucially, GRADMM synthetic data has a much smaller gradient error compared to the zero-shot baseline during fine-tuning, which is the reason for its superior performance. **Empirical evidence**: The following [figures](https://anonymous.4open.science/r/gradmm/figure1) show the standard deviation for our experimental results (based on three runs) and confirm the strong performance of GRADMM on the 3 datasets in our paper, in addition to **two new datasets**, namely IMDB [Maas et al., 2011] and Sentence polarity [Pang et al., 2005]. We hope that our rebuttal could address the reviewer's doubts. Our work is the first to show the applicability of dataset distillation to generate readable synthetic text via gradient matching with strong empirical performance, and provides a promising avenue for future research. Thus, we hope that the reviewer considers supporting our work for acceptance. [1] Privacy for free: How does dataset condensation help privacy? ICML’22.
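As a side note on the FID comparisons referenced in this rebuttal, the Fréchet distance between two Gaussians has a closed form; below is a hedged sketch for the diagonal-covariance case (illustrative statistics only; real FID uses feature embeddings and full covariances, which require a matrix square root):

```python
import numpy as np

# Frechet distance between two Gaussians with diagonal covariances:
# FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
def fid_diag(mu1, var1, mu2, var2):
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

train = ([0.0, 0.0], [1.0, 1.0])   # stand-in "real training" statistics
close = ([0.1, 0.0], [1.1, 0.9])   # synthetic data with a similar distribution
far   = ([1.0, -1.0], [0.3, 2.0])  # synthetic data with a dissimilar distribution

assert fid_diag(*train, *train) == 0.0
assert fid_diag(*close, *train) < fid_diag(*far, *train)
```

A lower value, as the rebuttal reports for GRADMM versus the zero-shot baseline, indicates a distribution closer to the real training data.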
Learning from Loss Landscape: Generalizable Mixed-Precision Quantization via Adaptive Sharpness-Aware Gradient Aligning
Accept (poster)
Summary: This paper proposes a search-based method to find a mixed-precision quantization scheme that can work with a much smaller dataset. To enable generalization from the small search set to the large validation set, the paper proposes to seek quantization schemes that lead to flatter loss minima of the quantized model. This leads to the proposal of an adaptive sharpness-aware gradient aligning objective in the search process. ## update after rebuttal Post rebuttal, I agree with the authors that the proposed method can help calibrate the model on different datasets, no matter which policy search method is used. To this end, I increase my score to weak accept. Claims And Evidence: The claim that using sharpness information can help the search for a more generalizable quantization scheme is supported by both theoretical analysis and experimental results. However, the claimed advantage of the proposed method is not justified. For example: 1. The paper claims that searching an MPQ policy with a subset would reduce performance, yet this is only a drawback of differentiable search-based methods, not of other more advanced mixed-precision quantization methods like HAWQ. This claim is inaccurate and undermines the motivation of the proposed method. 2. In related work, the paper mentions that previous methods suffer from intricate calculation of feature maps, increasing search costs. However, it seems apparent that the gradient computation required for the proposed ASAM is more costly than the feature-map computation. The improved search cost claimed by the proposed method is only supported by the reduced number of epochs, ignoring the potentially higher cost of each epoch. Methods And Evaluation Criteria: The evaluation is conducted across multiple models and datasets. The setting is standard for the quantization literature. However, the paper uses "con-epochs" as a key criterion in the experiments, which is not a fair criterion.
The proposed method relies on SAM, which requires two backpropagations per optimization step, effectively increasing the convergence time over baseline methods even with a reduced epoch count. Real training time should be reported here as a better criterion. Theoretical Claims: I checked the method derivation from Eqs. (1) to (12). However, I did not check the theoretical analysis in Sec. 2.4. Experimental Designs Or Analyses: As reported in Table 1, HAWQ appears to be a significantly stronger baseline than the other DNAS-based methods, yet HAWQ is not used as a baseline in other settings. I believe HAWQ should be compared with in all settings to prove that the contribution of the proposed method is effective. Moreover, HAWQ performs a single-shot ILP-based search followed by finetuning. It is unclear how the H+A is performed in Table 1 and what con-epoch stands for here. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper aims to improve over previous DNAS-based mixed-precision quantization methods by including a SAM-based objective in the search process. As both DNAS-based methods and SAM objectives are well studied, this specific usage appears to be novel. However, it should be noted that DNAS-based methods are not promising compared with ILP-based methods like HAWQ for inducing mixed-precision quantization schemes. This fundamentally limits the significance of the proposed method. Essential References Not Discussed: References are adequate. Other Strengths And Weaknesses: Besides the weaknesses mentioned previously on the motivation, baselines, and evaluation metrics, SAM-based optimization has a fundamental drawback of complexity, which makes convergence challenging on more complicated architectures like transformers. The exploration of the proposed method is limited to simple CNN models in this paper. The scalability of the search and the SAM objective to transformer models is questionable.
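The double-backpropagation cost of SAM noted above can be made concrete with a toy sketch. This is a generic SAM update on a quadratic loss, not the paper's ASGA implementation; the `sam_step` helper, step sizes, and the toy loss are illustrative assumptions.

```python
# Sketch of one SAM update on a toy quadratic loss L(w) = 0.5 * w^T H w,
# illustrating that each step needs TWO gradient evaluations ("backprops"):
# one at w, one at the adversarially perturbed w + eps. Plain SGD needs one.
import numpy as np

H = np.diag([10.0, 1.0])               # toy Hessian of the quadratic loss
loss = lambda w: 0.5 * w @ H @ w
calls = {"grad": 0}                    # count gradient evaluations

def grad(w):
    calls["grad"] += 1
    return H @ w

def sam_step(w, lr=0.05, rho=0.05):
    g = grad(w)                                   # backprop #1: gradient at w
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascend to the local worst case
    g_adv = grad(w + eps)                         # backprop #2: gradient at w + eps
    return w - lr * g_adv                         # descend with the perturbed gradient

w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w)
print(calls["grad"])   # 200: two gradient evaluations per step, vs 100 for SGD
```

This is why comparing methods only by epoch counts, without wall-clock or per-step cost, can be misleading.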
Other Comments Or Suggestions: Post rebuttal, I agree with the authors that the proposed method can help calibrate the model on different datasets, no matter which policy search method is used. To this end, I increase my score. Questions For Authors: Please address the weaknesses mentioned in the previous parts. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Q1 The claims of ASGA advantages and search cost. A1: Many thanks. We think the ambiguous presentation of this claim may have caused your confusion about this work. Our experiments (**Table 1 in Section 3 of our paper**) show that **using ASGA with ResNet50 saves 16 GPU hours with better Top-1 compared to HAWQ alone, which validates our motivation**. We explain this below. HAWQ's second-order Hessian information is data-sensitive [1]. It works well if the proxy subset distribution matches the target domain; otherwise, performance degradation occurs since the distribution shift may disrupt the Hessian estimation. In practice, the proxy dataset often has a distribution discrepancy, especially in privacy-sensitive scenarios where directly drawing a subset of the target data is not allowed. Conversely, ASGA's generalizable policy—searched on a proxy dataset—can obtain satisfactory performance on target datasets with a large reduction in search time (**Table 1**). We will improve the related presentation (related work). As for search cost, we have added comparison experiments with GPU-hour evaluation (**see A2**). --- [1]: Hawq: Hessian aware quantization of neural networks with mixed-precision, ICCV 2019. --- ## Q2 Is the evaluation criterion fair? A2: We have added experimental results of real training time below (GH means GPU hours). However, we still believe con-epoch can be used for our scenarios. Reconsider our two-step workflow: "X+A" means that ASGA first finds a generalizable MPQ policy with a small proxy dataset, and then baseline X finetunes this policy on the target dataset for several epochs, where ASGA is not involved; thus the search cost of each baseline at each epoch is **not increased significantly by ASGA**. Note that the generalizable policy found by ASGA can be reused by baselines that require the same model structure, which **largely reduces the search burden**.
Without ASGA, baselines must train from scratch on the large dataset, requiring much more search time (**Table 1**).

| Model | Methods | W/A | con-epoch | Top-1 | GH of ASGA | GH/epoch | Total GH |
| :------: | :-----: | :---------: | :-------: | :------: | :--------: | :------: | :-------: |
| ResNet50 | HAWQ | 4MP/4MP | 98 | 74.6 | - | 3.04 | 298.2 |
| | **H+A** | **4MP/4MP** | **92** | **74.9** | **2.4** | **3.05** | **283.1** |
| ResNet18 | GMPQ | 3MP/3MP | 96 | 66.3 | - | 1.24 | 119.05 |
| | **G+A** | **3MP/3MP** | **80** | **66.4** | **2.1** | **1.24** | **101.5** |
| ResNet18 | SEAM | 3MP/3MP | 96 | 65.1 | - | 1.11 | 106.53 |
| | **S+A** | **3MP/3MP** | **85** | **65.8** | **2.1** | **1.11** | **96.48** |

## Q3 Compare with HAWQ as baseline in other settings. A3: We need to clarify that "H+A" refers to using HAWQ to finetune the MPQ policy searched by ASGA, as presented in **A2**, while "con-epoch" denotes the number of iterations required for convergence. In this sense, both DNAS and HAWQ can be used as baselines to utilize the policy found by ASGA. Then, we have added experimental results on the ResNet18 model, which show that by fine-tuning the policy of ASGA, HAWQ **saves 2.7 GH** (GH means GPU hours). Due to time limitations, the experiments on MobileNet-V2 will be provided in the future.

| Model | Methods | W/A | con-epoch | Top-1 | GH of ASGA | GH/epoch | Total GH |
| :-----: | :-----: | :---------: | :-------: | :------: | :--------: | :------: | :-------: |
| ResNet18 | HAWQ | 4MP/4MP | 95 | 67.3 | - | 1.14 | 108.3 |
| | **H+A** | **4MP/4MP** | **90** | **67.4** | **2.1** | **1.15** | **105.6** |

## Q4 The significance of ASGA. A4: Notice that both DNAS- and ILP-based MPQs can employ ASGA to reduce search cost without sacrificing performance, which demonstrates the significance of the proposed method.
In fact, our ASGA is **well suited to enhancing ILP-based MPQs**; for instance, HAWQ-V3 can quickly finetune ASGA's policies for stable performance across different datasets and hardware. Beyond DNAS-based methods, **ASGA offers a flexible and efficient solution for enhancing various MPQs**, especially in resource-limited, cross-dataset scenarios. ## Q5 The scalability of ASGA on Transformers. A5: **ASGA is applicable to various models including Transformers**, since its core design—minimizing loss landscape sharpness—is not CNN-specific. In fact, [2] has demonstrated the scalability of SAM by applying it to Transformers (BERT and ViT). Note that current MPQ research (including SOTA works like EdMIPS and HAQ) primarily focuses on CNNs, and thus our experiments also center on CNN models rather than Transformers. We will extend our work to Transformers in future research—a direction that is theoretically justified and critical for advancing practical MPQ deployments. --- [2]: Towards efficient and scalable sharpness-aware minimization, CVPR 2022. ---
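The GPU-hour accounting in the tables of A2 and A3 can be sanity-checked with a small computation, assuming (our reading; the rebuttal does not state the formula) that Total GH ≈ GH of ASGA + con-epoch × GH/epoch. The reported totals match this decomposition to within rounding of the per-epoch figure.

```python
# Sanity check of the GPU-hour totals in the A2 table, under the assumed
# decomposition Total GH ~= GH of ASGA + con-epoch * GH/epoch. Small gaps
# come from the per-epoch figures being rounded to two decimals.
rows = [
    # (method, gh_asga, con_epoch, gh_per_epoch, reported_total)
    ("HAWQ", 0.0, 98, 3.04, 298.2),
    ("H+A",  2.4, 92, 3.05, 283.1),
    ("GMPQ", 0.0, 96, 1.24, 119.05),
    ("G+A",  2.1, 80, 1.24, 101.5),
    ("SEAM", 0.0, 96, 1.11, 106.53),
    ("S+A",  2.1, 85, 1.11, 96.48),
]
for name, asga, epochs, per_epoch, total in rows:
    est = asga + epochs * per_epoch
    print(f"{name}: estimated {est:.1f} GH vs reported {total} GH")
```

Under this reading, the "GH of ASGA" column is a one-off overhead that the reduced epoch count more than amortizes.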
Summary: This paper aims to reduce mixed-precision quantization search costs by decoupling the policy-search and model-deployment datasets. In this way, the mixed-precision quantization policy of a model can be searched on a small-scale dataset and then transferred to a large-scale one for deployment. To this end, the authors introduce sharpness-aware minimization (SAM) to ensure generalization, as well as a gradient alignment method to handle gradient conflicts and an adaptive perturbation radius to accelerate optimization. Claims And Evidence: Yes. Methods And Evaluation Criteria: I appreciate the authors' efforts; however, [1] already explored the feasibility of combining SAM with quantization. While I see the authors aim to organize this submission from another perspective (i.e., decoupling the search/deployment datasets), the perspective itself is still not new. Therefore, this submission looks a bit like A+B to me now. Moreover, I think at least some discussion and preliminary experiments should be provided in this submission to further distinguish the differences between the proposed method and [1]. [1] Sharpness-aware Quantization for Deep Neural Networks Theoretical Claims: Checked. Experimental Designs Or Analyses: I think the authors could provide more comparisons with recently published papers. Supplementary Material: Yes. Relation To Broader Scientific Literature: This work extends the previous works (GMPQ, SEAM). Essential References Not Discussed: While this submission provides a basic discussion of mixed-precision quantization, it lacks papers published in the last 3 years, e.g., [1][2][3][4]. Moreover, the baselines are really old, e.g., EdMIPS (CVPR'20), GMPQ (ICCV'21), HAQ (CVPR'19).
[1] Retraining-free Model Quantization via One-Shot Weight-Coupling Learning, CVPR 2024 [2] One-shot Model for Mixed-Precision Quantization, CVPR 2023 [3] SDQ: Stochastic Differentiable Quantization with Mixed Precision, ICML 2022 [4] Mixed-Precision Network Quantization via Learned Layer-wise Importance, ECCV 2022 Other Strengths And Weaknesses: I checked the two most related works, GMPQ and SEAM, and found that Fig. 2 and Eqs. (4)(5) of this paper are quite similar to SEAM's, and Fig. 7 is quite similar to GMPQ's. Other Comments Or Suggestions: N/A Questions For Authors: Combining quantization with SAM seems promising but raises an additional question: can it improve generalization performance? In other words, does the searched mixed-precision quantization policy provide better generalization performance compared to existing works (SEAM, GMPQ)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Q1 The novelty of the work. A1: Many thanks for your comment! This is a valuable question. The method SAQ [1] you mentioned appears similar to our work, but **the two are actually different**. First, the goal of our work is **fundamentally different**. SAQ quantizes and trains the target model **on the same dataset**, aiming to enhance the model's robustness by smoothing the loss function to prevent overfitting. In contrast, our ASGA aims to significantly boost search efficiency on a large dataset by searching for a generalizable MPQ policy on a small-scale proxy dataset, enabling it to achieve SOTA accuracy with fewer computational resources on a large-scale target dataset (**Table 1 in Section 3 of our paper**). The advantage of ASGA lies in improved **cross-dataset generalization**. Second, SAQ focuses on **non-differentiable fixed-precision quantization**, particularly uniform quantization scenarios, whereas ASGA is designed for **mixed-precision quantization**. Comparative studies [2][3] demonstrate that MPQ **achieves a better complexity-accuracy** balance in practical applications. Moreover, we emphasize that **ASGA is not merely an "A+B" combination**. Beyond separating the search and deployment datasets, ASGA's primary contribution lies in optimizing the existing MPQ training paradigm, **offering novel solutions for cross-dataset generalization, convergence efficiency, and computational resource utilization**. --- [1]: Sharpness-aware quantization for deep neural networks[J]. arXiv preprint arXiv:2111.12273, 2021. [2]: Haq: Hardware-aware automated quantization with mixed precision, CVPR 2019. [3]: Towards mixed-precision quantization of neural networks via constrained optimization, ICCV 2021. --- ## Q2 Literature supplementation and old baselines. A2: Thanks for the advice. We will cite these papers in the revision.
Regarding the outdated baselines, we added comparative experiments on ResNet18 using the available source code from the papers you provided (GH means GPU hours). The results show that with the ASGA-searched MPQ policy, RFQuant obtains up to a **1.11x improvement in search efficiency**, further validating our method's effectiveness. Due to time limitations, comparison experiments with more recent baselines will be conducted in our subsequent research.

| Methods | W/A | con-epoch | Top-1 | Top-5 | GH of ASGA | GH/epoch | Total GH |
| :-----------: | :---------: | :-------: | :------: | :------: | :--------: | :------: | :-------: |
| RFQuant[4] | 4MP/4MP | 94 | 65.8 | 84.4 | - | 1.30 | 122.4 |
| **R[4] + A** | **4MP/4MP** | **85** | **65.9** | **84.9** | **2.1** | **1.31** | **113.4** |

--- [4]: Retraining-free model quantization via one-shot weight-coupling learning, CVPR 2024. --- ## Q3 The differences between ASGA, SEAM and GMPQ. A3: Thanks for your comment! In fact, both ASGA and GMPQ have a similar goal, i.e., finding a policy that generalizes across datasets, and thus share a similar quantization process (hence the resemblance in Fig. 7). Regarding Fig. 2 and Eqs. (4)-(5), both SEAM and ASGA build upon the differentiable MPQ framework, and thus share several differentiable search formulations. However, our method is **fundamentally distinct from SEAM and GMPQ**: SEAM seeks a generalizable MPQ policy by minimizing intra-class compactness and maximizing inter-class separation, while GMPQ exploits attribution consistency and relies on a pre-trained full-precision model for quantized-model alignment, requiring extra training costs. Different from them, ASGA focuses on loss-sharpness optimization with **no need for a pre-trained model**, which is more efficient. For the sake of clarity, we will revise the related figures and equations the reviewer pointed out in the revision. ## Q4 Performance improvement of ASGA compared with GMPQ and SEAM. A4: Many thanks.
First, many studies [5][6] have theoretically proven that minimizing loss-landscape sharpness can enhance a model's generalization ability. Then, we conducted extensive experiments to validate the superiority of ASGA over SEAM and GMPQ. Experimental results in **Table 1** show that the quantized model derived from the MPQ policy found by ASGA (for ResNet18) **achieves a Top-1 improvement of 0.7% and 0.1% on ImageNet over those of SEAM and GMPQ**, respectively, and **saves training costs by 17% and 11.5%**, respectively. Moreover, the comparative experiments over a set of baselines in **Table 1** show that by using a small dataset of **only 0.5% the size of the large dataset** to search for quantization policies, ASGA **achieves the same accuracy on the large dataset and improves search efficiency by up to 150%**. These results validate **the superiority of ASGA over both SEAM and GMPQ in improving the generalization of quantization policies**. --- [5]: Fisher sam: Information geometry and sharpness aware minimisation, ICML 2022. [6]: Surrogate Gap Minimization Improves Sharpness-Aware Training, ICLR 2022. ---
Summary: In this paper, the authors propose a novel mixed-precision quantization method (ASGA) that learns from the sharpness of the loss landscape, which improves quantization generalization across datasets and thereby reduces search cost. The idea of introducing a sharpness measure into quantization is interesting and shows an obvious merit: the quantization policy searched on a small dataset can be directly applied to a large one. To realize this, the authors propose an enhanced sharpness measure that aligns gradient directions and dynamically adjusts the perturbation radius during training. This design handles conflicts between different optimization objectives and accelerates convergence. The authors also provide theoretical analysis demonstrating the effectiveness of ASGA in reducing the upper bound of the quantization generalization error and ensuring convergence. Experimental results show that ASGA achieves a significant speedup in policy search compared to state-of-the-art methods while keeping comparable accuracy on large datasets. Claims And Evidence: The paper conducts both theoretical analysis and extensive experiments on a variety of datasets and models. The results demonstrate that the proposed method can achieve high accuracy with less search cost, validating the claims. Methods And Evaluation Criteria: The Adaptive Sharpness-Aware Gradient Aligning (ASGA) method is designed to tackle the challenge of heavy search overhead in Mixed-Precision Quantization (MPQ). It is reasonable to utilize proxy datasets such as CIFAR10, along with benchmark datasets like ImageNet and VOC, to assess the generalization capabilities and performance of quantization strategies across different tasks. Theoretical Claims: The proof of generalization performance in Lemma 1, based on PAC-Bayesian theory, is clear. The convergence analysis in Lemma 2 is correct, with clear steps.
Experimental Designs Or Analyses: The use of multiple proxy and target datasets in the experiments is valid, as it helps assess generalization. Supplementary Material: The supplementary material provides useful details such as the differences between the proposed method and conventional ones, details of the gradient-aligning technique, and the proof of the theoretical lemmas. Relation To Broader Scientific Literature: The paper's key contribution of using loss-landscape sharpness for MPQ policy search builds on prior work on sharpness and generalization in neural networks. It differs from existing MPQ methods, which rely on complex feature-map calculations. By leveraging simple sharpness measures, it offers a more efficient approach, filling a gap in the literature where no prior work applied sharpness to MPQ generalization. Essential References Not Discussed: The paper comprehensively covers the relevant MPQ and sharpness-related literature. Other Strengths And Weaknesses:

Strengths
1. The idea of the work is interesting. The use of loss-sharpness information in transferable quantization search (which can be trained with small proxy datasets) is well motivated, showing great potential to deal with the efficiency issue suffered by current MPQ methods.
2. The paper is well organized and the presentation is generally clear.
3. The theoretical analysis in Lemma 1 is important, showing the impact of loss sharpness on the upper bound of the quantization generalization error.
4. The observation about the relationship between sharpness and MPQ transferability is important for understanding the motivation of the work.
5. The strategies tailored for gradient-direction alignment are clear and effective.
6. The experiments are convincing, demonstrating that a small sharpness measure learned from a proxy dataset helps seek a transferable MPQ policy for quantizing the model trained on large datasets. The experimental results are good in both accuracy and efficiency.
7. The experimental results for ResNet18 and ResNet50 on CIFAR10 are promising. The proposed method achieves a considerable speedup compared to SOTA methods.
8. The method has great potential for practical applications of neural network quantization.

Weaknesses
1. The paper employs certain assumptions in its theoretical analysis, such as those related to PAC-Bayesian theory and assumptions in the convergence analysis of stochastic optimization. These assumptions may not always hold in practical application scenarios. The authors should explain how the theoretical conclusions drawn from these assumptions effectively guide actual mixed-precision quantization (MPQ) tasks.
2. In the experimental section, you selected CIFAR10, Flowers, and Food as proxy datasets and used ImageNet and VOC as target datasets. The authors should ensure that this choice of datasets adequately represents a wide range of scenarios and confirm that the performance of the ASGA method still holds for other types of datasets. Additionally, the authors should have plans to conduct experiments on a broader range of datasets for validation.
3. The paper mentions that the ASGA method has been tested on different models, such as ResNet-18, ResNet-50, and MobileNet-V2. However, these models have significantly different structural characteristics. The authors should clarify the specific performance of the ASGA method in terms of its adaptability to model structures. Additionally, the authors should consider whether the method needs to be adjusted accordingly for more complex or specialized model structures.
4. This paper mainly focuses on image classification and object detection tasks in the theoretical analysis and experimental verification. It is recommended that the authors clarify the applicability of the ASGA method in other deep learning fields, such as natural language processing and speech recognition. The authors should also determine whether this method requires significant modification to be extended to these fields.

Other Comments Or Suggestions: Please check Other Strengths And Weaknesses. Questions For Authors: Please check Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: ## Q1 The rationality of the theoretical assumptions in the article. A1: Many thanks for your insightful comments! As you mentioned, our theoretical analysis employs certain assumptions, including the PAC-Bayesian framework and convergence analysis under stochastic optimization. We adopt PAC-Bayesian theory primarily because it provides a rigorous theoretical foundation for model generalization capability. Building upon this framework, we theoretically demonstrate that reduced loss sharpness in quantized models leads to tighter upper bounds on generalization error, thereby enhancing the generalization of MPQ policies. Our experimental results validate this theoretical insight: when applying ASGA for MPQ search, we observe a **1.5% improvement in Top-1 accuracy on ImageNet compared to conventional methods**. This empirically confirms that PAC-Bayesian error bounds effectively measure MPQ generalization capability in practice. Furthermore, **Figure 6 in our paper** illustrates the sharpness reduction during MPQ search, demonstrating ASGA's effectiveness in minimizing $\sigma_{max}$. These experimental findings align precisely with our theoretical predictions, reinforcing confidence in the validity of our analysis. Additionally, we analyze ASGA's convergence properties and derive its convergence upper bound under non-convex optimization conditions. As evidenced in **Figure 4**, our method demonstrates **stable convergence**, where the adaptive $\rho$ parameter **enhances training stability** for MPQ. ## Q2 Is the dataset used representative? A2: Many thanks. We selected multiple proxy and target datasets based on their widespread use and representativeness in image classification tasks. These datasets cover diverse image categories, styles, and difficulty levels, providing preliminary validation of ASGA's effectiveness and generalization capability. 
Experimental results demonstrate stable improvements in MPQ transfer performance across all scenarios. Although these carefully selected datasets can, to a certain extent, already reflect the performance of ASGA quite well, it is undeniable that they still have certain limitations; for example, the data coverage of some specific fields may not be comprehensive enough. Nevertheless, we believe ASGA's theoretical foundation and design approach possess universal applicability. To further evaluate ASGA's broad applicability, we plan to test it on larger-scale datasets, non-vision tasks (speech classification, autonomous driving detection), and extremely low-bit quantization tasks to ensure its effectiveness in more complex quantization environments. ## Q3 The adaptability and scalability of the proposed method to models? A3: Thanks for your comment! ASGA **requires no substantial modifications when applied to different models**. The core idea of the ASGA method lies in smoothing and optimizing the loss landscape. Through this approach, it effectively reduces undesirable characteristics of the loss landscape, such as sharp local minima, and thereby significantly enhances the model's generalization ability, enabling it to better handle different input data and task scenarios. Our experimental validation in **Table 1 in Section 3 of our paper** across diverse model architectures, including ResNet18 and others, has consistently demonstrated improved performance, confirming ASGA's applicability to various structural designs. While minor parameter adjustments might be necessary when applying ASGA to more complex or specialized model architectures, such potential modifications would not compromise the method's general applicability. The architectural independence of our approach stems from its operation at the fundamental level of loss-landscape optimization rather than specific network topologies. ## Q4 The scalability of the proposed method in other deep learning fields.
A4: Thanks. As previously mentioned, ASGA demonstrates considerable versatility and can be applied to other deep learning domains without requiring substantial modifications. We plan to not only further investigate ASGA's performance in areas such as natural language processing, where its ability to handle sparse gradients and non-convex landscapes could offer distinct advantages, but also to deploy ASGA across diverse hardware scenarios, including both low-power devices and high-throughput systems, to explore its broader potential applications. These efforts will be complemented by rigorous benchmarking against state-of-the-art optimizers to obtain trade-offs in convergence speed, memory efficiency, and generalization. --- Rebuttal Comment 1.1: Comment: I have carefully reviewed the authors’ rebuttal, and all of my previous questions have been clearly addressed. I have updated my evaluation accordingly.
Summary: This paper proposes an Adaptive Sharpness-Aware Gradient Aligning (ASGA) method for generalizable mixed-precision quantization. ASGA aims to address the excessive search burden in MPQ through quantization generalization, i.e., searching for quantization policies on small proxy datasets and then generalizing them to large-scale dataset tasks. Specifically, the authors consider a series of sharpness-improvement strategies, including SAM, gradient alignment, and an adaptive perturbation radius strategy, which balance accuracy, complexity, and generalization in the MPQ search. The paper conducts both theoretical analysis and experiments on a set of datasets, which validate the effectiveness of the proposed method. Ablation studies confirm the effectiveness of its components. In summary, this work provides a new way to address MPQ's high search burden. ## update after rebuttal Claims And Evidence: In this work, all the statements are well backed by theoretical demonstrations or experimental findings, and I found no problems with them. Methods And Evaluation Criteria: ASGA aims to solve the high search cost of MPQ. Using proxy datasets like CIFAR10 and benchmark datasets such as ImageNet and VOC helps evaluate the generalization and performance of quantization strategies for various tasks. Theoretical Claims: I have carefully reviewed the theoretical proof sections of the article and believe they are correct. Experimental Designs Or Analyses: The datasets and baseline methods used in the article are representative and can be used to evaluate the effectiveness of the method. Supplementary Material: I have carefully reviewed the content of the supplementary material. Relation To Broader Scientific Literature: The paper innovatively uses loss-landscape sharpness to enhance MPQ policy search, which builds on neural network sharpness-generalization research.
Unlike existing MPQ methods with complex feature-map calculations, it uses simple sharpness measures for a more efficient approach. This work is the first attempt to apply the sharpness of the loss landscape to MPQ generalization. Essential References Not Discussed: The related work discusses the most relevant works on MPQ and sharpness. Other Strengths And Weaknesses:

Strengths
1. The paper has a clear structure and its idea is well illustrated, making it very easy to understand.
2. The utilization of the sharpness of the loss landscape for MPQ policy search is novel.
3. The design is supported by sufficient theoretical proofs.
4. The experimental design of the work is reasonable and highly persuasive, verifying the effectiveness of the proposed method.
5. The proposed method is computationally efficient, since it does not introduce additional computational resource overhead.
6. The design of how to exploit the loss-sharpness information is clear.
7. The experimental results show the effectiveness of the proposed method in tackling various MPQ tasks.

Weaknesses
1. The authors should briefly explain why σmax can measure the sharpness of the loss landscape.
2. The authors enhance model generalization by minimizing loss sharpness. However, during actual training, the model may get stuck in locally flat regions, leading to convergence to suboptimal solutions. How can we ensure that the ASGA method, while pursuing a flat loss landscape, does not excessively get trapped in locally flat regions and miss the globally optimal solution?
3. Have the authors considered extending the ASGA method to other fields? For example, applying the ASGA method to other types of quantization, such as binary quantization?
4. The paper claims that the ASGA method can be applied to edge intelligence applications. A natural question is whether ASGA can be applied to other kinds of hardware platforms, for example, deploying this method on a Raspberry Pi?

Other Comments Or Suggestions: Please refer to the weaknesses presented in Section "Other Strengths and Weaknesses". Questions For Authors: Please refer to the weaknesses presented in Section "Other Strengths and Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

## Q1 The authors should briefly explain why σmax can measure the sharpness of the loss landscape.

A1: Many thanks for your comments! First, reference [1] has proven that $\sigma_{max}$ is positively correlated with the sharpness of the loss landscape. Moreover, $\sigma_{max}$ is the eigenvalue of the Hessian with the largest absolute value, which reflects the curvature of the loss landscape near a local minimum. In other words, a larger curvature indicates a sharper loss surface, leading to poorer model generalization. Conversely, a smaller $\sigma_{max}$ corresponds to flatter regions where the loss changes gradually, leading to better generalization. Therefore, using $\sigma_{max}$ as a measure of the sharpness of the loss surface is appropriate.

---

[1]: Surrogate Gap Minimization Improves Sharpness-Aware Training, ICLR 2022.

---

## Q2 The authors enhance model generalization by minimizing the loss sharpness. However, during actual training, the model may get stuck in locally flat regions, leading to convergence to suboptimal solutions. How can we ensure that the ASGA method, while pursuing a flat loss landscape, does not get trapped in locally flat regions and miss the global optimum?

A2: Thanks for the insightful comment! Indeed, avoiding getting trapped in locally flat but suboptimal regions while pursuing a flat loss landscape is a critical challenge. We recognize this issue, and thus incorporate both the empirical loss and the perturbed loss into the design of the ASGA loss function. This aims to maintain a low loss value while seeking a flat landscape. In addition, the implicit gradient alignment mechanism helps minimize sharpness while preventing the convergence stagnation caused by gradient conflicts. This ensures that the model converges along a flat yet globally favorable direction of the loss landscape.
From an experimental perspective, we validate our strategy across multiple datasets, demonstrating that our method can **obtain superior quantization policies to direct optimization on the target dataset while simultaneously improving generalization performance**.

## Q3 Have the authors considered extending the ASGA method to other fields? For example, applying the ASGA method to other types of quantization, such as binary quantization?

A3: Many thanks. We are indeed considering applying ASGA to other quantization methods. The core idea of ASGA - improving model generalization by optimizing the sharpness of the loss landscape - **is fundamentally generalizable**. For instance, the binary quantization you mentioned represents an extreme form of low-bit quantization. However, it imposes very strong constraints, which may lead to drastic changes in the loss landscape. As a result, the model optimization process becomes extremely difficult, and it is challenging to find the optimal solution. **ASGA's sharpness optimization and adaptive gradient alignment mechanism can effectively mitigate this issue**. Therefore, we plan to further investigate and validate the effectiveness of ASGA on other quantization approaches in future research. We are confident that ASGA can also demonstrate excellent performance on other quantization methods.

## Q4 The authors claim that the ASGA method can be applied to edge intelligence applications. A natural question is whether ASGA can be applied to other kinds of hardware platforms, for example, a Raspberry Pi.

A4: Thanks for the comment! In edge intelligence applications, the primary challenges are reducing computational overhead and enhancing model generalization, objectives that align closely with ASGA's design goals.
While this paper focuses on ASGA's application to conventional edge devices, we believe our method is applicable to resource-constrained platforms such as the Raspberry Pi. As pointed out in Section 3 of our paper, unlike the traditional model training process, ASGA only needs to conduct the policy search on a small-scale proxy dataset and then fine-tune on the large-scale target dataset. This approach can **significantly reduce the computational overhead for resource-constrained devices**. Thus, even for low-power devices like the Raspberry Pi, ASGA can be practically deployed by performing the quantization policy search offline before on-device deployment, making it feasible for energy-efficient scenarios.
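The rebuttal's point about $\sigma_{max}$ (the Hessian eigenvalue of largest absolute value) as a sharpness proxy can be made concrete with a short sketch. In practice $\sigma_{max}$ is usually estimated by power iteration on Hessian-vector products rather than by forming the full Hessian; here, purely for illustration, a toy quadratic loss with a known 2×2 Hessian is used so the estimate can be checked against the exact eigenvalue. This is not the ASGA implementation, just the standard estimation recipe.

```python
import numpy as np

# Toy loss L(w) = 0.5 * w^T A w, whose Hessian is the constant matrix A,
# so sigma_max can be verified against an exact eigendecomposition.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def hessian_vector_product(v):
    # For this quadratic loss, H v = A v. For a neural network one would
    # instead use a double-backward (Pearlmutter) Hessian-vector product.
    return A @ v

def sigma_max(hvp, dim, iters=200, seed=0):
    """Power iteration on Hessian-vector products: converges to the
    eigenvalue of largest absolute value, i.e. the sharpness measure."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)
    return float(v @ hvp(v))  # Rayleigh quotient at convergence

est = sigma_max(hessian_vector_product, dim=2)
exact = float(max(abs(np.linalg.eigvalsh(A))))
print(est, exact)  # the two values agree closely
```

A flatter loss surface corresponds to a smaller estimate here, which is exactly the quantity ASGA-style methods drive down during the policy search.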
Exploring Criteria of Loss Reweighting to Enhance LLM Unlearning
Accept (poster)
Summary: The paper studies the effect of token-wise reweighting on gradient-ascent-type unlearning algorithms. The authors consider two kinds of reweighting: importance-based and saturation-based. They find that saturation-based reweighting is often more effective than importance-based reweighting, and that it assigns lower weights to data with lower likelihoods while importance-based reweighting does the opposite. Based on these observations, the authors propose SatImp, which uses the product of the importance weight and the saturation weight for reweighting. They demonstrate the effectiveness of their method through extensive experiments on the TOFU, WMDP, and MUSE benchmarks.

Claims And Evidence: The claims made in the submission are supported by clear and extensive experimental results.

Methods And Evaluation Criteria: The authors use three popular unlearning benchmarks (TOFU, WMDP, MUSE) to evaluate the method. The choice of datasets is suitable for the problem at hand.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experiments look valid based on my understanding. However, the experiments are done without repetitions (i.e., the authors do not report the standard deviation (or error bars) of the performance of each method). For example, in Table 2, it is unclear whether SatImp outperforms existing methods, as the improvement seems to be marginal.

Supplementary Material: The supplement contains the code for the submission. I did not review the supplement.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: As far as I am aware, the authors have not omitted essential references in their submission.

Other Strengths And Weaknesses: The paper is well-written and extensive experiments are done to support the authors' findings. However, the method proposed in this work appears to be a straightforward combination of existing importance-based and saturation-based reweighting methods and does not seem to offer significant novelty.
Moreover, as shown in Tables 2 and 3, the improvement of SatImp over existing methods seems marginal and could be due to randomness in the evaluation/unlearning.

Other Comments Or Suggestions: N/A

Questions For Authors: In addition to the previous comments, I find the intuition that "importance-based methods put more weight on tokens with lower likelihood" a bit confusing. If my understanding is correct, in importance-based methods a token is assigned a large weight if and only if it contains important information in the text.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Many thanks for your constructive comments and suggestions! Please see our responses below.

**Q1 The experiments are done without repetitions, and it is unclear whether SatImp has a marginal improvement.**

Many thanks for your suggestions. Here we provide the standard deviations for part of Table 2 (due to the 5000-character limit); more results will be added if our paper is accepted. These are the results for LLaMA-2-7B.

| Method | 1\% ES Re. | 1\% ES Un. | 1\% FQ | 1\% MU | 5\% ES Re. | 5\% ES Un. | 5\% FQ | 5\% MU | 10\% ES Re. | 10\% ES Un. | 10\% FQ | 10\% MU |
| --------- | ------ | ------ | ------ | ------ | ------ | ------- | ------ | ------ | ------ | ------- | ------ | ------ |
| GA | 0.4747 ± 0.0158 | 0.1024 ± 0.0147 | 0.4127 ± 0.0082 | 0.5736 ± 0.0144 | 0.2212 ± 0.0148 | 0.0000 ± 0.0000 | 3.2e-11 ± 3.3e-12 | 0.5461 ± 0.0114 | 0.2216 ± 0.0118 | 0.0000 ± 0.0000 | 2.8e-30 ± 4.5e-31 | 0.5208 ± 0.0125 |
| PO | 0.7110 ± 0.0166 | 0.5031 ± 0.0136 | 0.1748 ± 0.0188 | 0.5310 ± 0.0070 | 0.6735 ± 0.0148 | 0.6054 ± 0.0181 | 7.3e-12 ± 5.5e-13 | 0.5564 ± 0.0134 | 0.6314 ± 0.0142 | 0.6124 ± 0.0106 | 6.3e-17 ± 3.2e-18 | 0.5174 ± 0.0121 |
| DPO | 0.6228 ± 0.0134 | 0.4000 ± 0.0211 | 0.0940 ± 0.0175 | 0.6053 ± 0.0145 | 0.5044 ± 0.0171 | 0.3846 ± 0.0130 | 4.4e-10 ± 6.2e-11 | 0.5759 ± 0.0108 | 0.6625 ± 0.0151 | 0.6174 ± 0.0126 | 7.1e-15 ± 4.9e-16 | 0.6158 ± 0.0188 |
| NPO | 0.6709 ± 0.0110 | 0.1106 ± 0.0150 | 0.5881 ± 0.0151 | 0.6341 ± 0.0122 | 0.5888 ± 0.0165 | 0.1150 ± 0.0209 | 0.0653 ± 0.0142 | 0.6197 ± 0.0185 | 0.6567 ± 0.0089 | 0.0941 ± 0.0141 | 0.0140 ± 0.0071 | 0.6267 ± 0.0147 |
| SimNPO | 0.7933 ± 0.0069 | 0.2993 ± 0.0140 | 0.0027 ± 0.0012 | 0.6246 ± 0.0164 | 0.6742 ± 0.0165 | 0.2096 ± 0.0053 | 0.0002 ± 0.0001 | 0.6144 ± 0.0084 | 0.6175 ± 0.0145 | 0.1862 ± 0.0120 | 4.9e-6 ± 3.3e-7 | 0.5970 ± 0.0061 |
| WGA | 0.7539 ± 0.0074 | 0.0631 ± 0.0206 | 0.2708 ± 0.0123 | 0.6218 ± 0.0095 | 0.6700 ± 0.0156 | 0.0001 ± 0.0001 | 7.8e-8 ± 6.0e-9 | 0.6291 ± 0.0074 | 0.6453 ± 0.0150 | 0.0029 ± 0.0008 | 5.3e-17 ± 2.7e-18 | 0.6324 ± 0.0148 |
| SatImp | 0.8091 ± 0.0208 | 0.0552 ± 0.0140 | 0.2583 ± 0.0103 | 0.6375 ± 0.0154 | 0.7000 ± 0.0106 | 0.0019 ± 0.0010 | 1.9e-17 ± 4.3e-18 | 0.6415 ± 0.0092 | 0.6706 ± 0.0175 | 0.0153 ± 0.0028 | 2.9e-18 ± 5.1e-19 | 0.6419 ± 0.0115 |

Regarding the marginal-improvement issue, which other reviewers have also raised: please note that, as mentioned in the Appendix (Line 1152), the hyper-parameter settings are fixed after the analysis on TOFU. The best-performing setting for the ES score is typically not the best for other metrics. Thus, with such a setting, the performance gaps may be marginal on FQ\&MU of TOFU, Acc. on WMDP, and the metrics on MUSE. After adjusting this setting, new results on WMDP and TOFU are shown in **Reviewer LUgf Q2** and **Reviewer 8Bwg Q2**.

**Q2 SatImp is a simple combination of existing methods, thus this paper does not seem to offer significant novelty.**

We sincerely appreciate your comments, but we respectfully disagree with the assessment regarding the lack of novelty of our work. As far as we know, we are the first to propose the importance-based (IB) reweighting mechanism (i). There were no IB annotations in any benchmark before this paper; we are the first to make such annotations and release them (in the supplementary material, to be made public once this paper is accepted). This paper is also the first to summarize existing methods within a universal saturation-based (SB) framework (ii). While a large body of literature has explored reweighting in LLM unlearning, no prior work has made such a summary to explain why these methods are effective from the reweighting perspective. Besides, we simplify IB and SB with a simple mapping, yielding SimImp (iii) and SimSat (iv). Finally, we propose SatImp, which further enhances performance via the above analysis.
Among these five findings, only (iv) coincides with a contemporary work (please refer to our response in **Reviewer LUgf Q1**). Thus, we sincerely hope you can re-evaluate the contribution and novelty of our paper.

**Q3 Confusion about the importance-based setting**

We apologize for any confusion. Your understanding is correct that a token is assigned a large weight if and only if it contains important information; in fact, this is what we do on Page 3, Line 160. As mentioned above, we are the first to make such annotations. However, there are numerous data points without importance labels, and it would be labor-intensive for humans to annotate all samples in existing datasets. Thus, we seek an approximate representation of the importance labels. We observe a high correlation between weights and likelihoods (Figure 2(e), (g)). Thus, we simplify importance-based and saturation-based methods as putting more weight on tokens with lower and higher likelihood, respectively.
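The product-style reweighting described in this rebuttal can be sketched in a few lines. The exact functional forms below — an importance factor $(1-p)^{\beta_1}$ that grows as token likelihood $p$ falls, times a saturation factor $p^{\beta_2}$ that shrinks for already-unlearned (low-$p$) tokens — are illustrative assumptions, not the paper's actual Eq. 12; they are chosen only to show how a product of the two branches behaves.

```python
import numpy as np

def satimp_weights(p, beta1=1.0, beta2=0.5):
    """Illustrative product-style token weight (assumed forms, not Eq. 12).

    imp: importance branch, larger for low-likelihood tokens.
    sat: saturation branch, larger for high-likelihood tokens.
    """
    imp = (1.0 - p) ** beta1
    sat = p ** beta2
    return imp * sat

# Per-token probabilities of a forget-set sequence under the current model.
p = np.array([0.01, 0.3, 0.7, 0.99])
w = satimp_weights(p)

# Reweighted gradient-ascent objective on the forget data: weights below 1
# slow down unlearning of already-saturated (low-p) tokens.
token_nll = -np.log(p)
forget_loss = -(w * token_nll).mean()
```

Note how the product damps both extremes: a nearly forgotten token (p = 0.01) and a still-confident token (p = 0.99) both receive small weights, consistent with the rebuttal's point that sub-1 weights prolong unlearning and help the model stop before an over-unlearned state.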
Summary: This paper studies the loss reweighting mechanism for gradient-difference-based LLM unlearning methods. It investigates the effect of importance-based reweighting and saturation-based reweighting in the TOFU-1% setting. Based on the observations, a new reweighting mechanism, SatImp, is proposed, and the results of the TOFU and WMDP experiments show some advantages.

Claims And Evidence:

**Claim 1:** Importance-based reweighting is better at unlearning while saturation-based reweighting is better at knowledge retention.
*Evidence:* Experiments in Section 4.1 aim to answer this claim.
*Comment:* This experiment does not fully support the claim, as it operates in a limited setting. The authors may include the performance for Imp only and Sat only in the main experiments shown in Tables 2/3 to fully support this claim.

**Claim 2:** Smoothness of the loss reweighting has a big impact on performance.
*Evidence:* Experiments in Section 4.2 aim to answer this claim.
*Comment:* Same as Claim 1; I don't think an experiment in a limited setting fully supports the claim.

**Claim 3:** More fine-grained token-level reweighting is better than span/instance/group-level reweighting.
*Evidence:* Experiments in Section 4.2 aim to answer this claim.
*Comment:* Same as Claims 1/2.

**Claim 4:** Soft reweighting is better than hard-sampling reweighting.
*Evidence:* Experiments in Section 4.2 aim to answer this claim.
*Comment:* I don't think this is closely related to the paper. Besides, intuitively, TopK/BottomK hard sampling is similar to some extreme cases of the shape of the weight distribution, so it can be covered by a study on the effect of varying weight-distribution shapes (shown in Figure 5) on unlearning performance.

**Claim 5:** SatImp is better than baseline methods like NPO/DPO/SimNPO in LLM unlearning.
*Evidence:* Experiments in Section 5 aim to answer this claim.
*Comment:* I have two concerns about this claim.
1. The performance of the baselines does not match the numbers reported in other papers. For example, NPO/SimNPO are reported to achieve near-1 FQ and close-to-original MU (>0.5) on Forget-1%, and higher numbers on Forget-5%/10%, compared to the numbers in Table 2.
2. SatImp is reported to achieve only 0.2491 MMLU performance for WMDP, which is close to random guessing. This indicates a heavy loss of the base LLM's knowledge. More discussion is needed for this experiment.

**Other major concerns:**
1. Hyper-parameter selection. The hyper-parameters for Equations 8/9/11/12 do not follow a consistent pattern. It seems very tricky to choose good parameters for the proposed reweighting.
2. The Sat reweighting in Equations 9 and 11 has two completely different forms. Is there any particular reason for this change?

Methods And Evaluation Criteria: The evaluation is conducted on popular LLM unlearning datasets, including TOFU/WMDP/MUSE, and the evaluation criteria are standard. I think it is reliable.

Theoretical Claims: No theoretical claims are made in this paper.

Experimental Designs Or Analyses: Please see the Claims and Evidence section for my comments on the experiments.

Supplementary Material: I checked supplementary Sections B and C to better understand the experimental setups, and some parts of Section E for the experimental results on the MUSE dataset.

Relation To Broader Scientific Literature: The idea in this paper is related to other loss reweighting methods for LLM unlearning. The experiments aim to understand the mechanism of several reweighting methods and combine them to achieve a better balance between unlearning and knowledge retention.
Essential References Not Discussed:

* A work involving loss reweighting only on forget data: LLM Unlearning via Loss Adjustment with Only Forget Data
* Several works about logit reweighting: Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference; UNDIAL: Self-Distillation with Adjusted Logits for Robust Unlearning in Large Language Models; Offset Unlearning for Large Language Models

Other Strengths And Weaknesses: Minor weaknesses:
1. Some notation is misleading. For example, Equation 8 uses $p$ as the hyper-parameter, which may cause confusion because in most cases $p$ in this paper refers to the token probability. Also, the variable names for the hyper-parameters used in Equations 8/9/10 are inconsistent, which may cause confusion. I would suggest using a consistent name like $\beta$, differentiated with subscripts.
2. Some information is missing from the figures. For example, what is the baseline in Figure 3 a-d?

Other Comments Or Suggestions:
* Algorithm 1 seems not too important, as it does not clearly correlate with the proposed algorithm, and it can be moved to the appendix.
* Incorporating the unlearning-retain performance trajectory along training would better illustrate the performance differences between methods; it is also a very popular illustration in previous LLM unlearning works.
* Incorporating the explanation of the ES scores in the main text, instead of only in the appendix, would make the paper easier for readers to digest.

Questions For Authors:
1. The proposed reweighting assigns weights lower than 1 to the token losses, which downscales the GD loss. Is there any intuition on why this is useful?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Many thanks for your constructive comments and suggestions! Please see our responses below.

**Q1 More experiments about Claims 1-4**

Thanks for the detailed review. For Claim 1, the relevant results are contained in Appendix Figures 13, 14 (m)-(x); we will include the performance of SimImp in all tables. For Claims 2 and 3, more results are shown in Appendix Figures 13-16. For Claim 4, the motivation for including the discussion of hard sampling versus soft weighting is that we do not want to overlook any element in our quest for better LLM unlearning performance. As mentioned in Line 324, there have been some concerns that hard sampling may outperform soft weighting, so it is necessary to settle this question. Only through such a thorough investigation can we derive reliable empirical conclusions, and our analysis can help subsequent work avoid detours.

**Q2 Claim 5: The baselines do not match the numbers in other papers, and the results on WMDP should be discussed.**

Regarding the WMDP results, the response is given in **Reviewer LUgf Q2**. Regarding the inconsistent FQ and MU results, we note that performance on the ES score and on FQ\&MU is not always aligned: when models obtain the best ES score, they may show mediocre FQ\&MU performance, because the objectives of ES and FQ\&MU are different. This is especially true for FQ and ES Un.: FQ requires the unlearned model to approach a standard model, while ES Un. requires the model to forget the unlearning data as much as possible. Several papers have mentioned this phenomenon. First, WGA also mentions this discrepancy. Second, in the paper 'LLM Unlearning via Loss Adjustment with Only Forget Data' that you mentioned as an essential reference, the FQ and MU are similar to the results in our Table 2. In this paper, we have made a comprehensive analysis on TOFU with the ES score. Therefore, in Table 2, we report the results with the ES score as the primary metric.
All methods were selected with the goal of optimizing the ES score. Due to the limited rebuttal length, here we report a subset of results that set FQ and MU as the primary metrics. Results with retain regularization on LLaMA-2-7B are shown as follows:

| Method | 1\% ES Re. | 1\% ES Un. | 1\% FQ | 1\% MU |
| --------- | ------ | ------ | ------ | ------ |
| GA | 0.4086 | 0.0732 | 0.9188 | 0.5228 |
| NPO | 0.4586 | 0.0744 | 0.9188 | 0.5379 |
| SimNPO | 0.4243 | 0.0822 | 0.9188 | 0.5332 |
| WGA | 0.6037 | 0.0498 | 0.9900 | 0.5926 |
| SatImp | 0.7242 | 0.0405 | 0.9900 | 0.6350 |

**Q3 Confusion about Eqs. 8, 9, 11, 12**

We sincerely apologize for the confusion about the exploration process in our paper. First, Eqs. 8 and 9 capture our intuitions about importance-based and saturation-based unlearning. Eq. 8 indicates that a token should be emphasized if and only if it contains important information. Eq. 9 represents an objective that summarizes all existing reweighting methods; for instance, Eq. 9 is equivalent to SimNPO when $\tau = \frac{|y|(P+1)}{2} - P$. However, since there are numerous training samples without importance labels, it is difficult for researchers to label them all. Thus, we seek an approximate representation for both unlearning branches. We notice a high correlation between weights and likelihoods (Figure 2(e)-(h)). Thus, we simplify the two branches as putting more weight on tokens with lower or higher likelihood, respectively. An intuition for this simplification is a probability-based mapping, which yields SimImp and SimSat (Eq. 11). We then make a comprehensive investigation and combine these two solutions (Eq. 12).

**Q4 References**

Thanks for providing several LLM unlearning methods; we will cite them and compare their performance. According to the results reported in those papers, none outperforms SatImp.

**Q5 Typos and organization of the paper**

Many thanks for your wonderful suggestions!
1. We will modify the $p$ in Eq. 8.
$\beta$ has been used as the smoothness index in Eqs. 9, 11, and 12.
2. The baseline in Figure 3 a-d is the vanilla GD, which is represented by the dashed line.
3. We will discuss and re-organize the paper regarding the position of Algorithm 1 and the explanation of the ES scores.
4. We have in fact produced many figures recording the retain-unlearn performance at different training steps. However, considering the length of our paper, these figures were not shown in the submission. We will include some of them if our paper is accepted.

**Q6 Intuition for reweighting token losses with weights lower than 1**

When observing the training process of vanilla GD, we notice that the unlearning process completes rapidly within just a few steps after warm-up, even reaching an over-unlearned state. Such a swift change is detrimental to finding the optimal model. Smaller weights prolong the unlearning, helping the model stop training at a more appropriate step.

---

Rebuttal Comment 1.1:

Comment: Thanks for the additional results. It's great to see these performance improvements after changing the checkpoint selection criteria. Since this is a core issue, I would suggest it is necessary to include the forget-retain performance trade-off trajectory results in the paper.

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your timely reply! The retain-unlearn trade-off is a crucial issue. As is well known, reweighting-based LLM unlearning methods are sensitive to hyper-parameters, which has already been mentioned in our paper (Appendix Line 1175). In fact, we have conducted relevant experiments on this trade-off (mentioned in Appendix Line 816). The ES-score-related content is included in our supplementary materials (results.xlsx, under the SatImp table). Here, we provide additional results on FQ and MU, hoping that these findings will meet your new requirements for our work.
Due to space constraints, we first present FQ and MU experiments in the Forget 1\% setting. First, the results with forget-only regularization; entries are reported as FQ/MU.

| $(\beta_1, \beta_2)$ | LLaMA-2-7B | Phi-1.5 |
| :--: | :--: | :--: |
| (1,0.05) | 0.7659/0.5890 | 0.0143/0.3886 |
| (1,0.1) | 0.7659/0.5861 | 0.0541/0.3918 |
| (1,0.2) | 0.9188/0.5833 | 0.1650/0.3891 |
| (1,0.5) | 0.9188/0.5920 | 0.5786/0.4140 |
| (1,1.0) | 0.4046/0.5863 | 0.4046/0.4022 |
| (2,0.05) | 0.9188/0.5881 | 0.0286/0.3791 |
| (2,0.1) | 0.9188/0.5847 | 0.0068/0.3824 |
| (2,0.2) | 0.9188/0.5851 | 0.0541/0.3974 |
| (2,0.5) | 0.7659/0.5773 | 0.2657/0.4110 |
| (2,1.0) | 0.9188/0.5920 | 0.9188/0.4243 |
| (3,1.0) | 0.4046/0.5864 | 0.7659/0.4181 |
| (3,2.0) | 0.1650/0.5927 | 0.1650/0.4312 |
| (3,3.0) | 0.0286/0.6135 | 0.4046/0.4378 |
| (5,1.0) | 0.7659/0.5912 | 0.5786/0.4207 |
| (5,2.0) | 0.4046/0.5801 | 0.7659/0.4391 |
| (5,3.0) | 0.0286/0.6232 | 0.0971/0.4490 |

Second, the results with retain regularization; entries are reported as FQ/MU.

| $(\beta_1, \beta_2, \lambda)$ | LLaMA-2-7B | Phi-1.5 |
| :--: | :--: | :--: |
| (1,0.05,0.1) | 0.7659/0.6027 | 0.0971/0.4793 |
| (1,0.05,0.2) | 0.7659/0.6136 | 0.0971/0.4859 |
| (1,0.05,0.5) | 0.7659/0.6177 | 0.9900/0.5170 |
| (1,0.1,0.1) | 0.7659/0.6014 | 0.0541/0.4789 |
| (1,0.1,0.2) | 0.9188/0.6107 | 0.2657/0.4864 |
| (1,0.1,0.5) | 0.7659/0.6160 | 0.7659/0.5144 |
| (5,0.05,0.1) | 0.9188/0.6371 | 0.4046/0.5096 |
| (5,0.05,0.2) | 0.9188/0.6323 | 0.9900/0.5187 |
| (5,0.05,0.5) | 0.2657/0.6215 | 0.1650/0.5216 |
| (5,0.1,0.1) | 0.9900/0.6428 | 0.7659/0.5122 |
| (5,0.1,0.2) | 0.9900/0.6350 | 0.9900/0.5195 |
| (5,0.1,0.5) | 0.2657/0.6277 | 0.0971/0.5248 |

This represents only a portion of the hyper-parameter experiments we have completed; we hope these results provide insight into addressing the core issue you have newly identified. Besides, we sincerely appreciate your meticulous, responsible, and highly professional feedback throughout the review process.
At the same time, we sincerely hope our responses will merit your further support of this work. Should any additional clarification be needed, we stand ready to address it promptly.
Summary: This paper studies LLM unlearning. The authors investigate the loss reweighting mechanism for LLM unlearning, where each token in the forget set is assigned a different weight in the loss calculation. Specifically, the authors propose two ideas for loss reweighting: saturation, which suggests that tokens that are sufficiently unlearned should be assigned smaller weights; and importance, which suggests that tokens that convey important information should have larger weights. The paper studies the influence of the two reweightings on unlearned model performance and proposes a method that combines the two ideas into a single weight for unlearning. Experiments are conducted on three benchmarks to demonstrate the performance of the method.

Claims And Evidence: Yes, the claims are supported by experiments.

Methods And Evaluation Criteria: Most of the evaluation makes sense, except that the paper introduces the unlearning setting where the goal is to forget a subset of knowledge from web-scale training corpora (i.e., pre-training data), but most of the experiments are conducted on TOFU, a synthetic dataset where the knowledge to forget is learned via special fine-tuning instead of pre-training. Therefore, it is not clear whether the analyses generalize to the more practical setting where pre-training knowledge needs to be unlearned.

Theoretical Claims: The paper does not involve theoretical claims.

Experimental Designs Or Analyses: Yes, I checked the experiment section.

Supplementary Material: Yes, Appendix E.

Relation To Broader Scientific Literature: As mentioned in the paper, the proposed method is very similar to WGA. Although the paper contains detailed analyses of the impacts of the two reweighting terms, the final method seems to be an incremental change to WGA, making the contribution limited.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Weaknesses:
1. The proposed method does not seem to have a significant improvement over the baselines (TOFU 5% and 10% settings, and Table 3 on WMDP). In particular, on retain performance, the unlearned model's MMLU performance drops to ~25%, which is random guessing. This indicates that unlearning completely destroyed the general knowledge in the LLM. In Table 5, the utility also drops to 0 on MUSE. In Table 6, the method is no better than RMU.
2. The proposed method seems to be a marginal improvement over the baseline WGA.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Many thanks for your constructive comments and suggestions! Please see our responses below.

**Q1: Is SatImp an incremental version of WGA?**

We sincerely appreciate your comments, but we respectfully disagree with the assessment that SatImp is an incremental version. Before reading the response we have prepared for you, please first refer to our response to **Reviewer 3V34 (Q2 and Q3)**. We apologize for any confusion that may arise. To begin with, we would like to highlight the difference between the WGA paper and ours. Specifically, the motivation of the WGA paper is to present a toolkit of the gradient effect that quantifies the impact of unlearning objectives on model performance; that paper focuses on analyzing diverse unlearning objectives from a gradient perspective. WGA itself is an additional discovery in that paper, serving as an auxiliary strategy to enhance unlearning performance. In contrast, the motivation of our submission is identifying the best reweighting strategy. Our submission contributes annotations of important tokens, a summary of saturation-based methods, the correlation between weights and losses in different paradigms, and combinations of different paradigms. Overall, the motivation and contribution are clearly distinct between the two papers.

**Q2 Unsatisfactory results and marginal improvements**

Thanks for your comments about the experiments. We present more results and analysis that may help put this 'marginal improvement' in context. First, regarding the limited improvement, as demonstrated in **Reviewer 8Bwg Q2**, we have mentioned in the Appendix (Line 1152) that our hyper-parameter settings followed the ES-score-based exploration in the previous sections. We selected the hyper-parameters that achieved the best ES score and fixed them across all benchmarks. However, this setting is likely not optimal for other metrics (FQ, MU) or benchmarks (WMDP).
As shown in **Reviewer 8Bwg Q2**, if FQ and MU are prioritized in the experiments, the ES results decline, but the performance gap between different methods widens.

Regarding the WMDP results, the 25% performance on MMLU is caused by the 125-step training process, which is too long for the forget-only regularization in Table 2. Except for NPO, all other methods end up over-unlearned. Thus, we apply early stopping at the 30th, 60th, and 90th steps. The MMLU results are as follows:

| Methods | 30th | 60th | 90th |
| ---- | ---- | ---- | ---- |
| GA | 0.2865 | 0.2524 | 0.2454 |
| NPO | 0.5607 | 0.5329 | 0.5239 |
| SimNPO | 0.3418 | 0.2616 | 0.2493 |
| RMU | 0.3193 | 0.2709 | 0.2697 |
| WGA | 0.2963 | 0.2457 | 0.2463 |
| SatImp | 0.3197 | 0.2687 | 0.2512 |

Furthermore, we report the results at the 30th step.

| Methods | WMDP-Bio | WMDP-Cyber | MMLU |
| ---- | ---- | ---- | ---- |
| GA | 0.2474 | 0.2431 | 0.2465 |
| NPO | 0.5260 | 0.4616 | 0.5607 |
| SimNPO | 0.3519 | 0.3562 | 0.3418 |
| RMU | 0.2479 | 0.2963 | 0.3193 |
| WGA | 0.2467 | 0.2617 | 0.2963 |
| SatImp | 0.2474 | 0.2431 | 0.3197 |

Additionally, unlearning with retain regularization on WMDP (Table 6) is under-unlearned across methods, which is caused by a too-high $\lambda$ (the ratio between the forget and retain objectives in **Eq. (2)**). We adjusted $\lambda$ to accomplish more effective unlearning. The results are as follows:

| Methods | WMDP-Bio | WMDP-Cyber | MMLU |
| ---- | ---- | ---- | ---- |
| GA | 0.2739 | 0.2657 | 0.4265 |
| NPO | 0.2647 | 0.3067 | 0.4434 |
| SimNPO | 0.2617 | 0.3163 | 0.4453 |
| RMU | 0.3493 | 0.3578 | 0.5523 |
| WGA | 0.2490 | 0.2989 | 0.4806 |
| SatImp | 0.2598 | 0.2815 | 0.5391 |

Regarding the MUSE results, we sincerely apologize for a typo in SatImp with retain regularization on the MUSE-Books dataset: we mistakenly recorded a forget-only regularization result there.
The correct results should be:

| Methods | VerbMem | KnowMem | PrivLeak | UtilPres |
| ---- | ---- | ---- | ---- | ---- |
| SatImp | 0.0 | 0.0 | -21.1 | 37.9 |

Regarding the MUSE results with forget-only regularization, we have mentioned that we choose the checkpoint based on the PrivLeak metric, which measures the degree of unlearning (whether the model is over- or under-unlearned). Extensive results indicate that the model has 0 utility when PrivLeak is near 0 under the forget-only regularization setting. NPO is different because it is significantly under-unlearned, yielding a very low negative value on the PrivLeak metric. Thus, it is normal that almost all methods have 0 utility under the forget-only regularization setting.
Reliable Algorithm Selection for Machine Learning-Guided Design
Accept (poster)
Summary: This paper proposes a method for design algorithm selection which combines designs' predicted property values with held-out labeled data to reliably assess whether a candidate design algorithm configuration produces successful designs. Specifically, the method selects configurations for design algorithms such that the resulting outcome (label) satisfies a pre-defined success criterion (e.g., having a sufficient fraction of designs exceeding a certain threshold) with error rate at most $\alpha$. To tackle this problem, the paper first formalizes the problem as hypothesis testing over a menu of candidate design algorithms, and then uses prediction-powered inference techniques adapted to covariate shift (from held-out labeled data) to obtain statistically valid p-values for the hypothesis tests. Under the assumption that the density ratio between the design and labeled distributions is known, the paper gives a theoretical guarantee that the selected configurations achieve the success criterion for any error rate. Without a known density ratio, the paper provides empirical evidence for the procedure via simulation on two biologically motivated examples, showing that their method successfully controls the error rate while identifying configurations that produce high-quality designs. Claims And Evidence: The central claim of the paper is that the proposed algorithm (hypothesis testing procedure combined with prediction-powered inference for valid p-values) is able to select design algorithm configurations that achieve the success criterion with guarantees on any error rate (i.e., the probability of selecting algorithms that don't match the success criterion). Under the assumption that the density ratio between the design and labeled distributions is known, Theorem 3.1 and Theorem A.1 give theoretical guarantees that, using finite-sample or asymptotically valid p-values, the proposed procedure controls the error rate at any level $\alpha$.
When this assumption doesn't hold, the paper combines the approach with a multinomial logistic regression-based method for estimating the density ratio. Empirically, the paper compares their approach against other baselines (prediction-only model, GMMForecasts) and shows that on the protein and RNA design tasks, their method outperforms others in selecting successful configurations. Methods And Evaluation Criteria: The paper proposes to use a multiple hypothesis testing approach to test if each configuration fits the user-defined success criterion. To obtain statistically valid p-values for the hypothesis test, the paper uses prediction-powered inference adapted for covariate shift to extract information from these predictions without being misled by prediction error. It then uses a Bonferroni correction to bound the overall error rate. The overall algorithm makes sense and gives guarantees under the assumption of a known density ratio between the design and labeled distributions. Theoretical Claims: I checked the proofs for the theoretical claims (Theorem 3.1 and Theorem A.1) and believe they are correct. Experimental Designs Or Analyses: Without a known density ratio, the paper provides empirical evidence for the procedure via simulation on two biologically motivated examples, showing that their method successfully controls the error rate while identifying configurations that produce high-quality designs. In the experiments, the paper measures both the error rate of the selected configurations (probability of selecting at least one unsuccessful configuration) as well as the selection rate, to ensure the algorithm is not so overly conservative that it outputs no configurations. Supplementary Material: The supplementary material includes the mathematical proofs for the theoretical claims as well as details of the algorithm. I reviewed and found no errors.
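The prediction-powered inference step described above can be roughly illustrated as follows: the mean design label is estimated from model predictions on the (unlabeled) designs, then corrected by an importance-weighted estimate of the prediction error on held-out labeled data. This is a minimal sketch of the general idea, not the paper's exact estimator; the function and argument names are ours:

```python
def ppi_mean_estimate(design_preds, labeled_preds, labeled_labels, weights):
    """Prediction-powered estimate of the mean design label.

    Combines the naive mean prediction over designs with a rectifier:
    the density-ratio-weighted mean prediction error on held-out
    labeled data, which corrects the model's systematic bias under
    covariate shift.
    """
    naive = sum(design_preds) / len(design_preds)
    rectifier = sum(w * (f - y) for w, f, y in
                    zip(weights, labeled_preds, labeled_labels)) / len(labeled_preds)
    return naive - rectifier
```

If the model overpredicts every label by 0.5 and there is no covariate shift (all weights equal 1), the rectifier removes exactly that bias from the naive estimate.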
Relation To Broader Scientific Literature: This work proposes the use of multiple hypothesis testing and prediction-powered inference applied to covariate shift to address machine learning-guided design problems where out-of-distribution predictions are common. Their proposed method gives a systematic way to choose design algorithms that fit the success criterion with guaranteed error rates. Essential References Not Discussed: The paper discusses their method within the context of machine-guided design and prediction-powered inference. The overall problem formulation might also be related to the area of Bayesian optimization and active learning. Other Strengths And Weaknesses: n.a. Other Comments Or Suggestions: n.a. Questions For Authors: - Algorithm 1 (line 6) selects p-values by a Bonferroni-style threshold. Would this possibly lead to overly conservative selections? - Related, can you get theoretical guarantees on the selection rate? - Do you have any theoretical guarantees if the density ratio is not known? Or, if the supports of the densities are different? Are there possible problems that might arise in your analysis? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your feedback! We would like to refer the reviewer to the first 3 paragraphs of our response to reviewer EEWq, as we believe there has been a misconception regarding our primary contributions. In particular, to our knowledge, our work is the first to formalize and propose a rigorous solution to the problem of design algorithm selection (DAS): a principled method for making the decisions that every practitioner of ML-guided design must make, and which might otherwise be biased by personal preference or convenience. None of the reviewers have suggested any alternatives, which testifies to the novelty and difficulty of DAS and the creativity of our solution. Our responses to specific points follow. - "Would [the Bonferroni correction] lead to overly conservative selection?" Empirically, this was not a problem. Our method had low error rates while maintaining high selection rates, compared to baselines (Figs. 3a,4a). In new results with a 5X-larger menu for the protein GB1 experiments ([Fig. R1](https://shorturl.at/qE6TG), see first comment to reviewer Bfc7 for details), our method actually has higher selection rates than with the original menu, as the expanded menu contains more configurations with higher mean design labels. If one is concerned about conservatism, however, a conceptual strength of our formalization of DAS as a multiple testing (MT) problem is that *any* MT procedure that controls FWER can be used instead of Bonferroni. E.g., one can use procedures that respect the hierarchical [1] or correlation structure between configurations [2], which may yield less conservative multiplicity corrections. Since our goal was to establish a first, general solution to DAS, we instantiated Alg. 1 with Bonferroni, as it doesn't require assumptions about how configurations are related. - "Can you get theoretical guarantees on the selection rate?" 
No, because if the practitioner specifies an impossible success criterion, then the selection rate must be zero. That is, no non-zero lower bound exists in general. - Potential issues with estimated density ratios (DRs): While the guarantees in the paper do not hold with DRE, this does not nullify the value of having guarantees when DRs are known, in contrast to baselines w/o guarantees at all. We appreciate your concern, however, and present new results in [Fig. R1](https://shorturl.at/qE6TG) showing that our method performs similarly in the protein GB1 experiments using estimated vs. known DRs. Both there and in our RNA experiments in Fig. 4, our method with DRE outperforms the baselines. Also note that DRE yields consistent estimators of the true DRs if the model class is correctly specified (flexible enough) [3], and is routinely used in ML applications involving importance weighting [4]. To further address the concerns regarding Bonferroni and DRE, we also present new RNA experiments in [Fig. R2](https://shorturl.at/mOSna), which use DRE with an expanded menu of size 249 containing the hierarchical configuration space shown in [Fig. R3](https://shorturl.at/X0EOu). Our method performs well in these experiments. - Thank you for suggesting the relevance to Bayesian optimization and active learning! We now mention them in "Related Work," condensed here: Bayesian optimization (BO) is a paradigm for iteratively selecting designs, acquiring their labels, and updating a predictive model in order to optimize a property of interest. In each round, BO chooses the design that globally maximizes some acquisition function quantifying desirability based on the model's predictions. A typical goal of BO is to converge to the global optimum as the rounds progress, under regularity conditions [5]. In contrast, the goal of DAS is to achieve criteria on the distribution of design labels to be imminently proposed. 
Such guarantees can help justify designs to stakeholders when acquiring labels for even one round is resource-intensive, and the priority is to achieve specific criteria within one or a few rounds. However, nothing precludes BO from being used as design algorithms within our framework: configurations of BO with, e.g., different HPs or acquisition functions can be on the menu. Finally, we emphasize that our method empirically outperforms the baselines. Despite the impact of the DAS problem, no reviewer has suggested any alternative approaches. If the reviewer would like to suggest one, we are happy to include it! [1] Bretz et al. A graphical approach to sequentially rejective multiple test procedures. Stat. Med. 2009. [2] Dudoit et al. Multiple hypothesis testing in microarray experiments. Stat. Sci. 2003. [3] Gutmann & Hyvarinen. Noise-contrastive estimation of unnormalized statistical models. JMLR 2012. [4] Sugiyama et al. Density ratio estimation in machine learning. 2012. [5] Srinivas et al. Gaussian process optimization in the bandit setting. ICML 2010.
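The Bonferroni-style selection step discussed in this exchange (Alg. 1, line 6) amounts to selecting only configurations whose p-value clears a threshold of alpha divided by the menu size, which bounds the family-wise error rate at alpha. A generic sketch of that rule, not the authors' exact code:

```python
def select_configurations(p_values, alpha):
    """Bonferroni-style selection over a menu of configurations.

    p_values maps each configuration to a valid p-value for the null
    hypothesis that it fails the success criterion. Dividing alpha by
    the number of hypotheses bounds the family-wise error rate at alpha.
    """
    threshold = alpha / len(p_values)
    return [cfg for cfg, p in p_values.items() if p <= threshold]
```

Any other FWER-controlling procedure (e.g., one exploiting hierarchical or correlation structure, as the authors note) could be substituted for this thresholding step.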
Summary: Hyperparameter tuning and algorithm selection can be really tricky in real-world scenarios. This paper proposes a method based on prediction-powered inference techniques for design algorithm selection, aiming to choose settings (configurations) that satisfy users' demands. Two practical experiments demonstrate that this method can effectively select ideal settings. Claims And Evidence: Yes. The authors use p-values to support their design further. Methods And Evaluation Criteria: The method makes sense. Theoretical Claims: I checked the theoretical claims in the main text, which show their framework can guarantee a high success rate for selecting desirable settings. Experimental Designs Or Analyses: Yes, both experiments seem sound to me. Supplementary Material: The authors provide detailed proofs for the theorems and descriptions of baseline methods in the appendix. Relation To Broader Scientific Literature: The method can speed up the process of finding optimal hyperparameter and algorithm combinations and significantly reduce the cost of wet lab experiments. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The idea of selecting reliable algorithms has strong potential to benefit all machine learning communities. 2. The paper is well-written and easy to comprehend. Weaknesses: 1. The major concern is that the authors claim a big idea, but the menus used in both experiments only contain roughly 100 choices, which is far fewer than in practical scenarios. A single method can require multiple hyperparameters to tune when applied to a new task, let alone that we are in an era where new methods for a single task are emerging every day. 2. The method is guaranteed by the assumption that the density ratios between the design and labeled data distributions are known, which can vary in real experiments or even be impossible to estimate. 3.
The method uses held-out labeled data to characterize how prediction error affects the evaluation, which requires a lot of extra information. Other Comments Or Suggestions: The reference in line 275 is not clickable. Questions For Authors: 1. What does '=' stand for in Figure 3(b) and Figure 4(b)? 2. Can your method be extended to more complicated tasks? For now, each of the RNA binder design algorithms contains one or zero hyperparameters, and all parameters are continuous. Is this method still effective when facing categorical parameters and methods with multiple parameters? Demonstrating this would make the paper much more convincing. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback! We are glad the reviewer found the idea of selecting reliable algorithms to have strong potential to benefit the ML community. We believe there has been a misconception regarding our primary contributions. **[First to formalize + address DAS]** To our knowledge, although ML has revolutionized the design of proteins, small molecules, and other modalities, this work is the first to formalize the problem of design algorithm selection (DAS) and propose a solution with any guarantees. None of the reviewers have suggested any baselines or alternative approaches, which testifies to the novelty and difficulty of DAS and the creativity of our solution. **[Not just HP tuning]** This work is not simply a variant of hyperparameter tuning as it is performed in supervised learning. Tuning cannot be performed for design tasks the same way, because we never have the labels of designs needed to evaluate each design algorithm configuration. This is why our major technical innovation is to rigorously evaluate which configurations will be successful, in the complete absence of labels for their designs. **[Formalized population-level success criteria w/ high-probability guarantees]** Another innovation is formalizing + achieving a common goal in practice: to produce a *population* of designs that satisfies a desired criterion (e.g., that at least 10% of designs' labels surpass some threshold, or that the average design label does so). Our method provides high-probability guarantees for a broad class of such population-level success criteria. Prior work has focused instead on uncertainty over individual designs, rather than population-level criteria often sought in practice. We are glad the reviewer found the paper well-written, and appreciate the concerns regarding the method's practicality, addressed below. - "Menus ... only contain roughly 100 choices": See [Fig. R1](https://shorturl.at/qE6TG) and [Fig. 
R2](https://shorturl.at/mOSna) for new protein and RNA experiments with menu sizes 501 and 249, respectively, where our method still performs well. However, even for the original smaller menus, our method outperformed the baselines. Are there alternative approaches the reviewer might propose? We are happy to try them. - "[Density ratios can be] impossible to estimate": Fig. 4 showed empirically that our method is still effective with a relatively simple classifier-based density ratio estimation (DRE) technique. Also see [Fig. R1](https://shorturl.at/qE6TG) for new results showing our method performs well with DRE in the protein GB1 experiments. Note that DRE is routinely used in ML applications involving importance weighting [1,2] and yields consistent estimators of the DRs if the model class is correctly specified (flexible enough) [3]. We agree that the guarantees don't hold with DRE, but this doesn't nullify the value of having guarantees when DRs are known, in contrast to baselines without guarantees at all. - Requires "held-out labeled data (HLD)": Figs. 3&4 compared to baselines that do not require HLD (prediction-only and GMMForecasts), which instead used *all* the labeled data to train the predictive model. Our method outperformed these baselines. We also note that the entire literature on post-hoc calibration for uncertainty quantification (including e.g. conformal prediction) requires HLD. Indeed, there is a fundamental limit to how calibrated models can be w/o using HLD [4]. In practice, collecting some extra labeled data can be a reasonable investment when the method's guarantees can help justify costly resources for synthesizing designs to project stakeholders. Lastly, our method is amenable to cross-fitting variants that do not completely hold out data [5] (happy to elaborate). - Can the method handle "categorical parameters and ... multiple parameters?": See [Fig. 
R2](https://shorturl.at/mOSna) for new RNA experiments with an expanded menu of size 249, containing the more complex hierarchical configuration space illustrated in [Fig. R3](https://shorturl.at/X0EOu). Our method performs well here (similar error rates with higher selection rates), as the expanded menu contains configurations with higher mean design labels than the original menu. - Fig 3B,4B: The "=" is an equals sign, indicating that the diagonal line is the y = x line. Results on/above the diagonal mean that selected configurations are successful. - Line 275: Thank you for noting the broken link! Now fixed. [1] Sugiyama et al. Density ratio estimation in machine learning. 2012. [2] Grover et al. Bias correction of learned generative models using likelihood-free importance weighting. NeurIPS 2019. [3] Gutmann & Hyvarinen. Noise-contrastive estimation of unnormalized statistical models. JMLR 2012. [4] Bengs et al. Pitfalls of epistemic uncertainty quantification through loss minimisation. NeurIPS 2022. [5] Zrnic & Candes. Cross-prediction-powered inference. PNAS 2024.
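The classifier-based density ratio estimation mentioned in this response reduces, via Bayes' rule, to converting a classifier's probability that a point came from the design distribution (vs. the labeled distribution) into a density ratio, with a correction for the class proportions. A minimal sketch of that conversion, with a function name of our choosing:

```python
def density_ratio_from_classifier(p_design_given_x, n_design, n_labeled):
    """Convert a classifier probability P(design | x) into an estimate
    of the density ratio p_design(x) / p_labeled(x).

    By Bayes' rule, the posterior odds equal the density ratio times
    the prior odds n_design / n_labeled, so dividing the posterior odds
    by the prior odds recovers the density ratio.
    """
    odds = p_design_given_x / (1.0 - p_design_given_x)
    return odds * (n_labeled / n_design)
```

With equal sample sizes from each distribution, a classifier output of 0.75 corresponds to an estimated density ratio of 3. The classifier itself (e.g., the multinomial logistic regression mentioned above) is trained to distinguish design samples from labeled samples.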
Summary: When performing model-guided design, the goal is to propose new objects x that have some desired property, where the relationship between x and the property is approximated by a predictive model f(x). The problem is that f(x) may be unreliable when proposing x far from the training data. This makes it difficult to choose among design algorithms. Each defines a distribution over potential designs. How do we estimate the performance of a design algorithm without actually measuring the property for these designs (which could involve an expensive wet-lab experiment)? The authors propose an appealing frequentist formalism for the problem and leverage the recent 'prediction powered inference' framework to propose an algorithm with desirable properties. Claims And Evidence: The evaluation is a bit confusing at first, as it requires really understanding the frequentist formulation of the design algorithm selection problem. However, given this problem formulation, the way that algorithms are evaluated makes a lot of sense. Methods And Evaluation Criteria: The proposed method is well motivated and appealing. I appreciate that subtle variations are also provided for regimes where certain required quantities (the density ratio term) are not available. I found the set of baselines to be very well presented and well motivated. Theoretical Claims: I do not have the required technical background to assess the correctness of the proofs in the appendix. Experimental Designs Or Analyses: Yes, I think it was posed correctly. Supplementary Material: I only skimmed the proofs. Relation To Broader Scientific Literature: It does a good job of framing things in terms of related work. Essential References Not Discussed: None Other Strengths And Weaknesses: I found figures 3A and 4A very confusing. I feel like there should be a better way to present this data that is mostly saturated at y = 0 or y = 1.
Other Comments Or Suggestions: When I got to the experiments section, I was surprised to find that the 'menu' corresponded to a range of values for a single real-valued hyper-parameter. There is a lot of structure across this menu that the method doesn't exploit. The hyper-parameter range is also discretized at an arbitrary resolution of 100 values, but the menu size drives the performance of the algorithm due to the multiple testing correction. I worry that this choice of 100 could have had a big qualitative impact on the results. For problems where the density ratio was tractable, it would have been helpful to see what the performance difference is between the case when using the true density ratio vs. an approximated one. Questions For Authors: Can you please show that the experiments' outcomes are not sensitive to the discretization resolution discussed above? ## After Authors' response ## Thanks for addressing my comments. I think the proposed changes will strengthen the manuscript. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for the positive evaluation of our work, and are glad the reviewer found our method and experiments well-motivated and appealing. Our responses to specific comments and questions follow. - "The hyper-parameter range is also discretized at an arbitrary resolution of 100 values, but the menu size drives the performance of the algorithm due to the multiple testing correction." We appreciate your concern regarding this point. See [Fig. R1](https://shorturl.at/qE6TG) for new results where the menu for the protein GB1 experiments is discretized to contain 501 values (np.arange(0.2, 0.7, 0.001) instead of the original np.arange(0.2, 0.7, 0.005)). Our method still keeps error rates below the user-specified level, consistent with the guarantees. Interestingly, it does so with *higher* selection rates for greater values of tau (x-axis) than with the original smaller menu. That is, the multiple testing correction did not make the method more conservative with a 5X-larger menu; in fact, the method actually selected successful configurations more frequently, because the expanded menu contains more configurations with greater mean design labels. - "I was surprised to find that the 'menu' corresponded to a range of values for a single real-valued hyper-parameter. There is a lot of structure across this menu that the method doesn't exploit." Thank you for this insightful observation. Please see [Fig. R2](https://shorturl.at/mOSna) for new RNA experiments with an expanded menu of size 249, containing the more complex hierarchical configuration space illustrated in [Fig. R3](https://shorturl.at/X0EOu). Our method empirically performs well in these experiments, but we agree that it would be possible to better exploit the structure in the menu induced by the relationships between different configurations. 
This is a conceptual strength of formalizing design algorithm selection as a multiple testing problem: any multiple testing procedure that controls family-wise error rate (FWER) can actually be used in place of the Bonferroni correction in Line 6 of Alg. 1, meaning one can use FWER-controlling procedures that respect, e.g., the hierarchical organization of the configuration space [1] or the correlations between configurations [2]. We instantiated Alg. 1 with the Bonferroni correction for full generality, as it doesn't require any assumptions about how configurations are related, but replacing this with procedures that exploit these relationships is a great direction for future work. Note that using such structure-respecting multiple testing procedures would not improve the existing guarantees on the error rate, but might improve (i.e., increase) the selection rate by yielding a less conservative multiplicity correction. - "For problems where the density ratio was tractable, it would have been helpful to see what the performance difference is between the case when using the true density ratio vs. an approximated one." Thank you for this suggestion. See [Fig. R1](https://shorturl.at/qE6TG) for new results where we estimated the density ratios (DRs) for the protein GB1 experiments. Specifically, we separately estimated the labeled distribution and the design distribution corresponding to each configuration. For the former, we performed maximum-likelihood estimation with Laplace smoothing (with pseudocounts of 1) to estimate the site-specific categorical distributions using the held-out labeled sequences; for the latter, we did the same using the design sequences. For a given sequence, we then took the ratio of its densities under these two estimated distributions as the estimated DR. The results from our method using these estimated DRs are very similar to the original results using the known DRs. - Figs. 
3A, 4A: Thank you for noting that the presentation was confusing. To clarify, would the reviewer prefer that we zoom-in the y-axes of plots closer to 0 for error rate and 1 for selection rate? Note that the methods have error rates and selection rates that span the entire [0, 1] range, because we wanted to thoroughly compare their performance across a wide range of success criteria (x-axis values) that practitioners might be interested in. However, we have zoomed-in the error rate plot of [Fig. R1](https://shorturl.at/qE6TG) closer to 0 as suggested, since the plotted rates are all less than 0.1. [1] Bretz et al. A graphical approach to sequentially rejective multiple test procedures. Stat. Med. 2009. [2] Dudoit et al. Multiple Hypothesis Testing in Microarray Experiments. Stat. Sci. 2003.
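The density ratio estimation procedure described in this response (per-site categorical maximum-likelihood estimation with Laplace smoothing, pseudocounts of 1, applied separately to the labeled and design sequences) can be sketched as follows, assuming fixed-length sequences over a known alphabet:

```python
from collections import Counter

def site_categorical_mle(sequences, alphabet, pseudocount=1):
    """Per-site categorical MLE with Laplace smoothing.

    Returns, for each sequence position, a dict mapping each alphabet
    symbol to its smoothed empirical frequency at that position.
    """
    length = len(sequences[0])
    dists = []
    for i in range(length):
        counts = Counter(seq[i] for seq in sequences)
        total = len(sequences) + pseudocount * len(alphabet)
        dists.append({a: (counts[a] + pseudocount) / total for a in alphabet})
    return dists

def density(seq, dists):
    """Probability of a sequence under independent site-specific categoricals."""
    p = 1.0
    for i, symbol in enumerate(seq):
        p *= dists[i][symbol]
    return p

def estimated_density_ratio(seq, design_dists, labeled_dists):
    """Ratio of a sequence's density under the estimated design vs.
    labeled distributions."""
    return density(seq, design_dists) / density(seq, labeled_dists)
```

This mirrors the description above: one set of site-specific distributions is fit to the held-out labeled sequences, another to the design sequences, and the ratio of a sequence's densities under the two gives the estimated density ratio.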
Towards Practical Defect-Focused Automated Code Review
Accept (spotlight poster)
Summary: The paper introduces an end‐to‐end automated code review system designed specifically for defect detection in large-scale, industrial codebases. The authors identify four key challenges in automating code review: capturing the full, relevant code context; improving key bug inclusion to ensure that critical defects are detected; reducing the false alarm rate by filtering out redundant or irrelevant comments; and integrating the system into human workflows. To overcome these challenges, the paper proposes an approach that includes:
• A static analysis system using several code slicing algorithms
• A multi-agent framework where different LLM roles (Reviewer, Meta-Reviewer, Validator, and Translator) collaborate
• A robust filtering mechanism that systematically reduces false positives by eliminating nitpicks and hallucinations
• A line-aware prompt design that ensures review comments are accurately attached to the relevant lines of code

Empirical evaluations demonstrate that this integrated approach can achieve good performance. Claims And Evidence: Most of the paper's claims are supported by empirical evaluations and ablation studies. The evidence might be less convincing on the generalizability of the approach to other programming languages. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are well-aligned. On the evaluation side, the authors introduce metrics such as Key Bug Inclusion (KBI), False Alarm Rate (FAR), Comprehensive Performance Index (CPI), and Line Localization Success Rate (LSR). Theoretical Claims: The paper does not include formal proofs or theoretical claims.
Experimental Designs Or Analyses: One potential issue is that while these metrics and the accompanying analyses are well-structured, some evaluation criteria (like KBI and CPI) are specific to this work, might not fully capture all dimensions of code review quality as experienced by developers, and cannot be easily compared with the literature. Supplementary Material: The authors provide the code. Relation To Broader Scientific Literature: The key contributions of the paper extend prior findings to an integrated, context-rich, and workflow-aware system that is validated on industrial-scale data. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Strengths: The paper presents an innovative integration of multiple techniques—including diverse code slicing algorithms, a multi-agent framework with chain-of-thought reasoning, and a robust filtering mechanism—to tackle real-world challenges in automated code review. The experimental evaluation is comprehensive and grounded in industrial-scale data, using metrics (KBI, FAR, CPI, LSR) that directly address practical defect detection concerns. Weaknesses: Some design choices, such as the thresholds in the filtering mechanism, appear heuristic, and it remains unclear how sensitive the results are to these parameters. The evaluation is primarily confined to a specific industrial C++ codebase, leaving questions about the generalizability of the approach to other programming languages or domains. Other Comments Or Suggestions: N.A. Questions For Authors: 1. Could you elaborate on the trade-offs observed with the validator role? 2. Could you discuss how your approach might generalize to other programming languages or domains? 3. How sensitive is the system’s performance to the thresholds, and have you considered adaptive or data-driven methods for threshold selection? 4.
Have you conducted any user studies or qualitative evaluations with developers to validate that these metrics align well with real-world expectations of code review quality? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for the thoughtful and encouraging feedback. We greatly appreciate your recognition of the system design, the practical orientation of our evaluation, and the paper’s contributions toward large-scale industrial defect detection. Below, we address your questions in turn.**

---

### **1. Trade-offs with the Validator Role** **(Response to “Question 1”)**

The Validator role involves important trade-offs:
- It is highly effective at removing hallucinated comments, which was a core motivation for introducing it.
- However, we observed some negative cases where valid comments were mistakenly rejected. These cases typically stem from:
  - **Context loss** during the Meta-Reviewer phase (e.g., mismatched file names or snippet IDs), which prevented the Validator from tracing back to the correct source code.
  - **Position mismatches**, where a comment was attached to a nearby but unrelated line.
  - **Token limit issues**, where long inputs (context + chain-of-thought + comment pairs) exceeded the model’s context window.
  - **Score variance**, where randomly low Q1–Q3 scores led to inadvertent filtering.

We believe that introducing more robust communication between roles and improving traceability can reduce such errors. This is a promising direction for future work.

---

### **2. Generalization to Other Languages or Domains** **(Response to “Question 2”)**

Our system is **inherently language-agnostic**. It relies on structural elements like **function boundaries, control/data flow, and AST-derived slicing**, which are common across many programming languages. We chose C++ due to its industrial relevance and high complexity. However, both our slicing algorithms and prompting strategies are adaptable to other languages. Extending the framework to support multiple languages is part of our planned roadmap.

---

### **3. Sensitivity to Filtering Thresholds** **(Response to “Question 3”)**

The thresholds for Q1 (nitpicks) and Q2 (hallucinations) were set heuristically for interpretability and practical deployment. As described in Section 3.4, we adopted a 1–7 scoring scale (inspired by McAleese et al., 2024), with a threshold of 4+ to denote serious, actionable issues. We evaluated Q3 (redundancy) sensitivity via a top-*k* truncation strategy in **Section 5.4**, and extended it further: 🔗 https://anonymous.4open.science/r/16368/The%20impact%20of%20Top-k%20truncation.pdf

Key observations:
- **Top-3** truncation works better for **shorter context slices** (e.g., Original Diff, Parent Function), which produce fewer relevant comments.
- **Top-5 to Top-10** perform better with **richer slicing methods** (e.g., Left Flow, Full Flow), which generate more detailed reviews.
- **Full Flow + Top-10** occasionally shows a performance drop after validation, likely due to hitting the token limit of LLaMA3.1, reducing effectiveness.

We agree that adaptive or learned thresholds would be valuable. However, to avoid overfitting to a specific codebase, we chose to retain a generalizable, interpretable heuristic strategy in this version. We will reflect this design choice and its limitations more clearly.

---

### **4. Alignment of Metrics with Real-World Review Quality** **(Response to “Question 4”)**

While we have not yet conducted formal user studies, our metrics (KBI, FAR, LSR) are grounded in interviews with professional developers. Developers repeatedly emphasized two points:
1. **Catching critical bugs** is a top priority.
2. **Reducing irrelevant comments** is crucial for adoption—"even one false positive can erode trust."

This informed our focus on *key defect inclusion* and *false alarm suppression*, which are not well captured by traditional text similarity metrics. The system is currently deployed in an internal development team, though we do not require developers to reply to each comment.
Based on initial feedback, FAR has emerged as a particularly sensitive metric, directly affecting developer perception. We agree that systematic user studies would further strengthen the validation and plan to incorporate them in future work. --- **We thank the reviewer again for the constructive questions and positive evaluation. We hope these clarifications further highlight the robustness and extensibility of our work.** --- Rebuttal Comment 1.1: Comment: I am fine with the answers.
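As a concrete illustration of the filtering mechanism discussed in the rebuttal above (1–7 scoring with a 4+ threshold, plus top-*k* truncation for redundancy), the sketch below shows one way the two stages could compose. The field names, the keep/drop direction of each score, and the ranking key are assumptions made for illustration only, not the paper's exact scheme.

```python
# Illustrative sketch only: the scoring fields, the keep/drop direction,
# and the ranking key are assumptions, not the paper's exact mechanism.
def filter_comments(comments, threshold=4, top_k=5):
    """Keep comments scoring >= threshold on the 1-7 Q1/Q2 scales,
    then truncate to the top_k comments ranked by the Q3 score."""
    serious = [c for c in comments
               if c["q1"] >= threshold and c["q2"] >= threshold]
    ranked = sorted(serious, key=lambda c: c["q3"], reverse=True)
    return ranked[:top_k]

comments = [
    {"id": 1, "q1": 6, "q2": 5, "q3": 7},  # serious and well grounded
    {"id": 2, "q1": 2, "q2": 6, "q3": 5},  # scored as a nitpick -> dropped
    {"id": 3, "q1": 5, "q2": 3, "q3": 6},  # scored as hallucinated -> dropped
    {"id": 4, "q1": 7, "q2": 7, "q3": 4},  # serious and well grounded
]
kept = filter_comments(comments, threshold=4, top_k=3)
print([c["id"] for c in kept])  # -> [1, 4]
```

Under this reading, the threshold stage handles seriousness and hallucination while the top-*k* stage caps volume, which matches the rebuttal's observation that the best *k* depends on how many high-quality comments a slicing strategy produces.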
Summary: The paper presents a language-model based system for automated code review. The methodology boils down to using static analysis tools to identify the most relevant parts of the code base, and then passing this through several LLM calls to generate the review, identify the key components, and filter out noise. The authors state that this approach is inspired by interviews with real-world code reviewers. Empirically, the authors find that on a dataset consisting of merge requests from an internal codebase, their method outperforms prior work. In particular, they evaluate their method in terms of whether the review correctly identifies the key bug introduced in the merge request, as well as how many "false alarms" the review raises and whether the faulty line(s) are correctly identified. ## update after rebuttal Following the end of the rebuttal period I have decided to further raise my score to Accept (4). I like the motivations of the work and think it could have real-world impact, especially as C++ is at this point criminally understudied by academia in comparison to its popularity in the industry. The uncertainty quantification included by the authors in their final reply has convinced me that the findings are meaningful, and I appreciate that they have been responsive to changes requested in both the scope of the work as well as its exposition. I will note that I am not sure if ICML is the right venue for this work, as opposed to conferences such as ICSE or FSE, but given that none of the other reviewers seemed bothered by this I will chalk it up to being a symptom of AI/ML becoming increasingly relevant to many other areas of computer science and will not hold it against this paper in particular. Claims And Evidence: Overall, the claims made appear to be well supported by the experiments. Of course it is difficult to judge the validity of evaluating on an internal dataset, but I do not see any immediate causes for concern.
The one thing that stands out to me as perhaps not being completely justified by the experiments is whether the method reduces the False Alarm Rate (FAR); the definition of this metric appears to be quite strict, as acknowledged by the authors in Appendix N and Q. Furthermore, it seems to go against the training objectives of prior work like CodeReviewer, so the fact that this new method outperforms it on this metric seems like an obvious result. However, this limitation is acknowledged by the authors (albeit hidden in the appendix), and if prior work failed to take real-world concerns such as limiting the cognitive burden imposed on the user into account, then that is a shortcoming of their approach, not of this paper. Methods And Evaluation Criteria: Yes, the internal dataset used for the evaluation appears to be well suited for the study. As mentioned in the preceding section, the FAR metric appears to be somewhat arbitrary. In particular, having a code review system point out minor issues and not just major faults seems like a feature, not a failure, to me. However, I appreciate that the authors are attempting to limit the amount of information that is presented to the end-user, since this is likely a make-or-break factor when applying the system in the real world. Theoretical Claims: N/A; no theoretical claims. Experimental Designs Or Analyses: I have checked the soundness of the experiments presented in the main text, and they appear valid and sound. The code slicing experiment in 5.2 seems a bit weak, however, since the absolute number of key bugs is relatively small, so it is hard to tell whether the different spreads in the Venn diagram are significant or just due to chance. Another, more important issue is that the text mentions that each experiment is repeated 3 times to account for the stochasticity of the system, but as far as I can see the results of each experiment are just reported as the mean of these 3 runs, rather than their complete spread.
Just reporting the min and the max of each number would already be a significant improvement in terms of making it easier to tell if the results are significant or not. Supplementary Material: I reviewed appendices C, D, E, L, N, and Q. I briefly reviewed the linked source code release. Relation To Broader Scientific Literature: Automated code review is a problem with significant real-world impact, and has been studied in the software engineering community for many years. This paper appears to be a significant step forward in this domain, since it evaluates the performance of their system on real-world fault reports and merge requests, using metrics that (are intended to) mimic the desiderata of real software developers. The methodology is not itself particularly novel or interesting, but the experiments are extensive and may inspire future work in this direction. I am not sure how interesting this paper would be to an ICML audience, rather than a software engineering audience, though. Essential References Not Discussed: None that I am aware of; this problem is well-studied in the software engineering literature, but the authors appear to have cited the most relevant works already (in particular those by Tufano et al.). I was surprised by the authors' claim in Appendix D that slicing (which is a very well established technique across software engineering) had not yet been applied to automated code review, but could not find any references myself for this, so it may be true. Other Strengths And Weaknesses: Strengths: - The paper is well organized, with clear hypotheses and thorough discussion of the experiments - The authors release the source code of their system, which will help others seeking to replicate or build upon their work Weaknesses: - The figures and tables are, on the whole, completely illegible (at least when printed). 
The text is much too small (in particular in Figures 1,2, 3, 4) and the black-on-dark-green formatting of the tables is difficult to read. - While very much understandable, it is unfortunate that the dataset could not be released, as this would certainly have been a very useful resource for others working in this area. Other Comments Or Suggestions: Section 6 uses a very strange reference format, which should be simplified. For example, "Gupta et al. (Gupta & Sundaresan, 2018)" should just be "Gupta & Sundaresan (2018)". I really disagree with framing your system as being "multi-agent". There is no interaction with an environment, so there is in fact no agent involved at all. This is an abuse of the nomenclature. Questions For Authors: 1. What was your motivation for submitting this work to ICML instead of ICSE, FSE or ASE, where the majority of prior work in this area was published? 2. How certain are you that there is no prior work applying slicing in the context of automated code review? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for the constructive and thoughtful feedback. We appreciate your recognition of the clarity of the paper, the strength of the experimental design, and the potential real-world impact of our work. Below, we respond to your questions and key suggestions.** --- ### **1. Motivation for Submitting to ICML** **(Response to “Questions for Authors”)** We submitted this work to ICML because we believe that integrating large language models (LLMs) into software engineering workflows represents a growing and impactful frontier at the intersection of machine learning and software development. With the rapid advancement of LLMs, core tasks in software engineering—such as code review automation—are approaching an inflection point. Engaging the ML community is essential to foster deeper cross-domain innovation, especially as challenges like contextual reasoning, modular prompting, and learning from structured data remain open research problems. Moreover, our defect-focused formulation reflects a broader ML interest in real-world, system-level tasks beyond conventional generation. Practically, we also note the limited availability of top-tier SE venues with winter deadlines. --- ### **2. Novelty of Code Slicing in Code Review Context** **(Response to “Questions for Authors” & “Essential References”)** To the best of our knowledge, our work is the first to explicitly incorporate **static code slicing** into an **LLM-based automated code review pipeline** to guide review comment generation and filtering. While slicing is a well-established technique in software engineering, prior works on code review automation have largely focused on snippet-level generation or comment naturalness, rather than **context-aware defect localization**. Under such task formulations, slicing has typically been overlooked, as the repository-level context is not utilized. 
We acknowledge this is a strong claim and will revise the relevant statement in **Appendix D** to present it more cautiously. --- ### **3. Use of the Term “Multi-Agent”** **(Response to “Other Comments”)** We appreciate this observation. Our usage of “multi-agent” was intended to convey modular collaboration among LLM roles (e.g., Reviewer, Validator, Aggregator), not reinforcement learning-style agents interacting with an environment. To avoid confusion, we will revise the terminology in the revised version to use terms such as **“multi-role framework”** or **“role-based architecture.”** --- ### **4. Additional Suggestions and Minor Corrections** - **Figures and tables** will be updated with larger fonts, improved contrast, and clearer layout to ensure readability in both digital and printed formats. - **Citation formatting** in Section 6 will be corrected as suggested (e.g., “Gupta & Sundaresan (2018)”). --- **We thank the reviewer again for the generous feedback and helpful suggestions. We hope these clarifications address your concerns and reinforce the value of our contribution.** --- Rebuttal Comment 1.1: Comment: Thank you, authors, for your clear and concise response. I am happy with the changes you have outlined and will consider updating my score. One final question I have is whether your updated tables will include (min, max) ranges (over the 3 runs), rather than just the mean, as requested in my review? I did not see this mentioned in your response. I think quantifying the uncertainty involved in your experiments is essential so that the readers of the paper can gain some confidence about whether your results are statistically significant. A 95% CI on the means or something like that would have been ideal but such an interval would be difficult to construct with only 3 samples, so reporting (min, max)-tuples will be sufficient in this case. If you believe this would clutter the tables then at least include it in the appendix. 
In your reply, if possible, it would be good to include such a table, since if the ranges overlap significantly between your method and the baselines then I would have to reconsider my score on the basis of there being insufficient evidence for your claims. --- Reply to Comment 1.1.1: Comment: **We sincerely thank the reviewer for the thoughtful feedback and for acknowledging our changes. We are glad that the updates align with your expectations. Regarding the additional question, we have provided the following clarifications.** --- We fully agree with the reviewer that quantifying **uncertainty** in experimental results is essential for building confidence in the findings. We appreciate your input on this matter. To address this, we include **min/max ranges** comparing the performance of our system with the baselines, allowing readers to see the variability across different runs. We provide this data in the updated tables: 🔗 [Min/Max ranges for comparing baselines](https://anonymous.4open.science/r/16368/Comparing%20baselines.pdf) These results show that our workflow significantly outperforms baseline approaches, even considering experimental uncertainty. This success highlights the effectiveness of our approach to end-to-end automation, due to its pipeline design and alignment with developers' expectations. Furthermore, as the reviewer emphasized the importance of the slicing algorithms, we have also reported the **min/max ranges** of the comparison for our **slicing algorithms**: 🔗 [Min/Max ranges for slicing algorithm comparison](https://anonymous.4open.science/r/16368/Max%20min%20slicing.pdf) We recommend that the reviewer pay special attention to **single reviewer settings**, as these are closest to the raw output and highlight the potential of the slicing algorithms before applying our filtering mechanisms. For example, the "Single Reviewer – All" setting presents the raw output before filtering is applied. 
Additionally, to address concerns about the Venn diagram analysis and the possibility of chance-based spread, we have included **results across different fault categories**: 🔗 [Fault category-based analysis](https://anonymous.4open.science/r/16368/Performance%20by%20error%20category.pdf) Our findings indicate that **flow-based slicing** particularly benefits the identification of **security errors**, as it captures **jump, data, and control flow**, including the lifecycle of variables. In contrast, **parent function slicing**, which provides broader and more continuous context, helps the LLMs understand **code logic**, leading to better performance on **logic errors**. --- **We thank you again for your detailed feedback. We believe these updates will significantly improve the presentation of our results and provide the necessary confidence in our findings.**
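The per-metric (min, max) reporting discussed in this exchange can be sketched in a few lines: aggregate the repeated runs into a mean plus a range, per metric. The metric values below are made up for illustration and are not taken from the paper.

```python
# Minimal sketch of mean + (min, max) reporting over repeated runs.
# The metric names and values are illustrative, not the paper's results.
def summarize_runs(runs):
    """runs: list of dicts mapping metric name -> value for one run."""
    summary = {}
    for metric in runs[0]:
        values = [r[metric] for r in runs]
        summary[metric] = {
            "mean": round(sum(values) / len(values), 3),
            "min": min(values),
            "max": max(values),
        }
    return summary

runs = [{"KBI": 0.52, "FAR": 0.08},
        {"KBI": 0.48, "FAR": 0.10},
        {"KBI": 0.50, "FAR": 0.09}]
print(summarize_runs(runs))
```

With only 3 runs, a (min, max) tuple like this is about as much uncertainty information as can honestly be reported, which matches the reviewer's reasoning for preferring it over a confidence interval.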
Summary: This paper proposes an advanced method for automating code reviews, focused on defect detection and improving real-world code review workflows. To address the challenges, the authors introduce a multi-agent LLM framework that utilizes code slicing algorithms, a filtering mechanism to remove irrelevant comments, and a new prompt design for better integration into human workflows. Their system was validated using real-world industry data, achieving significant improvements over previous methods in detecting key bugs, reducing false alarms, and improving code review performance. Claims And Evidence: 1. The evaluation primarily focuses on performance metrics rather than directly on qualitative assessments and user studies that demonstrate improved usability or reduced developer burden. Thus, the claim "Real-world Workflow Integration" is only partially supported. 2. The title does not specifically mention the C++ focus, but the "10× improvement" is primarily based on C++ code. This raises concerns about the generality of the framework and the scope of the paper. Methods And Evaluation Criteria: Assuming the paper focuses on C++-related code review, the methods and evaluation make sense. Theoretical Claims: The paper primarily focuses on designing and empirically evaluating its defect-focused automated code review framework. Experimental Designs Or Analyses: The paper does not provide an in-depth justification for the chosen thresholds of the redundancy comment filter. A sensitivity analysis regarding these thresholds could strengthen confidence in the filtering mechanism's robustness. Supplementary Material: All pages of the supplementary material are reviewed. Relation To Broader Scientific Literature: The paper extended existing automated code review methods from isolated snippet-level generation to holistic, context-rich analysis.
It also proposes a multi-agent, collaborative framework with evaluation metrics that better capture the practical realities of defect detection. These innovations build directly on and address limitations identified in prior work. Essential References Not Discussed: No Other Strengths And Weaknesses: ### Strengths 1. The use of multiple agents adds flexibility and scalability, ensuring that the review process is thorough and precise. 2. The method achieves up to 10x improvement over previous baselines in detecting critical bugs and reducing irrelevant comments. ### Weaknesses 1. The framework is mainly tailored for C++, and its adaptability to other languages may need further research and development. 2. The effectiveness of the framework strongly depends on the underlying LLM engine, with larger models like LLaMA3.1 performing better, which might be a limitation for certain applications where computational resources are constrained. 3. The more detailed slicing methods (like Left Flow and Full Flow) showed better performance but could potentially be computationally expensive or difficult to implement in all environments. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank the reviewer for the constructive and insightful feedback. We appreciate your recognition of our system’s architecture and the observed performance improvements on real-world data. Below, we address your concerns regarding generality, LLM dependency, filter sensitivity, and workflow integration.** --- ### **1. Generality Beyond C++** **(Response to “Weakness 1”)** > The framework is mainly tailored for C++, and its adaptability to other languages may need further research and development. We agree that clarification is needed. The current results are based on C++ and the 10× improvement reflects this. We will state this explicitly in the revised abstract. As noted in **Appendix Q**, we chose C++ because: - It is one of the most widely used industrial languages; - It is among the most complex in mainstream use. Nonetheless, our framework is **inherently language-agnostic**, relying on universal structures like ASTs to extract elements such as control/data flow and function scopes. These structures exist across most compiled languages, and the framework’s design—including slicing and prompting—does not rely on C++-specific features. Extension to other languages is thus feasible and ongoing work. --- ### **2. LLM Dependency and Computational Cost** **(Response to “Weakness 2, 3”)** > The framework depends on large LLMs... Detailed slicing may be computationally expensive... We acknowledge that performance depends on model capability. While we use large models (e.g., LLaMA 3.1) in our main experiments, we also evaluated smaller models in **Section 5.1 (RQ1)**. These results show that compact models with strong reasoning abilities can still perform competitively, especially as there is a trend toward increasing capacity density. For slicing cost: while fine-grained slicing (e.g., Full Flow) is more expensive, it remains within practical limits.
We report detailed runtimes via violin plots: 🔗 https://anonymous.4open.science/r/16368/Runtime%20violin%20plot.pdf - **Median runtime per MR** is **6.2 minutes**. - This fits comfortably within typical CI/CD pipelines (15–30 minutes), which include compilation, linting, static analysis, testing, and deployment checks. Our system runs **in parallel** with these processes and does not introduce blocking delays. We believe this makes the cost acceptable for real-world use. Further analysis is included in **Appendix M**. We also emphasize that our focus is on solving core challenges like *defect inclusion* and *false alarm suppression*. Cost-efficiency optimization (e.g., model scaling, caching) is important but future work. --- ### **3. Sensitivity Analysis of Filter Thresholds** **(Response to “Experimental Designs”)** > The paper does not provide an in-depth justification for the chosen thresholds of the redundancy comment filter... Thank you for this suggestion. Our current thresholds are based on heuristic rules aimed at **interpretability**. For Q1/Q2, a score above 4 on a 1–7 scale (inspired by prior work) indicates actionable, serious issues. This setting aligns with developer feedback during internal piloting. For Q3, we conduct a sensitivity study using top-*k* truncation in **Section 5.4 (RQ4)**. We now extend this to multi-reviewer settings: 🔗 https://anonymous.4open.science/r/16368/The%20impact%20of%20Top-k%20truncation.pdf Key observations: - **Smaller top-*k*** (e.g., 3) performs better for shorter slicing strategies (Original Diff, Parent Function), where fewer high-quality comments exist. - **Larger top-*k*** (5–10) works better for richer slicing (Left Flow, Full Flow), which yields more relevant outputs. - For Full Flow with Top-10, performance declines after validation—likely due to hitting token limits, reducing model focus. 
These findings demonstrate that the filter behavior is stable across reasonable thresholds, and adaptable to context richness. --- ### **4. Real-World Workflow Integration** **(Response to “Claims And Evidence”)** > The claim “Real-world Workflow Integration” is limitedly supported. Thank you for pointing this out. The workflow is illustrated in **Figure 1**, but we agree it merits a more detailed explanation. We will provide a full description in an additional appendix. Briefly, when a merge request is submitted: 1. The system verifies user and file access; 2. Slicing and multi-agent review are launched; 3. Filtered comments are injected into the internal DevOps system; 4. Comments are positioned at exact line numbers and pushed to developers via messaging. This enables seamless integration into daily development workflows. In particular, **line-aware comment injection**, evaluated in **Section 5.5 (RQ5)**, was critical for developer adoption and feedback. --- **We will incorporate these clarifications and additions in the revised version. We hope this addresses your concerns and supports a more favorable assessment. Your feedback has helped improve both clarity and rigor.**
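To make the outcome metrics discussed in this rebuttal concrete, the sketch below uses plausible simplified definitions of key bug inclusion (KBI) and false alarm rate (FAR): coverage of known key-bug lines, and the share of flagged lines matching none of them. These are assumed definitions for illustration; the paper's exact formulas may differ.

```python
# Plausible, simplified definitions of the outcome metrics discussed
# above; the exact formulas in the paper may differ.
def key_bug_inclusion(key_bugs, flagged_lines):
    """Fraction of known key-bug lines covered by at least one comment."""
    hit = sum(1 for line in key_bugs if line in flagged_lines)
    return hit / len(key_bugs)

def false_alarm_rate(key_bugs, flagged_lines):
    """Fraction of flagged lines that do not match any key-bug line."""
    false = sum(1 for line in flagged_lines if line not in key_bugs)
    return false / len(flagged_lines)

key_bugs = {42, 107}          # line numbers of logged key defects
flagged = {42, 107, 300}      # line numbers the review commented on
print(key_bug_inclusion(key_bugs, flagged))   # -> 1.0
print(false_alarm_rate(key_bugs, flagged))    # -> 0.3333333333333333
```

Formulated this way, the two metrics pull in opposite directions (commenting on everything maximizes KBI but ruins FAR), which is exactly the tension the developer interviews cited above are meant to balance.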
Summary: In this paper, the authors proposed a framework for automated code review. More specifically, the authors first used code slicing to enable the Multi-Agent Code Review System to obtain sufficient context fragments of the code. Then, the Multi-Agent Code Review System conducted code reviews and filtered, aggregated, and ranked the reviews. Finally, the proposed approach localized the issues mentioned in the reviews. The authors created a dataset at the merge request level using data from four repositories. The authors experimentally validated the performance using Key Bug Inclusion, False Alarm Rate, Comprehensive Performance Index, and Line Localization Success Rate. The proposed approach performed better on both the Left Flow and Full Flow slicing algorithms, showing effective localization capability in the Inline representation. Claims And Evidence: The authors pointed out that current evaluations rely excessively on textual similarity metrics (e.g., BLEU, ROUGE), which fail to measure real-world effectiveness. Is there a connection between metrics like BLEU and the evaluation metrics used by the authors? The authors could compare the proposed approach and baselines with BLEU and ROUGE. Methods And Evaluation Criteria: The authors have established criteria based on key issues; however, it is unclear how the key issues are defined. Is there a specific set of evaluation criteria, and are these standards reasonable? Regarding the dataset, a C++ dataset was constructed from merge requests. The authors categorized errors into logic errors, code security errors, and performance-related errors. Does the dataset comprehensively cover the majority of error types encountered in C++? Is it feasible to report the performance of the proposed method across these different error categories? Theoretical Claims: The paper does not contain proofs for theoretical claims.
Experimental Designs Or Analyses: The authors evaluated the performance of different LLMs; however, further analysis was needed to examine the performance differences resulting from various combinations of LLMs. Additionally, did different tasks yield performance variations when different LLMs were employed? For instance, using LLaMA 3.1 (405B) as the Reviewer and LLaMA 3.1 (70B) as the Validator. Regarding the time cost in Appendix M (Experimental Setups in Detail), the authors mentioned that one round of automated code review took approximately 9 hours. In the context of code review, were such time and resource overheads acceptable? Supplementary Material: Yes. The implementation files did not include the result files or dataset files. The dataset is not available in the source code hosted on Zenodo. Is the dataset publicly accessible? Relation To Broader Scientific Literature: N/A Essential References Not Discussed: There are some more recent related works. For example: [1] Wang L, Zhou Y, Zhuang H, et al. Unity Is Strength: Collaborative LLM-Based Agents for Code Reviewer Recommendation[C]. IEEE/ACM International Conference on Automated Software Engineering. 2024: 2235-2239. [2] Wei Tao, Yucheng Zhou, et al., 2024. KADEL: Knowledge-Aware Denoising Learning for Commit Message Generation. ACM Trans. Softw. Eng. Methodol. 33, 5, Article 133 (June 2024), 32 pages. [3] Yu Y, Rong G, Shen H, et al. Fine-tuning large language models to improve accuracy and comprehensibility of automated code review[J]. ACM transactions on software engineering and methodology, 2024, 34(1): 1-26. Other Strengths And Weaknesses: Strengths: 1. The proposed merge-request-based code review could be useful for real-world applications. 2. The authors introduce code localization based on the review, which could help quickly determine whether the issues are genuine. Weakness: 1. The proposed approach does not validate the combination of different LLMs. 2.
The authors do not discuss whether the time and resource costs of the multi-agent code review system are acceptable. 3. In the LLM prompts, the authors can discuss the effectiveness of Retrieval-Augmented Generation in LLM-generated outputs. 4. The authors evaluate the comments based on certain issues. However, the evaluation should be improved. For instance, the authors should assess whether the answers are vague. Other Comments Or Suggestions: Section 2.1 is mentioned in Appendix A, but where is it in the paper? Questions For Authors: Can the authors provide more information about the dataset? Does the dataset comprehensively cover the majority of error types? How were the 45 fault reports selected? Are they representative of common code review scenarios (e.g., edge cases vs. typical defects)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **We thank the reviewer for the constructive comments and the recognition of our framework’s practical impact. We have addressed the concerns by enhancing dataset transparency, adding new evaluations (including error category analysis and heterogeneous LLM roles), clarifying metric rationale, and updating implementation files.** --- ### **1. Dataset Scope and Fault Selection** **(Response to “Questions for Authors” and “Methods and Evaluation Criteria”)** > Can the authors provide more information about the dataset?... > It is unclear how the key issues are defined... Is it feasible to report the performance across different error categories? We appreciate your interest in the dataset. To improve transparency, we added a desensitized JSON folder of fault descriptions to our updated Zenodo repository and will revise **Appendix J** accordingly. **Fault selection** follows a practical criterion: all faults caused user-visible issues and were formally logged in the company’s internal defect tracking system (Section 3.6). This **result-oriented** strategy emphasizes **real impact**, even if it doesn’t fully cover all C++ error types. The dataset includes both edge and typical cases, e.g.: - **Case 4694_23117**: array out-of-bounds and null-pointer dereference. - **Case 16231_13308**: misuse of `boost::random::beta_distribution`. To analyze per-category performance, we added a breakdown across **logic**, **security**, and **performance-related** bugs: 🔗 https://anonymous.4open.science/r/16368/Performance%20by%20error%20category.pdf Findings show that flow-based slicing benefits security bugs, while broader context helps with logic bugs. --- ### **2. Evaluation Metrics and Comment Quality** **(Response to “Claims and Evidence” and “Weakness 4”)** > The authors pointed out that current evaluations rely excessively on BLEU, ROUGE... > The authors evaluate the comments based on certain issues... assess whether the answers are vague. 
We intentionally did **not** use BLEU or ROUGE due to several limitations: 1. Our task involves **many-to-many** mappings between code and reviews, violating BLEU’s single-reference assumption. 2. Code review requires reasoning and domain expertise, and recent studies from earlier this year show that BLEU and ROUGE fail to capture quality effectively in such tasks. 3. Real fault reports and LLM outputs differ significantly in style and expression, making textual similarity unreliable. On vagueness: rather than evaluating writing style, we focus on key outcome metrics like *key bug inclusion* (KBI) and *false alarm rate* (FAR), which directly reflect review effectiveness. These metrics are more objective and interpretable and will be further discussed in **Appendix N**. --- ### **3. LLM Combinations and Time Cost** **(Response to “Experimental Designs or Analyses” and “Weakness 1, 2”)** > The proposed approach does not validate different LLM combinations... > The authors do not discuss whether the time and resource costs are acceptable... Our main experiments use the same LLM across all agents to isolate whether a **strong model** alone can resolve key challenges in code review. However, we agree that heterogeneous combinations are worth exploring. We added experiments varying reviewer/validator assignments, showing that a strong validator paired with a smaller reviewer often yields comparable or better results: 🔗 https://anonymous.4open.science/r/16368/Performance%20of%20combinations.pdf Regarding runtime, we now report detailed timings using violin plots: 🔗 https://anonymous.4open.science/r/16368/Runtime%20violin%20plot.pdf - **Median runtime per MR** is **6.2 minutes**. - The overall CI/CD pipeline (including compilation, analysis, and deployment checks) typically takes 15–30 minutes. - Our module runs **in parallel** from the beginning and does **not introduce blocking delays**. Thus, we believe the overhead is acceptable in practical scenarios. 
Further analysis will be added to **Appendix M**. --- ### **4. Prompting Strategy and Retrieval-Augmented Generation** **(Response to “Weakness 3”)** > In the LLM prompts, the authors can discuss the effectiveness of Retrieval-Augmented Generation... Thank you for raising this point. Although we do not use explicit RAG pipelines, our **slicing mechanism** serves a similar purpose: retrieving and providing only **relevant context slices** to the model. This is evaluated in **Section 5.2 (RQ2)** and will be clarified as RAG-aligned in the revision. --- ### **5. Minor Corrections and Reference Additions** - The incorrect reference to Section 2.1 in **Appendix A** will be fixed. - We will include the three suggested references and briefly discuss their relation to our work. --- **We hope this response addresses your concerns and supports a more favorable assessment. Your feedback has helped us strengthen the clarity and rigor of our work.**
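The slicing-as-retrieval idea described in this rebuttal (providing the model only relevant context slices rather than the whole repository) can be illustrated with a toy sketch. A real implementation would rely on AST and dataflow analysis as the paper describes; the plain string matching, the function name, and the sample code below are illustrative assumptions only.

```python
# Toy illustration of slicing-as-retrieval: collect only the source
# lines related to identifiers touched by a diff, instead of sending
# the whole file to the model. A real implementation would use
# AST/dataflow analysis rather than string matching.
def slice_context(source_lines, changed_idents):
    related = []
    for lineno, line in enumerate(source_lines, start=1):
        if any(ident in line for ident in changed_idents):
            related.append((lineno, line))
    return related

source = [
    "int total = 0;",
    "log_startup();",
    "total += price * qty;",
    "render_footer();",
    "return total;",
]
# A diff touching `total` retrieves only the lines on its flow.
print(slice_context(source, {"total"}))
```

Even in this toy form, the retrieved slice keeps the variable's full lifecycle while dropping unrelated lines, which is the same property the rebuttal credits for the RAG-like effect of flow-based slicing.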
BOOD: Boundary-based Out-Of-Distribution Data Generation
Accept (poster)
Summary: This paper focuses on addressing the OOD detection task by synthesizing OOD samples. To generate plausible OOD samples, samples near the OOD boundary are first selected and then perturbed along the direction of gradient ascent until their predicted labels change. Finally, a diffusion model is applied to generate OOD samples from these perturbed features, which are then used to train an OOD classifier. Experiments on various datasets demonstrate that this method outperforms existing approaches. Claims And Evidence: The claims are clear; however, the method has inherent limitations that may push an in-distribution (ID) class toward another ID class. Methods And Evaluation Criteria: The proposed methods are clear; however, there are certain limitations. The performance on the benchmark datasets is acceptable. Theoretical Claims: I have reviewed the theoretical claims, which primarily involve gradient ascent and related theoretical concepts. The limitations of the proposed method are already reflected in the theoretical claims. Experimental Designs Or Analyses: Yes, I have checked the experimental designs. The paper uses CIFAR-100 and ImageNet-100 as the ID datasets, which are consistent with the DreamOOD settings. The experimental setup seems sound and appropriately chosen for the task at hand. Supplementary Material: Yes, I have reviewed all the supplementary material, primarily focusing on the ablation studies and the visualization of generated images. Relation To Broader Scientific Literature: This paper, inspired by DreamOOD's use of nearest neighbor distance for OOD sample generation, employs gradient ascent to generate OOD samples, providing a more directed generation approach. Essential References Not Discussed: All relevant works have been appropriately cited in the paper, and there are no crucial studies or findings that have been overlooked. Other Strengths And Weaknesses: Strengths: 1. 
This paper proposes a novel boundary-based OOD data generation method that leverages a diffusion model to identify ID data closest to the decision boundary and applies an outlier feature synthesis strategy to generate images near the decision boundary. This approach provides high-quality and information-rich features for OOD detection. 2. In terms of performance, the gain on CIFAR-100 as the ID is significant compared to existing methods. 3. The writing and expression of the paper are clear. The images generated under boundary conditions are reasonable. Weaknesses: 1. How can we ensure that during the generation from 0 to c, the original ID sample of class y is not mistakenly perturbed into another ID class? Simply controlling the step size α and number of steps c may make the model overly sensitive to hyperparameters. 2. I would like to understand whether the performance improvement over the DreamOOD method arises from generating OOD data with lower error rates, or from a higher number of OOD samples being situated closer to the decision boundary. 3. I have noticed that 100,000 generated OOD images were used during training. Would reducing or increasing the number of images have an impact on the ID accuracy or OOD metrics? Other Comments Or Suggestions: I have no additional comments or suggestions. Questions For Authors: I have no further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to reviewer sEzZ We thank the reviewer for the feedback and constructive suggestions. Our responses to the reviewer's concerns are below: > Weakness 1: How can we ensure that during the generation from 0 to c, the original ID sample of class y is not mistakenly perturbed into another ID class? Simply controlling the step size α and number of steps c may make the model overly sensitive to hyperparameters. We appreciate this important inquiry regarding hyperparameters. We would like to emphasize that OOD features are features distributed between classes, so they are **ambiguous and likely to confuse the classifier**. If BOOD instead generated features lying inside the ID area, the resulting OOD images would be **noisy data** and **harmful to model performance**. While we do not have an explicit way to measure whether image features have been mistakenly perturbed into another ID class, **by setting relatively small $\alpha$ and $c$ values** we can guarantee that the generated OOD features are located around the decision boundaries. Figure 5 shows that OOD detection performance first rises and then decreases as $c$ increases, which illustrates that **our control of $c$ is effective and most of the generated OOD images are effective**. As a next step, we are working on a filtering mechanism to rule out potentially mis-perturbed features. > Weakness 2: I would like to understand whether the performance improvement over the DreamOOD method arises from generating OOD data with lower error rates, or from a higher number of OOD samples being situated closer to the decision boundary. Thank you for the interesting point. Could you clarify the meaning of "**lower error rates**" in this context? We assume "lower error rates" refers to **the rate of mistakenly generated ID images within the generated OOD data**. 
Since OOD data are samples that do not lie in the input data distribution, they are hard for the classifier to classify and it is difficult to decide whether they are OOD samples. Our method intends to find the boundaries of the ID data and perturb the ID features closest to the boundary into the area between classes, near the decision boundaries in the latent space; thus it is difficult to quantify the error rate of the generated OOD data. We also conducted an experiment comparing the features' average distance to their closest decision boundaries between DreamOOD[1] and our method: | Average distance | ImageNet-100 | CIFAR-100 | |:--:|:--:|:--:| |DreamOOD | 3.70 | 5.23 | |BOOD| 2.29 | 4.01 | The results show that BOOD's generated features are **closer to the nearest decision boundaries**, resulting in improved performance on the OOD detection task. > Weakness 3: I have noticed that 100,000 generated OOD images were used during training. Would reducing or increasing the number of images have an impact on the ID accuracy or OOD metrics? Great point! Here we list the performance statistics of BOOD and DreamOOD[1] trained on different numbers of OOD images (using CIFAR-100 as the ID dataset): | Total OOD images | FPR95 $\downarrow$ | AUROC $\uparrow$ | ID ACC $\uparrow$ | |:--:|:--:|:--:|:--:| | DreamOOD-10k | 60.23 | 81.84 | 65.39 | | DreamOOD-50k | 48.66 | 85.71 | 72.95 | | DreamOOD-100k | 40.31 | 90.15 | 78.94 | | BOOD-10k | 25.21 | 93.63 | 65.14 | | BOOD-50k | 15.83 | 96.1 | 73.18 | | BOOD-100k | 12.47 | 97.34 | 78.17 | From the table above we can see that **as the number of OOD training images increases**, the performance of both BOOD and DreamOOD[1] on OOD detection and ID classification **increases**. We hope we have responded to all your concerns. If you have further questions, we are happy to discuss them with you. Thank you again for taking the time to read our response, and for your constructive feedback! 
[1] Xuefeng Du, Yiyou Sun, Xiaojin Zhu and Yixuan Li. Dream the impossible-Outlier imagination with diffusion models. NeurIPS, 2023
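As a side note for readers following the $\alpha$/$c$ discussion in the rebuttal above, the boundary-crossing perturbation being debated can be sketched on a toy linear classifier. This is a minimal pure-Python illustration, not the paper's implementation: the 2-class identity-weight setup, the hyperparameter values, and all function names are our assumptions.

```python
import math

# Toy 2-class linear classifier in a 2-D latent space (identity weights,
# purely illustrative -- BOOD operates in a diffusion model's latent space).
W = [[1.0, 0.0], [0.0, 1.0]]

def logits(z):
    return [sum(w * x for w, x in zip(row, z)) for row in W]

def predict(z):
    lg = logits(z)
    return max(range(len(lg)), key=lambda i: lg[i])

def ce_grad(z, y):
    """Gradient of the cross-entropy loss w.r.t. the feature z for label y."""
    lg = logits(z)
    m = max(lg)
    exps = [math.exp(v - m) for v in lg]
    probs = [e / sum(exps) for e in exps]
    grad = [0.0] * len(z)
    for k, p in enumerate(probs):
        coef = p - (1.0 if k == y else 0.0)  # softmax minus one-hot
        grad = [g + coef * w for g, w in zip(grad, W[k])]
    return grad

def perturb_to_boundary(z, y, alpha=0.1, K=100, c=3):
    """Gradient-ascent perturbation: the step count k at which the predicted
    label flips serves as a proxy distance to the boundary; c extra steps
    then push the feature just past it."""
    z = list(z)
    for k in range(1, K + 1):
        g = ce_grad(z, y)
        z = [zi + alpha * gi for zi, gi in zip(z, g)]
        if predict(z) != y:
            for _ in range(c):  # small c keeps the feature near the boundary
                g = ce_grad(z, y)
                z = [zi + alpha * gi for zi, gi in zip(z, g)]
            return k, z
    return None, z  # never crossed within K steps
```

With a small $\alpha$ the crossing step count gives a fine-grained proxy for an ID feature's distance to the boundary, and a small $c$ leaves the perturbed feature only slightly on the OOD side, which is the behavior the authors argue keeps mistaken ID-to-ID perturbations rare.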
Summary: This paper studies the problem of out-of-distribution (OOD) detection for image tasks. The authors leverage text-to-image latent diffusion models to synthesize OOD images that are used to train the binary OOD detector. In doing so, they follow a three-step strategy: 1) Identifying the ID samples that are closest to the decision boundary by measuring the number of steps (k) required to perturb the ID feature along the gradient ascent direction until the model's prediction changes (this number of steps serves as a proxy for the shortest distance between the InD sample and the decision boundary); 2) perturbing the identified features from step 1 to get OOD embeddings, and 3) using OOD embeddings from step 2 to generate OOD images using the diffusion model. Experiments on CIFAR-100 and ImageNet-100 as ID show improvements in performance from previous scoring-based methods as well as synthesis-based methods. Claims And Evidence: Yes, most claims are supported by evidence. However, some parts could be clearer. For example, in the abstract, the authors state that “…BOOD provides a more efficient strategy for synthesizing informative OOD features…”. However, there is no clear evidence showing that the method is truly efficient. Although the authors compare computation and memory usage in Tables 5 and 6, the differences are not significant, and in many cases, BOOD uses more computation than DreamOOD. I also believe the comparison should include more baselines, not just one. It might be better to either remove this claim or rephrase it more modestly. Additionally, in Section 4.3.1, the authors claim that “Employing a large c may force the feature to step into the ID region.” But is there evidence to support this, apart from the performance degradation? Would it be possible to provide an experiment (like a visual example) showing that the feature indeed returns to the ID region? 
For instance, it would have been useful to see this in Figure 3 (left) with more steps, where the image might begin to resemble the ID image. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I checked the algorithm and experimental designs, and they look reasonable to me. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The paper addresses the OOD detection problem, which is a well-established problem in the machine learning literature. Most previous works have focused on using scoring methods (e.g., MSP, DICE, Energy, etc.) to tackle the OOD detection problem. More recent works have shifted towards generating OOD samples using generative models, such as GANs and diffusion models. This work adopts the latter approach, utilizing diffusion models to generate OOD samples and then trains a binary OOD detector using both ID samples and the synthesized OOD samples. Essential References Not Discussed: While the paper discusses most of the relevant works, there is a recent synthesis-based method that was not mentioned, let alone compared against, in the paper. Specifically, the work titled Diffusion-based Semantic Outlier Generation via Nuisance Awareness for Out-of-Distribution Detection by Suhee Yoon et al. (2024) (https://arxiv.org/abs/2408.14841) presents an interesting approach that leverages diffusion models for OOD detection. Including a discussion of this work would have enhanced the paper's context and provided a valuable comparison. There are some other important studies that could have been included to provide a more comprehensive context. Recent works, for instance, have explored the use of unlabeled wild data to enhance OOD detection, achieving state-of-the-art results on datasets similar to those used in this paper. For example, Du et al. (2024) achieve an FPR95 of 0.07 on the CIFAR100-SVHN ID-OOD pair, compared to 5.42 in this paper. 
Similarly, on the CIFAR100-PLACES365 ID-OOD pair, Du et al. (2024) report an FPR95 of 3.53, while this paper has an FPR95 of 40.55. Some of them are: • Xuefeng Du, Zhen Fang, Ilias Diakonikolas, and Yixuan Li. "How does unlabeled data provably help out-of-distribution detection?" ICLR, 2024. • Julian Katz-Samuels, Julia Nakhleh, Robert Nowak, and Yixuan Li. "Training OOD detectors in their natural habitats." ICML, 2022. While these studies use unlabeled data and this paper focuses on a synthesis-based method, it would have been valuable to discuss the literature on unlabeled data and its impact on OOD detection, as it provides important context and alternative approaches to the problem. Other Strengths And Weaknesses: **Strengths:** The paper is well-motivated, well-written, and easy to follow. The methodology is clearly explained, with illustrative examples provided in the figures, which helped with understanding. Generally, I find the algorithm design decisions quite intuitive and systematic. The method is simple yet effective. **Weaknesses:** The third step of the method (the OOD image generation) is based on the approach in Du et al. (2023), while the regularization technique is derived from VOS (Du et al., 2022). These elements are largely borrowed from prior works and integrated into the BOOD framework. This raises concerns about the novelty of the method. The authors did not provide their code to reproduce the results, which raises concerns about the transparency of the findings. Other Comments Or Suggestions: Minor: in the related work section, use ‘citet’ instead of ‘citep’ for “…guidance. (Dunlap et al., 2023) proposes to caption the images…” and “…model. (Li et al., 2024) generated augmented images with the guidance…”. Questions For Authors: Why is regularization used in the OOD detection model in Section 3.3? What is the significance of this regularization, and what is the underlying intuition? 
It would be helpful to mention this in this paragraph. Moreover, why does setting such a large value of $\beta=2.5$ help; what could be a potential rationale? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to reviewer Mr32 We appreciate the reviewer for providing valuable advice. Below are our responses: > Claims and Evidence 1: there is no clear evidence showing that the method is truly efficient We apologize for the inaccurate expression in the abstract: BOOD provides a more **training-efficient** strategy for synthesizing informative OOD features. We will fix this part in the updated manuscript; thank you again for pointing this out. > Claims and Evidence 2: need to provide a visual example showing that the feature indeed returns to the ID region. Thank you for your advice. Please check **Figure 2** in our paper, which shows the perturbation process of two features from one ID class to another ID class: one from **Tiger** to **Fish**, the other from **Streetcar** to **House**. We will also include more visual examples illustrating the perturbation process. > Some essential References Not Discussed. We extend our gratitude to the reviewer for the recommendation of essential references. We will update the corresponding part of the related work section in the camera-ready version. > Weakness 1: The third step of the method (the OOD image generation) is based on the approach in Du et al. (2023), while the regularization technique is derived from VOS (Du et al., 2022). These elements are largely borrowed from prior works and integrated into the BOOD framework. This raises concerns about the novelty of the method. We thank the reviewer for the concern regarding novelty. We would like to emphasize that **our framework mainly focuses on the generation of informative OOD features** in the latent space. **To guarantee fairness in comparison with DreamOOD, the previous SOTA among synthesis-based OOD detection frameworks**, we use the same regularization function as they do. After generating the OOD dataset, the regularization function can be substituted with other common functions to train the OOD detection model. 
> Weakness 2: Code not provided. Thanks for the concern regarding the code. We will release our code, data and model once the paper is accepted. > Comments: In the related work section, use ‘citet’ instead of ‘citep’ for “…guidance. (Dunlap et al., 2023) proposes to caption the images…” and “…model. (Li et al., 2024) generated augmented images with the guidance…”. Thanks for pointing out these minor mistakes; we will correct them in the revised manuscript. > Question: Why is regularization used in the OOD detection model in Section 3.3? What is the significance of this regularization, and what is the underlying intuition? It would be helpful to mention this in this paragraph. Moreover, why does setting such a large value of $\beta$ help; what could be a potential rationale? The purpose of regularizing the OOD detection model is to preserve the model's ability to perform the visual recognition task while regularizing it to identify the generated OOD images. Equation (7) shapes the uncertainty surface so that the model predicts high probability for ID data and low probability for OOD data. Compared to other regularization methods such as the energy-bounded loss [1], Equation (7) is **hyperparameter-free and thus easier to implement.** Our analysis suggests setting a relatively large regularization weight $\beta$ to ensure better performance (Figure 6, middle). A potential rationale is that a relatively large $\beta$ helps to **force the model to push OOD samples away**, thus enhancing the model's ability to distinguish ID from OOD inputs. However, if $\beta$ is too large, the model will be **over-regularized** and OOD detection performance will decrease. We sincerely thank you for reviewing our response and for your constructive feedback. If any aspects still need clarification, we are happy to discuss them further. [1] Weitang Liu, Xiaoyun Wang, John Owens and Yixuan Li. Energy-based out-of-distribution detection. NeurIPS, 2020 --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their rebuttal and will keep my rating for acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for your valuable time and constructive feedback!
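To make the regularization discussion in the rebuttal above concrete: Equation (7)'s exact form is given in the paper, so the sketch below is only a generic, hypothetical example of a hyperparameter-free uncertainty loss in the same spirit — a binary logistic loss on an energy-style score that pushes ID inputs toward high probability and synthesized OOD inputs toward low probability. The function names and the log-sum-exp score are our assumptions, not the paper's definitions.

```python
import math

def energy_score(logits_vec):
    """Log-sum-exp of the logits -- a standard scalar ID-ness score
    (higher for confident ID-like inputs)."""
    m = max(logits_vec)
    return m + math.log(sum(math.exp(v - m) for v in logits_vec))

def uncertainty_loss(id_logits, ood_logits):
    """Binary logistic loss on the score: ID samples are driven toward
    probability 1 and synthesized OOD samples toward probability 0,
    with no extra margin hyperparameters to tune."""
    loss = 0.0
    for lg in id_logits:
        p = 1.0 / (1.0 + math.exp(-energy_score(lg)))
        loss += -math.log(p)          # ID: want p -> 1
    for lg in ood_logits:
        p = 1.0 / (1.0 + math.exp(-energy_score(lg)))
        loss += -math.log(1.0 - p)    # OOD: want p -> 0
    return loss / (len(id_logits) + len(ood_logits))
```

The overall training objective would then be the standard classification loss plus $\beta$ times such a term, which matches the rebuttal's picture of $\beta$ trading off recognition accuracy against ID/OOD separation.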
Summary: This paper introduces a framework called Boundary-based Out-Of-Distribution data generation (BOOD). BOOD synthesizes high-quality OOD features and generates outlier images using diffusion models. The BOOD framework learns a text-conditioned latent feature space from the ID dataset, selects ID features closest to the decision boundary, and perturbs them to cross the decision boundary to form OOD features. These synthetic OOD features are then decoded into images in pixel space by a diffusion model. The authors claim that BOOD provides a more efficient strategy for synthesizing informative OOD features, facilitating clearer distinctions between ID and OOD data. Experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly. Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. The authors have provided a thorough evaluation of their proposed method, BOOD, and compared it with several state-of-the-art approaches. The experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly. Specifically, the main claims are supported by the following evidence: * **Claim:** BOOD surpasses the state-of-the-art method significantly. * **Evidence:** The experimental results in Table 1 and Table 2 show that BOOD outperforms other methods on both CIFAR-100 and IMAGENET-100 datasets. * **Claim:** BOOD provides a more efficient strategy for synthesizing informative OOD features, facilitating clearer distinctions between ID and OOD data. * **Evidence:** The ablation studies in Section 4.3.1 and the visualization of generated images support this claim. * **Claim:** The proposed framework is not time-consuming or has strict memory requirements. 
* **Evidence:** The computational cost comparison in Table 5 and the memory requirements comparison in Table 6 demonstrate that BOOD's computational and memory demands are comparable to those of DreamOOD, a state-of-the-art method. However, it is important to note that BOOD has many hyperparameters that significantly affect the results, such as the step size and the number of perturbation steps, and that require some tuning. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in this paper are generally well-suited for the problem of Out-Of-Distribution (OOD) detection. * BOOD's approach of synthesizing OOD features by perturbing ID features near the decision boundary is a sound method for generating informative outliers. * The use of diffusion models to generate human-compatible outlier images is a reasonable choice, given their strong generative capabilities. * The alignment of the image feature space with the diffusion-model-input space is a crucial step to ensure compatibility and effectiveness of the generated OOD data, as done in DreamOOD. **Evaluation Criteria:** * The paper employs widely used benchmark datasets for OOD detection, such as CIFAR-100 and IMAGENET, which allows for comparison with other state-of-the-art methods. * The evaluation metrics used, including False Positive Rate at 95% True Positive Rate (FPR95) and Area Under the Receiver Operating Characteristic Curve (AUROC), are standard metrics for evaluating OOD detection performance. * The inclusion of in-distribution classification accuracy (ID ACC) as an evaluation metric is important to ensure that the OOD detection method does not compromise the model's performance on in-distribution data. Overall, the proposed methods and evaluation criteria are appropriate and well-justified for addressing the problem of OOD detection. However, it would be beneficial to report OOD performance on ImageNet-200 and CIFAR-10, and also to add a distinction in the test sets between Near- and Far-OOD. 
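For readers unfamiliar with the two headline metrics named in the evaluation criteria above, here is a minimal self-contained sketch of how they are typically computed; the conventions (e.g., "higher score = more ID-like") and function names are ours.

```python
import math

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples still accepted at the threshold
    that accepts 95% of ID samples (higher score = more ID-like)."""
    id_sorted = sorted(id_scores, reverse=True)
    idx = max(0, math.ceil(0.95 * len(id_sorted)) - 1)
    threshold = id_sorted[idx]
    return sum(1 for s in ood_scores if s >= threshold) / len(ood_scores)

def auroc(id_scores, ood_scores):
    """Probability that a random ID sample outscores a random OOD sample
    (ties count half) -- the Mann-Whitney formulation of AUROC."""
    wins = sum(1.0 if si > so else 0.5 if si == so else 0.0
               for si in id_scores for so in ood_scores)
    return wins / (len(id_scores) * len(ood_scores))
```

Lower FPR95 and higher AUROC both indicate better ID/OOD separation, which is why the tables in the paper report them with down and up arrows respectively.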
Theoretical Claims: While the paper presents a novel framework and demonstrates strong empirical results, it doesn't contain significant theoretical claims that require in-depth proof verification. The core contributions are algorithmic and experimental, focusing on a new way to generate OOD data. Experimental Designs Or Analyses: The authors have taken reasonable steps to ensure the reliability and robustness of their findings. Here's a breakdown of the key aspects: **1. Ablation Studies:** * The authors conduct ablation studies to analyze the contribution of different components of their proposed BOOD framework. * Specifically, they ablate the effect of boundary identification and feature perturbation. * By comparing the performance of the full BOOD framework with variants where these components are removed or replaced, they demonstrate the importance of each component for achieving the best results. * This ablation analysis helps to validate the design choices made in the BOOD framework. **2. Hyperparameter Analysis:** * The authors perform a hyperparameter sensitivity analysis to evaluate the impact of different hyperparameter settings on the performance of BOOD. * They analyze the effect of step size, perturbation steps after crossing the boundary, pruning rate, OOD regularization weighting, and maximum perturbation steps. * By varying these hyperparameters and observing the resulting changes in performance, they gain insights into the optimal settings for BOOD and the sensitivity of the method to these parameters. * This analysis helps to ensure that the reported results are not due to a specific choice of hyperparameters and provides guidance for applying BOOD in different settings. **3. Comparison with State-of-the-Art Methods:** * The authors compare the performance of BOOD with several state-of-the-art OOD detection methods on benchmark datasets. 
* This comparison allows them to demonstrate the superiority of BOOD over existing methods and provides evidence for the effectiveness of their proposed approach. * The use of standard benchmark datasets and evaluation metrics ensures that the comparison is fair and objective. Overall, the experimental designs and analyses in the paper are well-structured, comprehensive, and appropriate for evaluating the proposed BOOD framework. The authors have carefully considered various factors that could affect the validity of their results and have taken steps to address them through ablation studies, hyperparameter analysis, and comparison with state-of-the-art methods. Supplementary Material: I reviewed the additional visualizations of generated images for both CIFAR-100 and IMAGENET-100. This helps to get a better qualitative understanding of the OOD images generated by BOOD. I compared these visualizations with those generated by DreamOOD (Du et al., 2023) to understand the differences in the quality and diversity of the generated images. Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature in the following ways: * **Out-of-Distribution (OOD) Detection:** The paper addresses the problem of OOD detection, which is a well-established area of research in machine learning. * The goal of OOD detection is to identify inputs that come from a different distribution than the one the model was trained on. * **Data Augmentation with Diffusion Models:** The paper leverages the power of diffusion models for data augmentation, which is a rapidly growing area of research. * Diffusion models have shown remarkable success in generating high-quality and diverse images. 
* The paper's approach is related to other works that have used diffusion models for data augmentation, including those that perform image generation with semantic guidance and those that use perturbation-based approaches to synthesize augmented images. * **Synthesizing OOD Data:** A core contribution of the paper is the development of a novel framework, BOOD, for synthesizing OOD data. * This contribution is closely related to prior work that has explored the use of auxiliary outlier datasets to improve OOD detection. * **Explicitly Generating OOD Images near Decision Boundaries:** BOOD's innovation lies in its ability to generate image-level OOD data located around the decision boundaries between classes. Essential References Not Discussed: The key contribution of this paper relies primarily on identifying the in-distribution (ID) features located near the decision boundary and subsequently perturbing these features away from that boundary to generate out-of-distribution (OOD) data. To provide context for this approach, it would be valuable to include references [1,2], as they propose methods for analyzing and approximating the distance to the decision boundary using adversarial attack concepts similar to those employed in this paper. These works offer complementary perspectives on understanding and characterizing decision boundaries in deep neural networks. ### References: 1- Mickisch, David, et al. Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study. arXiv:2002.01810, arXiv, 5 Feb. 2020. arXiv.org, https://doi.org/10.48550/arXiv.2002.01810. 2- Yousefzadeh, Roozbeh, and Dianne P. O’Leary. “Deep Learning Interpretation: Flip Points and Homotopy Methods.” Proceedings of The First Mathematical and Scientific Machine Learning Conference, PMLR, 2020, pp. 1–26. proceedings.mlr.press, https://proceedings.mlr.press/v107/yousefzadeh20a.html. 
Other Strengths And Weaknesses: **Strengths:** * **Originality:** * The paper introduces a novel framework, BOOD, for generating OOD data. * BOOD's originality lies in its explicit focus on generating OOD images located around the decision boundaries between classes. * **Significance:** * The proposed BOOD framework offers a promising approach to improve OOD detection by generating informative OOD data. * The experimental results demonstrate that BOOD outperforms state-of-the-art methods on benchmark datasets, highlighting the significance of the contribution. * **Clarity:** * The paper is generally well-written and easy to follow. * The problem is clearly defined, and the proposed approach is well-motivated. **Weaknesses:** * **Hyperparameter Sensitivity:** * The performance of BOOD depends on several hyperparameters, such as step size, perturbation steps, pruning rate, and OOD regularization weighting. * Although the authors conduct a hyperparameter sensitivity analysis, the process of tuning these parameters for new datasets or applications may be challenging. * **Generalization to More Complex Datasets:** * The experiments are conducted on CIFAR-100 and IMAGENET-100. * While these are standard benchmark datasets, it is not clear how well BOOD would generalize to more complex datasets or to Near-OOD test sets, as the ones in the paper are considered Far-OOD. Other Comments Or Suggestions: NA Questions For Authors: 1 - The authors mention that they "employ a class embedding alignment strategy during the image encoder training following Du et al. (2023)". It would be helpful to understand why this strategy is important for generating OOD images. Does this alignment ensure compatibility between the feature space and the diffusion model? 2- The authors use an adversarial perturbation strategy to identify ID boundary features. 
How sensitive is this boundary identification process to the choice of hyperparameters, such as the step size α and the maximum iteration number K? How do these parameters influence the accuracy of identifying features closest to the decision boundary? 3- The authors compare BOOD with several state-of-the-art OOD detection methods. Could they discuss how BOOD's performance might be affected by the choice of the backbone architecture or the diffusion model used for image generation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to reviewer Smhu > Essential References Not Discussed. Thanks for the recommendation of essential references. We will add them to the camera-ready paper. > Weakness 1: the process of tuning hyperparameters may be challenging. We appreciate the reviewer's meaningful concern. While fine-tuning the parameters $c$ and $\alpha$ may be somewhat time-consuming, we would like to emphasize that the effects of $r$, $\beta$ and $K$ are **not remarkable** (Figure 6): the difference in AUROC is within 0.5% when $r < 10$ and within 0.4% for $\beta \in \{1.5, 2, 2.5, 3\}$, and the performance does not change if $K > 50$. Thus, our suggestions for tuning the hyperparameters can make the tuning process less difficult. As a next step, we are working on a possible automatic adaptive method to adjust the additional perturbation steps $c$, aiming to reduce the tuning time. > Weakness 2: BOOD's performance on ImageNet-200, CIFAR-10 and NearOOD datasets. Since our baseline method DreamOOD[1] does not report performance on ImageNet-200 and CIFAR-10, we did not choose them as ID datasets, to guarantee fair comparison. Due to the limited author response time, we cannot present results on these two datasets, but we will include them in our camera-ready paper. We summarize the experimental results on two selected Near-OOD datasets, **NINCO and SSB-hard**, below: |Method|NINCO FPR95 &#8595;|NINCO AUROC &#8593;|SSB-hard FPR95 &#8595;|SSB-hard AUROC &#8593;|Average FPR95 &#8595;|Average AUROC &#8593;| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |DreamOOD[1]|57.08|89.51|77.29|74.77|67.19|82.14| |BOOD|54.37|89.82|75.48|80.82|64.93|85.32| From the table above we can see that on the Near-OOD datasets, BOOD shows **better performance** than DreamOOD[1], indicating that BOOD can handle more complex OOD datasets. > Question 1: importance of the class embedding alignment strategy; compatibility between the feature space and the diffusion model. 
We appreciate the reviewer's reasonable concern. We aim to create a latent space that is compatible with the input space of the diffusion model, so alignment between the image features and their corresponding class token embeddings is necessary. By training an image encoder with **Formula (2)**, we ensure that the image features can be decoded by the diffusion model. Please see **Section 3.1** of our paper for a detailed explanation. > Question 2: sensitivity of the hyperparameters $\alpha$ and $K$ with respect to the accuracy of identifying features closest to the decision boundary. While it is difficult to measure this accuracy directly, our ablation of OOD feature synthesis methodologies (Sec 4.3.1 and Table 3) shows that **perturbing random ID features to the boundaries significantly decreases OOD detection performance** compared to using the ID features closest to the decision boundary. Therefore, we can assess the accuracy of identifying features closest to the decision boundary via the final OOD detection performance. Employing **a relatively small $\alpha$** facilitates a more nuanced differentiation between samples, whereas a large $\alpha$ may lead to a **large discrepancy** between successive iterations of the adversarial perturbation, making the step-count estimate of the distance to the decision boundary inaccurate. Selecting **a relatively large maximum iteration number $K$** ensures **comprehensive boundary crossing for most features**. While an increased $K$ does add computational overhead to boundary identification, **the impact remains manageable**. We provide a detailed discussion of the sensitivity of the hyperparameters $\alpha$ and $K$ in Section 4.3.2, Figure 5 (left) and Figure 6 (right). > Question 3: impact of the backbone architecture and the diffusion model on BOOD's performance. 
We provide a comparison of BOOD's performance with ResNet-18, ResNet-34 and ResNet-50 backbones, using CIFAR-100 as the ID dataset: |Backbone|FPR95 &#8595;|AUROC &#8593;|ID ACC &#8593;| |:-:|:-:|:-:|:-:| |ResNet-18|10.83|97.74|78.11| |ResNet-34|10.67|97.42|78.03| |ResNet-50|11.23|96.89|79.68| From the table above we can see that the choice of backbone architecture does **not significantly affect the performance of BOOD.** For datasets that have a **large domain discrepancy** from the diffusion model's training distribution (e.g., Textures), BOOD's performance might be affected by the **diffusion model's capability**. However, this **constraint is inherent to all methodologies utilizing diffusion-based generative data augmentation**. While future developments in generative modeling may address these limitations, **we emphasize that our primary goal is to leverage diffusion models to generate informative OOD images**, thereby increasing the OOD detection model's performance. We believe we have responded to all your concerns. If anything remains unclear, we would be pleased to discuss further with you. [1] Xuefeng Du, Yiyou Sun, Xiaojin Zhu and Yixuan Li. Dream the impossible: Outlier imagination with diffusion models. NeurIPS, 2023
Summary: This paper introduces a framework for generating synthetic out-of-distribution (OOD) data by explicitly targeting decision boundaries in latent feature space. The proposed method, BOOD, employs an adversarial perturbation strategy to identify in-distribution (ID) features closest to the decision boundary and perturbs them along the gradient-ascent direction to synthesize informative OOD features. These features are then decoded into human-compatible OOD images using a diffusion model. Experiments and ablation studies are presented to validate the effectiveness of the framework.

Claims And Evidence: yes

Methods And Evaluation Criteria: yes

Theoretical Claims: yes

Experimental Designs Or Analyses: yes

Supplementary Material: yes

Relation To Broader Scientific Literature: The paper could have broader impact on OOD detection, open-world pattern recognition methods, and image generation.

Essential References Not Discussed: no

Other Strengths And Weaknesses:
Strengths:
1. The paper is well written and easy to follow.
2. The idea of exploring OOD features near the boundary and using diffusion models to generate images from OOD features is reasonable and interesting.
3. The method achieves better results than the compared methods on most evaluated datasets.
4. The ability to generate image-level OOD samples would be beneficial to other domains of the community.

Weaknesses:
1. The compared methods are from 2023 and earlier; how does the performance compare with the most recent works?
2. The discussion of OOD methods misses most works from 2024.
3. The complexity analysis in the appendix should be included in the main text, as the timing of diffusion-based methods is always a point of interest for researchers in the community. Besides, the details of the complexity analysis are unclear. For instance, what dataset is used for Table 5, and what is the timing when generating samples one by one? Are the memory requirements in Table 6 for training? What about inference?
4.
There are many hyper-parameters. Although the paper provides curves for different values, setting them for a new dataset is still not easy. This could be a disadvantage of the method for practical applications.

=================== post rebuttal ========================

After reading the rebuttal and the other reviews, I would like to raise my score from weak accept to accept.

Other Comments Or Suggestions: The quality of the figures can be improved. The size of the tables in the appendix can be adjusted.

Questions For Authors: no

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to reviewer wxMm

We thank the reviewer for the comments.

> Weakness 1: The compared methods are from 2023 and earlier; how does the performance compare with the most recent works?

Thank you for your concern regarding baseline methods. Please note that our work is based on using diffusion models to generate image-level OOD datasets; there are few works in this specific area. We provide a comparison between BOOD and **a SOTA method**, FodFoM [1] from ACM MM 2024, which also harnesses a diffusion model to generate OOD images for enhancing the OOD detection model. The results are summarized in the table below:

| | SVHN | | LSUN-R | | LSUN-C | | iSUN | | Textures | | Places365 | | Average | |
|:-:|:-:|:--:|:-:|:-:|:--:|:---:|:--:|:-:|:--:|:-:|:-:|:-:|:-:|:-:|
| Method | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; | FPR95 &#8595; | AUROC &#8593; |
| FodFoM | 33.19 | 94.02 | 28.24 | 95.09 | 26.79 | 95.04 | 33.06 | 94.45 | 35.44 | 93.38 | 42.30 | 90.68 | 33.17 | 93.78 |
| BOOD | 5.42 | 98.43 | 0.10 | 99.94 | 2.06 | 99.25 | 0.22 | 99.91 | 5.10 | 98.74 | 40.55 | 90.76 | 8.91 | 97.84 |

Compared to FodFoM [1], **BOOD demonstrates superior performance**, illustrating its competitiveness.

> Weakness 2: The discussion of OOD methods misses most works from 2024.

Thanks for your question regarding the discussion of recent OOD methods. We will include them in our camera-ready version.

> Weakness 3: The complexity analysis in the appendix should be included in the main text, as the timing of diffusion-based methods is always a point of interest for researchers in the community. Besides, the details of the complexity analysis are unclear. For instance, what dataset is used for Table 5, and what is the timing when generating samples one by one? Are the memory requirements in Table 6 for training?
What about inference?

Thanks for pointing this out. We will include the detailed complexity analysis in the main text of our revised manuscript. Regarding the dataset used in Table 5, we use **CIFAR-100** as the ID dataset. It takes around **4** seconds to generate a single image in our framework with **Stable Diffusion v1.4** on **one NVIDIA L40S GPU**. For Table 6, the memory requirements indicate the space needed for **storing the generated OOD images** used for training the final OOD detection model. There are no extra memory requirements at inference time.

> Weakness 4: There are many hyper-parameters. Although the paper provided curves of using different values, setting them for a new dataset is still not an easy job. This could be a disadvantage of this method for practical applications.

We appreciate the reviewer's meaningful concern. While fine-tuning the parameters $c$ and $\alpha$ may be somewhat time-consuming, we would like to emphasize that the effects of $r$, $\beta$ and $K$ are **not remarkable** (Figure 6): the differences in AUROC are within 0.5% when $r < 10$ and within 0.4% for $\beta \in \{1.5, 2, 2.5, 3\}$, and the performance does not change when $K > 50$. Thus, our suggestions for tuning the hyperparameters make the tuning process less difficult. As a next step, we are working on a possible automatic adaptive method to adjust the additional perturbation steps $c$, aiming to reduce the tuning time.

> Weakness 5: The quality of figures can be improved. The size of tables in the appendix can be adjusted.

Thank you for your advice; we will polish all the figures and tables in the camera-ready version of our paper.

We thank the reviewer again for taking the time to read our response and for the positive feedback. If you have any further concerns, we are willing to discuss them with you.

[1] Jiankang Chen, Ling Deng, Zhiyong Gan, Wei-Shi Zheng and Ruixuan Wang.
"FodFoM: Fake Outlier Data by Foundation Models Creates Stronger Visual Out-of-Distribution Detector." ACMMM 2024.
The Lock-in Hypothesis: Stagnation by Algorithm
Accept (poster)
Summary: This paper examines how feedback loops in human–LLM interactions can lead to belief lock-in, where dominant views become entrenched while conceptual diversity declines. The authors propose the lock-in hypothesis and support it with three approaches: (1) empirical analysis of the WildChat-1M dataset, showing a significant decline in user-generated concept diversity over time; (2) a natural-language simulation where a shared knowledge base collapses into a single topic; and (3) a formal Bayesian model demonstrating conditions under which beliefs become locked in. The findings suggest that LLMs, through iterative training and continued user interactions, may reinforce user opinions and limit diversity in discourse.

Claims And Evidence:

Claim 1: Empirical evidence from real-world data supports diversity loss driven by human–LLM feedback loops (§3)
1. The demographic and cultural composition of WildChat users is not specified, raising the possibility of selection bias. For example, newer ChatGPT users may inherently adapt faster to mainstream discussions, leading to topic convergence that reflects user characteristics rather than an intrinsic lock-in effect. A four-month gap in GPT-4 data may introduce discontinuities that interfere with trend analysis (§3.1).
2. The novel $D_{\text{lineage}}$ metric relies on hierarchical clustering of concepts but lacks validation against established diversity measures (e.g., topic entropy, Jaccard diversity), making it difficult to rule out metric bias. Concept extraction depends on embedding-based clustering (Appendix C.2), but the authors acknowledge that clustering quality is suboptimal (Figure 10), which may impact the reliability of $D_{\text{lineage}}$.
3. RKD can only detect discontinuities but does not account for external factors (e.g., user growth, platform updates) that might coincidentally align with model updates.
Without controlling for confounders such as user engagement fluctuations, causal attribution remains uncertain.

Claim 2: Simulation experiments demonstrate feedback loops leading to knowledge base collapse (§4)
1. User behavior is overly simplified, as updates follow a fixed add/swap template (Appendix C.3), omitting real-world actions such as deletions, deep modifications, and cross-topic synthesis. This may overestimate the inevitability of lock-in. The Llama-3-8B model is used to simulate users, but its response patterns may significantly differ from real human behavior, potentially exaggerating conformity effects.
2. Only two simulation runs are presented, making it unclear whether the observed knowledge collapse is a robust trend or an artifact of the initial knowledge base structure (e.g., a high initial share of "research integrity" topics).

Claim 3: Theoretical modeling proves that lock-in is an inevitable result of feedback loops and moderate trust (§5)
1. The trust matrix $W$ is assumed to be static and symmetric, whereas in reality, user trust in LLMs ($\lambda_2$) likely fluctuates over time due to model errors and evolving user perceptions. Individual heterogeneity is ignored—some users may resist LLM influence, yet the model assumes a homogeneous population.

Overall Consistency of the Evidence Chain: The connection between diversity decline in WildChat and knowledge collapse in the simulation remains indirect. There is no clear mechanism explaining how users internalize LLM outputs as updates to their knowledge structure.

Methods And Evaluation Criteria: The paper employs three methodologies—empirical analysis (WildChat-1M), LLM-based simulations, and formal modeling—which together provide a structured investigation into the lock-in hypothesis. However, as noted in *Claims and Evidence*, each method has limitations that weaken causal inference and generalizability.
The empirical study lacks strong controls for external confounders, making it unclear whether the observed diversity decline is due to LLM-driven feedback loops or broader social trends. The simulation oversimplifies user behavior and lacks robustness tests across different initial conditions.

Theoretical Claims: No obvious errors were found in the derivations.

Experimental Designs Or Analyses:
1. Knowledge Base Truncation Strategy. The simulation forces the knowledge base to be truncated at 100 items (Appendix C.1) using an elimination strategy based on "importance ranking." However, this importance is subjectively judged by simulated users and may be influenced by the LLM output. This truncation mechanism could artificially accelerate diversity loss, as new entries may systematically replace older ones. If the LLM tends to repeat mainstream viewpoints, the truncation strategy might reinforce the lock-in effect.

Supplementary Material: I checked most of the content, such as Appendix B (theoretical proofs) and Appendix C (natural-language simulation details), but some details may be missing.

Relation To Broader Scientific Literature: The paper connects to echo chamber research (e.g., RecSys polarization) but inadequately distinguishes LLMs’ collective belief amplification (via iterative human–AI interaction) from RecSys’ personalized bias. While prior work demonstrates short-term LLM influence (e.g., Jakesch et al., 2023), this study uniquely theorizes systemic diversity loss through cross-scale feedback loops. Links to model collapse (synthetic data recursion) and Bayesian social learning remain underdeveloped; clearer differentiation is needed (e.g., human–AI trust dynamics vs. synthetic data degradation). The formal model’s novelty lies in codifying mutual trust between humans and LLMs—a gap in prior social learning frameworks.
Essential References Not Discussed: None

Other Strengths And Weaknesses: The paper creatively merges ideas from recommendation system echo chambers, iterated learning, and information cascades, applying them to large language models. This interdisciplinary angle—connecting social science theories of belief reinforcement with modern LLM-based interactions—adds novelty and broadens the conversation about AI’s societal impact.

Other Comments Or Suggestions: None

Questions For Authors:
1. The knowledge collapse result relies on two simulation runs with Llama-3-8B. Have you tested other models (e.g., GPT-4, Mistral) or varied the initial knowledge base (e.g., seeding with high initial diversity)? Could the observed collapse reflect Llama-3’s inherent biases rather than a generalizable feedback loop?
2. Did you control for platform-wide changes during the data collection period (e.g., ChatGPT interface updates, viral news events)? How do you ensure that diversity loss is not driven by exogenous factors coinciding with model updates?
3. The model assumes static, homogeneous trust (λ₁, λ₂). How would incorporating time-varying or heterogeneous trust (e.g., some users distrusting the LLM) affect the lock-in threshold (N−1)λ₁λ₂ = 1?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Summary of updates on WildChat analysis

| | gpt-4-turbo kink | gpt-3.5-turbo kink1 | gpt-3.5-turbo kink2 | user-wise regression |
|---------------------|------------------|---------------------|---------------------|----------------------|
| Lineage diversity | Negative ($p<.05$) | Negative ($p<.05$) | Negative ($p<.05$) | Negative ($p<.05$) |
| Topic entropy | Negative ($p<.05$) | Negative ($p<.05$) | $p>.05$ | Negative ($p<.05$) |
| Pairwise Jaccard distance | Negative ($p<.05$) | Negative ($p<.05$) | $p>.05$ | Negative ($p<.05$) |

Here, "negative" indicates that a statistically significant negative impact of the GPT version update on diversity is found. In the user-wise regression, we controlled for user identity, time, language, conversation statistics (length, etc.), and the pre-/post-availability gap for GPT-4, and tested the impact of `num_updates_before` (how many version updates have happened before this point) on a user's concept diversity within a 3-day period. Since we already controlled for time, this quantity indicates the counterfactual *acceleration* of diversity loss due to version updates. Since `num_updates_before` as the independent variable indicates *sustained* impact, we rule out the factor of "people rushing to try things at the date of a version update". Substitution effects with other providers like Anthropic exist, but Anthropic model releases do not coincide with GPT model version updates (notably, these are *not* the releases of new GPT models), and so are unlikely to introduce discontinuities that disrupt RKD.

### Analysis details

We find causal evidence of sustained diversity loss induced by model version updates, even after controlling for a range of confounders, testing 3 different diversity metrics, and selecting the subset of strongly value-laden (e.g. political/religious) content from the interaction dataset.

Setup:
- Filtered concepts to only leave the top 2.5% most value-laden (i.e. political/religious/moral) ones.
- Removed conversations that involve long templated prompts; those are probably people who use the platform as a free API, rather than real users.
- Per-user regression on this subset of concepts again shows a negative impact of GPT version updates on diversity, including after controlling for the confounders above. Results are robust with respect to the choice of model family, the cutoff for user engagement level, etc.

### "The model assumes static, homogeneous trust"

We have not had time to do this yet, but can confirm that we can incorporate dynamic trust (to better simulate real-world use) and include a new proof in the final publication.

### Simulation truncation strategy

A new round of simulation experiments suggests that there was no obvious sign of lock-in when truncation was removed (1 experiment, 300 rounds). We did not do more runs because the Pólya urn model predicts this result well.

### Redesign of simulations

In light of this, we iterated the design of our simulations. This time, we run simulations that are better aligned with our formal modeling and analytical simulation (the exact implementation of the formal model): we set up an LLM tutor to assist agents in approaching the true underlying Gaussian distribution (as opposed to the analytical simulation, where the "tutor" merely averages agents' empirical observations). The LLM tutor acquires agents' empirical observations in context and returns its own belief, based on this prompt, to the agents. The agents assign trust to the LLM tutor's belief because of its perceived credibility, without knowing that the tutor's belief is based on collective agent beliefs. This LLM-based simulation is one step away from the simplified analytical simulation and one step closer to how LLMs are actually used by people in the real world: *LLM ways of knowing* replace *empirical ways of knowing*.
The results are encouraging: in ~5 runs of this simulation, we observe a similar lock-in phenomenon: collectively, agents and tutor are locked into false beliefs with high confidence (precision). Although the task is still relatively simple (compared to the analytical simulation setup, where the "tutor/knowledge authority" simply calculates the group average as its own belief), here the LLM tutor acquires its belief through in-context learning over all user beliefs. This points to a direction for more effective simulation runs: if given access to both empirical truth updating and LLM querying, where the LLM has no empirical ground truth but only user beliefs while users assign high trust to the LLM output, would human users collectively converge at false beliefs? Our formal modeling predicts the lock-in result, and our simulations verify this intuition. For next steps, we can run more realistic tasks that help us understand human–LLM interaction dynamics that are otherwise not available due to the limitations of time-series data.

---

Rebuttal Comment 1.1: Comment: Thank you for your reply. While I remain a bit skeptical about the possibility of fully simulating complex human behavior, I do believe it’s important to encourage this line of research
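For concreteness, the two additional diversity metrics reported in the rebuttal's table (topic entropy and pairwise Jaccard distance) can be computed roughly as follows; this is an illustrative sketch, not necessarily the exact implementation used in the analysis:

```python
from collections import Counter
from itertools import combinations
from math import log

def topic_entropy(topics):
    """Shannon entropy of the topic distribution (higher = more diverse)."""
    counts = Counter(topics)
    n = len(topics)
    return -sum(c / n * log(c / n) for c in counts.values())

def mean_pairwise_jaccard_distance(concept_sets):
    """Average Jaccard distance 1 - |A∩B|/|A∪B| over all pairs of users'
    (non-empty) concept sets; higher = more diverse."""
    pairs = list(combinations(concept_sets, 2))
    return sum(1 - len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

uniform = ["law", "art", "code", "faith"]       # diverse topic mix
collapsed = ["code", "code", "code", "faith"]   # converged topic mix
assert topic_entropy(uniform) > topic_entropy(collapsed)

disjoint = [{"law"}, {"art"}, {"code"}]         # users discuss different concepts
shared = [{"code"}, {"code"}, {"code", "art"}]  # users overlap heavily
assert mean_pairwise_jaccard_distance(disjoint) > mean_pairwise_jaccard_distance(shared)
```

Both metrics decrease as users' concepts converge, which is the direction of the "negative" effects reported in the table.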
Summary: The paper proposes the "lock-in" hypothesis and presents a series of empirical and simulated experiments, and theoretical analysis, to provide evidence for the hypothesis.

Claims And Evidence: The main claim is that LLM/human interactions induce a feedback loop which forms echo chambers that lead to a loss of diversity in human beliefs. I think the main claim is stated in a rather vague manner, and the paper presents some but not enough evidence to fully support this claim.

Methods And Evaluation Criteria: N/A

Theoretical Claims: I am not an expert, and I did not check the correctness of the theory. However, the model used for analysis seems very simple (Gaussian with unknown mean). It's not clear how the conclusions drawn from the theory might translate to the much more complex LLM/human user interaction feedback loop.

Experimental Designs Or Analyses: While I think it is nice that the paper analyzes the behavior of real LLM/human interaction data, I don't think the results necessarily provide evidence for the stated hypothesis. In Figure 1A, there is such great local variation in conceptual diversity that I'm not sure the global difference in average conceptual diversity is meaningful over this relatively small timescale. In Figure 1B, while the superimposed orange lines show discontinuities in the data, I don't think the raw data show evidence for this in GPT-3.5-turbo-0613 and GPT-3.5-turbo-0125. I'm not exactly sure how the posterior mean (superimposed orange lines) is calculated, but it doesn't seem to match the data very well. I also found it a bit hard to parse the significance of the natural-language simulation experiments. I think the issue is that there is not enough detail in the main paper regarding the setup of the experiment, which makes it hard to see how this experiment might relate to or model interactions and feedback loops between real LLMs and users.
For instance, some questions I had include: what is the structure of the collective knowledge base? What do the interactions between the users and tutors look like? What are some real examples of questions and responses between users and tutors? How does a user decide when and what to update in the knowledge base? How does the LLM get updated based on the knowledge base?

Supplementary Material: No

Relation To Broader Scientific Literature: While feedback loops between LLMs and users have been studied in prior work, this paper focuses on the dynamics of human beliefs and the potential creation of echo chambers.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths
- interesting premise
- important implications for society

Weaknesses
- the hypotheses are vague and not easily measurable
- not enough evidence to support the hypothesis
- different sections present slightly different hypotheses and it's not clear how they all fit together

Other Comments Or Suggestions: Overall, I think this paper raises some interesting ideas and could make a compelling position paper. However, I don't think there is enough evidence presented to support the main claims for a conference paper. I also think this paper could be greatly improved if it provided a single unifying hypothesis about this "lock-in" phenomenon, described each component of the hypothesis in a concrete and measurable manner, and then instantiated the real, simulated, and theoretical analyses as evidence for this single unifying hypothesis.

Questions For Authors: See above sections

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: ### "There is not enough detail in the main paper regarding the setup of the experiment"

All details requested here are presented in either the main text or the appendix.
- "Structure of the collective knowledge base?" This is in 4.1 under the bold text "knowledge base". Some examples of knowledge bases are in Figure 4, and more details are in Appendix C (incl. C.5.1 initial knowledge base; C.5.2 knowledge base at time step 100).
- "What do the interactions between the users and tutors look like?" The interaction logic is in Figure 4, and their roles are introduced under "Tutor" and "Users". Again, details are in the appendix: C.3 prompts in simulations.
- "What are some real examples of questions and responses between users and tutors?" This is in C.4 interaction history.

### Evidence to support the hypothesis

We find causal evidence of sustained diversity loss induced by model version updates, even after controlling for a range of confounders, testing 3 different diversity metrics, and selecting the subset of strongly value-laden (e.g. political/religious) content from the interaction dataset.

| | gpt-4-turbo kink | gpt-3.5-turbo kink1 | gpt-3.5-turbo kink2 | user-wise regression |
|---------------------|------------------|---------------------|---------------------|----------------------|
| Lineage diversity | Negative ($p<.05$) | Negative ($p<.05$) | Negative ($p<.05$) | Negative ($p<.05$) |
| Topic entropy | Negative ($p<.05$) | Negative ($p<.05$) | $p>.05$ | Negative ($p<.05$) |
| Pairwise Jaccard distance | Negative ($p<.05$) | Negative ($p<.05$) | $p>.05$ | Negative ($p<.05$) |

Here, "negative" indicates that a statistically significant negative impact of the GPT version update on diversity is found.
In the user-wise regression, we controlled for user identity, time, language, conversation statistics (length, etc.), and the pre-/post-availability gap for GPT-4, and tested the impact of `num_updates_before` (how many version updates have happened before this point) on a user's concept diversity within a 3-day period. Since we already controlled for time, this quantity indicates the counterfactual *acceleration* of diversity loss due to version updates.

### "Different sections present slightly different hypotheses and it's not clear how they all fit together"

This has been a main challenge of this paper: it is hard to control confounders in WildChat, hence it is hard to connect the three methodologies with a unified hypothesis. Among other contenders, our most non-ambiguous definition of the lock-in hypothesis is: due to the feedback loops between humans and LLMs (LLMs acquire knowledge from human data and humans acquire knowledge from LLMs), human users and LLMs will irreversibly and collectively converge at false beliefs (if this is still vague, see the formal model in 5.3, notably Theorem 5.2). But we cannot do experiments to determine whether "users converge at false beliefs" and "LLM-based chatbot updates" are causally related. That being said, to address this concern, we redesigned the LLM-based simulation (Section 4) so that it is better aligned with our definition of lock-in (captured by the formal model in Section 5): we set up an LLM tutor to assist agents in approaching the true underlying Gaussian distribution (as opposed to the analytical simulation, where the "tutor" merely averages agents' empirical observations). The LLM tutor acquires agents' empirical observations in context and returns its own belief to the agents. The agents assign high trust to the LLM tutor's belief because of its perceived credibility, without knowing that the tutor's belief is absorbed from collective agent beliefs.
The results are encouraging: in ~5 runs of this simulation, we observe a similar lock-in phenomenon: collectively, agents and tutor are locked into false beliefs with high confidence (precision).

### Connecting the hypothesis

In light of the encouraging results from the new simulations, we further develop the lock-in hypothesis: due to the feedback loops between humans and LLMs (LLMs acquire knowledge from human data and humans acquire knowledge from LLMs), human users and LLMs will irreversibly and collectively converge at false beliefs.

### "the hypotheses are vague and not easily measurable"

As mentioned, we iterated a new version of the lock-in hypothesis, which is better captured by formal modeling (than by data analysis). Despite the empirical difficulties of "measuring collective false beliefs", in both the formal modeling and the LLM-based simulations the hypothesis can be tested: because of the mutual updates between LLMs and users, collectively they converge at false beliefs (Figure 5c demonstrates this). Could you elaborate a bit more if you still think this is not measurable?
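The lock-in intuition described above (agents assigning high trust to a tutor whose belief is itself derived from the group keeps collective beliefs anchored to a false starting point) can be illustrated with a deterministic toy loop. The parameterization here is hypothetical and far simpler than the paper's Bayesian model with a trust matrix:

```python
def group_belief(trust, rounds=50, true_mean=0.0, init_belief=2.0):
    """Toy deterministic feedback loop: each round, every agent mixes the
    'tutor' belief (the current group average) with a fresh observation of
    the true mean.  Hypothetical parameterization, not the paper's model."""
    belief = init_belief               # shared (false) starting belief
    for _ in range(rounds):
        tutor = belief                 # tutor merely echoes the group average
        belief = trust * tutor + (1 - trust) * true_mean
    return belief

high_trust = group_belief(trust=0.98)  # belief stays near the false start
low_trust = group_belief(trust=0.20)   # belief snaps to the truth quickly
assert abs(high_trust - 0.0) > 0.5 > abs(low_trust - 0.0)
```

With high trust the deviation from the truth decays only by a factor of `trust` per round, so the group remains far from the true mean long after low-trust agents have converged; this mirrors the role of the trust parameters in the lock-in threshold.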
Summary: This paper studies a very interesting problem, noted as the lock-in hypothesis: that during long-term development and evolution, language models' topics and beliefs are reinforced by users' preferences and the feedback loop, effectively creating an echo chamber. The authors use the WildChat dataset, collected over one year across ChatGPT's version iterations, to study this phenomenon, and provide supporting evidence showing that reinforcement from collective user feedback may be one of the causes of model stagnation.

Claims And Evidence: The claims are well studied using a solid dataset (WildChat) and meaningful methodology.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no strong theoretical claims.

Experimental Designs Or Analyses: The experiments are meaningful. Although there could be alternative hypotheses and there might be better data collection methods, the experiments conducted in this work are, I would say, a best effort and already give meaningful signals.

Supplementary Material: I have checked the topic identification and exploration part.

Relation To Broader Scientific Literature: This work may interest a wider social science community that studies how humans and AI co-evolve.

Essential References Not Discussed: No

Other Strengths And Weaknesses: I would like to point out certain alternative hypotheses for the stagnation phenomenon:
- When the WildChat dataset was created, ChatGPT had only recently been released, and society was still very curious about what AI models can do. People just wanted to try the model for free. So naturally, the early distribution of prompts may explore different topics because people would like to find out what the model can do.
- By the time GPT-4 was released, people were relatively more familiar with the model's boundaries, and there was a certain level of consensus that coding/education/daily-helper tasks are what the model does well enough.
So consequently, the loss of diversity may not be because humans reinforce their preferences or beliefs, but because people realized that certain prompts tried in early 2023 may not be achievable by the model. So the prompt distribution naturally converges to the model's capability boundary. Counterfactually, if the model were strong enough to handle the hard queries, that part of the diversity might be maintained.
- The model's capability/topic distribution may be shaped not only by user preference, but also by companies' strategies. For example, Anthropic strategically wants to do enterprise business and intentionally enhances its models' coding capability while the GPT models do not. Consequently, certain coding problems originally sent to ChatGPT may have been redirected to Claude later on.

All these factors being said, I still believe this paper is a meaningful exploratory study, and user feedback, although not the full story, may be a strong contributor to model stagnation. I would also love to see how this type of human–AI co-evolution study develops in the future.

Other Comments Or Suggestions: None other than the comments above

Questions For Authors: None other than the comments above

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the feedback! We think the following results may help answer your question. We find causal evidence of sustained diversity loss induced by model version updates, even after controlling for a range of confounders, testing 3 different diversity metrics, and selecting the subset of strongly value-laden (e.g. political/religious) content from the interaction dataset.

| | gpt-4-turbo kink | gpt-3.5-turbo kink1 | gpt-3.5-turbo kink2 | user-wise regression |
|---------------------|------------------|---------------------|---------------------|----------------------|
| Lineage diversity | Negative ($p<.05$) | Negative ($p<.05$) | Negative ($p<.05$) | Negative ($p<.05$) |
| Topic entropy | Negative ($p<.05$) | Negative ($p<.05$) | $p>.05$ | Negative ($p<.05$) |
| Pairwise Jaccard distance | Negative ($p<.05$) | Negative ($p<.05$) | $p>.05$ | Negative ($p<.05$) |

Here, "negative" indicates that a statistically significant negative impact of the GPT version update on diversity is found. In the user-wise regression, we controlled for user identity, time, language, conversation statistics (length, etc.), and the pre-/post-availability gap for GPT-4, and tested the impact of `num_updates_before` (how many version updates have happened before this point) on a user's concept diversity within a 3-day period. Since we already controlled for time, this quantity indicates the counterfactual *acceleration* of diversity loss due to version updates. Since `num_updates_before` as the independent variable indicates *sustained* impact, we rule out the factor of "people rushing to try things at the date of a version update". Substitution effects with other providers like Anthropic exist, but Anthropic model releases do not coincide with GPT model version updates (notably, these are *not* the releases of new GPT models), and so are unlikely to introduce discontinuities that disrupt RKD.
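The regression-kink design (RKD) referenced in the rebuttals above looks for a change in the slope of a diversity trend at a known model-update date. A simplified sketch, without the covariate controls used in the actual analysis:

```python
import numpy as np

def kink_slope_change(t, y, kink):
    """Fit y ~ a + b*t + c*max(t - kink, 0): the coefficient c is the
    change in slope at the kink point (e.g. a model-version-update date).
    Simplified regression-kink design with no covariate controls."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - kink, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[2]

# Synthetic diversity series whose growth slows after the update at t=60.
t = np.arange(0.0, 100.0)
y = 5.0 + 0.1 * t - 0.3 * np.maximum(t - 60.0, 0.0)
c = kink_slope_change(t, y, kink=60.0)
assert abs(c + 0.3) < 1e-6   # recovers the negative post-update slope change
```

A significantly negative `c` at an update date is what the "Negative ($p<.05$)" cells in the table correspond to; discontinuities from confounding events near the kink date would bias this estimate, which is why the coincidence of other providers' releases matters.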
Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning
Accept (poster)
Summary: This manuscript introduces a flexible data-mixing framework for LLM that uses kernel ridge leverage scores (KRLS) computed from learned domain embeddings using a proxy model. It quantifies each domain’s representativeness and interdependence within the embedding space, and then uses this information to generate a weighted data mixture for both pretraining and finetuning. The comprehensive experiments and ablation studies demonstrate its practical benefits on both perplexity and downstream tasks by outperforming existing methods such as DoReMi and DoGE with a small computational overhead. ## Post Rebuttal ## Most of my concerns are properly addressed. I would like to raise my score. ## Post Rebuttal ## Claims And Evidence: Overall, most claims are supported by empirical evidence. However, some of the claims are not supported by clear and convincing evidence. 1. The authors claim that the method is computationally efficient. However, the submission does not include a detailed runtime analysis or complexity comparison with baseline methods. 2. The novelty of the manuscript is the ability to quantify each domain’s representativeness and interdependence in the learned embedding. However, the evidence provided does not clearly demonstrate how reliable or robust these kernel ridge leverage scores are across different scenarios. More analysis or visualization of these scores and their correlation with the performance would strengthen this claim. 3. The authors claim that the computed domain weights transfer directly to new data and different model scales without needing to retrain the proxy model. Although the provided experimental results are promising, the evidence could be more convincing if more experiments or detailed ablation studies were provided to thoroughly evaluate this transferability under a wider range of conditions. 4. 
The paper would be strengthened by an in-depth theoretical analysis explaining why kernel ridge leverage scores are particularly effective for domain reweighting. Methods And Evaluation Criteria: The proposed methods and evaluation criteria align well with the problem of efficient data mixing for LLM training. The use of kernel ridge leverage scores computed from learned domain embeddings makes sense for quantifying domain representativeness and interdependence. Improvements in perplexity and downstream task performance are standard metrics for evaluating language model training. Theoretical Claims: I have checked the theoretical proof in the appendix, which establishes the equivalence between the ridge leverage scores computed in the feature space and those obtained via kernel ridge regression when using a linear kernel. The proof, presented as Lemma A.1, correctly applies standard matrix identities to show that the diagonal elements of the corresponding hat matrices are equal. While some intermediate steps are missing, I did not find any logic issues. Beyond this lemma, most theoretical claims are supported by empirical evidence and ablation studies rather than fully formalized proofs. Overall, the proofs that were provided are correct, and no major issues were found. Experimental Designs Or Analyses: Yes, the experimental design and analyses are sound. However, I would suggest the authors include a runtime analysis to validate the claim that the proposed method is computationally efficient. Supplementary Material: I have reviewed every part of the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are well situated within the broader scientific literature, building upon and extending several established ideas, including (1) kernel methods and leverage scores; (2) data selection and mixing in LLM training; and (3) domain adaptation and representation learning. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: #### Strengths: 1. The paper presents a novel method that leverages the kernel ridge leverage score to reweight data domains, combining ideas from data mixing, transfer learning, and efficient training of LLMs. 2. The proposed work holds practical significance, as it is important for scaling LLM training without retraining expensive proxy models. 3. The extensive empirical results indicate improvements in both pretraining and finetuning, which validates the practical value of the proposed method. 4. The writing is clear and observation-driven, which makes it easy for readers to follow along and engage with how the authors approach the problem. #### Weaknesses: 1. While the paper claims computational efficiency, it lacks a detailed runtime analysis or formal complexity comparison with baseline methods. 2. The theoretical justification behind using KRLS is largely supported by empirical evidence rather than formal proofs. 3. The clarity regarding the transferability of domain weights across diverse datasets and model sizes could be improved with more detailed ablation studies and analysis. Other Comments Or Suggestions: Please see the Weaknesses section. Questions For Authors: 1. Could you provide a more detailed runtime analysis and complexity comparison of your method with baseline methods? 2. Can you elaborate on the theoretical insights behind why KRLS is effective for domain reweighting? 3. Could you discuss in more detail how the computed domain weights transfer across different datasets and model scales? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address all remaining concerns below:

> Q1. Runtime analysis and complexity comparison

Obtaining embeddings $x_i, \, i=1,\ldots,k$ requires a single forward pass for each $a \in B_i$ through the proxy $h_{\theta_p}(a)$; inference is fast as the proxy is a small model. Computing (KRLS) involves inverting a matrix of size $k \times k$ in $\mathcal{O}(k^3)$, which is computationally cheap since datasets typically have a small number $k$ of domains. We do not add any overhead to proxy training. In contrast, DoGE requires per-domain gradient computation at each iteration, and DoReMi runs inference of the reference model for perplexity comparisons. For a straightforward comparison, we report GPU hours below. **DoReMi and DoGE incur over 10% of the base model training cost, while Chameleon reduces it to under 2%.** These savings are particularly impactful for academic labs with limited computational resources.

**Table jzfM-1**: Runtime comparison.

|Method|GPU Hours|
|-|:-:|
|DoReMi|7.4|
|DoGE|6.3|
|Chameleon|0.8|
|684M base model|56|

See more details on the FLOPs computations in *Q2 of Reviewer ubpq*.

> Q2. Theoretical insights for the effectiveness of KRLS in domain reweighting

We use kernel ridge leverage scores (KRLS) to determine domain weights. KRLS is a well-established tool in data analysis. It quantifies the influence or importance of data points [Alaoui & Mahoney, 2015]. This property is leveraged in machine learning for tasks like density estimation [Pauwels et al., 2018] and novelty detection [Ducharlet et al., 2024; Lasserre & Pauwels, 2019]. The inverse KRLS is proportional to the Christoffel function value [Pauwels et al., 2018]. This relationship provides additional theoretical justification for our approach. Christoffel functions (Eq. 
(1) in [Pauwels et al., 2018]) precisely characterize the local density of the data distribution in the feature space, where higher values indicate denser regions. We compute the score $S_\lambda(D_i)$ of domain $i$ using Eq. (KRLS) on page 4. During pretraining, assigning higher sampling probability to domains with low KRLS (and thus high $S_\lambda^{-1}$/Christoffel value) upweights high-density data regions, which are most influential on base LMs' performance [1]. LLM finetuning aims to specialize on a novel specific task, requiring the model to learn differential features not fully captured during pretraining, so we instead prioritize the domains with high $S_\lambda$. Section 3.2 converts either $S_\lambda^{-1}$ or $S_\lambda$ into probability distributions $\alpha$ by applying softmax normalization. We will revise Section 3.2 to explicitly link the data mixing goal to KRLS and inverse KRLS, grounding it in statistical learning theory. We will make this discussion self-contained within the main text, incorporating analysis from Appendix A.

> Q3. More detail on how domain weights transfer across datasets and model scales

- **Transfer across model sizes**: Prior works (e.g., DoReMi, DoGE, RegMix) have shown that domain weights transfer well across model scales. To further validate this, we trained 1.2B models on SlimPajama and found that weights from an 82M proxy model effectively transfer to both 684M and 1.2B models. Notably, Chameleon achieves even greater improvements on larger models, highlighting its scalability. 
**Table jzfM-2:** PPL with 1.2B model

|Domain|Uniform|DoReMi|DoGE|Chameleon|RegMix|
|-|:-:|:-:|:-:|:-:|:-:|
|Arxiv|6.30|7.09|7.07|6.33|10.61|
|Book|28.25|32.66|27.83|24.63|27.55|
|CC|31.19|29.96|28.11|26.95|24.70|
|C4|34.74|33.05|31.06|29.58|31.94|
|Github|2.91|3.03|3.07|2.94|4.08|
|Stackexchange|6.01|6.44|5.80|5.76|9.54|
|Wikipedia|8.65|7.93|10.88|9.03|20.08|
|*Average PPL*|16.86|17.17|16.26|**15.03**|18.36|

**Table jzfM-3:** Downstream accuracy with 1.2B model

|Task|Uniform|DoReMi|DoGE|Chameleon|RegMix|
|-|:-:|:-:|:-:|:-:|:-:|
|ARC-E|39.4|41.2|41.9|42.4|43.0|
|COPA|64.0|66.0|63.0|61.0|66.0|
|HellaSwag|27.5|27.7|28.2|28.4|27.6|
|Lambada|17.9|17.3|18.7|21.6|20.7|
|LogiQA|22.0|24.0|22.0|21.2|20.7|
|MultiRC|57.2|57.2|57.2|57.2|56.9|
|OpenBookQA|15.0|13.6|13.8|16.4|17.4|
|PiQA|61.5|61.9|61.8|63.8|58.7|
|QQP|36.8|36.8|36.9|36.9|36.8|
|RACE|26.0|26.7|27.8|29.1|28.4|
|SciQ|69.7|68.3|69.0|72.6|72.0|
|SocialIQA|36.2|36.5|35.9|37.2|36.1|
|WinoGrande|52.8|49.6|48.9|51.5|50.0|
|*Average*|40.5|40.5|40.4|**41.5**|41.1|

- **Transfer across datasets**: We conduct ablation studies on the Pile (Section 4.2). Specifically, we retrain a proxy model on the Pile for reference. As shown in [Domain weights on the Pile](https://imgur.com/a/2Xnk56t), the weights from a proxy trained on the Pile (the blue column) align with the weights transferred from proxies trained on SlimPajama at various sizes (the other columns), confirming Chameleon's robust transferability.

[1] Mallen et al. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. ACL (2023).

--- Rebuttal Comment 1.1: Comment: Thank you for the response. Most of my concerns are properly addressed.
Summary: This paper introduces a new data mixing framework for language model pretraining and finetuning wherein the mixing weights for different domains are constructed from a domain affinity matrix generated via kernel functions on domain embeddings. This domain matrix can naturally be transformed into domain weights for pretraining (i.e., emphasizing broader and different domains) and finetuning (emphasizing similar domains). The key advantage of this framework is that it does not rely extensively on training proxy models, and thereby can be seen as a low-cost alternative to existing data mixing frameworks. Claims And Evidence: The paper is well-supported by rigorous numerical analysis. Methods And Evaluation Criteria: Yes Theoretical Claims: There are some theoretical results in the Appendix, which I briefly reviewed. Experimental Designs Or Analyses: The experiments are meaningful and sound. Supplementary Material: I briefly reviewed the supplementary material, focusing on the theoretical results and the intuition behind the method. Relation To Broader Scientific Literature: The paper adds to the data mixing literature, with specific focus on compute-efficiency and adaptation to new tasks. Essential References Not Discussed: The paper comprehensively covers the primary literature. Other Strengths And Weaknesses: Strengths: - The paper is well-written and timely, with rigorous experiments Weaknesses: - The reasoning behind obtaining domain weights from the KRLS is unclear. Is there some theoretical relationship or connection that can be derived to show why pretraining weights as designed, or finetuning weights as designed, are appropriate? I appreciate that the authors have provided some intuition but it would be beneficial to get more insight. - There doesn't seem to be a significant performance improvement from the mixing law, and indeed, the major selling point is that it achieves a competitive performance at orders of magnitude lower cost. 
It would be useful to emphasize this point further, especially in numerical results to better break down all cost calculations (e.g., in domain transfer). - It would be useful to include Data Mixing Laws as a baseline, at least for experiments on generalization [1] [1] Ye, Jiasheng, et al. "Data mixing laws: Optimizing data mixtures by predicting language modeling performance." arXiv preprint arXiv:2403.16952 (2024). Other Comments Or Suggestions: N/A Questions For Authors: Refer to Strengths & Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address all remaining concerns below:

> Q1. Reasoning behind obtaining domain weights from the KRLS

We use kernel ridge leverage scores (KRLS) to determine domain weights. KRLS is a well-established tool in data analysis. It quantifies the influence or importance of data points [Alaoui & Mahoney, 2015]. This property is leveraged in machine learning for tasks like density estimation [Pauwels et al., 2018] and novelty detection [Ducharlet et al., 2024; Lasserre & Pauwels, 2019]. The inverse KRLS is proportional to the Christoffel function value [Pauwels et al., 2018]. This relationship provides additional theoretical justification for our approach. Christoffel functions (Eq. (1) in [Pauwels et al., 2018]) precisely characterize the local density of the data distribution in the feature space, where higher values indicate denser regions. We compute the score $S_\lambda(D_i)$ of domain $i$ using Eq. (KRLS) on page 4. During pretraining, assigning higher sampling probability to domains with low KRLS (and thus high $S_\lambda^{-1}$/Christoffel value) upweights high-density data regions, which are most influential on base LMs' performance [1]. LLM finetuning aims to specialize on a novel specific task, requiring the model to learn differential features not fully captured during pretraining, so we instead prioritize the domains with high $S_\lambda$. Section 3.2 converts either $S_\lambda^{-1}$ or $S_\lambda$ into probability distributions $\alpha$ by applying softmax normalization. We will revise Section 3.2 to explicitly link the data mixing goal to KRLS and inverse KRLS, grounding it in statistical learning theory. We will make this discussion self-contained within the main text, incorporating analysis from Appendix A.

> Q2. Computational cost breakdown

Chameleon's main computational cost comes from 1) proxy training and 2) embedding extraction, with proxy training being dominant. 
In our setting, training an 82M proxy model requires $10^{17}$–$10^{18}$ FLOPs, while DoReMi and DoGE take longer to converge, leading to 5-10x higher costs (see line 299, "Stability and Practicality"). Embedding extraction requires only $10^{15}$ FLOPs (<1% of proxy training). Importantly, Chameleon avoids proxy retraining when domains change, incurring only embedding extraction costs. In contrast, DoReMi and DoGE incur their full proxy retraining FLOPs. Our method is also significantly cheaper in GPU hours; see *Response to Q1 of Reviewer jzfM* for more details and a complexity analysis. Beyond efficiency, Chameleon is also more stable, making it resource-efficient in practical use. Unlike DoReMi and DoGE, which are sensitive to hyperparameters, Chameleon remains robust; see *Q2 of Reviewer jX7F* for more details. Lastly, we note that Chameleon shows favorable accuracy behaviour on larger models as well, as shown in our additional 1.2B model experiments (see *Q3 of Reviewer jzfM*).

> Q3. Data Mixing Laws

We first provide discussions and then present empirical comparisons. Data Mixing Laws derive domain weights by leveraging scaling laws of training steps, model sizes, and data mixtures to predict the performance of large models trained on diverse data from small-scale training. This requires training multiple small proxy models with varying domain weights, making it more computationally expensive than ours, which trains just one proxy model. We use their reported domain weights to train a 684M model on SlimPajama. Since their weights are optimized with the Pile as the target, they may be suboptimal for SlimPajama. However, given the alignment of their objectives and overlap in data sources, we consider the comparison meaningful. **Chameleon outperforms Data Mixing Laws in both perplexity and downstream tasks** at a fraction of the cost. 
Data Mixing Laws' FLOPs are calculated for 4 different proxy sizes and 20 separate mixtures, where **our cost is 2 orders of magnitude lower**.

**Table ubpq-1:** PPL comparison with Data Mixing Laws

||Data Mixing Laws|Chameleon|
|-|:-:|:-:|
|Arxiv|7.55|8.31|
|Book|45.06|39.23|
|CC|44.21|40.11|
|C4|45.79|42.59|
|Github|4.01|4.20|
|Stackexchange|7.96|7.94|
|Wikipedia|16.20|13.90|
|Avg PPL|24.40|**22.31**|
|# Domains Over Uniform|4/7|4/7|
|FLOPs|$5.36\times10^{19}$|$1.36\times10^{17}$|

**Table ubpq-2:** Downstream accuracy comparison with Data Mixing Laws

|Task|Data Mixing Laws|Chameleon|
|-|:-:|:-:|
|ARC-E|34.5|37.8|
|COPA|59.0|61.9|
|HellaSwag|27.4|27.0|
|Lambada|14.7|15.1|
|LogiQA|26.0|22.6|
|MultiRC|57.2|57.2|
|OpenBook|25.2|14.4|
|PiQA|58.5|60.5|
|QQP|36.8|39.2|
|RACE|26.4|26.5|
|SciQ|57.2|64.3|
|Social IQA|36.1|35.7|
|WinoGrande|48.4|52.1|
|Average|39.0|**39.6**|

[1] Mallen et al. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. ACL (2023).

[2] Parmar et al. Data, data everywhere: A guide for pretraining dataset construction. ACL (2024).
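To make the complexity argument in this rebuttal concrete, the KRLS computation discussed in Q1 (a single $k \times k$ inversion over per-domain embeddings) can be sketched in a few lines. This is an illustrative sketch only: the linear kernel, the regularization value, and the softmax temperature are assumptions, not the paper's exact configuration.

```python
import numpy as np

def krls_scores(embeddings, lam=1e-2):
    """Kernel ridge leverage scores: diag(K (K + lam*I)^{-1}).

    embeddings: (k, d) array, one embedding per domain.
    The only heavy step is one k x k inversion -- O(k^3), which is
    negligible since datasets have a small number k of domains.
    """
    K = embeddings @ embeddings.T          # linear kernel, for illustration
    k = K.shape[0]
    H = K @ np.linalg.inv(K + lam * np.eye(k))
    return np.diag(H)

def domain_weights(scores, invert, temperature=1.0):
    """Softmax over S_lambda (finetuning) or its inverse (pretraining)."""
    s = 1.0 / scores if invert else scores
    z = np.exp((s - s.max()) / temperature)
    return z / z.sum()

# Toy example: 7 domains (e.g. SlimPajama) with random 16-d embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(7, 16))
s = krls_scores(emb)
pre = domain_weights(s, invert=True)       # pretraining: upweight dense regions
ft = domain_weights(s, invert=False)       # finetuning: upweight distinct domains
```

For a linear kernel, these leverage scores coincide with the feature-space ridge leverage scores (the equivalence proved in Lemma A.1), and the two softmax directions correspond to the pretraining and finetuning objectives described in Q1.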
Summary: Authors propose a method for data sampling for pretraining and finetuning language models. Their idea is to train a classifier, then extract the middle-layer word embeddings of the classifier for each domain in the training data, and then to do matrix factorization to obtain a scalar weight for each domain. These weights are used in two heuristic equations to obtain the probabilities for sampling from the domains during the training of the LM. It is empirically shown that the algorithm is faster than the baselines, and can be used without retraining the classifier when new data is added to the training. It is also shown that the algorithm can be used during finetuning. ================= UPDATE: I updated my review score. Claims And Evidence: The claims on speed need further explanations, see the section below Methods And Evaluation Criteria: Yes Theoretical Claims: Some of them Experimental Designs Or Analyses: Most of them Supplementary Material: Some parts, those mentioned in the paper Relation To Broader Scientific Literature: Builds upon existing literature, primarily DoGE Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - The method is simple, intuitive, and easy to implement. - The paper is well written, and the experiments are well organized. - The topic is very relevant and timely. **Weaknesses:** - The core idea is just a heuristic (in the authors' words): pretraining needs "broadly shared semantic structures" and finetuning needs "distinct and unique data" to "highlight domain-specific characteristics". To me the statements above are just vague justifications for what empirically works. - In my opinion the performance improvements are virtually nonexistent compared to DoGE. The main distinction lies in the speed. Authors might argue that when new data is added, the performance improvement is more tangible. 
But if we put speed aside, and retrain the baseline proxy networks, then again the only distinction becomes speed. In general the proxy model is relatively small, and its training data is a fraction of the entire training data. How many GPU hours are needed to train the proxy networks across the models? Does increasing the speed of training this network make any significant difference in energy consumption? How often is "retraining" the proxy network needed? Is it needed at all? I would be happy to revise my score if the authors give me convincing answers. **Other comments:** Please don't force the reviewers to read your appendix. Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address all remaining concerns below:

> Q1. Theoretical motivation

We use kernel ridge leverage scores (KRLS) to determine domain weights. KRLS is a well-established tool in data analysis. It quantifies the influence or importance of data points [Alaoui & Mahoney, 2015]. This property is leveraged in machine learning for tasks like density estimation [Pauwels et al., 2018] and novelty detection [Ducharlet et al., 2024; Lasserre & Pauwels, 2019]. The inverse KRLS is proportional to the Christoffel function value [Pauwels et al., 2018]. This relationship provides additional theoretical justification for our approach. Christoffel functions (Eq. (1) in [Pauwels et al., 2018]) precisely characterize the local density of the data distribution in the feature space, where higher values indicate denser regions. In our context, we compute the score $S_\lambda(D_i)$ of domain $i$ using Eq. (KRLS) on page 4. During pretraining, assigning higher sampling probability to domains with low KRLS (and thus high $S_\lambda^{-1}$/Christoffel value) upweights high-density data regions, which are most influential on base LMs' performance [1]. LLM finetuning aims to specialize on a novel specific task, requiring the model to learn differential features not fully captured during pretraining, so we instead prioritize the domains with high $S_\lambda$. Section 3.2 converts either $S_\lambda^{-1}$ or $S_\lambda$ into probability distributions $\alpha$ by applying softmax normalization. We will revise the phrasing in Section 3.2 to explicitly connect the data mixing goal to the mathematical properties of KRLS and inverse KRLS, clarifying its foundation in theoretical principles from statistical learning rather than just empirical heuristics. We will make this discussion self-contained within the main text, drawing upon the analysis currently in Appendix A.

> Q2. 
Impact of our computational efficiency

The computational cost associated with determining the domain mixture via proxy training is non-negligible. Table jX7F-1 below reports the required GPU (H100) hours for our experiments in Tab. 2 in the paper. Compared to DoReMi and DoGE, which add over 10% to base model training costs, **we reduce computational overhead to less than 2% of the final training cost.** This reduction is crucial for academic labs and smaller-scale training.

**Table jX7F-1:** GPU hours for universal generalization experiments.

|Method|GPU Hours|
|-|-|
|DoReMi|7.4h|
|DoGE|6.3h|
|Chameleon|0.8h|
|684M base model|56h|

Even for larger base models, the computational cost reported is often an optimistic lower bound for the baselines, since *DoReMi and DoGE require extensive hyperparameter tuning*. It has been shown that DoReMi's weights are unstable or difficult to reproduce [2; Fan et al., 2024b], and DoGE's approximations make it more sensitive to the learning rate [Kang et al., 2024b]. We also noticed that DoGE is extremely sensitive to its Bregman coefficient $\mu$, as shown in Table jX7F-2, where we report domain weights and validation PPL in the last line. Small variations in $\mu$ drastically change domain weights and degrade validation PPL, necessitating repeated validation on base models. This sensitivity contradicts the goal of data mixing methods: weights should transfer reliably to large models without costly grid searches.

**Table jX7F-2:** DoGE's weights are highly sensitive to $\mu$.

||$\mathbf{\mu=0.05}$|$\mu=0.01$|$\mu=0.1$|
|:-:|:-:|:-:|:-:|
|Arxiv|0.041|0.210|0.222|
|Book|0.078|0.025|0.069|
|CC|0.268|0.052|0.068|
|C4|0.283|0.025|0.050|
|Github|0.059|0.021|0.378|
|Stackexchange|0.230|0.649|0.103|
|Wikipedia|0.041|0.019|0.110|
|*Avg PPL* of 124M model|24.97|25.45|26.73|

In contrast, Chameleon is stable across training steps, model sizes, $\lambda$, and sample counts (Tables 10, 11). 
This means **our method can produce promising domain weights without repeated validation**, significantly reducing overall costs for users. Another key aspect, as the reviewer pointed out, is the cost of incorporating new data sources. Our data-centric approach requires only inference to obtain new embeddings and recompute KRLS, whereas proxy optimization-based methods like DoReMi and DoGE necessitate full retraining and additional tuning. Lastly, we further validate performance improvement by training 1.2B models. Chameleon demonstrates gains in both perplexity and downstream task accuracy (see *Q3 for Reviewer jzfM*). [1] Mallen et al. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. ACL (2023). [2] Parmar et al. Data, data everywhere: A guide for pretraining dataset construction. ACL 2024.
Score Matching with Missing Data
Accept (oral)
Summary: The paper addresses the problem of parameter estimation from missing data with a score matching objective. The authors introduce two general frameworks for estimating the (gradient of the) marginal score and demonstrate strong empirical performance on synthetic and real data. Claims And Evidence: The work provides solid theoretical results to demonstrate the applicability of the proposed frameworks. However, the presented empirical evidence is not sufficiently convincing to me. In Section 5.1, the authors conduct an experiment on synthetic data where they only report the Fisher divergence between the true and estimated score functions. Since the goal is parameter estimation, it would be more convincing if the authors could conduct experiments on parameter recovery tasks and compare the discrepancy between the true and estimated parameters in various settings of missing data. Then we would know how effectively the score matching estimates perform compared to MLE. Methods And Evaluation Criteria: The paper learns the parameters by minimizing the explicit score matching objective over the marginal score function. The objective naturally follows the derivation of the original score matching objective from complete data and achieves convergence under the standard assumptions. Theoretical Claims: There is no issue with the correctness of the theoretical claims that I am aware of. Experimental Designs Or Analyses: As discussed above, there is a lack of experiments on how well the proposed method can recover the ground-truth parameters, for example the means and/or variances in the Gaussian case. Furthermore, the current results are limited to Gaussian models. It would be convincing to see how the methods can be applied to various models or data types, as the proposed framework is claimed to be general. Supplementary Material: I have reviewed the code in the supplementary materials. 
Relation To Broader Scientific Literature: The paper introduces a score matching framework for estimating model parameters from missing data. At a high level, score matching aims to find parameters that minimize the Fisher divergence between the observed data distribution and the model distribution. This approach is part of the broader family of parameter estimation methods based on minimum distance estimation. Essential References Not Discussed: Parameter estimation from incomplete data is a fundamental statistical problem, to which maximum likelihood estimation via the EM algorithm has been the dominant approach. MisGAN (Li et al., ICLR'19) is another method that uses a GAN for estimation. A recent alternative to MLE is the Wasserstein estimate from the optimal transport framework, wherein one seeks the parameters that minimize the Wasserstein distance between the two distributions. Vo et al. (ICML'24) propose a framework called OTP for learning parameters from missing data. *Li, S. C. X., Jiang, B., & Marlin, B. MisGAN: Learning from Incomplete Data with Generative Adversarial Networks. In International Conference on Learning Representations.* *Vo, V., Le, T., Vuong, L. T., Zhao, H., Bonilla, E. V., & Phung, D. Parameter Estimation in DAGs from Incomplete Data via Optimal Transport. In Forty-first International Conference on Machine Learning.* Other Strengths And Weaknesses: **Strengths:** The paper is generally well-written and solid. The extension of explicit score matching to dealing with missing data is a straightforward approach, yet interesting to me. I also appreciate the authors' attempt to consider MNAR cases, though the assumption that the probability of the missingness pattern is given is rather unrealistic, rendering the proposal in Proposition A.9 of little use to me. 
However, it is acceptable to limit the scope of the current work to M(N)AR cases, since it is generally known that without making very specific assumptions, the parameters are non-identifiable from MNAR data. **Weaknesses:** Similar to EM, the method requires that the unnormalized density $q$ is tractable for estimation and further developments to compute the marginal scores. Perhaps this is a reason why the experiments are limited to Gaussian models. At a high level, the proposed algorithm somewhat mimics EM, where the Marg-IW and Marg-Var variants can be viewed as two ways to perform inference in the E step: the former assumes the density $p'$ is known, similar to the fact that the posterior is assumed tractable; the latter is similar to doing variational inference. Meanwhile, the MisGAN and OTP frameworks can sidestep the intractability of the density to be learned, though algorithmically, these frameworks also follow an EM-like style: an inference step for missing value imputation, followed by a step of learning parameters from the pseudo-complete data. So far it remains unclear to me what the theoretical advantages of the score matching estimates are compared to MLE and other alternatives, namely the Wasserstein estimates, for instance in terms of properties of the estimate, complexity, or convergence rate. This is in fact important to understand how the proposed method applies in practice. Other Comments Or Suggestions: N/A Questions For Authors: * For the Marg-IW approach, how is $p'$ chosen in practice? * How is the function $h$ in Corollary 4.7 chosen? Algorithm 2 only discusses how to estimate the conditional densities, without mentioning the function $h$. * Line 706: What do the notations $B$ and $\mathcal{B}_{\mathcal{X}}$ refer to? * Line 259: Is it supposed to be a $\log(n)/n$ convergence rate? Furthermore, in the main text, please define the notations used for Theorem 4.5, such as $\beta_1$, $\delta$, for completeness. Code Of Conduct: Affirmed. 
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful comments and feedback as well as the additional references; we will make sure to include them in our paper.

>Since the goal is parameter estimation, it would be more convincing if the authors could conduct experiments on parameter recovery tasks and compare the discrepancy between the true and estimated parameters in various settings of missing data.

We agree that parameter estimation/recovery experiments would be more aligned with our goals. We link to plots on parameter estimation accuracy for the normal case here: https://ibb.co/kk8LKNV and the truncated case here: https://ibb.co/pjDzCpGQ. They show largely the same trends as the Fisher divergence, and we will include them in the main body of the paper. Our experiments did initially estimate the parameters; however, we reported performance through the Fisher divergence, as this allowed us to present a single performance metric rather than two numbers (one for the mean and the other for the precision matrix). Parameter estimation error comparisons will be added to the main paper in the revision.

> It would be convincing to see how the methods can be applied to various models or data types as the proposed framework is claimed to be general.

We agree with the reviewer's comments and have run experiments on a non-Gaussian, ICA-inspired model with an intractable likelihood of $p(x)\propto \exp\left(-\sum_{i,j}\theta_{ij}x_i^2x_j^2\right)$. The task was estimating $\theta_{ij}$. We ran experiments with varying sample sizes, missingness proportions, and dimensions, and results are presented here: https://ibb.co/7JTJCc6b. The accuracy is reported as parameter estimation error (the Frobenius norm of $\theta$, treating it as a matrix). 
> So far it remains unclear to me regarding the theoretical advantages of the score matching estimates compared to MLE and other alternatives namely the Wasserstein estimates, for instance in terms of properties of the estimate, complexity or convergence rate. This is in fact important to understand how the proposed method applies in practice. The key advantage of score matching estimates over the MLE or the Wasserstein distance minimizer is that we can work with unnormalisable models such as truncated Gaussians and our aforementioned ICA-inspired model. Additionally, even for some normalisable models, if we cannot compute a tractable conditional probability model from the joint model, we cannot easily apply EM (which maximizes the likelihood). As a direct theoretical comparison between score matching and MLE, [1] was able to show cases where score matching is both computationally efficient and obtains the same optimal rate of convergence as MLE, even though score matching handles unnormalizable models. ## Questions > For Marg-IW approach, how is p' chosen in practice? In practice we kept $p'$ simple, choosing it to be an isotropic Gaussian with zero mean and standard deviation proportional to the standard deviation of the non-missing data. >How is the function $h$ in Corollary 4.7 chosen? Algorithm 2 only discusses how to estimate the conditional densities, without mentioning the function $h$. $h$ is just a generic function used to define the functional $\psi$, so $h$ takes the values given in Equation (10). In other words, this is just notation to try and keep Equation (10) as succinct as possible. >Line 706: What do the notations $B$ and $\mathcal{B}_\mathcal{X}$ refer to? Apologies, we did not properly introduce this notation. $\mathcal{B}_{\mathcal{X}}$ is supposed to represent all the possible events on the random variable $X$ (i.e. the sigma algebra on our covariate space $\mathcal{X}$) so that $B$ represents an arbitrary event on $\cal X$. We will make sure to properly introduce it in the final paper. 
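The role of the auxiliary distribution $p'$ can be illustrated with a self-normalized importance-sampling estimate of a marginal score. The sketch below uses a bivariate Gaussian, where the true marginal score at $x$ is simply $-x$, so the estimate can be checked against the exact answer; the broad Gaussian proposal stands in for the paper's choice of $p'$, and all values are illustrative, not from the paper.

```python
import math
import random

def marg_score_iw(x_obs, log_joint, grad_obs, sample_prop, log_prop,
                  n=100_000, seed=0):
    """Self-normalized importance-sampling estimate of the marginal score
    d/dx log p(x_obs) = E_{p(mis|obs)}[ d/dx log p(x_obs, X_mis) ]."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        y = sample_prop(rng)
        # Unnormalized weights: normalizing constants cancel after division.
        w = math.exp(log_joint(x_obs, y) - log_prop(y))
        num += w * grad_obs(x_obs, y)
        den += w
    return num / den

# Worked check on a standard bivariate Gaussian with correlation rho,
# where the marginal of X is N(0, 1), so the true marginal score at x is -x.
rho = 0.5
log_joint = lambda x, y: -(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho ** 2))
grad_obs = lambda x, y: -(x - rho * y) / (1 - rho ** 2)
s = 1.5  # proposal std; an isotropic Gaussian stands in for p'
sample_prop = lambda rng: rng.gauss(0.0, s)
log_prop = lambda y: -y * y / (2 * s * s)

est = marg_score_iw(0.7, log_joint, grad_obs, sample_prop, log_prop)
```

With enough samples the estimate is close to the exact marginal score $-0.7$, illustrating why a simple, broad proposal suffices in low dimensions.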
> Line 259: Is it supposed to be $\log(n)/n$ convergence rate? Furthermore, in the main text, please define the notations used for Theorem 4.5 such as $\beta$, $\delta$ for completeness. Yes, sorry, it is meant to be $\sqrt{\log(n)/n}$; our aim was to highlight that this is a $1/\sqrt{n}$ rate up to log factors, which diminish rapidly. We will also make sure to introduce $\beta$, which is a positive constant, and $\delta$, which is any sufficiently small probability. ## References [1] Pabbaraju, C., Rohatgi, D., Sevekari, A. P., Lee, H., Moitra, A., and Risteski, A. (2023). Provable benefits of score matching. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S., editors, Advances in neural information processing systems, volume 36, pages 61306–61326. Curran Associates, Inc. --- Rebuttal Comment 1.1: Comment: Thank you for the additional results. The method is more convincing to me now. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for taking our feedback into consideration and thank you for updating your review!
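As background for the score matching estimates discussed in this rebuttal, a toy complete-data example (no missingness; values illustrative): for a zero-mean Gaussian with precision $\theta$, the Hyvärinen objective has a closed-form minimizer, so the score matching estimate can be computed directly.

```python
import random

# For a zero-mean Gaussian model with precision theta, the model score is
# s_theta(x) = -theta * x, and the Hyvarinen objective
#   J(theta) = E[ 0.5 * s_theta(x)^2 + s_theta'(x) ]
#            = 0.5 * theta^2 * E[x^2] - theta
# is minimized in closed form at theta_hat = 1 / mean(x^2).
rng = random.Random(0)
sigma = 2.0  # true std; true precision is 1 / sigma^2 = 0.25
xs = [rng.gauss(0.0, sigma) for _ in range(50_000)]
second_moment = sum(x * x for x in xs) / len(xs)
theta_hat = 1.0 / second_moment
```

The objective never touches the (here irrelevant) normalizing constant, which is the property the rebuttal's comparison with MLE rests on.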
Summary: The paper proposes a principled score-based method for learning jointly-specified probabilistic models in the presence of missing data in the training set. The key idea is to match the scores of marginal distributions by marginalising the missing variables first. Since marginalisation is often computationally intractable, the paper introduces two approaches based on importance sampling and variational inference. The method is validated for truncated Gaussian parameter estimation and Gaussian graphical model estimation. Claims And Evidence: The proposed methods and their proofs seem to be sound. In fact, the paper is very well-written and the accurately-placed remarks provide well-appreciated observations and caveats. Methods And Evaluation Criteria: To my knowledge, this is the first score-based method in the context of missing data that may be unbiased subject to the typical conditions for importance sampling and variational inference. The potential unbiasedness will be very appreciated by practitioners working with missing data. Theoretical Claims: The claims are correct and capture the important nuances as far as I can tell. I have checked the proofs in Appendices A and D.4.1, which seem correct. One short question: * In Assumption 4.2 you set $\theta > 0$, is that a typo? I am not sure why the parameters should be positive. Experimental Designs Or Analyses: The experimental setting focuses on fairly simple Gaussian models, and explores parameter estimation and graphical model estimation problems. While the paper could benefit from at least one experiment on a more "interesting" model, such as an EBM, I believe the number of theoretical results and extensions to various score-matching flavors make up for it. I've got one small question still. For the VI approach you parametrise the conditional variational distribution using a neural network, which is likely the reason why it performs worse than the other approaches in Figure 1. 
I wonder if for this simple setting you could have specified the variational distribution _jointly_ as $p_{\phi}'(X) = \text{TruncNorm}(X, \phi)$, fit it via $J_F$ and then sample the conditionals $p_{\phi}'(X'\_{-\Lambda} \mid X\_{\Lambda})$ in equations 10/11. Supplementary Material: I reviewed appendices A, D and E. Relation To Broader Scientific Literature: The paper correctly compares to the two existing methods in the literature and highlights their weaknesses. The paper also proposes extensions of the proposed method to various flavors of score matching, namely, truncated, sliced, and denoising. Essential References Not Discussed: I think the exposition of the missing data scenario is fairly succinct but sufficient. However, for the interested readers, references to the classical literature that formalises missing data problems could be useful, e.g. [1]. References: [1] Little and Rubin (2020). Statistical Analysis with Missing Data: Third Edition. (Section 1.3) Other Strengths And Weaknesses: As I have expressed above, the paper solves an important problem, is well-written, and I believe will be found useful by practitioners dealing with missing data. Other Comments Or Suggestions: Suggestions: * The first equation in Section 4, regarding model estimation from incomplete data via MLE holds not only for MCAR but also for MAR data, even though you only consider MCAR. I believe that the proposed score-based methods would similarly hold for MAR data, without the need for the additional MNAR results given in the appendices. This would make the paper stronger in general, as MCAR is a fairly strong assumption, rarely true in real scenarios. * In appendix A.1.4 you assume that $\varphi_{\lambda}$ is known to have a method that is independent of how $\varphi_{\lambda}$ is estimated. However, it would be important to note that when the data is MNAR, $\varphi_{\lambda}$ and, consequently, the density of interest may not be identifiable at all, e.g. [1]. 
* In Appendix D.4 the denoising score matching is introduced from the diffusion perspective, where instead of learning a score of a single distribution, we learn a score of multiple time-dependent distributions. As such, references to [2] or [3] may be relevant here as [4], that is referenced in the text, did not explicitly cover this scenario (although this is correctly highlighted in the remark at the end of the section). Typos: * Line 056: "component [of] Z" * Line 109: "missing [and] non-missing" * Line 176: "score matching, [we] can relate" * Line 242: "results"->"result" * Line 327: "$\hat J_F, \hat J_F$" -> "$\hat J_{KL}, \hat J_F$" * Line 437: Last sentence of Section 6 has repeated "score matching" * Line 687: "begin" -> "being" * Line 720: "show a provide" -> "provide" * Line 727: "with" -> "we" * Line 1761: "\cdot" * Line 1780: "takes" -> "take" * Subsection D.4.1 should probably be its own section? References: [1] Nabi et al (2020). Full Law Identification In Graphical Models Of Missing Data: Completeness Results [2] Song and Ermon (2020). Generative Modeling by Estimating Gradients of the Data Distribution [3] Song et al (2021). Score-Based Generative Modeling through Stochastic Differential Equations [4] Vincent (2011). A Connection Between Score Matching and Denoising Autoencoders Questions For Authors: I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your helpful comments and all the errata found; we will make sure to correct them. We respond to specific questions below. >In Assumption 4.2 you set $\theta>0$, is that a typo? I am not sure why the parameters should be positive. Yes, thank you for spotting this; it is a typo and should read 'for any $\theta$ in our parameter space'. >For the VI approach you parametrise the conditional variational distribution using a neural network, which is likely the reason why it performs worse than the other approaches in Figure 1. I wonder if for this simple setting you could have specified the variational distribution jointly as $p_{\phi}'(X) = \text{TruncNorm}(X, \phi)$, fit it via $J_F$ and then sample the conditionals in equations 10/11. This is a good point, and you're correct that in this truncated normal setting the variational distribution can indeed be specified jointly as a truncated normal. We used a more general neural network model in the experiments because in many cases there may not be a joint distribution from which we can easily draw conditional samples (see our ICA-inspired example in our response to reviewer 2txV). We wanted to showcase our method in the most generalised way possible. Thanks for raising this excellent point. We will highlight this possible variant method for the truncated normal regime in the revision. >The experimental setting focuses on fairly simple Gaussian models, and explores parameter estimation and graphical model estimation problems. While the paper could benefit from at least one experiment on a more "interesting" model, such as an EBM We appreciate this feedback and have run experiments on a non-Gaussian, ICA-inspired model with an intractable likelihood $p(x)\propto\exp\left(-\sum_{i,j}\theta_{ij}x_i^2x_j^2\right)$. The task was estimating $\theta_{i,j}$. We ran experiments with varying sample sizes, missingness proportions, and dimensions, and results are presented here: https://ibb.co/7JTJCc6b. 
The accuracy is reported as the parameter estimation error (the Frobenius norm of the error in $\theta$, treating it as a matrix). >The first equation in Section 4, regarding model estimation from incomplete data via MLE holds not only for MCAR but also for MAR data, even though you only consider MCAR. I believe that the proposed score-based methods would similarly hold for MAR data, without the need for the additional MNAR results given in the appendices. This would make the paper stronger in general, as MCAR is a fairly strong assumption, rarely true in real scenarios. Thanks for your comments! Our approach may not hold for the MAR case. The MLE approach holding for MAR data is specific to the MLE objective rather than the marginal framework in general. However, one can use our MNAR approach for MAR data, and in the MAR case the conditional probabilities of being missing should be identifiable. One can see that the MAR-adjusted score splits into the sum of the original score and a term involving the missingness pattern (i.e. for MAR data we have $s_{\lambda}(x_\lambda)=\nabla\log\varphi_\lambda(x_\lambda)+\nabla\log p(x_\lambda)$). However, we cannot apply score matching since we cannot ignore the term involving the missingness probability. Thanks for pointing this out; we will add this discussion in the revision. As a small side note, we believe the MAR setting is often not much less restrictive than MCAR (especially in cases where each coordinate can be missing). Specifically, any case where any two coordinates can be missing at once is likely to be either MCAR or MNAR, at least according to the definition in Little \& Rubin (2020). As an example, say that $X_2$ being missing depends on $X_1$ and that each coordinate being missing is otherwise independent. While this seems like MAR, since $X_2$ being observed directly depends only on $X_1$, for the case where $X_1,X_2$ are both missing (i.e. 
$\Lambda=[d]\setminus\\{1,2\\}$), we clearly have $\mathbb{P}(\Lambda=[d]\setminus\\{1,2\\}|X)\neq\mathbb{P}(\Lambda=[d]\setminus\\{1,2\\}|X_{[d]\setminus\\{1,2\\}})$, making it not MAR. >In appendix A.1.4 you assume that $\varphi_{\lambda}$ is known to have a method that is independent of how $\varphi_{\lambda}$ is estimated. However, it would be important to note that when the data is MNAR, $\varphi_{\lambda}$ and, consequently, the density of interest may not be identifiable at all, e.g. [1]. This is a good point, and we should have better highlighted that learning $\varphi_{\lambda}$ is difficult and often non-identifiable without further assumptions or supplementary data. Indeed, learning the joint distribution with MNAR data is in general a non-identifiable task. We will highlight this fact in the appendix and also include the provided citation. We thank the reviewer for the other citations and will include them in the revision, making sure to highlight that multi-level score matching was introduced specifically for diffusion processes. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I am happy with the response and maintain my original recommendation. --- Reply to Comment 1.1.1: Comment: Thank you for taking our feedback into consideration!
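The bivariate version of the MAR counterexample in this rebuttal can be checked empirically. In the sketch below (the logistic missingness mechanism and the probabilities are illustrative assumptions, not from the paper), $X_2$'s missingness depends on $X_1$ and each coordinate is missing independently, yet the probability that both coordinates are missing still depends on the unobserved $X_1$, so the mechanism is not MAR.

```python
import math
import random

rng = random.Random(0)
p1 = 0.3  # X1 is missing independently with this probability (illustrative)

def both_missing(x1):
    # X2 is missing with probability sigmoid(x1); X1 independently with p1.
    m1 = rng.random() < p1
    m2 = rng.random() < 1.0 / (1.0 + math.exp(-x1))
    return m1 and m2

n = 100_000
hi = sum(both_missing(1.5) for _ in range(n)) / n   # P(both missing | X1 = 1.5)
lo = sum(both_missing(-1.5) for _ in range(n)) / n  # P(both missing | X1 = -1.5)
# Under MAR, the probability of this pattern could not depend on X1, since no
# coordinate is observed when both are missing; here the two estimates differ.
```

The gap between `hi` and `lo` is the empirical footprint of the non-MAR behaviour described above.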
Summary: The paper explores score matching for missing data, proposing two distinct approaches: importance-weighted score matching and a variational approach. The importance-weighted method relies on reweighting score matching objectives using an auxiliary distribution over missing variables, while the variational method frames score estimation as an optimization problem using a parametric variational model. The authors provide theoretical justifications for both methods and evaluate them empirically on synthetic Gaussian data, as well as two real-world datasets: S&P 100 stock data and yeast gene expression data. The results indicate that their methods improve Fisher divergence estimates and AUC scores for missing data imputation, showing modest but consistent improvements over baselines such as Expectation Maximization (EM) and a modified version of MissDiff, a recent diffusion-based model for missing data. Claims And Evidence: The paper presents two main methodological contributions—importance-weighted score matching and a variational approach for marginal score estimation—and claims that these methods improve score estimation and missing data imputation. The theoretical justifications for both methods are sound, with well-formulated derivations that clearly support the proposed objectives. However, the empirical evidence is somewhat limited in scope. The experiments primarily focus on Gaussian data, and only two real-world datasets (S&P 100 and yeast gene expression) are used for evaluation. While the results demonstrate modest improvements over baselines in terms of Fisher divergence and AUC scores, the gains are not substantial. Additionally, the choice of baseline for comparison raises concerns—MissDiff is used in a modified form without its original neural network component, making it unclear how the proposed method would compare against a full diffusion-based model. 
Furthermore, the importance-weighted approach assumes knowledge of an auxiliary distribution $p'$ over missing variables, which may not be available in practical applications, potentially limiting its real-world applicability. Overall, while the theoretical claims are well-supported, the empirical evaluation could be more comprehensive, particularly in terms of dataset diversity and baseline fairness. Methods And Evaluation Criteria: The proposed methods—importance-weighted score matching and a variational approach for marginal score estimation—are well-motivated and theoretically sound. The evaluation metrics, including Fisher divergence and AUC scores, are appropriate for assessing score estimation and imputation performance. However, the scope of the experiments is limited. The reliance on Gaussian data raises concerns about the generalizability of the method to more complex distributions. Additionally, the MissDiff baseline is simplified, making it unclear whether the proposed methods would outperform a full diffusion-based model. Expanding the evaluation to non-Gaussian data and stronger baselines would make the results more conclusive. Theoretical Claims: The paper provides clear and strong theoretical justifications that support the proposed importance-weighted score matching and variational marginal score estimation approaches. The derivations are well-structured, and the key results appear correct. Experimental Designs Or Analyses: The empirical soundness is limited due to the concerns previously discussed. The experiments primarily focus on Gaussian data, and while two real-world datasets are included, they may not fully capture the complexity of missing-data scenarios. Additionally, the MissDiff baseline is simplified, making the performance comparisons less conclusive. A broader evaluation with non-Gaussian data and stronger baselines would improve the robustness of the findings. 
Supplementary Material: I briefly reviewed the supplementary material but did not examine it in detail. While it appears to provide additional derivations and experimental details, my assessment is primarily based on the main paper. Relation To Broader Scientific Literature: The paper builds on prior work in score-based generative modeling and missing data imputation, particularly leveraging score matching techniques for handling incomplete data. The importance-weighted approach follows principles from importance sampling and prior work on weighted score matching, while the variational approach aligns with existing variational methods used in generative modeling. Additionally, the paper compares its methods to MissDiff, a recent diffusion-based approach for missing data, though the baseline is simplified in the experiments. While the paper is well-positioned within the literature, a more thorough discussion of how the proposed methods compare to alternative generative approaches could strengthen its positioning. Essential References Not Discussed: I would suggest that the authors discuss the following related works, which explore diffusion-based approaches for missing data imputation. Their discussion could help better contextualize the contributions of the current paper. [1] Zhang, Hengrui, Liancheng Fang, and S. Yu Philip. "Unleashing the Potential of Diffusion Models for Incomplete Data Imputation." CoRR (2024). [2] Chen, Zhichao, et al. "Rethinking the diffusion models for missing data imputation: A gradient flow perspective." Advances in Neural Information Processing Systems 37 (2024): 112050-112103. [3] Zheng, Shuhan, and Nontawat Charoenphakdee. "Diffusion models for missing value imputation in tabular data." NeurIPS 2022 First Table Representation Workshop. Other Strengths And Weaknesses: All relevant strengths and weaknesses have been pointed out in the review. 
Other Comments Or Suggestions: I have only found one potential typo: - In line 263 (left column): "Setting it at $r=10$ in our experiments." could be better phrased as "We set it at $r=10$ in our experiments." Questions For Authors: I don't have further questions at this point. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your useful comments and feedback as well as the additional references, which we will make sure to include. We address some specific points below. >However, the scope of the experiments is limited. The reliance on Gaussian data raises concerns about the generalizability of the method to more complex distributions ... Expanding the evaluation to non-Gaussian data and stronger baselines would make the results more conclusive. Thanks for your comments! To further enhance our experimental section, we have included a new experiment on parameter estimation for an intractable ICA-inspired model where $p(x)\propto\exp\{-\sum_{ij}\theta_{ij}x_i^2x_j^2\}$. Links to experimental results on estimating $\theta$ under varying sample sizes, missingness proportions, and dimensions are here: https://ibb.co/7JTJCc6b. As we can see, in higher-dimensional settings Marg-Var clearly outperforms other methods, while in lower-dimensional settings both Marg-Var and Marg-IW perform best (with EM performing similarly to Marg-IW). > The MissDiff baseline is simplified, making it unclear whether the proposed methods would outperform a full diffusion-based model. In this paper, we primarily focus on parameter estimation tasks. MissDiff was originally designed for estimating the score function in a diffusion process, not a parameterized density model. As such, MissDiff in its original formulation could not be directly applied to our tasks, as it does not provide parameter estimates. This also holds for any other diffusion-based imputation approaches such as CoRR, which again do not give us a direct parameter estimation procedure. We include the MissDiff baseline more as an example of what naive marginalisation of the scores would look like and to demonstrate that the idea behind MissDiff can't be naturally adapted to the parameter estimation problem. 
We will make sure to clarify this further in the paper and change the name of the MissDiff-Param approach in our results to make it clear that it is a variant of MissDiff that has been adapted to our problem. --- Rebuttal Comment 1.1: Comment: Dear authors, I mistakenly posted the following as an “Official Comment,” not realizing it wouldn’t be visible to you. I’m re-posting it here to keep you informed: > Thank you for addressing my concerns. The new experiments substantially strengthen the justification of the contributions, and I have adjusted my score accordingly. Thank you again for your thoughtful response.
Summary: The paper presents a novel framework for score matching (SM) in the missing data setting. Particularly, the setting assumes that the (multi-dimensional) random variable under consideration has missing coordinates. The authors solve this problem by using an auxiliary variable that denotes the masking random vector. Using this, the authors introduce 'Marginal Score Matching', where the score matching is performed only on the visible coordinates of the random variable. The authors then provide several (tractable) variants of marginal score matching - truncated SM, denoising SM, sliced SM. However, these methods are still intractable due to the marginalization integral present in the marginal score definition. To this end, the authors provide two ways to estimate these loss objectives - (a) using importance weighting and, (b) using variational approximation. The authors present empirical results on synthetic datasets, PGMs and real-world tabular datasets like stock prices and yeast data. They compare their method against MissDiff and EM to show the efficacy of their method. Claims And Evidence: Most of the claims made in the paper are theoretical, and these are rigorously justified via mathematical proofs. The empirical claims seem a little unclear. While the authors point out problems with EM-based methods in Related Works, it seems that their method doesn't provide a significant performance advantage (e.g., Fig. 1). Methods And Evaluation Criteria: The proposed method makes sense; however, I am not sure about the evaluations, primarily because of the limited metrics (AUC) used to demonstrate the efficacy of the proposed method. Theoretical Claims: I went through the proofs of most of the results. The authors have provided solid mathematical grounding for the proposed method. I am happy and satisfied with the claims and proofs. Experimental Designs Or Analyses: The experiments are primarily performed on synthetic datasets. 
While it helps in understanding the validity of the method in simple settings, it is not clear how it would perform in complex settings (such as image generation/inpainting, etc.). However, since the paper has a theoretical inclination, I won't count it as a negative. Performance-wise, it looks like the proposed method, particularly Marg-Var, has a very slight advantage over EM, whereas Marg-IW seems to be performing as well as EM. Can the authors provide an explanation for this? Supplementary Material: I went through the supplementary material except Sections A.1.4, A.2 and A.5. I am satisfied with these results and clarifications. Relation To Broader Scientific Literature: The work essentially extends the various score matching objectives to the missing data setting. While there are methods like Ambient Diffusion or Cold Diffusion that operate in a more generic setting, the current work has a theoretical flavor to it, extending the existing approaches to missing data settings. Essential References Not Discussed: N/A Other Strengths And Weaknesses: #### Strengths 1. The paper is well written and presented. Although heavily theoretical, I was able to follow most of the claims and proofs. 2. The proposed setting has practical relevance, especially for tabular data. 3. Going through the proofs, it seems most of the SM objectives can be adapted to the missing data scenario. #### Weaknesses 1. The experimental section looks a little weak. Perhaps including tabular data benchmarks could help with this. 2. Marg-IW seems to have similar performance as EM. Is there any insight into this? 3. I am confused as to why these objectives cannot be extended to neural networks? Can't one optimize neural network parameters using these objectives? Other Comments Or Suggestions: 1. Line 102: \del_j -> \del_1 ? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your useful comments and feedback; we address some specific points below. >The proposed method makes sense, however, I am not sure about the evaluations. Primarily because of the limited metrics (AUC) used to demonstrate the efficacy of proposed method. Thanks for pointing this out! Since we focus on the parameter estimation problem, we have now added new metrics specifically designed for the parameter estimation problem, for standard and truncated normal estimation here: https://ibb.co/kk8LKNV and here: https://ibb.co/pjDzCpGQ, respectively. Additionally, we have included a new ICA-inspired parameter estimation experiment (more details in the 2nd paragraph of our response to reviewer 2txV), which can be found here: https://ibb.co/7JTJCc6b. In the Gaussian graphical model tasks, we focused on AUC as we were treating it as an edge detection problem and wanted to demonstrate the efficacy over a variety of sparsity settings. The use of classification metrics in graphical model learning tasks, through the False Positive Rate (FPR) and True Positive Rate (TPR), appears in previous works [2] and [3]; thus, we follow this convention, using the AUC to summarize the overall performance of the graphical model learning. For further illustration, we have included a plot of the ROC curves for the first 4 runs of the star-based precision with a missing probability of 0.5 here: https://ibb.co/35dgkVnK and will be sure to include them in the appendix of the final paper. In these we can see that the proposed Marg-Var seems to consistently outperform the other approaches. >Marg-IW seems to have similar performance as EM. Is there any insight into this? Thanks for raising this interesting question. We believe this is because both methods rely on importance weighting to approximate marginal/conditional expectations. Additionally, they both aim to optimize a marginal objective (EM can be seen as iteratively maximizing the marginal likelihood). 
However, the detailed reason for their performance similarity is definitely an interesting observation worthy of further exploration, and we will highlight this future direction in the revision. >I am confused as to why these objectives cannot be extended to neural networks? Can't one optimize neural network parameters using these objectives? Yes, you are correct that this approach can be extended to estimating neural network models (such as energy-based models) from missing observations, similar to how score matching is applied to such models with fully observed data [4]. Since most neural network models are also unnormalizable, this extension of our method would further expand the usage of such models. We see this flexibility in handling various unnormalizable models as one of the core strengths of our method. We decided to focus on analysing the performance of score matching through simpler and more interpretable models such as Gaussian graphical models, as this was something which we felt was under-explored in the existing missing data literature. Moreover, estimating truncated graphical models using score matching has been highlighted as an application of score matching [2,3]. Application of our methods to various neural network architectures and their use in downstream tasks, such as learning a diffusion model from missing data, is definitely a strong direction for future research, and we will make sure to highlight this in our conclusion. ## References [2] Lin, L., Drton, M., and Shojaie, A. (2016). Estimation of high-dimensional graphical models using regularized score matching. Electronic Journal of Statistics, 10(1):806 – 854. Publisher: Institute of Mathematical Statistics and Bernoulli Society. [3] Shiqing Yu, Mathias Drton, Ali Shojaie (2022). Generalized score matching for general domains, Information and Inference: A Journal of the IMA. [4] Song, Yang, and Diederik P. Kingma. "How to train your energy-based models." arXiv preprint arXiv:2101.03288 (2021).
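As a side note on the AUC used for edge detection earlier in this rebuttal, it can be computed directly from its Mann-Whitney form; this is a generic sketch with illustrative scores, not the paper's evaluation code.

```python
def auc(scores, labels):
    # Mann-Whitney form of the AUC: the probability that a randomly chosen
    # true edge receives a higher score than a randomly chosen non-edge
    # (ties counted as half).
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Illustrative example: |precision matrix entry| as the edge score,
# compared against the true edge labels of the graphical model.
edge_scores = [0.9, 0.05, 0.7, 0.1, 0.4]
true_edges = [1, 0, 1, 0, 0]
```

Sweeping a threshold over `edge_scores` traces the ROC curve; the AUC summarizes it in one number, which is why it suits comparisons across sparsity settings.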
SAND: One-Shot Feature Selection with Additive Noise Distortion
Accept (poster)
Summary: This paper proposes SAND, a new feature selection method which replaces the original input with a linear combination of the original input and additive zero-mean Gaussian noise. The weights of the linear combination $a\in \mathbb{R}^d$ measure feature importance, and the constraint on the number of selected features is imposed in a soft way by requiring the $\alpha$-norm of $a$ to equal $k$. By doing so, there is no need to add a regularization term to the original loss function, which can be optimized using popular gradient descent methods like Adam. In the inference stage, only the features with the top-$k$ weights in $a$ are retained and used as inputs to the trained model. Experiments on synthetic and real datasets demonstrate the effectiveness of the proposed method compared to existing methods including sequential attention, batch-wise attention, sequential Lasso and group Lasso. Claims And Evidence: The claim that SAND achieves SOTA performance on benchmark datasets is not well supported. SAND performs worse than other methods, especially in the case of small $k$, in the experiments. Methods And Evaluation Criteria: The proposed method is well motivated and the evaluation criteria make sense. However, SAND is very similar to STG [34], since both introduce randomness in training, but the comparison to STG is missing from the experiments. Theoretical Claims: I have checked the analysis of SAND in the linear regression case. That the last term in Eq. (14) achieves its minimum when exactly $k$ of the $a_i$'s equal 1 is not obvious, and a detailed analysis is needed. Experimental Designs Or Analyses: I have checked the soundness of the experimental designs. Supplementary Material: I have read the code and the appendix. 
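The training-time distortion and inference-time selection described in the summary above can be sketched as follows; the exact noise-weighting formula is a hypothetical stand-in, since the summary only specifies a linear combination of the original input and additive zero-mean Gaussian noise with weights $a$.

```python
import random

def sand_distort(x, a, sigma=1.0, rng=random):
    # Hypothetical training-time gate: pass feature i with weight a_i and
    # fill the remainder with zero-mean Gaussian noise (the exact
    # combination used in the paper may differ from this sketch).
    return [ai * xi + (1.0 - ai) * sigma * rng.gauss(0.0, 1.0)
            for xi, ai in zip(x, a)]

def sand_select(a, k):
    # Inference: retain only the k features with the largest learned weights.
    return sorted(sorted(range(len(a)), key=lambda i: -a[i])[:k])
```

When $a_i = 1$ the feature passes through unchanged, and when $a_i = 0$ it is pure noise, which is the intuition behind reading $a$ as a feature-importance score.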
Relation To Broader Scientific Literature: Apart from feature selection, the idea of introducing noise to the input variables has been investigated in prior feature noising methods, which introduce either multiplicative or additive noise to the input variables to enhance the generalization ability and robustness of the model. [1] Maaten L, Chen M, Tyree S, et al. Learning with marginalized corrupted features[C]//International Conference on Machine Learning. PMLR, 2013: 410-418. [2] Wager S, Wang S, Liang P S. Dropout training as adaptive regularization[J]. Advances in neural information processing systems, 2013, 26. [3] Zhuo J, Zhu J, Zhang B. Adaptive Dropout Rates for Learning with Corrupted Features[C]//IJCAI. 2015, 24: 4126-4133. [4] Li Z, Gong B, Yang T. Improved dropout for shallow and deep learning[J]. Advances in neural information processing systems, 2016, 29. Essential References Not Discussed: To the best of my understanding, key references are reviewed in this paper. Other Strengths And Weaknesses: Strengths: 1. The proposed method is well motivated. Weaknesses: 1. On the benchmark datasets, the performance of SAND is not SOTA compared to other methods, especially when $k$ is small. 2. The writing can be further improved, especially the analysis of SAND in the linear regression case. Other Comments Or Suggestions: None Questions For Authors: My concerns are listed in the weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the careful reading of the paper and your valuable feedback. Regarding being SOTA, our work emphasizes that although no single method consistently outperforms all others across every dataset, our approach is notably simpler and competitive, especially when accuracy/MAE metrics are saturated (see Table 4 in Appendix B). We believe that a method that is the best in a significant number of cases and nearly optimal in the remaining ones merits the designation of state-of-the-art, an acknowledgment that can apply to multiple methods when performance differences are marginal (SAND performs better than or comparably to other methods although it is conceptually much simpler and considerably faster in practice: no tuning/line-search to get to k features, no alteration of the loss function, no post-selection training). Nonetheless, we recognize that the wording may have been misleading, and we will revise it in the final version of the paper to better reflect these nuances. For small k, no state-of-the-art method consistently outperforms the rest. Specifically, SAND excels on three datasets for small k; for example, on the CA Housing dataset (regression, lower MAE is better), SAND shows superior performance for small k. Moreover, SAND is trained with about 33% fewer epochs than competitors, as it requires no post-selection training. This efficiency may explain its underperformance in some cases, as detailed in Appendix B (e.g., the analysis on Har70, chosen for its known correct features and relatively lower SAND performance). Although SAND’s classification accuracy on Har70 is 0.013% lower than the best (LLY), it achieves better feature selection accuracy (correct features out of 6) and greater stability than the two methods (SA and SL) that slightly outperform it in classification. Thank you for the suggested references. We can add them to the final version of the paper. 
Concerning the proof in linear regression: since the $a_i$'s are between 0 and 1, for each $i$, $a_i |a_i|^\alpha \le |a_i|^\alpha$, with equality holding only for $a_i=0$ or $a_i=1$. Thus, we have $\sum_i a_i |a_i|^\alpha \le \sum_i |a_i|^\alpha = k$, with equality holding only if all $a_i$'s are either 0 or 1. Since the first two terms of Eq. (14) do not involve $a_i$, and $\lambda$ is positive, the whole expression is minimized when $\sum_i a_i |a_i|^\alpha$ attains its maximum value ($k$), and this happens when all of the $a_i$'s are either 0 or 1. Since we have the constraint $\sum_i |a_i|^\alpha = k$, we conclude that exactly $k$ of the $a_i$'s are 1 and $(n-k)$ of them are 0. We can add this explanation to the final version of the paper and improve the writing, while respecting the page limit. Concerning the comparison with STG: we tried to compare our method with other state-of-the-art methods, some of which were proposed very recently. Furthermore, a quick comparison was already done with other methods, including STG and LassoNet, as mentioned in footnote 3 (page 6, lines 326-328). However, this comparison was not quantitatively elaborated in the current version of the paper. We agree that a direct quantitative comparison with the STG method is useful, given the similarity with SAND (SAND is much simpler, with no need for loss function alteration or a hyperparameter search for k as in STG). To this end, we have now included STG in our previously shared anonymized code repo (https://anonymous.4open.science/r/SAND-6BB1), which allows us to compare it with SAND and the 4 other methods used in our paper for benchmarking. Furthermore, the readme file of the repo now has a table comparing SAND and STG on 5 classification datasets and one regression dataset. SAND outperforms STG on 3 classification datasets and on the regression dataset (lower MAE is better). 
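This inequality argument can also be checked numerically; the sketch below uses arbitrary illustrative values $\alpha=0.5$, $d=8$, $k=3$ (the helper name `project` is mine):

```python
import numpy as np

alpha, k, d = 0.5, 3, 8
rng = np.random.default_rng(0)

def project(a):
    # rescale so that sum_i |a_i|^alpha = k (the SAND constraint)
    return a * (k / np.sum(np.abs(a) ** alpha)) ** (1.0 / alpha)

# On the constraint set with a_i in [0, 1], sum_i a_i |a_i|^alpha <= k ...
for _ in range(1000):
    a = project(rng.uniform(0.2, 1.0, d))  # lower bound 0.2 keeps the rescaled a_i below 1 here
    assert np.all(a <= 1.0)
    assert np.sum(a * np.abs(a) ** alpha) <= k + 1e-9

# ... with equality exactly at 0/1 vectors having k ones
a01 = np.zeros(d)
a01[:k] = 1.0
assert np.isclose(np.sum(a01 * np.abs(a01) ** alpha), k)
```

Every random feasible point gives a strictly smaller value than the 0/1 vector, consistent with the argument that the minimizer of Eq. (14) places exactly $k$ gains at 1.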
Note that for the 2 datasets on which SAND is worse, SAND was already not presented as the top performer (in terms of accuracy) in the current version of the paper. These 6 datasets were chosen for their moderate size (less than 70k samples) and for fast experimentation in this rebuttal, because STG needs computationally costly fine-grained tuning to select the specified number of features. In the final version of the paper, we can include the comparison with STG on all 9 datasets. We kindly ask the reviewer to consider these clarifications and to reach out during the discussion phase if further explanation is needed. We reiterate the potential of this simple, practical, and efficient method, which may impact areas beyond feature selection (e.g., NN pruning). Our goal is not to introduce a model that dramatically outperforms all baselines but to present a lightweight model that at least matches baseline performance while offering distinct advantages: simplicity, low computational and memory burden, one-shot feature selection and network training, control over the number of selected features, and easy integration with neural networks.
Summary: The paper introduces SAND (Selection with Additive Noise Distortion), a novel feature selection layer for neural networks that automatically selects $k$ informative features during training. SAND operates by multiplying each input feature by a trainable gain $a_i$ while adding Gaussian noise weighted by $1-a_i$. This design drives the gains of the $k$ informative features toward 1 (selecting them) and the rest toward 0 (discarding them) without altering the loss function or network architecture. Unlike traditional methods, SAND requires no post-selection retraining or extensive hyperparameter tuning. Experimental results on nine benchmark datasets (e.g., MNIST, ISOLET) as well as a new real-world multi-spectral imaging dataset (MSI Grain) demonstrate that SAND matches or exceeds state-of-the-art methods in accuracy and efficiency. A theoretical analysis in linear regression further validates its approach, positioning SAND as a simple yet powerful tool for feature selection. Claims And Evidence: This paper makes several claims, and while the majority of them are well supported, the claim that *SAND directly controls the number of selected features without hyperparameter tuning* is a bit overclaimed. SAND requires the user to explicitly define the number of selected features $k$, which is usually unknown in real-world datasets. Other approaches from the feature interpretation domain can also explicitly define the number $k$ to conduct feature selection. Methods And Evaluation Criteria: Both the methods and evaluation criteria are generally reasonable. Theoretical Claims: I quickly checked its correctness, but I am not an expert in theory. Experimental Designs Or Analyses: The dataset selection is a bit questionable. The only large-scale dataset is Har70, which has a number of well-known informative features. Based on the hyper-parameter selection in Section 3, such information helps the authors better determine $k$. 
Evaluating SAND on large-scale datasets without a pre-known number of informative features, such as Criteo or ImageNet (possibly compressed), would better showcase SAND's ability in real-world scenarios. The paper also misses baselines from the discrete optimization domain, which can naturally be adapted for feature selection. Supplementary Material: I quickly browsed the supplementary material. Relation To Broader Scientific Literature: One-shot feature selection without post-selection retraining is a very interesting topic that is applicable to the real world. The reviewer finds this point to be the most important contribution of this paper. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: Pros: 1. This work tackles an interesting and applicable problem. 2. The writing of this paper is generally easy to follow. Cons: 1. The experimental setup could be questionable, which limits its contribution and applicability. 2. Certain claims are a bit over-claimed and not unique. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the careful reading of the paper and the valuable comments. We agree that finding the required number of features for a desired performance is a very valuable question to answer, but it is outside the scope of this work. By the sentence “SAND directly controls the number of selected features without hyperparameter tuning”, we mean that once the number of desired features k is given, the algorithm directly gives a solution with that many features. This is in contrast with other approaches where, even when k is given, there is another hyperparameter by which we can push the algorithm toward more or fewer features, and there is no direct control over the number of features selected in one shot. In LASSO-based approaches, stochastic gates, and many other methods, there is no direct control over the number of features. There, we can only achieve the exact (predefined) number of desired features by sweeping over another hyperparameter (usually a regularization term in the loss). In short, for a given k (whether defined by hardware constraints, as explained for the dataset in Appendix C, or by model complexity/size constraints, ...), SAND finds/selects the k features in one shot without a sweep or line search, in contrast with many other methods. The fact that k is a predefined number in the context considered here is mentioned in the abstract, but this might get confusing later on in the paper. We will make sure to clarify this confusion in the final version of the paper by stressing that this is the standard feature selection problem where k is given and we are after selecting the k features (not identifying k itself). We acknowledge that expanding the range of datasets and experimental scenarios can further strengthen the evaluation of a newly proposed method. 
In our work, we deliberately selected datasets that are widely recognized as standard benchmarks for feature selection, ensuring a fair and consistent basis for comparison within a reasonable computational timeframe. Moreover, we compared our approach against state-of-the-art methods, including those acclaimed in the literature, such as the recent work by Google on sequential attention feature selection. In addition to these benchmarks, we introduced a novel real-world dataset (detailed in Appendix C) to demonstrate the practical applicability of our method. We believe that the breadth of datasets and competitive methods included in our study represents one of the most comprehensive evaluations in the current literature. This selection was carefully designed to balance thoroughness with computational feasibility while ensuring a robust comparison with leading approaches. We appreciate your feedback and hope that the clarifications provided have been helpful. Should you need any further details during the rebuttal phase, please feel free to reach out, and we kindly invite you to reconsider our work in light of these explanations. --- Rebuttal Comment 1.1: Comment: Thanks for the feedback. I will maintain my score. Good luck
Summary: This paper proposes SAND (Stochastic Additive Noise Decoupling), a method for feature selection that avoids adding any explicit loss or regularization term. Instead, it uses a simple noise-injection mechanism where each input feature is blended with noise according to a learnable gating parameter $a$. The gating vector is constrained via an $\ell_\alpha$-norm, and features with higher $a_i$ values are preserved with less noise. The method is evaluated across several datasets and shows performance that is comparable to existing feature selection approaches. Claims And Evidence: Most claims are supported by experiments and theoretical reasoning. However, an important gap is the lack of discussion around the training behavior of $a_i$ values. While the paper shows that optimal solutions lie in $[0,1]$, there are no guarantees or constraints during training that enforce this. Since the model is trained with gradient-based optimization, it's unclear how $a_i$ values are kept within $[0,1]$ in practice. This should be clarified, especially since negative or $>1$ values could affect the noise interpolation behavior. Methods And Evaluation Criteria: Yes, the methods and evaluation criterion are appropriate. The datasets and architectures used are standard, and the comparisons to existing feature selection methods are relevant. Theoretical Claims: I have verified the derivation provided for the linear regression case, where the authors show that adding the SAND layer is equivalent to introducing a term in the loss function that promotes the selection of $k$ features. The steps and conclusions appear to be correct and are consistent with the behavior expected from the formulation. Experimental Designs Or Analyses: The experiments are straightforward and use commonly adopted architectures and datasets. Supplementary Material: Yes, the supplementary material covers more empirical evaluations. No issues noted. 
Relation To Broader Scientific Literature: The work is related to prior methods in differentiable feature selection, such as stochastic gates and other gating mechanisms that involve regularization. The key difference is that SAND avoids adding terms to the loss and instead relies on noise injection and norm constraints, making it a simpler alternative. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The approach is simple and can be easily integrated into standard training pipelines. 2. Avoids the need for tuning additional loss weights or regularization terms. Weaknesses: 1. It is unclear how the model ensures that $a_i \in [0,1]$ during training. The paper relies on theoretical reasoning that such a solution exists, but does not explain how it's maintained during optimization. Since gradient updates can move parameters outside this range, it’s important to address this empirically or with a constraint mechanism. 2. Empirical comparison to stochastic gates would make the evaluation more complete, as they also perform embedded feature selection using a gating mechanism, but through loss regularization. This would help clarify how SAND compares in practice and in formulation. Other Comments Or Suggestions: N/A Questions For Authors: 1. Are the $a_i$ values explicitly constrained or reparameterized during training to ensure they stay within $[0,1]$? If not, how is it ensured that values do not become negative or exceed 1 during optimization? 2. How does SAND compare to stochastic gates in practice, especially since both aim to select features using differentiable mechanisms? Including this comparison would help contextualize the method. 3. An analysis of the stability or sparsity of the learned $a$ vectors across different runs or initializations could inform the robustness of the approach. Including such an analysis would enhance the completeness of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the careful reading of the paper and the valuable comments. Concerning the first question, as mentioned by the reviewer, mathematically, the optimal $a_i$ is guaranteed to be between 0 and 1. In the code, to be on the safe side, we always clip the values between 0 and 1 at each iteration when the vector $a$ is normalized. Since the reviewer has raised this as a concern, we will add it to the text of the paper as well. Concerning the second question, we tried to compare our method with other state-of-the-art methods, some of which were proposed very recently. Furthermore, a quick comparison was already done with other methods, including stochastic gates and LassoNet, as mentioned in footnote 3 (page 6, lines 326-328). However, this comparison was not quantitatively elaborated in the current version of the paper. We agree that a direct quantitative comparison with the stochastic gates method might be useful, given the similarity with SAND (note that SAND is much simpler, with no need for loss function alteration or a hyperparameter search for k as in stochastic gates). To this end, we have now included stochastic gates (STG) in our previously shared anonymized code repo (https://anonymous.4open.science/r/SAND-6BB1), which allows us to compare it with SAND and the 4 other methods used in our paper for benchmarking. Furthermore, the readme file of the repo now has a table comparing SAND and STG on 5 classification datasets and one regression dataset. SAND outperforms STG on 3 classification datasets and on the regression dataset (lower MAE is better). Note that for the 2 datasets on which SAND is worse, SAND was already not presented as the top performer (in terms of accuracy) in the current version of the paper. These 6 datasets were chosen for their moderate size (less than 70k samples) and for fast experimentation in this rebuttal. 
This is because STG needs computationally costly fine-grained tuning to select the specified number of features. In the final version of the paper, we can include the comparison with STG on all 9 datasets. For the third question, a quick robustness/consistency experiment was done in Table 5 of Appendix B (supplementary material). The experiment was purposely done on one dataset with known correct features and on which SAND performs the worst compared to other methods (as explained in the appendix). Nevertheless, SAND shows high consistency in choosing the correct features: SAND has better consistency and more accurate feature selection than the two methods that seem to outperform it in terms of classification accuracy (note that SAND is trained with 33% fewer epochs than the other methods since it does not require post-selection training; that might be the reason it appears to underperform in some scenarios, as explained in the appendix). Such robustness analysis can indeed be extended to other datasets (even if the correct features are not known a priori) to track and confirm the robustness/consistency of SAND. We can run the experiment more thoroughly and add it to the supplementary material (or the paper, should we have enough space). Thanks for the productive suggestions. We hope these clarifications prove useful, and please feel free to reach out during the rebuttal phase if further discussion is needed; we kindly ask that you reconsider your evaluation in light of these explanations. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Your answers addressed my questions, so I will update my score to accept.
Summary: The key contribution of this paper is a method for feature selection during the training of a neural network. A key feature of this method is that it performs feature selection in a very simple way, without substantive modifications to the architecture itself, while still retaining good performance. The main idea is to blend each feature with trainable noise, which forces the feature gains to cluster by the end of the training process. Claims And Evidence: I would say so; the paper feels well written and thought through. The examples chosen are perhaps a bit small-scale but still provide an illustration of the efficacy of the method. Methods And Evaluation Criteria: The methods and the selected data sets used in the evaluation are appropriate. Theoretical Claims: I did. The authors presented a nice illustration with linear regression showing how their proposed method effectively promotes the selection of six features. Experimental Designs Or Analyses: I did. I don't think there are any issues with the experiments as presented. Supplementary Material: I had a look at the supplementary material; it presents some additional details on the settings used for the computations, which does not change my view of the paper. Relation To Broader Scientific Literature: I think in the crowded and extensively studied problem of feature selection, the authors managed, to the best of my knowledge, to make a nice contribution to the literature. Essential References Not Discussed: Not to the best of my knowledge. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Is it possible to make k a learnable parameter, somehow? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your careful reading of our paper and your kind compliments. Regarding your question about a learnable k, what comes to mind for the moment is the simple idea of sweeping over k and seeing where the loss exhibits a drastic change.
GaussMarker: Robust Dual-Domain Watermark for Diffusion Models
Accept (poster)
Summary: This paper introduces GaussMarker, the first dual-domain watermarking technique tailored for diffusion models. The authors propose a novel dual-domain watermarking scheme that is designed for diffusion models without requiring any fine-tuning, while still achieving strong robustness. The authors also develop GNR, which is trained independently of the diffusion models. This component notably enhances the watermark's resilience against rotation and cropping attacks. Extensive experiments across three Stable Diffusion models and eight types of image distortions demonstrate that GaussMarker outperforms existing methods in terms of true positive rate and bit accuracy. Claims And Evidence: The claims are supported by experiments. Methods And Evaluation Criteria: The method includes two parts: the first is a dual-domain watermark encoder and the second is a GNR for improving watermark robustness. However, the assumption that the latent inverted from $\mathcal{T}(x^{s, f})$ approximates $\mathcal{T}(z_T^{s, f})$ might not hold in cases where the watermarked images differ from the source images. Theoretical Claims: The theoretical claims are correct. Experimental Designs Or Analyses: Ablation experiments are conducted, and various attack methods are considered when evaluating the approach. However, in Table 1, it seems that the proposed method is more robust than other watermarking methods only under the rotation and C&S attacks. Any comments? Moreover, as the GNR is trained in latent space, does that mean it would be robust to image-space attacks like JPEG and brightness? Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The dual-domain design and score fusion are novel and can improve the robustness of watermark detection. 
Weaknesses: Even though the proposed method does not achieve state-of-the-art performance under all attacks, it achieves competitive results. However, compared with other methods of similar performance (RingID, TreeRing, etc.), the proposed method needs additional training. Other Comments Or Suggestions: Compared with other attacks, it is not clear why applying the Gaussian noise attack in a multi-user application would greatly degrade the accuracy; could the authors comment on this? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
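As a side note, the score-fusion idea credited in the strengths can be pictured with a toy sketch (purely illustrative; the paper's learned Fuser may combine the spatial and frequency detection scores very differently, and the convex-combination rule here is my assumption):

```python
def fused_score(spatial_score, frequency_score, w=0.5):
    # Toy fusion rule: a convex combination of the two detection
    # scores (a stand-in for the paper's learned Fuser).
    return w * spatial_score + (1.0 - w) * frequency_score

# A watermark weakened in one domain can still be detected through
# the other, which is the intuition behind the dual-domain design.
threshold = 0.5
assert fused_score(0.9, 0.2) > threshold  # frequency score degraded
assert fused_score(0.2, 0.9) > threshold  # spatial score degraded
```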
Rebuttal 1: Rebuttal: **Q1:** However, in Table 1, it seems that the proposed method is more robust than other watermarking methods only under the rotation and C&S attacks. Any comments? Moreover, as the GNR is trained in latent space, does that mean it would be robust to image-space attacks like JPEG and brightness? **A1.1:** GaussMarker also achieves better detection performance than Stable Signature, TreeRing and PRC under the S. Noise, G. Noise, and Bright attacks, as presented in Table 1. A major shortcoming of existing methods is that they do not perform well across different image distortions, whereas GaussMarker exhibits relatively consistent robustness across various image distortions. **A1.2:** Yes, GNR can inherit the robustness of the LDM, as presented in the 1st and 4th rows of Table 6. However, since GNR uses an approximate objective, it may sometimes extract a relatively high detection score from an unwatermarked image (low TPR@1%FPR but high bit accuracy in the 4th row and S. Noise line of Table 6). Therefore, it is necessary to fuse the frequency score for a lower FPR. **Q2:** Even though the proposed method does not achieve state-of-the-art performance under all attacks, it achieves competitive results. However, compared with other methods of similar performance (RingID, TreeRing, etc.), the proposed method needs additional training. **A2:** GaussMarker does need additional training, but the cost is minimal and the training is model-agnostic. As presented in the table below, the training of GNR only needs 72 minutes on 1 V100 32G GPU. Moreover, the time overhead incurred by GNR and Fuser during the detection phase is almost negligible. 
| Phase | Inversion | GNR | Fuser |
|-----------|-----------|-------------|----------------|
| Training | - | 72 min | 1.4×10⁻¹ s |
| Detection | 6.5 s | 1.2×10⁻³ s | 1.0×10⁻⁴ s |

**Q3:** Compared with other attacks, it is not clear why applying the Gaussian noise attack in a multi-user application would greatly degrade the accuracy; could the authors comment on this? **A3:** When the number of users increases, higher bit accuracy is required when extracting watermarks from images to ensure high identification accuracy. For example, consider injecting a 3-bit watermark (ignoring unwatermarked images for simplicity). With two users assigned watermarks {0,0,0} and {1,1,1}, an estimated watermark $\tilde{w}$={0,0,1} allows us to correctly identify the first user with just 66.7% bit accuracy. As presented in Table 1, the bit accuracy under Gaussian noise is the most unstable. Under the Gaussian noise attack, GaussMarker only obtains 0.989 TPR@1%FPR, which means that nearly 11 watermarked images do not get high bit accuracy. Therefore, it is greatly affected by the number of users. 
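The identification logic in A3 can be illustrated with a toy sketch (hypothetical users and watermark bits; a nearest-watermark decoder, not the paper's actual detector):

```python
import numpy as np

# Two users with 3-bit watermarks, as in the example in A3.
users = {"user_a": np.array([0, 0, 0]), "user_b": np.array([1, 1, 1])}

def identify(extracted):
    # Assign the extracted watermark to the nearest user in Hamming distance.
    return min(users, key=lambda u: int(np.sum(users[u] != extracted)))

# 66.7% bit accuracy w.r.t. user_a is still enough to identify user_a,
# but with more users the watermarks get closer and errors matter more.
assert identify(np.array([0, 0, 1])) == "user_a"
```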
Summary: This paper presents **GaussMarker**, a novel watermarking method for diffusion models, with the following key contributions: 1. **Dual-Domain Watermarking**: Embeds watermarks in both the spatial and frequency domains of Gaussian noise to enhance robustness. 2. **Gaussian Noise Restorer (GNR)**: A model-independent learnable module that improves watermark detection robustness, especially against geometric attacks like rotation and cropping. 3. **Score Fusion**: Combines detection signals from spatial and frequency domains to improve extraction accuracy. Experimental results demonstrate that **GaussMarker** outperforms existing methods across three versions of Stable Diffusion (V1.4, V2.0, V2.1) under eight common distortions (e.g., rotation, blur, noise, JPEG compression) and four advanced attacks (e.g., VAE compression, diffusion model regeneration, UnMarker attack). Claims And Evidence: The paper makes the following main claims: 1. Existing single-domain watermarking methods lack robustness against geometric attacks (e.g., rotation and cropping), while dual-domain watermarking significantly improves detection performance. 2. **GNR** enhances watermark robustness by restoring corrupted watermark information. 3. Score fusion improves watermark detection accuracy compared to using single-domain detection alone. While the paper provides experimental support for these claims, several issues remain: - **Effectiveness of dual-domain watermarking**: The paper shows that combining spatial and frequency-domain watermarks improves detection accuracy (Table 4), but lacks theoretical analysis explaining why these two approaches complement each other. - **Role of GNR**: The study (Figure 2) demonstrates that GNR improves robustness against rotation and cropping, and Table 4 shows increased detection accuracy. 
However, the learning objective of GNR (Equations 9 and 10) is based on noise signal recovery, without considering more complex image transformations, which might limit its generalization. Additionally, GNR is only optimized for rotation and cropping but its effectiveness against other attacks (e.g., Gaussian noise, blurring) remains unclear. Methods And Evaluation Criteria: The paper employs reasonable evaluation methods: - **Baselines**: Compares against various state-of-the-art watermarking methods, including **Tree-Ring, Gaussian Shading, PRC, LatentTracer** (all tuning-free) and **Stable Signature** (a tuning-based method). - **Datasets**: Evaluates on **Stable Diffusion V1.4, V2.0, V2.1** using **MS-COCO** to generate 512×512 watermarked images. - **Metrics**: - **TPR@1%FPR**: True positive rate at 1% false positive rate, measuring detection performance. - **Bit Accuracy**: Measures the correctness of extracted watermark information. - **FID** and **CLIP-Score**: Assess image quality. Theoretical Claims: The paper’s main theoretical contribution is the introduction of **GNR** for watermark restoration. However, several theoretical gaps remain: **Validity of GNR’s Learning Objective:** The paper assumes that the Gaussian Noise Restorer (GNR) can effectively restore watermarked noise under transformations like rotation and cropping (Equation 10). However, it does not provide formal proof of this claim. Furthermore, while **Tree-Ring** mentions that watermark signals exhibit invariance under geometric transformations, it does not extensively discuss robustness against **non-linear transformations** such as **JPEG compression and blurring**. This raises several critical questions: 1. **Scope of Equation 10**: What specific types of transformations are covered by Equation 10? Does it include JPEG compression and blurring? 
- If **JPEG and blurring** are included, can the authors provide more rigorous theoretical justification or additional experiments to support this claim? - If **JPEG and blurring** are not included, the authors should explicitly state the limitations of Equation 10 and clarify the scope of transformations where GNR is applicable. 2. **Effectiveness of GNR beyond rotation and cropping**: - In **Table 6**, the ablation study suggests that GNR improves watermark robustness across **all distortions**, not just rotation and cropping. - However, the training transformation **T** only includes **rotation and cropping**, which raises concerns about potential overfitting to these specific noise layers. - Typically, deep learning models trained on a limited set of distortions **tend to overfit** to those distortions while exhibiting **weaker generalization** to unseen noise types. - If GNR was only trained on **rotation and cropping**, why does it still improve robustness against **other distortions** (e.g., JPEG compression, blurring)? - Can the authors provide a **rational explanation** for this unexpected gain? Experimental Designs Or Analyses: The experimental design is generally solid, but there are areas for improvement: 1. **Limitations of GNR**: - As mentioned in **Theoretical Claims** 2. **Effectiveness of score fusion**: - The correlation between spatial and frequency-domain scores is not studied. Supplementary Material: I have read all sections in the appendix. Relation To Broader Scientific Literature: The paper situates itself well within the watermarking literature, comparing tuning-free and tuning-based methods. Essential References Not Discussed: Most of the essential references are discussed. Other Strengths And Weaknesses: **Strengths** - **Innovative dual-domain watermarking**: The proposed *GaussMarker* introduces a novel approach by embedding watermarks in both the spatial and frequency domains. 
This idea is well-motivated and aligns with the intuition that leveraging multiple domains can enhance robustness. - **No fine-tuning required**: Unlike tuning-based watermarking methods, *GaussMarker* does not require modifying diffusion model parameters, making it computationally efficient and more practical for real-world deployment. - **Comprehensive experiments**: The paper evaluates *GaussMarker* on multiple versions of Stable Diffusion (V1.4, V2.0, V2.1) and benchmarks against various state-of-the-art watermarking techniques. It tests robustness under eight image distortions and four advanced attacks, demonstrating superior performance. - **Gaussian Noise Restorer (GNR) for improved robustness**: The introduction of GNR helps mitigate the impact of geometric transformations (rotation, cropping), significantly improving detection accuracy in such cases. **Weaknesses** - **Validity of GNR’s Learning Objective:** The paper assumes that the Gaussian Noise Restorer (GNR) can effectively restore watermarked noise under transformations like rotation and cropping (Equation 10). However, it does not provide formal proof of this claim. Furthermore, while **Tree-Ring** mentions that watermark signals exhibit invariance under geometric transformations, it does not extensively discuss robustness against **non-linear transformations** such as **JPEG compression and blurring**. This raises several critical questions: 1. **Scope of Equation 10**: What specific types of transformations are covered by Equation 10? Does it include JPEG compression and blurring? - If **JPEG and blurring** are included, can the authors provide more rigorous theoretical justification or additional experiments to support this claim? - If **JPEG and blurring** are not included, the authors should explicitly state the limitations of Equation 10 and clarify the scope of transformations where GNR is applicable. 2. 
**Effectiveness of GNR beyond rotation and cropping**: - In **Table 6**, the ablation study suggests that GNR improves watermark robustness across **all distortions**, not just rotation and cropping. - However, the training transformation **T** only includes **rotation and cropping**, which raises concerns about potential overfitting to these specific noise layers. - Typically, deep learning models trained on a limited set of distortions **tend to overfit** to those distortions while exhibiting **weaker generalization** to unseen noise types. - If GNR was only trained on **rotation and cropping**, why does it still improve robustness against **other distortions** (e.g., JPEG compression, blurring)? - Can the authors provide a **rational explanation** for this unexpected gain? Other Comments Or Suggestions: See Weaknesses. Questions For Authors: 1. **Scope of Equation 10**: Equation 10 is designed to improve robustness against rotation and cropping. Does it also apply to **non-geometric transformations** like JPEG compression and blurring? If so, can you provide a formal explanation or additional experiments? If not, can you clarify its limitations? 2. **Generalization of GNR**: In Table 6, GNR consistently improves detection across all distortions, despite being trained only on rotation and cropping. Given that deep-learning models often overfit to specific noise patterns, how do you explain this unexpected gain? 3. **Complementarity of spatial and frequency-domain watermarks**: The paper claims that dual-domain embedding improves robustness. Can you provide an analysis or empirical study to justify why spatial and frequency-domain watermarks are complementary? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

**Q1:** Scope of Equation 10: Equation 10 is designed to improve robustness against rotation and cropping. Does it also apply to non-geometric transformations like JPEG compression and blurring? If so, can you provide a formal explanation or additional experiments? If not, can you clarify its limitations?

**A1:** Equation 10 does not apply to common non-geometric transformations in our experiments. We will add this limitation. However, during detection, GNR takes the signal map of Gaussian noise that is estimated by the LDM as input. Therefore, GNR can inherit the robustness of the LDM (the 1st row of Table 6).

**Q2:** Generalization of GNR: In Table 6, GNR consistently improves detection across all distortions, despite being trained only on rotation and cropping. Given that deep-learning models often overfit to specific noise patterns, how do you explain this unexpected gain?

**A2:** Since the objective of GNR is approximate, as presented in Figure 2 (nearly 30%-42%), we use random sign flipping (p=0.35) for GNR training. This prevents GNR from overfitting and ensures its good generalization. Therefore, GNR also enhances the robustness of GaussMarker under other attacks to some extent. We provide additional ablation results (TPR@1%FPR / bit acc. on SD V2.1) in the table below to verify this. With random sign flipping, the average TPR@1%FPR and bit accuracy of GaussMarker increase by 5.7% and 12.1%, respectively. Although GaussMarker gains the main improvement under Rotate and C\&S attacks, the improvement under S. Noise, G. Noise, and Bright is also significant, especially on the bit accuracy.

| Methods | Clean | Rotate | JPEG | C\&S | R. Drop | Blur | S. Noise | G. Noise | Bright | Average |
|--------------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| GaussMarker w/o flipping | 1.000 / 1.000 | 0.587 / 0.513 | 0.990 / 0.982 | 0.980 / 0.540 | 1.000 / 0.963 | 1.000 / 0.999 | 0.999 / 0.912 | 0.988 / 0.934 | 0.916 / 0.967 | 0.940 / 0.868 |
| GaussMarker w/ flipping | 1.000 / 1.000 | 0.997 / 0.998 | 0.996 / 0.997 | 1.000 / 1.000 | 1.000 / 0.963 | 1.000 / 1.000 | 0.999 / 0.991 | 0.989 / 0.968 | 0.993 / 0.989 | 0.997 / 0.990 |

**Q3:** Complementarity of spatial and frequency-domain watermarks: The paper claims that dual-domain embedding improves robustness. Can you provide an analysis or empirical study to justify why spatial and frequency-domain watermarks are complementary?

**A3:** We believe that the effectiveness of combining is built on the theoretical foundation of ensemble learning [a]. When both frequency-domain watermarking and spatial-domain watermarking can achieve precise and diverse detection results, as presented in Table 6, fusing them may lead to more robust detection performance. This has been fully demonstrated by our experimental results.

[a] https://jmlr.org/papers/v24/23-0041.html
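The ensemble intuition behind A3 can be illustrated with a minimal sketch. The `fuse`/`detect` functions, weights, and threshold below are hypothetical illustrations; the paper's actual Fuser is a learned module, not a fixed weighted sum:

```python
# Toy dual-score fusion sketch (hypothetical weights/threshold): combine a
# spatial bit-accuracy score and a frequency-domain score into one statistic.

def fuse(spatial_score, freq_score, w=(0.5, 0.5), bias=0.0):
    """Weighted fusion of the two per-image detection scores."""
    return w[0] * spatial_score + w[1] * freq_score + bias

def detect(spatial_score, freq_score, threshold=0.75):
    """Flag an image as watermarked when the fused score clears the threshold."""
    return fuse(spatial_score, freq_score) >= threshold

# An image whose spatial score was degraded by an attack can still be flagged
# when the frequency score stays high -- the ensemble intuition from A3.
assert detect(0.55, 0.98)      # spatial degraded, frequency intact
assert not detect(0.50, 0.52)  # unwatermarked: both scores near chance
```

The same structure would let the fused detector also reject images where one domain alone produces spurious high scores, which is the FPR-reduction role the rebuttal attributes to the frequency score.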
Summary: This paper proposed **GaussMarker**, which embeds watermarks into the noise vector of diffusion models within both the spatial and frequency domains. To enhance the detection robustness of watermarks, the authors propose a learnable Gaussian Noise Restorer (GNR) that is capable of restoring the distorted noise vector, especially under rotation and cropping attacks. Experiments show that **GaussMarker** is robust against most common image transformations and four advanced watermarking removal attacks.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: This paper compares several updated watermarking strategies, using standard metrics for detection (TPR and Bit Accuracy) and quality evaluation (FID and CLIP-score) and widely used prompt sets from MS-COCO for image generation. All of them make sense.

Theoretical Claims: No theoretical claims in this paper.

Experimental Designs Or Analyses: Yes, most of the experimental designs are valid and make sense. Below, I raise several questions and points of confusion about the paper.
1. Why does **GaussMarker** largely outperform other methods when evaluated against the regeneration attack? Can you explain the underlying mechanism?
2. How do you evaluate the identification accuracy with multiple users? Can you explain the experimental details?
3. For rotation attacks, is the GNR capable of generalizing to other unseen rotation degrees (angles not used during training the GNR)? My concern is that since CNN-based networks face challenges in handling image rotations, how does GNR overcome this?

Supplementary Material: I've checked all the supplementary material.

Relation To Broader Scientific Literature: **GaussMarker** can be used to protect the intellectual property of diffusion-based generative models and safeguard their generated images from potential misuse.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:
**Strengths**:
1. The proposed **GaussMarker** is more robust than most advanced baselines, making it an effective training-free watermarking strategy.
2. The motivation is clear and straightforward.

**Weaknesses**:
1. The formulation and method description of this paper need further proofreading and justification. For instance, $s$ is used to represent both the signal map and the spatial space, and sometimes they are mixed as $s^{s,f}$; $s^{s,f}$ is less rigorously defined, as the signal map is only introduced in the **s**patial space but not in the **f**requency space (please point out my mistakes if I misunderstand the method).
2. It is confusing whether GNR requires retraining for each signal map. If so, the scalability of GNR is reduced, and how does one select which GNR to use when multiple users generate images from the same LDM?
3. The idea of GNR is similar to the robustly trained watermark decoder (in Stable Signature) and is only designed to enhance the robustness of rotation and cropping.

Other Comments Or Suggestions: The method description needs further proofreading to make the paper more fluent and easy to follow.

Questions For Authors:
1. How do you derive the Equation in lines 222-224 (needs index) from Equation 10?
2. In Table 4, why does SD V2.1's TPR performance get a significant drop compared to SD V2.0 in the last two rows (from 0.997 -> 0.875), while the bit acc. remains the same? It seems that Spatial + GNR is sufficient to get a robust watermarking performance.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

**Q1:** Why does GaussMarker largely outperform other methods when evaluated against the regeneration attack?

**A1:** This advantage stems from GNR, as shown in the SD V2.1 results presented in the table below. Regeneration using diffusion models mainly involves semantic editing, which may also include spatial-like transformations. Notably, since GNR is trained using the composition of rotation and cropping with random sign flipping (p=0.35) for preventing overfitting, it can learn the invariance to various spatial-like transformations. Therefore, GNR enhances GaussMarker's ability to resist spatial-like transformations introduced by regeneration attacks.

| Spatial | Frequency | GNR | TPR@1%FPR | Bit Acc. |
|---------|-----------|-----|-----------|----------|
| ✓ | | | 0.008 | 0.524 |
| | ✓ | | 0.295 | - |
| ✓ | ✓ | | 0.345 | 0.524 |
| ✓ | | ✓ | 0.660 | 0.865 |
| ✓ | ✓ | ✓ | 0.667 | 0.865 |

**Q2:** How do you evaluate the identification accuracy with multiple users?

**A2:** We adopt an approximate algorithm as detailed in Appendix C, following Gaussian Shading. Specifically, we first compute a theoretical threshold $\tau$ based on the predetermined number of users, $N$, and the expected FPR. Given a watermarked image, if its bit accuracy exceeds this threshold, we consider that the user generating it will be correctly identified among the $N$ users. A practical multi-user identification extension of GaussMarker is provided in Response A5.

**Q3:** For rotation attacks, is the GNR capable of generalizing to other unseen rotation degrees (angles not used during training the GNR)? My concern is that since CNN-based networks face challenges in handling image rotations, how does GNR overcome this?

**A3:** GNR is trained with dense rotation angles randomly sampled from (-180$^{\circ}$, 180$^{\circ}$), and can converge easily. Therefore, even with unseen rotation degrees, they are likely to be approximated with minor errors by nearby angles.
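The threshold computation sketched in A2 can be written out as a rough stdlib sketch. This assumes random bit matches follow Binomial(l, 0.5) and applies a union bound over the $N$ users; the exact procedure in the paper's Appendix C (following Gaussian Shading) may differ:

```python
from math import comb

def bitacc_threshold(l, n_users, fpr):
    """Smallest bit-accuracy threshold tau such that the probability of a
    random l-bit extraction matching ANY of n_users watermarks at >= tau
    stays below the target FPR. Random matches ~ Binomial(l, 0.5)."""
    def tail(k):
        # P[X >= k] for X ~ Binomial(l, 0.5)
        return sum(comb(l, i) for i in range(k, l + 1)) / 2**l
    for k in range(l // 2, l + 1):
        if n_users * tail(k) <= fpr:  # union bound over all users
            return k / l
    return 1.0

# With l = 256 bits (the setting in Figure 3(b)), a single-user test at
# 1% FPR needs a bit-accuracy threshold only moderately above chance.
tau_1 = bitacc_threshold(256, 1, 0.01)
assert 0.5 < tau_1 < 0.7
# More users force a stricter threshold, matching A6.2's observation that
# identification among many users demands higher extracted bit accuracy.
assert bitacc_threshold(256, 1000, 0.01) >= tau_1
```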
**Q4:** The formulation and method description of this paper need further proofreading and justification.

**A4:** We will improve the formulation and method description.

**Q5:** It is confusing whether GNR requires retraining for each signal map. If so, the scalability of GNR is reduced, and how to select which GNR to use when multiple users generate images from the same LDM?

**A5:** When we have multiple users, we also only need to train one GNR. Each LDM only has one unique model-watermark $w \in$ {0,1}$^l$ (or signal map) with its corresponding GNR. Each user will be assigned a unique key $k \in$ {0,1}$^l$ and a unique user-watermark $w_u \in$ {0,1}$^l$ with $w_u = \text{XOR}(w,k)$. We use the estimated $\tilde{w}$ to estimate the user-watermark $\tilde{w}_u = \text{XOR}(\tilde{w},k)$ for calculating the bit accuracy. This easy extension is similar to Gaussian Shading, so we omit that detail from the paper. We can add it if needed.

**Q6:** The idea of GNR is similar to the robustly trained watermark decoder (in Stable Signature) and is only designed to enhance the robustness of rotation and cropping.

**A6:** The core ideas are similar, but training GNR is much more efficient. GNR does not require additional datasets for training, while Stable Signature needs to train its watermark decoder and VAE on the COCO dataset. Besides, in addition to rotation and cropping, GNR also includes random sign flipping and is robust to many spatial-like transformations in the pixel space, as detailed in Response A1.

**Q7:** How do you derive the Equation in lines 222-224 (needs index) from Equation 10?

**A7:** In Equation 10, GNR takes Gaussian noise as input and outputs its restored version. However, as shown in Figure 2, we find that the approximate invariance exists only in the signal space. Therefore, we redefine GNR to take the signal map of Gaussian noise as input and output the restored signal map, as formalized in the Equation in lines 222-224.
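The XOR-based multi-user extension in A5 can be sketched directly. The 8-bit values and user names below are toy illustrations (the paper uses $l$-bit watermarks, e.g., $l=256$); only the XOR relations come from the rebuttal:

```python
# Sketch of A5: one model-watermark w per LDM, one secret key k per user;
# the embedded user-watermark is w_u = XOR(w, k), and detection undoes the
# key to recover (an estimate of) w.

def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

def bit_acc(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

w = [1, 0, 1, 1, 0, 0, 1, 0]        # model-watermark of the LDM (toy l=8)
k_alice = [0, 1, 1, 0, 1, 0, 0, 1]  # per-user keys
k_bob   = [1, 1, 0, 0, 0, 1, 1, 0]

# Alice generates an image: her user-watermark XOR(w, k_alice) is embedded.
embedded = xor_bits(w, k_alice)

# Detection extracts an estimate w_tilde of the embedded bits (perfect
# extraction assumed here), then undoes each candidate user's key and
# scores the result against the model-watermark w.
w_tilde = list(embedded)
acc_alice = bit_acc(xor_bits(w_tilde, k_alice), w)  # correct key -> 1.0
acc_bob = bit_acc(xor_bits(w_tilde, k_bob), w)      # wrong key -> near chance
assert acc_alice == 1.0
assert acc_bob < acc_alice
```

Because XOR is involutive, the correct key recovers $w$ exactly, while a wrong key yields roughly chance-level bit accuracy, which is what makes user identification by maximum bit accuracy work.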
**Q8:** In Table 4, why does SD V2.1's TPR performance get a significant drop compared to SD V2.0 in the last two rows (from 0.997 $\rightarrow$ 0.875), while the bit acc. remains the same? It seems that Spatial + GNR is sufficient to get a robust watermarking performance.

**A8:** This is because Spatial+GNR sometimes extracts high bit accuracy from unwatermarked images, which forces a higher detection threshold to maintain the 1% FPR and thus lowers the TPR@1%FPR, especially on stronger models (SD V2.1 is a further fine-tuned version of SD V2.0). Note that bit accuracy is only calculated from watermarked images, so it is less affected. The frequency score helps to reduce the FPR through score fusion.
Summary: The paper introduces GaussMarker, a novel semantic watermark technique based on diffusion models. Different from previous works, GaussMarker adds watermarks in both the pixel and frequency domains of images. During detection, GaussMarker trains two additional components: 1. a Gaussian Noise Restorer (GNR) for restoring from geometric attacks and 2. a Fuser for fusing the detection scores between pixel-space and frequency-space detection. Experiment results demonstrate the superiority of GaussMarker against geometric attacks (Rotate, Crop&Scale).

Claims And Evidence: The claim "GaussMarker is a tuning-free watermark" is improper. Although GaussMarker doesn't need to tune the diffusion model, it still needs to train the Gaussian Noise Restorer and Fuser. Thus it is unfair to categorize GaussMarker as a tuning-free watermark.

Methods And Evaluation Criteria: Yes, the proposed method makes sense for the problem.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses: The paper experiments with the robustness and generation fidelity of the proposed method on three different versions of Stable Diffusion and various attacks. Furthermore, it contains ablation studies as well as experiments on multi-user cases. The experiments are thorough.

Supplementary Material: Yes, I have reviewed all parts of the supplementary material.

Relation To Broader Scientific Literature: The key contribution of this paper, watermarking, is related to AI safety and copyright protection problems.

Essential References Not Discussed: The paper has a thorough discussion of previous works.

Other Strengths And Weaknesses:
## Strength
1. By incorporating the two additional components (GNR and Fuser) and adding watermarks in dual domains, GaussMarker achieves much better robustness. This contribution is clearly illustrated in Table 4.

## Weakness
1. The introduction of additional components causes more concerns:
a. Increased computation cost: How much additional computation cost is introduced by the GNR and the Fuser for both training and detection?
b. Effect on generation fidelity: As shown in Table 2, compared with the most direct baseline Tree-Ring, GaussMarker achieves slightly better FID and a slightly worse CLIP Score. But the effect of the spatial watermark and the frequency watermark on fidelity is still unclear. Similar to Table 4, the paper should have an ablation study on this.
c. Generalizability of these two components: does the paper train three GNRs and Fusers for the three Stable Diffusions, or one for all of them?

Other Comments Or Suggestions:
1. There are some problems in Figure 3: results for some attacks, such as Rotate and C&S, cannot be visually distinguished. The authors need a better illustration.

Questions For Authors:
1. For Table 4, comparing the 4th and 5th rows, including the frequency watermark significantly improves TPR@1%FPR for V2.1. But this frequency watermark is not important for V1.4 and V2.0. Could the authors discuss the difference between different versions of Stable Diffusion and why the frequency watermark only matters for V2.1?
2. What's the relation between the watermark capacity and the number of users in Figure 3? If the watermark capacity is 2^10, is the number of users 2^(2^10)?
3. In Figure 3, why does increasing the watermark capacity make the watermark less vulnerable to Random Drop? Why does increasing the number of users make the watermark less vulnerable to Gaussian noise? Is there any discussion?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

**Q1:** About "tuning-free".

**A1:** We consider GaussMarker to be a tuning-free method. The term "tuning-free" implies that SDs cannot be fine-tuned due to computational costs and that the watermarking method can be attached to the model in a plug-and-play manner without touching the model weights. GaussMarker avoids fine-tuning SDs and instead trains only two small modules, which require minimal computational resources, as elaborated in Response A7.

**Q2:** Generalizability of these two components.

**A2:** We train one GNR for all SDs due to its model-independent characteristics. We train a Fuser for each SD, as the training data for the Fuser is generated from the corresponding SD.

**Q3:** Some problems in Figure 3.

**A3:** Some curves overlap at y=1. We will improve the illustration.

**Q4:** Why does the frequency watermark only matter for V2.1?

**A4:** The frequency watermark matters more for stronger SDs **when using GNR**. In our experiments, we find that GNR occasionally extracts a high bit accuracy from unwatermarked images, especially on stronger SDs. This results in the need for a higher threshold to maintain the expected 1% FPR, which in turn lowers the TPR@1%FPR. For instance, in Table 6, GNR+Spatial demonstrates better bit accuracy but worse TPR@1%FPR compared to Spatial under Gaussian Noise and Salt-and-Pepper Noise on SD V2.1. This significant drop does not occur with SD V1.4 and V2.0. Given the stable FPR of the frequency watermark [a], the frequency score becomes crucial for achieving a lower FPR through score fusion, especially on SD V2.1.

[a] https://arxiv.org/pdf/2305.20030

**Q5:** The relation between watermark capacity and the number of users in Figure 3.

**A5:** The watermark capacity and the number of users in Figure 3 are not directly related. The watermark capacity, denoted by $l$, refers to the number of bits available for watermarking. The number of users that can be assigned unique watermarks must be much less than $2^l$, and a larger $l$ usually supports more users. In Figure 3(b), with $l=256$, we assess how accurately GaussMarker can identify the correct user as the number of users grows.

**Q6:** In Figure 3, why does increasing the watermark capacity make the watermark less vulnerable to Random Drop? Why does increasing the number of users make the watermark less vulnerable to Gaussian noise?

**A6.1:** Maybe you want to use 'more' instead of 'less'. As shown in Equation 5, we use a voting strategy (average pooling) to estimate each bit in the watermark $\tilde{\omega} \in$ {0,1}$^l$ based on $cwh/l$ signs in the signal map $\tilde{s} \in$ {0,1}$^{cwh}$. When $l$ increases, fewer signs can be used for voting, which means that sign estimation needs to be more accurate for estimating $\tilde{\omega}$. Random Drop masks 80% of the image with black pixels, significantly impacting sign accuracy. Therefore, the bit accuracy under this attack is most affected by the watermark capacity.

**A6.2:** When the number of users increases, higher bit accuracy is required in extracting watermarks from images to ensure high identification accuracy. For example, consider injecting a 3-bit watermark (ignoring unwatermarked images for simplicity). With two users assigned watermarks {0,0,0} and {1,1,1}, an estimated watermark $\tilde{w}$={0,0,1} allows us to correctly identify the first user with just 66.7% bit accuracy. As presented in Table 1, the bit accuracy under Gaussian Noise is the most unstable. Therefore, it is affected by the number of users the most. Note that we set the watermark capacity to $l=256$ in this experiment; thus, the performance under Random Drop is great with its 1.000 TPR@1%FPR. However, under the Gaussian Noise attack, GaussMarker only obtains 0.989 TPR@1%FPR, which means that nearly 11 watermarked images don't get high bit accuracy.

**Q7:** Computation cost.
**A7:** The cost is minimal, as presented in the table below. These experiments are conducted on 1 V100 32G GPU.

| Phase | Inversion | GNR | Fuser |
|-----------|-----------|-------------|----------------|
| Training | - | 72 min | 1.4×10⁻¹ s |
| Detection | 6.5 s | 1.2×10⁻³ s | 1.0×10⁻⁴ s |

**Q8:** Ablation study on fidelity.

**A8:** The ablation results are shown in the table below. Both the spatial and frequency watermarks help preserve the original CLIP Score. For FID, the spatial watermark performs better than the frequency one. Combining both methods often requires more editing, leading to slightly worse CLIP Scores and FID for the dual-domain watermark compared to single-domain approaches. Improving the visual quality of watermarked images using GaussMarker could be a direction for future work.

| Spatial | Frequency | Ave. CLIP Score↑ | Ave. FID↓ |
|---------|-----------|--------|--------|
| | | 0.3567 | 24.89 |
| ✓ | | 0.3568 | 24.36 |
| | ✓ | 0.3568 | 24.78 |
| ✓ | ✓ | 0.3545 | 24.85 |
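The average-pooling voting strategy described in A6.1 can be sketched as follows. The toy sizes are illustrative (the real signal map has $cwh$ entries); only the "split into $l$ groups and majority-vote each group" idea comes from the rebuttal:

```python
# Sketch of the voting recovery from A6.1 (Equation 5 style): the signal
# map's binary signs are split into l equal groups; each watermark bit is
# the majority sign of its group (ties resolve to 0 here).

def recover_bits(signal_map, l):
    g = len(signal_map) // l  # signs available per watermark bit
    return [1 if sum(signal_map[i * g:(i + 1) * g]) * 2 > g else 0
            for i in range(l)]

# 8 signs, l = 2: four signs per vote, so one corrupted sign per group
# is simply outvoted.
noisy = [1, 1, 0, 1, 0, 0, 1, 0]
assert recover_bits(noisy, 2) == [1, 0]
# Larger l means fewer signs per vote, so the same corrupted signs now
# change the outcome -- the capacity/robustness trade-off from A6.1.
assert recover_bits(noisy, 4) == [1, 0, 0, 0]
```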
FicGCN: Unveiling the Homomorphic Encryption Efficiency from Irregular Graph Convolutional Networks
Accept (poster)
Summary: This paper proposes FicGCN, a framework for efficient privacy-preserving inference of Graph Convolutional Networks (GCNs) using Homomorphic Encryption, by using (1) a latency-aware packing scheme that optimally balances aggregation and combination operations based on data dimensions and model structure; (2) a Sparse Intra-Ciphertext Aggregation method that minimizes rotation overhead in aggregation operations by leveraging graph sparsity; (3) a region-based node reordering technique that reduces computational overhead by optimizing the local adjacency structure.

Claims And Evidence: The key claims appear to be well-supported by (1) detailed theoretical analysis and derivations of the optimization methods; (2) extensive experimental results across multiple datasets showing consistent improvements. However, a formal privacy analysis is missing for the security properties claimed.

Methods And Evaluation Criteria: Same as above.

Theoretical Claims: Detailed theoretical analysis and derivations of the optimization methods, but a formal privacy analysis is missing for the security properties claimed.

Experimental Designs Or Analyses: The experimental design seems to be sufficient. The analyses are generally valid, though they could benefit from additional privacy analysis, either from empirical attack simulation or statistical analysis such as privacy sensitivity calculation.

Supplementary Material: I reviewed all of the supplementary materials, specifically focusing on the packing method.

Relation To Broader Scientific Literature: This work proposes a specific optimization for introducing HE into GCNs for privacy-preserving ML. However, some key related work is still missing, which raises concerns about the novelty and improvement claims in this paper. The missing related work is listed below.

Essential References Not Discussed:
1. Zhang, Chengliang, et al. "BatchCrypt: Efficient homomorphic encryption for Cross-Silo federated learning." 2020 USENIX Annual Technical Conference (USENIX ATC 20). 2020.
2. Chen, Tianyu, et al. "THE-X: Privacy-preserving transformer inference with homomorphic encryption." arXiv preprint arXiv:2206.00216 (2022).
3. Jin, Weizhao, et al. "FedML-HE: An efficient homomorphic-encryption-based privacy-preserving federated learning system." arXiv preprint arXiv:2303.10837 (2023).

Other Strengths And Weaknesses: Some technical details in the appendix could be better integrated into the main text to give a better flow to the explanation of the proposed idea.

Other Comments Or Suggestions: I would love to see a formal, rigorous security proof. For example, use UC-security to provide a privacy analysis.

Questions For Authors:
1. How can the missing related work listed above further help justify the claims in the paper? Can any techniques in these papers be used to augment this work? For example, non-linear approximation and selective encryption?
2. How does key management work in this work?
3. How does the system handle a client-server collusion threat model?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

**1. Security proof of FicGCN**

Thanks for your comments. FicGCN fully adopts CryptoGCN's[3] threat model and privacy assumptions, which have been rigorously proven. We utilize the CKKS scheme, whose security is guaranteed by the hardness of the RLWE problem[1], ensuring polynomial-time indistinguishability. During inference, we employ only CKKS-supported secure homomorphic operations (PMult, CMult, Rot) and the ciphertext packings. Since CKKS inherently supports arbitrary message vector ordering in its plaintext polynomial encoding[2] and homomorphic operations, FicGCN preserves CKKS's original security guarantees. In the setup stage, we select homomorphic encryption parameters[2] to achieve 128-bit security—meaning any successful attack would require at least $2^{128}$ basic operations.

**2. Analysis of the impact of supplementary references on FicGCN**

The techniques from the referenced works are orthogonal to our research and do not directly contribute to it. However, they can be easily integrated with our approach in their intended applications to enhance overall end-to-end performance. To illustrate, FedML-HE enables clients to strategically compromise certain node features' security via Selective Encryption, converting partial computations from ciphertext to plaintext for reduced latency. THE-X primarily optimizes nonlinear layers through polynomial approximation—an objective orthogonal to FicGCN's linear-layer optimizations. Similarly, BatchCrypt's quantization uniformly accelerates all layers, constituting another independent optimization dimension. While all 3 studies offer potential improvements for GCN inference, their techniques operate on distinct axes from our approach. We therefore incorporate them as valuable references while emphasizing FicGCN's unique contributions.

**3. Key management in FicGCN**

In FicGCN, we exclusively employ FHE to execute encrypted inference for ensuring data security. This framework requires three distinct keys: the public key (pk), private key (sk), and evaluation key (evk).
- Pk: Used for data encryption/decryption and enabling homomorphic operations, thus being publicly shared between both parties.
- Sk: Also involved in encryption/decryption, it is strictly client-exclusive to prevent server-side privacy breaches. All decryption occurs solely on the client side to eliminate man-in-the-middle risks.
- Evk: Generated by the client and disclosed to the server, it facilitates server-side HE computations (e.g., Rotation, CMult).

**4. Analysis of collusion threat model in FicGCN**

FicGCN's application scenario involves strictly one client and one server, following the common FHE assumption[3], where the client possesses all node features and refuses to disclose any data to the server. This fundamentally contradicts the prerequisite for collusion models defined in [4]—which require at least three participating parties capable of covert coordination to violate protocol security. Under our key management framework, **FHE inherently eliminates collusion threats** because:
(1) Only two non-cooperating parties exist
(2) Even if a server supports multiple colluding clients, they cannot obtain honest clients' private keys
(3) Security reduces solely to RLWE hardness

Reference:

[1] Oded Regev. 2005. On lattices, learning with errors, random linear codes, and cryptography. In Proceedings of the thirty-seventh annual ACM symposium on Theory of Computing (STOC '05).

[2] Cheon, J.H., Kim, A., Kim, M., Song, Y. (2017). Homomorphic Encryption for Arithmetic of Approximate Numbers. In: Takagi, T., Peyrin, T. (eds) Advances in Cryptology – ASIACRYPT 2017.

[3] Ran Ran, Nuo Xu, Wei Wang, Gang Quan, Jieming Yin, and Wujie Wen. 2022. CryptoGCN: fast and scalable homomorphically encrypted graph convolutional network inference. In Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22).

[4] Goldreich, O. (2004). Foundations of Cryptography II. Cambridge University Press.
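The key-separation workflow in point 3 can be illustrated with a toy additively homomorphic scheme: the sketch below is textbook Paillier with insecurely small parameters, not the CKKS scheme FicGCN actually uses (which is RLWE-based and additionally needs the evaluation key evk for rotations, omitted here). It only demonstrates the principle that the server computes on ciphertexts with public material while decryption stays client-exclusive:

```python
# Toy Paillier sketch (INSECURE parameter sizes, illustration only).
import math
import random

p, q = 293, 433                       # toy primes -- far below secure sizes
n = p * q
n2 = n * n
g = n + 1                             # standard generator choice for Paillier
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1): private
mu = pow(lam, -1, n)                  # modular inverse of lam: private

def encrypt(m):
    """Anyone: needs only the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Client only: needs the private key (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Server side: adds two encrypted values without ever seeing them --
# homomorphic addition is ciphertext multiplication mod n^2.
c_sum = (encrypt(12) * encrypt(30)) % n2
assert decrypt(c_sum) == 42
```

In the actual CKKS setting the same separation holds, with the extra twist that rotations and relinearization require the client-generated evk on the server, which is why the rebuttal lists it as a third key.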
Summary: The paper introduces FicGCN, a framework designed to accelerate private Graph Convolutional Network inference, with three key innovations. First, an optimal layer-wise aggregation scheduling strategy is presented to accelerate inference for data of various scales. Second, Sparse Intra-Ciphertext Aggregation (SpIntra-CA) is introduced to leverage GCN sparsity to minimize the overhead associated with rotations in ciphertexts during aggregation operations. Third, Node Order Optimization (NOO) is proposed to minimize conflicts and improve computation efficiency by reordering nodes based on the adjacency structure. FicGCN is evaluated on several popular datasets, and the results show that FicGCN achieved the best performance across all tested datasets, with up to a 4.10× improvement over the latest design.

## update after rebuttal

Based on the authors' rebuttal, I have revised my original score from 2 to 3, since the rebuttal resolved my major concerns on the motivation of the NOO technique.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: No. I have one question on the method proposed in this paper. It seems this paper mainly focuses on optimizing the nodes' order so that adjacent rows in the feature map represent neighbor nodes in the graph, which could then benefit the aggregation process. However, since the graph topology (the adjacency matrix A) is a public input to both parties, they can first negotiate and determine the optimal packing method before private inference. Moreover, Ciphertext-Plaintext computation in the combination process (the XW part) would not change the node order. So I think the need to reorder the nodes' packing during runtime requires further justification. (Q1)

Theoretical Claims: Yes. I have one question on the correctness of CPOO's theoretical overhead (Page 5).
The original paper claims the worst case requires (n-1)logN^2 rotations, but once a conflict happens, the number of rotations will double, as this will split the original ciphertext into two ciphertexts, resulting in something like 2^logN = N ciphertexts instead of logN^2. Therefore, a detailed proof for this worst-case scenario is needed. Moreover, I could not find the meaning of n, and I suggest the authors make a table to better illustrate the meaning of the notations. (Q2)

Experimental Designs Or Analyses: Yes.

Supplementary Material: No, there is no supplementary material for this paper.

Relation To Broader Scientific Literature: This paper mainly focuses on private GCN inference, which is meaningful for graph-related tasks, including recommendation systems and knowledge graphs, and can be further extended.

Essential References Not Discussed: No, essential references are all discussed.

Other Strengths And Weaknesses:
Strengths:
+ Only aggregating a subset of the neighbor nodes is an effective way to exploit the sparsity.

Weaknesses:
- More details are required to better clarify this paper, especially the questions proposed in the comments.

Other Comments Or Suggestions:
- In Section 3.3, paragraph 2, line 2, I suppose the author wants to express "and thus is inefficient" instead of "and thus is efficient".

Questions For Authors: Besides the questions (Q1 and Q2) described above, I have the following questions for the authors:
- Q3: What are the crypto primitives used in the framework? Is it HE/MPC or FHE?
- Q4: In the proposed method, frequent masking is required, and this will result in increased multiplication depth. How does this influence the computation efficiency of HE? Could you provide a theoretical estimation of the multiplication depth required?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Justification for the need to reorder nodes (Q1)** Thanks for your comments. Negotiating and determining the optimal packing method based on NOO prior to private inference indeed constitutes a key contribution of our work. We discussed it in Section 3.2.1 and Figure 8(b). However, packing alone is insufficient due to the lack of utilization of the sparsity of A in online inference (Tables 4, 6); we thus introduce three techniques and put them together to support efficient FHE-GCN inference: - Latency-aware packing: Optimizes packing for XW to enable dense SIMD computations. - SpIntra-CA: A novel approach that leverages irregular sparsity patterns during AX, enabling arbitrary desired node orders and supporting parallel and non-redundant node aggregation. - NOO: A reordering algorithm that groups nodes to enhance SpIntra-CA's computational efficiency while preserving XW performance. To illustrate, in Figure 8(b) both parties iteratively determine the node packing order, starting from the final layer and back-propagating to the client's input based on NOO and SpIntra-CA. This reverse derivation is then integrated with our latency-aware strategy to derive the optimal packing method, achieving simultaneous optimization of both AX and XW computation. **2. Proof of the upper bound on rotation counts in SpIntra-CA (Q2)** Due to space constraints, we present the key proof arguments; the complete formal proof with notation tables is at https://anonymous.4open.science/r/FicGCN-D208. In our paper, n denotes the sampled neighbor count. The rotation count of each stage in SpIntra-CA can be calculated as $\sum\_{k=0}^{i-1}Con_{k,j+\sum\_{p=0}^{k-1}2^p} \cdot \frac{1}{2^k}\leq \sum\_{k=0}^{i-1}2^k \cdot \frac{1}{2^k}$, which is less than $log(N)$. Thus the total count is $O(log^2(N))$ rather than $O(N)$. This can be formally proved by Lemma 1 and the Conflict Bound Analysis.
**Analysis:** 1) $Con_{i,j}$, the number of conflicts in the $i^{th}$ rotation stage at the $j^{th}$ slot, is the summation of all conflicts during the prior stages in the corresponding slots which can be rotated to the $j^{th}$. 2) $PR(ct[m]\rightarrow ct[q])$, the probability of rotating an element in the $m^{th}$ slot to the $q^{th}$. The case "$ct[j+\sum^{k-1}\_{p=0}2^p]\rightarrow ct[j]$" means that the rotation-step ($rs$) bits of an element in $ct[j+\sum^{k-1}\_{p=0}2^p]$ should be all ``1'' among the corresponding k contiguous bits. **Lemma 1:** Given any bit distribution ($Pr[bit_p=1]=\frac{1}{2}-\epsilon$), $ PR(ct[j+\sum\_{p=0}^{k-1}2^p]\rightarrow ct[j])\le \frac{1}{2^k}$ **Proof:** Based on **Analysis** 2), we have $PR(ct[j+\sum\_{p=0}^{k-1}2^p]\rightarrow ct[j])\le \frac{1}{2^k}$ when $\epsilon>0$. When $\epsilon\le0$, by replacing cyclic left shifts with cyclic right shifts, the $rs$ bits become the complement of the original: $$PR(ct[j+\sum^{k-1}\_{p=0}2^p]\rightarrow ct[j])=(\frac{1}{2}+\epsilon)^k \leq \frac{1}{2^k}$$ So we have $ PR(ct[j+\sum\_{p=0}^{k-1}2^p]\rightarrow ct[j])< \frac{1}{2^k}$ when $\epsilon\neq 0$. **Conflict Upper Bound Analysis:** $Con_{i,j} \leq log(N)$ under the worst case (uniform distribution). **Proof:** According to **Analysis** 1), $$Con_{i,j}=\sum\_{k=0}^{i-1}Con_{k,j+\sum\_{p=0}^{k-1}2^p} \cdot PR(ct[j+\sum\_{p=0}^{k-1}2^p]\rightarrow ct[j]) $$ According to Lemma 1, this is $$\sum\_{k=0}^{i-1}Con_{k,j+\sum\_{p=0}^{k-1}2^p} \cdot \frac{1}{2^k}\leq \sum\_{k=0}^{i-1}2^k \cdot \frac{1}{2^k}=i \leq log(N)$$ Thus, the number of conflicts per stage is bounded by log(N). Given that SpIntra-CA comprises log(N) stages, it yields a total conflict count of less than $log^2(N)$ for an aggregation of two nodes. **3. Crypto primitives in FicGCN (Q3)** We use only FHE in the FicGCN framework, with no other crypto primitives such as MPC. **4.
Impact of masking on multiplicative depth** In CKKS, the consumption of multiplicative depth primarily stems from ciphertext rescaling operations. However, the masking procedures described in our work do not require such rescaling and introduce no additional burden on the multiplicative depth budget. In CKKS, each floating-point message $m$ is first scaled by a precision parameter $\Delta$ and quantized to an integer before encryption: $c=Enc(\Delta \cdot m)\ $. Consequently, when multiplying two ciphertexts with identical initial scales ($\Delta$), the resulting ciphertext's scale becomes $\Delta^2$: $$c_1*c_2=Enc(\Delta\cdot m_1)*Enc(\Delta\cdot m_2)=Enc(\Delta^2\cdot(m_1\*m_2)) $$ CKKS employs rescaling operations to restore the ciphertext's scale to $\Delta$. However, during masking, we multiply the ciphertexts by plaintexts encoding the integers 0/1 with scale 1: $$ Enc(\Delta \cdot m) * Encode(1 \cdot mask) = Enc(\Delta \cdot (m*mask))$$ Therefore the scale of the ciphertexts remains invariant. Also, the bit-length of the masked results remains constant because we multiply by 0/1. Thus no multiplicative depth is consumed during masking.
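As a side note, the scale bookkeeping described in this answer can be illustrated with a toy Python model. This is a didactic sketch only (a tuple of value and scale standing in for a real CKKS ciphertext; no encryption or noise is modeled), but it mirrors the Δ arithmetic in the equations above:

```python
# Toy model of CKKS scale bookkeeping: a "ciphertext" is just (value, scale).
# No real encryption or noise is modeled; only the scale arithmetic.

DELTA = 2 ** 40  # precision parameter (Delta)

def encrypt(m, scale=DELTA):
    # the message is scaled by Delta and quantized to an integer
    return (round(scale * m), scale)

def ct_ct_mul(c1, c2):
    # ciphertext x ciphertext: scales multiply, so a rescaling (and hence
    # one level of multiplicative depth) would be required afterwards
    return (c1[0] * c2[0], c1[1] * c2[1])

def mask(ct, bit):
    # plaintext 0/1 mask encoded at scale 1: the ciphertext scale is
    # unchanged, so no rescaling and no multiplicative depth is consumed
    return (ct[0] * bit, ct[1] * 1)

c = encrypt(3.5)
assert mask(c, 1)[1] == DELTA            # masking preserves the scale
assert mask(c, 0)[0] == 0                # masked-out slot becomes zero
assert ct_ct_mul(c, c)[1] == DELTA ** 2  # ct-ct product squares the scale
```

The contrast between the last two assertions is exactly the rebuttal's point: ciphertext-ciphertext products change the scale and force a rescale, while 0/1 plaintext masks do not.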
Summary: The paper presents FicGCN, a method aimed at enhancing the efficiency of homomorphic encryption in irregular Graph Convolutional Networks (GCNs). It introduces a **latency-aware packing method** that optimizes the arrangement of ciphertext slots, balancing computational overhead and utilization. The **Sparse Intra-Ciphertext Aggregation (SpIntra-CA)** method is proposed to minimize redundant computations during aggregation, leveraging the sparsity of graphs for parallel processing of neighboring nodes. Additionally, the **Node Order Optimization (NOO)** technique is introduced, which rearranges nodes to reduce rotation overhead and conflicts during ciphertext processing. The paper also includes a **region-based data reordering method** that improves aggregation efficiency by organizing data based on local adjacency structures. Experimental results demonstrate that FicGCN achieves up to a **4.1x speedup** compared to state-of-the-art methods, particularly benefiting larger datasets. Overall, the study emphasizes the importance of optimizing homomorphic encryption operations to enhance computational efficiency while maintaining data privacy in GCN applications. Claims And Evidence: Yes, all claims are clear and with enough proof. Methods And Evaluation Criteria: The Node Order Optimization (NOO) technique improves computational efficiency during encrypted data processing by determining the optimal aggregation order for each node. This minimizes the shift range within the ciphertext, which is crucial for reducing rotation overhead during aggregation operations. By arranging nodes in an aggregation-friendly manner, NOO enhances the efficiency of the Sparse Intra-Ciphertext Aggregation (SpIntra-CA) method, leading to a significant reduction in the number of required rotations and overall computational complexity. 
This optimization allows for more effective utilization of ciphertext slots, ultimately improving the speed and efficiency of the graph convolutional network operations. With the provided benchmarks, I think this work handles the sparsity of graphs in the HE domain very well. Theoretical Claims: I have checked the correctness of the proof and found no issues. Experimental Designs Or Analyses: I have checked the experiments; all look good and are sufficient to support the techniques. Supplementary Material: Most of the appendices are configurations and setup. For the extended experiments and ablation study, I have no concerns. Relation To Broader Scientific Literature: see above. Essential References Not Discussed: None Other Strengths And Weaknesses: I have only one concern, about scalability to very large models. As graph models might be applied to datasets with over 100K nodes, e.g., Reddit, applying node reordering here might have a very large cost for graph traversal. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Time complexity of Node Order Optimization (NOO) on large-scale datasets**

Thanks for your constructive comments. We have extended the application of NOO to the large-scale dataset Pokec (1.6M nodes). The following table summarizes the experimental results across datasets, including:

- **Graph Statistics**: Node count ($|V(G)|$), edge count ($|E(G)|$), and average degree ($d_{avg}$).
- **Time Overhead**: NOO preprocessing time ($T_{NOO}$) and online-stage FHE computation latency ($T_{FHE}$).
- **Efficiency Ratio**: $\rho=T_{NOO}/T_{FHE}$

| Dataset | $\|V(G)\|$ | $\|E(G)\|$ | $d_{avg}$ | $T_{NOO}$ (s) | $T_{FHE}$ (s) | $\rho$ |
| :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| Cora | 2.7K | 5.4K | 4.01 | 2.00 | 64.12 | 3.2% |
| Citeseer | 3.3K | 4.7K | 2.85 | 5.33 | 79.98 | 6.7% |
| Corafull | 19.8K | 127K | 12.82 | 27.91 | 7733 | 0.36% |
| Pokec | 1.63M | 30.62M | 18.80 | 181.20 | $\sim 10^7$ | $\sim 0.002$% |

As shown in this table, NOO exhibits low time overhead on smaller datasets. For large-scale graphs like Pokec, NOO's overhead remains negligible relative to FicGCN's online phase. Notably, on large graphs, the client-side encryption time alone eclipses NOO's latency, ensuring NOO does not bottleneck FicGCN's performance.
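The efficiency ratios follow directly from ρ = T_NOO / T_FHE; as a quick sanity check of two of the table's rows (timing values taken from the rebuttal above):

```python
# Recompute rho = T_NOO / T_FHE from the reported timings (seconds).
timings = {
    "Citeseer": (5.33, 79.98),
    "Corafull": (27.91, 7733.0),
}

for name, (t_noo, t_fhe) in timings.items():
    print(f"{name}: rho = {100 * t_noo / t_fhe:.2f}%")

# The preprocessing overhead stays in the low single-digit percent range
# and shrinks as the graph (and hence the online FHE cost) grows.
assert round(100 * timings["Citeseer"][0] / timings["Citeseer"][1], 1) == 6.7
assert round(100 * timings["Corafull"][0] / timings["Corafull"][1], 2) == 0.36
```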
TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation
Accept (poster)
Summary: This work proposes a three-pronged approach (TokenSwift) to improving the latency and quality of generating ultra-long sequences: 1) Multi-token prediction similar to Medusa; 2) Using a sparse KV-cache whose elements are selected based on their attention scores (query/key inner product before softmax); 3) token reutilization via n-gram look-ups, somewhat similar to prompt lookup but applied to the generated outputs. The authors show that these techniques significantly improve generation latency compared to selected speculative decoding methods and, when paired with a contextual penalty for repeated outputs, improve the distinct-N metric (unique n-grams / total words) of the output sequences. Claims And Evidence: * Overall, I believe the empirical evidence supports the author's claims that TokenSwift improves output generation latency for long sequences. * However, I have concerns regarding some specific claims made otherwise: * TokenSwift doesn’t directly address the growing KV cache problem for long sequences. Although the authors note that the baseline methods “would inevitably encounter failures due to KV cache budget constraints” their proposed method would also fail in such a setting as the verification step requires the *full* KV cache. Therefore, in cases where the “growing size of the KV cache would far exceed the allocated length budget”, TokenSwift will also fail. I believe it would be better to reformulate this claim as a benefit with respect to latency instead of claiming other speculative decoding methods would “fail”. * The authors claim that TokenSwift enhances diversity of the generated output and also significantly reduces latency. While these claims separately appear to be accurate based on the empirical measures provided, I am concerned TokenSwift seems to inherently *benefit* from repeated / duplicated n-grams in long outputs due to the proposed token reutilization scheme. 
The reported acceptance rate (AR) for TokenSwift without token reutilization falls significantly as noted in Figure 3. I worry that if sampling were improved such that repetitive content generation was not as prominent, TokenSwift’s performance would be greatly diminished and may no longer offer significant latency improvements over TriForce and other speculative decoding methods. As such, the proposed token reutilization is at odds with the eventual desire to have more diverse long generated outputs. Methods And Evaluation Criteria: * Overall, the methods and evaluation criteria for assessing the quality of the speculative decoding process are standard and make sense. * For assessing the output quality, I find distinct-n to be a useful but insufficient metric on its own. Distinct-n only captures lexical but not semantic diversity. Other text diversity metrics that also capture semantic similarities should be considered. For example: self-BLEU [1], BERTScore [2], or MAUVE [3]. Since repeated content is acknowledged by the authors to also be present in TokenSwift outputs, it would improve the work to further quantify this with additional metrics. [1] Y. Zhu et al., “Texygen: A Benchmarking Platform for Text Generation Models,” Feb. 06, 2018, arXiv: arXiv:1802.01886. doi: 10.48550/arXiv.1802.01886. [2] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “BERTScore: Evaluating Text Generation with BERT,” Feb. 24, 2020, arXiv: arXiv:1904.09675. doi: 10.48550/arXiv.1904.09675. [3] K. Pillutla et al., “MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers,” Nov. 23, 2021, arXiv: arXiv:2102.01454. doi: 10.48550/arXiv.2102.01454. Theoretical Claims: N/A, no novel proofs provided. Experimental Designs Or Analyses: The described experimental design and analyses appear to be sound. Supplementary Material: I reviewed Sections C, D, E, G, and H.
Relation To Broader Scientific Literature: TokenSwift builds on prior methods from the speculative decoding literature, such as Medusa and prompt-lookup decoding. While these individual components are not overly novel in themselves, their combination and integration are. Further, I believe the tweaks to these approaches included in this work, such as having non-independent multi-token generation heads, are unique and a valuable contribution to the literature. Overall, speculative decoding for long context prompts has been studied in some detail, but as far as I know this is the first work to directly apply these techniques to the setting of long output generation. Essential References Not Discussed: None noted. Other Strengths And Weaknesses: ## Strengths: * Due to the increasing importance of synthetic data and in particular obtaining reasoning traces, this method is of importance to improve the efficiency of such generations. * This is the first work that I am aware of to apply speculative decoding to long-output generation settings. * The extensive ablation studies help the reader understand the pros/cons of the proposed method. ## Weaknesses * The TriForce comparison requires a custom pretrained draft model for the first tier of drafters. Originally, TriForce used a 68M model for this tier whereas in this work we have a 250M model. The large discrepancy between results may influence the overall conclusions reached. * Some details are not made clear. For example, Table 1 is not referenced in the work. Why are the acceleration rates noted here so much worse than the reference methods? E.g., TriForce achieves acceleration over 2x on A100s with a prompt length of 122K and generation length of 256. Without more details regarding the Table 1 results it’s impossible to compare with previously published values. * Some unclear choices made in hyperparameter selections for the penalty value. Other Comments Or Suggestions: * Table 4 Delta T is the number of minutes saved, not hours.
* L433: Text in parentheses states that without a penalty theta = 0 which directly contradicts Table 8 caption. Questions For Authors: 1. How does the generated outputs with and without penalty compare using other non-lexical based metrics such as self-BLEU, BERTScore, or MAUVE? Does the contextual penalty also improve repeated semantic similarity? 2. The authors claim that Triforce and MagicDec are limited to generating 256 or 64 output tokens, respectively (see L040, second column). However, in practice there is no such limit, these are merely the generation settings used by these respective works when benchmarking their methods. Since both of these baselines operate in the typical draft-then-verify framework of speculative decoding, we can simply continue speculating for additional drafting rounds until a desired output length is reached. This is also made clear by the author's use of these baselines for generating sequences greater than the noted limits. Please clarify this statement. 3. What are the settings used for Table 1? Prompt length, data, etc? 4. Why was the penalty value selected as 1.2 instead of 1.5 where a much higher diversity is observed? Code Of Conduct: Affirmed. Overall Recommendation: 3
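For readers unfamiliar with the diversity metric discussed in this review, a minimal sketch of Distinct-n follows. It uses the common unique-n-grams-over-total-n-grams formulation; the paper's exact normalization (e.g., dividing by total words) may differ slightly:

```python
def distinct_n(tokens, n):
    """Fraction of n-grams in `tokens` that are unique (higher = more diverse)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# A repetitive output scores low; a varied one scores high.
repetitive = "the cat sat on the mat the cat sat on the mat".split()
varied = "the quick brown fox jumps over one lazy sleeping dog".split()
assert distinct_n(varied, 2) == 1.0
assert distinct_n(repetitive, 2) < 0.6
```

As the review notes, this captures only lexical diversity; paraphrased repetition with different surface forms would still score well.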
Rebuttal 1: Rebuttal: We sincerely thank you for your time and insightful feedback, which has helped us improve our manuscript. > **Q: Reformulate Claim** **A:** We appreciate your suggestions to replace "fail" with "benefit" and will revise accordingly. 1. **Dynamic KV Cache Compression (Challenge II)** The limitation of baseline methods is not merely budget constraints but their **single-compression design** for long inputs. For long outputs, however, the KV Cache grows dynamically with generated tokens. In contrast, our method maintains a fixed-size KV Cache regardless of output length (no hard budget), as we continuously compress the growing cache. 2. **Memory Limits**: Like all speculative decoding methods, we preserve a full KV Cache for lossless generation. If this cache exceeds **GPU memory** (not a pre-set budget), generation fails—but this is distinct from baselines’ rigid budget-based failures. Note that our focus is on **draft-phase optimization**. 3. **Example**: For TriForce (budget=4096): * Input=100K, Output=256 → KV Cache=4096+256 (valid). * Input=8192, Output=100K → KV Cache=4096+100K (fails). Our method fixes the KV Cache size for any input/output length. > **Q: Sampling Diversity** **A:** We sincerely appreciate your insightful comments regarding the diversity-speedup balance. 1. **Trade-off** In **Table 6**, our experiments demonstrate that the proposed penalty mechanism enables effective balancing of these two objectives: - Without penalty: Achieves 3.58× speedup with baseline diversity - With θ=1.2 penalty: Maintains 3.27× speedup (only a 0.31× reduction) while significantly improving diversity metrics This demonstrates that our approach attains competitive acceleration without severely compromising diversity. 2. **Main Results** All experiments in **Tables 3-4** already employ the θ>1.0 penalty configuration (**Table 10**). The absence of severe repetition issues in these results further confirms the practical viability of our trade-off strategy. 3.
**Self-BLEU** Self-BLEU provides a direct measurement:

|Metric|Self-BLEU-2|Self-BLEU-3|Self-BLEU-4|Self-BLEU-5|
|-|-|-|-|-|
|w/o Penalty|0.7899|0.7445|0.7162|0.6966|
|w/ Penalty|0.5588|0.4745|0.4115|0.3788|

Consistent absolute reductions of 0.2311–0.3178 across n-grams confirm the diversity improvement, aligning with the Distinct-N trend (Table 8). BERTScore/MAUVE are inapplicable for our **reference-free open-generation** setting, as they require parallel references. 4. **Implementation** The adjustable penalty coefficient allows practitioners to prioritize either speed or diversity based on application needs, demonstrating our framework's adaptability. > **Q: 250M vs. 68M** **A:** For a fair comparison, we used 68M for llama2 and 250M for llama3.1 in the TriForce experiment in Table 3, and the parameter settings of the two draft models are exactly the same. Because the **vocabulary size** of llama3.1 is much larger than that of llama2, the model size becomes larger. Finally, we have open-sourced this model to support future research. > **Q: Table 1** **A:** While Table 1 is briefly referenced in **L49–50**, we will explicitly add this context in the revised manuscript to ensure transparency. We reproduced TriForce's results on A100-80G and PG-19 using identical hyperparameters. The only difference (as noted in L49-50) is that their original experiments were conducted on llama2 (**MHA**), while ours used llama3.1 (**GQA**). The observed performance gap arises because **MHA typically requires a KV cache several times larger than GQA**, enabling TriForce’s acceleration. > **Q: Clarify Statement of Baseline** **A:** Thank you for your question. 1. **Limitations in Prior Work**: The values 256 and 64 in Table 1 are just examples. We also optimized TriForce and experimented with generating 100K-length outputs (**Table 3**). It does achieve a 2x speedup on MHA, but not on GQA. 2.
**Length Restrictions in Baselines**: As observed in practice, baselines enforce strict limits on generation length: TriForce caps generation at the KV Cache budget size:

```python
def update_graph_cache(self, kv_cache=None):
    self.value_cache[:, :, self.max_budget-(kv_cache.seq_len-self.prefill):self.max_budget] = \
        kv_cache.value_cache[:, :, self.prefill:kv_cache.seq_len].clone()
    self.key_cache[:, :, self.max_budget-(kv_cache.seq_len-self.prefill):self.max_budget] = \
        kv_cache.key_cache[:, :, self.prefill:kv_cache.seq_len].clone()
```

> **Q: Penalty Value** **A:** During experiments, we observed that while higher θ (e.g., 1.5) increases diversity, it often leads to incoherent outputs or even garbled text in practice (Table 8). Thus, we adopted θ=1.2 as an empirically stable default, **balancing diversity and quality**. > **Q: Typos** **A:** These were typos in our manuscript, and we will correct them in the revised version. To clarify: when θ=1.0, no repetition penalty is applied. We hope our answers have resolved your concerns. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal. RE: Single-compression of baseline methods: My understanding of TriForce’s approach is that the tier 1 drafter uses streaming attention from StreamingLLM, which does not in principle require that the KV cache grows as context or output length grows. Instead, tokens would be evicted as they fall out of the sliding window. Based on my understanding of the “dynamic KV cache” update strategy for TriForce that you implemented for this work I believe the KV cache for TriForce* should not grow with output length. Can you confirm my understanding? For TriForce’s tier-2 retrieval based drafter, I agree that the KV cache grows as output length increases; however, since this is the *same* cache being used by the target model for verification, I disagree that this provides any limit on output length generation other than that provided by the overall memory limit of the hardware used.
In this respect, both TriForce and TokenSwift are limited in output length fundamentally by the target model KV cache, not the draft models. The StreamingLLM authors specifically note that they use a “rolling cache at each decoding phase” and if TriForce differs from this implementation it’s a relatively straightforward update to the KV cache update strategy during decoding as you have done here with TriForce*. Overall, I find the claims that TokenSwift enables larger generation outputs than TriForce under a fixed memory budget to be convincing with respect to the original TriForce implementation using a static KV cache update. However, as it only required a small change to TriForce’s KV update strategy to enable the output lengths that you tested with in Table 3, I think the fundamental and more interesting comparison is between TokenSwift and TriForce* which both offer long output generations only limited by the target model’s KV cache. Regarding diversity of outputs, I agree that your repetition penalty is effective in improving sampling diversity and the additional self-BLEU scores highlight this. My main concern with the token reutilization approach is that it inherently benefits from lower diversity of outputs. Therefore, as generation output diversity is improved with future models, TokenSwift’s AR will decay to that of k=0 as noted in Fig 3. Thank you for the note regarding TriForce tier-1 drafter size, this addresses my concerns. I suggest making this explicit in the camera-ready version. Your discussion and clarification of Table 1 is appreciated and makes sense after highlighting GQA vs MHA. Overall and after considering your rebuttal, I believe TokenSwift does offer a practical approach for the current generation of LLMs in which token reutilization offers major benefits. 
Fundamentally, I remain concerned that token reutilization is at odds with the desire to improve output diversity, but while such a phenomenon exists we can exploit it for efficiency gains as the authors have done here. Based on this, I have raised my score to 3. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your detailed feedback and recognition of our work. Below we provide point-by-point clarifications to address your concerns: **1. Fundamental Differences Between TriForce and TokenSwift** The key distinction lies in TriForce's **three-layer** architecture, where the second layer validates the first layer and serves as a draft for the third layer (the final KV-verified model). Specifically: - TriForce's first layer (68M) employs StreamingLLM-style eviction. - Its second layer (Retrieval Cache) uses H2O-like compression but only performs a single compression for long prefixes. Crucially, TriForce does not optimize for eviction in long-output scenarios. Our core claim is that **TokenSwift's draft phase remains unaffected by KV cache growth**, whereas TriForce's second-layer drafting inevitably suffers performance degradation as KV cache expands (due to unoptimized eviction mechanisms). **2. Comparison with TriForce*** Your observation about the TokenSwift vs. TriForce* comparison is insightful. We clarify that TriForce* represents our optimized implementation of TriForce for fair comparison. The critical insight is: To achieve acceleration in **long-output scenarios**, static KV updates (as in original TriForce) prove insufficient—dynamic cache updates become essential. This fundamental limitation of TriForce's architecture motivates our design philosophy. **3. Token Reuse & Diversity** We emphasize that "diversity" in our context refers to preventing **meaningless repetitions** (e.g., redundant phrases), not requiring all 100K tokens to be unique. Common lexical repetitions (e.g., "i am", "that is") remain natural and unavoidable.
Our approach remains effective in long-context generation because: a) **Inherent Repetition Necessity**: Any extended text (e.g., novels) naturally contains frequent reuse of common tokens. b) **Vocabulary Constraints**: The limited LLM vocabulary (especially high-frequency tokens) guarantees non-degenerate cases (k>0) even in 100K-token generations. c) **Practical Motivation**: This linguistic reality directly informs our design—exploiting predictable token recurrence patterns without compromising output quality. --- We thank you again for your constructive feedback. We would be delighted to provide any clarification or extended analysis required.
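The token-reutilization mechanism debated in this thread (retrieving draft continuations from n-gram matches in already-generated text, in the spirit of prompt lookup) can be sketched roughly as follows. This is a simplified illustration, not the paper's implementation; the matching and scoring details in TokenSwift may differ:

```python
def retrieve_draft(generated, n=3, k=4):
    """Propose up to k draft tokens by finding the most recent earlier
    occurrence of the trailing n-gram and copying what followed it."""
    if len(generated) < n + 1:
        return []
    tail = tuple(generated[-n:])
    # scan earlier positions, most recent match first
    for i in range(len(generated) - n - 1, -1, -1):
        if tuple(generated[i:i + n]) == tail:
            return generated[i + n:i + n + k]
    return []

# Recurring phrases let the lookup speculate several tokens per step...
out = ["A", "B", "C", "D", "E", "A", "B", "C"]
assert retrieve_draft(out, n=3, k=2) == ["D", "E"]
# ...while fully novel text yields no draft, falling back to plain decoding.
assert retrieve_draft(["X", "Y", "Z"], n=3) == []
```

The two cases make the thread's tension concrete: the more the output repeats its own n-grams, the more tokens such a lookup drafts per step, while perfectly diverse text drafts nothing.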
Summary: As LLMs grow in parameter count, model inference has become computationally expensive, leading to the need for faster and more computationally efficient sequence generation. The authors propose TOKENSWIFT, a new framework to accelerate autoregressive generation for LLMs. TOKENSWIFT utilizes multi-token generation to draft multiple tokens in a single forward pass and dynamically updates the KV cache across iterations to achieve lossless acceleration for autoregressive generation. Claims And Evidence: TOKENSWIFT claims to be the first to accelerate lossless autoregressive generation for LLMs for up to 100K tokens and to attain up to a 3x speedup across various model architectures. The authors provide empirical evaluation results to substantiate their claims in Tables 3 and 4. Nitpick: The title *"FROM Hours to Minutes"* might be slightly misleading; TOKENSWIFT provides up to a 3x speedup, and Figure 1 itself showcases sequence generation time coming down from ~5 hours to 1.5 hours. Methods And Evaluation Criteria: TOKENSWIFT compares autoregressive generation performance across different models and attention architectures (MHA and GQA) and measures latency with varying prefill lengths. TOKENSWIFT is compared against existing SOTA methods such as Medusa and TriForce to check for performance gains. Theoretical Claims: Section 3 discusses the TOKENSWIFT framework, which includes multi-token generation using additional tuned linear layers, token reutilization, and dynamic KV cache management based on ranking importance scores suggested in equation 2, with parallel verification of the draft tokens. Experimental Designs Or Analyses: Yes, the authors compare TOKENSWIFT to existing SOTA literature for various models, attention architectures, and varying prefill lengths. Thorough ablation studies are conducted to evaluate the pros and cons of different components of the TOKENSWIFT framework.
Supplementary Material: No Relation To Broader Scientific Literature: The work presents novel method for accelerating autoregressive language generation for LLMs. The wider community would benefit from the findings presented to accelerate sequence generation for longer sequences in a lossless manner. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
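The importance-score-based KV cache management summarized in this review can be illustrated with a simplified top-k selection. The scores below are placeholder accumulated attention weights and the always-kept prefix is an illustrative assumption; the paper's Equation 2 defines the actual scoring:

```python
def prune_kv_cache(scores, budget, keep_first=1):
    """Indices of cache entries to keep: the first `keep_first` positions
    (kept unconditionally) plus the highest-scoring remaining entries."""
    sinks = list(range(keep_first))
    rest = sorted(range(keep_first, len(scores)),
                  key=lambda i: scores[i], reverse=True)
    return sorted(sinks + rest[:max(0, budget - keep_first)])

# Placeholder importance scores for six cached positions.
scores = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2]
# With a budget of 4: position 0 plus the top-3 scores elsewhere survive.
assert prune_kv_cache(scores, budget=4) == [0, 2, 4, 5]
```

Re-running such a selection each iteration is what keeps the draft-phase cache at a fixed size while the underlying full cache grows.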
Rebuttal 1: Rebuttal: Thank you very much for your valuable suggestions and acknowledgment of our method. We are encouraged by your recognition of ``our thorough ablation studies`` and ``the practical impact of our method, which achieves up to 3× speedup across diverse architectures``. We also appreciate your note on ``the wider community would benefit from our findings``. Below, we will address your concerns: > **Q1:** Nitpick: The title "FROM Hours to Minutes" might be slightly misleading **A1:** We appreciate this observation. We will remove this phrase from the title in the revised version. We hope our answers have resolved your concerns. If you have any other concerns, please feel free to let us know. Thanks again for your review.
Summary: The paper presents TOKENSWIFT, a framework to accelerate ultra-long sequence generation (up to 100K tokens) in large language models (LLMs) with lossless accuracy, addressing the time-intensive nature of such tasks (e.g., LLaMA3.1-8B taking 5 hours). It tackles three challenges—frequent model reloading, dynamic KV cache management, and repetitive content—via multi-token generation with token reutilization, dynamic KV cache updates, and a contextual penalty mechanism. Main findings include a 3x speedup across models (1.5B-14B) and architectures (MHA, GQA), reducing LLaMA3.1-8B’s 100K-token generation to 90 minutes, with improved diversity (Distinct-n) and superiority over baselines like TriForce* and Medusa*, as validated on PG-19. Claims And Evidence: The claims are well-supported by evidence. The 3x speedup is convincingly demonstrated through experiments (Tables 3, 4), with detailed comparisons to AR and baselines, and time savings (e.g., 3.5 hours for Qwen2.5-14B) adding clarity. Lossless acceleration is theoretically backed by speculative decoding (Appendix A) and empirically implied by matching AR outputs. The “first for 100K-token lossless acceleration” claim aligns with cited limitations of prior work (e.g., TriForce’s 256-token limit, Table 1), though “minimal training cost” lacks detailed quantification beyond training three linear layers (§3.1), slightly weakening its evidential strength. Methods And Evaluation Criteria: The proposed methods—multi-token generation, dynamic KV updates, and contextual penalties—are sensible for ultra-long sequence generation, directly addressing identified bottlenecks. The evaluation criteria, including speedup (latency ratio), acceptance rate $\alpha$, and Distinct-n for diversity, are appropriate and standard for assessing acceleration and quality. Using PG-19 as a benchmark is reasonable for long-sequence tasks, though its representativeness for all LLM applications could be broader. 
Theoretical Claims: I reviewed the primary theoretical claim of lossless acceleration, supported by the proof in Appendix A, which demonstrates that TOKENSWIFT’s speculative decoding (SD) output distribution $p_{SD} $ equals the target model’s distribution $q_{target} $. Experimental Designs Or Analyses: The experimental design is sound, testing TOKENSWIFT across diverse models (e.g., LLaMA3.1-8B, Qwen2.5 series) and lengths (20K-100K tokens) on a single A100 GPU (§4.1), with results averaged over five runs to reduce randomness (Table 3). Ablations on sampling methods (Table 12), temperature (Table 13), and penalty windows (Table 15) are rigorous, validating robustness. However, the resource cost of maintaining full KV cache for verification isn’t quantified, which could affect scalability claims on resource-constrained settings—an area for minor clarification. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
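The lossless-acceleration identity that the review checked in Appendix A is the standard speculative-sampling argument: accept a draft token x with probability min(1, q(x)/p(x)), and on rejection resample from the normalized residual max(0, q − p). The output distribution then equals the target q exactly, which can be verified analytically on a toy vocabulary (this checks the generic identity, not the paper's specific code):

```python
# Verify P_SD(x) = q(x) exactly for a 3-token vocabulary.
p = [0.5, 0.3, 0.2]  # draft model distribution
q = [0.2, 0.5, 0.3]  # target model distribution

accept = [min(1.0, q[i] / p[i]) for i in range(3)]
reject_prob = sum(p[i] * (1.0 - accept[i]) for i in range(3))
residual = [max(0.0, q[i] - p[i]) for i in range(3)]
res_norm = sum(residual)

# output = accepted draft samples + resampled rejections
p_sd = [p[i] * accept[i] + reject_prob * residual[i] / res_norm
        for i in range(3)]

assert max(abs(p_sd[i] - q[i]) for i in range(3)) < 1e-12
assert abs(sum(p_sd) - 1.0) < 1e-12
```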
Rebuttal 1:

Rebuttal: We sincerely thank you for your time and constructive feedback. We are encouraged by your positive assessment that ``our experiments convincingly demonstrate a 3× speedup``, ``the sensible design of our method for ultra-long sequence generation``, and ``the rigorous ablation studies on sampling methods, temperature, and penalty windows``. Below, we address your concerns point-by-point.

> **Q1:** though “minimal training cost” lacks detailed quantification beyond training three linear layers (§3.1), slightly weakening its evidential strength.

**A1:** Thank you for highlighting this point. We will provide a detailed breakdown of the quantified training cost in the revised manuscript. Specifically, for any large language model, our approach requires only ``γ (number of tokens generated in parallel) × hidden_size × hidden_size`` trainable parameters. The specific training time is shown in **Appendix I**; for example, the training time for 8B models does not exceed two hours. The concrete results for the LLMs used in our experiments are summarized in the table below:

| Model | Llama3.1-8B | Llama2-7B | Qwen2.5-1.5B | Qwen2.5-7B | Qwen2.5-14B |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Param. | 50.3M | 50.3M | 7.1M | 38.5M | 78.6M |

> **Q2:** However, the resource cost of maintaining full KV cache for verification isn’t quantified, which could affect scalability claims on resource-constrained settings—an area for minor clarification.

**A2:** Thank you for your suggestion. The full KV cache memory overhead can indeed be derived from the model configuration. Taking Llama3.1-8B (GQA) as an example (batch size = 1, bfloat16), the **peak** memory usage is calculated as: 2 × Layer Num × Batch Size × Seq Len × KV Heads × Head Dim × Bytes = 2 × 32 × 1 × 102,400 × 8 × 128 × 2 bytes ≈ 13.4 GB.
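The two calculations above (A1's parameter count and A2's peak KV cache) can be checked in a few lines. This is our own sketch under stated assumptions (γ = 3 for the three trained linear layers; hidden size 4096, 32 layers, 8 KV heads, and head dim 128 from the public Llama3.1-8B config), not code from the paper:

```python
# Sketch verifying the A1/A2 arithmetic. Assumptions: gamma = 3 (the three
# trained linear layers), hidden_size = 4096 for Llama3.1-8B, and the usual
# KV cache layout (factor of 2 for keys and values, 2 bytes per bf16 element).
def trainable_params(gamma, hidden_size):
    return gamma * hidden_size * hidden_size

def peak_kv_cache_bytes(layers, seq_len, kv_heads, head_dim, batch=1, elem_bytes=2):
    return 2 * layers * batch * seq_len * kv_heads * head_dim * elem_bytes

# Llama3.1-8B: 3 x 4096^2 = 50,331,648 trainable parameters (~50.3M).
assert trainable_params(3, 4096) == 50_331_648
# Llama3.1-8B (GQA), 100K tokens: 32 layers, 8 KV heads, head dim 128 -> ~13.4 GB.
assert peak_kv_cache_bytes(32, 102_400, 8, 128) == 13_421_772_800
```

The same formula also reproduces the MHA case: with 32 KV heads instead of 8, Llama2-7B comes to roughly four times the GQA figure (≈ 53.7 GB).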
The table below summarizes the **peak** memory requirements for other LLMs in our study when maintaining the full KV cache:

| Model | Llama3.1-8B | Llama2-7B (MHA) | Qwen2.5-1.5B | Qwen2.5-7B | Qwen2.5-14B |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Memory | 13.4G | 53.7G | 2.9G | 5.9G | 20.1G |

We hope the above response can resolve your questions and concerns. Please let us know if there is any further question! Thanks again for your review.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for the updated experiments; they have addressed my questions and concerns. I will keep my original positive score.

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your insightful feedback and dedicated efforts in reviewing our work. Thank you for recognizing the improvements and maintaining your positive evaluation of our paper.
An Instrumental Value for Data Production and its Application to Data Pricing
Accept (poster)
Summary: The paper analyzes how to quantify a dataset's instrumental value by taking into account the prior knowledge/data sources the buyer has. The authors argue that switching from intrinsic value (as in data Shapley) to this instrumental value is useful for avoiding overestimating the data value.

Claims And Evidence: The paper's theory builds on some models that are not very realistic for machine learning applications. But for the studied models, the claims and results seem convincing.

Methods And Evaluation Criteria:
- The data generation process described in the paper is a bit too far from what's happening in practice in machine learning. Data sellers typically do not have control over the data they collect, so the perfect customization in the way the authors present it seems unrealistic.
- The analysis and its use in estimating data value rely on the data generation distribution. However, such a distribution would be very complex and unknown in practice, especially in machine learning.
- The Bayesian regression model oversimplifies the problem without any justification or intuition as to why it might be a good model for the practical data-utility relation. So I am not sure how much of the analysis in the paper is actually relevant in practice, or how straightforward it is to extend to other possible relations.
- The weaknesses above limit the paper's practical relevance to data valuation in machine learning. Perhaps the authors didn't necessarily think of data used for machine learning. But then, I am not sure if ICML is the right venue for this study.

Theoretical Claims: I checked the theorems and they seem correct to me.

Experimental Designs Or Analyses: There are no experimental results in the paper.

Supplementary Material: I checked some of the proofs.

Relation To Broader Scientific Literature: Due to impractical assumptions and models used in the paper, I don't think the results would be very useful for the machine learning community.
They could be more useful in other areas where the assumptions around the data and its value are more realistic.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: I think it's great to think about the instrumental value of datasets, especially for applications like training foundation models on a mix of many datasets. However, the current analysis makes strong assumptions about the data generation process and how its value would behave, which are not very realistic for machine learning datasets.

Other Comments Or Suggestions: -

Questions For Authors: Do the authors have any ideas on how to extend their results to better suit machine learning applications?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We thank the reviewer for the detailed and constructive comments. We address your questions in logical order.

We respectfully disagree with the reviewer's comment on relevance to the machine learning community. Below, we provide our detailed explanations:

1. In many scenarios, such as experimental design and clinical data, sellers can effectively control the subjects and collect data. Even in the field of deep learning, data annotation companies like Scale AI can provide labeling for data with specific features. These are examples of scenarios where perfect customization can be applied.

2. Similarly, the Bayesian setting is also widely used in machine learning. For example, please refer to the *Bayesian Clinical Trials* article published in *Nature* (https://www.nature.com/articles/nrd1927). Bayesian updating is also very common in the deep learning literature—recent work (https://arxiv.org/pdf/2503.04722) even shows that LLMs exhibit Bayesian behavior.

3. In Section 2, we consider the general Bayesian setting of valuation, which does not rely on a known data-generating distribution. Then, in Section 3, we examine a special linear case with Gaussian noise. The reasons are as follows: Gaussian noise is the most common type of noise in nature. Regarding linearity, some studies show that data valuation is transferable—the value ranking under linear models often holds under nonlinear ones as well (see right side, lines 253–260). Moreover, when the true model is unknown, linear models are the most robust (lines 65–70). Finally, linearity allows for tractable theoretical analysis, which helps us better understand the underlying economic intuition. Extending our framework and tailoring it to specific scenarios is out of the scope of this paper but a promising direction for future work.

4. For potential machine learning applications, especially in industries such as clinical healthcare, our instrumental value can effectively evaluate the value of sequential data, whereas the Shapley value tends to overestimate it. Moreover, as shown in Section 2.3, despite certain assumptions possibly being violated in large-scale datasets, instrumental value reduces the computational complexity from exponential to polynomial. Appendix B.3 further demonstrates through numerical experiments that instrumental value exhibits a certain degree of correlation with the widely used Shapley value. Using instrumental value as an empirical approximation for data valuation—and potentially for pricing—in large-scale datasets could be a promising direction for future applications.

5. ICML features many fundamental machine learning papers, such as those in statistical learning, and is not solely focused on deep learning or empirical work. This is also why the theory track—including areas like game theory—exists. These theoretical contributions provide the foundation and guidance for future large-scale applications. The mutual reinforcement between theory and application is what makes machine learning a broad and inclusive field. Therefore, we believe that the problem we study, categorized under Theory -> Game Theory, is not only of interest to the ML community but also meaningful and impactful.

We hope our rebuttal has addressed the reviewer's concerns and respectfully hope that the reviewer would consider re-evaluating the merit of our work accordingly.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their response. I see that the authors justify the use of assumptions on the data generation distribution and the Bayesian regression model, with pointers to a paper on clinical data. Unfortunately, this does not resolve my concerns about practicality for broader machine learning applications. I see that Reviewer TrgW had similar concerns/questions.
It is of course fine to focus on a narrow application domain, but the paper should be upfront about this limitation. I think the current abstract/introduction is misleading about how generally applicable the results are.

The other referenced paper on "LLMs act Bayesian" (https://arxiv.org/pdf/2503.04722) is interesting, but I don't see how it is relevant for this paper or the discussion on strong assumptions and the Bayesian linear model. That paper is on in-context learning, which, by definition, does not require data beyond the prompts, and it's not clear to me how it can justify using a Bayesian linear model for LLM data.

**About relevance to ICML:** Thanks for clarifying your perspective on this. The paper seems like a good fit under the Game Theory category. The paper, in its current form, motivates the data valuation problem and the proposed approach for machine learning applications. So, naturally, I expressed my concerns about the practicality of the strong assumptions and the proposed approach (similar to Reviewer TrgW). I am not suggesting including empirical work or focusing on DNNs, but the paper should either: (1) make it clear that it targets a narrow application domain (clinical data) and has limitations on how realistic the assumptions are and how practical the methodology is for broader machine learning applications, or (2) explain how the results under these strong assumptions could be extended to other possible data-utility relations.

---

Reply to Comment 1.1.1:

Comment: Thank you for recognizing that our paper is a good fit for the Game Theory track. In Section 2, we consider **the most general form of Bayesian regression (without the linearity assumption)** and introduce the concepts of **valid value functions** and cost of uncertainty, along with their characterization (Proposition 2.6). In Section 3, we then focus on linear Bayesian regression, where we derive closed-form solutions and explore a mechanism design problem in this setting.
We acknowledge that the linearity assumption imposes certain limitations on the applicability of our results. However, we would like to emphasize that in specific scenarios—such as clinical data, which represents a large and important market—this assumption is realistic and not restrictive in practice. **Moreover, for broader machine learning contexts, Section 2 rigorously defines what constitutes a valid valuation, which is independent of the linearity assumption**. While we recognize the technical challenges posed by model complexity, our goal with Section 2 is to provide economic insights and to use our foundational analysis to spark further interest in the emerging field of data pricing—particularly relevant in the current AI era. For example, companies like Scale AI have shown significant interest in data pricing.

Specifically, we referred to https://arxiv.org/pdf/2503.04722 to emphasize that the Bayesian approach is a common modeling framework in machine learning, just like the frequentist approach. **As for the linearity assumption, it simplifies our model theoretically, making the analysis tractable**. Moreover, linear models are known to be robust [1] and transferable [2,3], even in the presence of model misspecification.

[1] Besbes, O. and Zeevi, A. On the (surprising) sufficiency of linear models for dynamic pricing with demand learning. Management Science, 61(4):723–739, 2015.

[2] Jia, R., Wu, F., Sun, X., Xu, J., Dao, D., Kailkhura, B., Zhang, C., Li, B., and Song, D. Scalability vs. utility: Do we have to sacrifice one for the other in data importance quantification? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8239–8247, 2021.

[3] Schoch, S., Xu, H., and Ji, Y. CS-Shapley: Class-wise Shapley values for data valuation in classification. arXiv preprint arXiv:2211.06800, 2022.

Thank you for your suggestion on clearly stating the scope of our paper.
**In the camera-ready version, we will make it explicit that Section 2 provides a pricing guideline for a general class of models. The subsequent sections introduce stronger assumptions, and we will clarify in which domains these assumptions are natural, as well as the challenges and potential of removing them**. This will extend the discussion currently found on the right side of lines 258–262 and lines 425–431.
Summary: The paper studies the mechanism design problem of pricing and designing data generating processes in the context of Bayesian regression. Concretely, the buyer first has a prior $p$ over the regression parameter, reports the feature $x$ that he wants a prediction on, and then, upon obtaining data (including the data points $D$ and knowledge of the data generating process $g$) at some price $t$, forms a posterior $q$. The buyer's utility in this setup is the Bregman divergence between $p$ and $q$ corresponding to the entropy cost. The paper considers the mechanism design problem of $(D, g, t)$ such that certain nice mechanism design properties are achieved, such as IC, IR, and regret. The paper separately considers the perfect customization and limited customization settings, the second one being the harder to analyze.

Claims And Evidence: All claims are supported by proofs.

Methods And Evaluation Criteria: The paper discusses classical mechanism properties such as IC, IR and regret.

Theoretical Claims: I checked the proof of Theorem 4.1, which relies on a closed-form expression of the valuation function and a generalization of results from Myerson saying that the pricing rule takes the form of the derivative of the valuation.

Experimental Designs Or Analyses: The main text only contains theory.

Supplementary Material: I checked proofs of Theorem 4.1.

Relation To Broader Scientific Literature: The paper cited sufficient related works.

Essential References Not Discussed: The paper cited sufficient related works.

Other Strengths And Weaknesses:

Strengths
1. The problem of pricing additional data nowadays is crucial given that large language models require new data to perform alignment/fine-tuning/RAG etc. The paper provides a theoretical discussion of pricing and mechanism design of the incremental value of data in the Bayesian regression setting and devises several mechanisms.
2. Overall the paper is well written and provides sufficient explanation of concepts.
I also like Examples 1 and 2, which explain the setup of selling data generating processes and changes in variances.
3. The paper contains a few interesting results for pricing data, such as the equivalence between valid valuation functions and Bregman divergences, and also the result that in the limited customization setting there is generally no zero-regret mechanism, but there is one if the design matrix is isotropic.

Weakness

To me there are a few clarifying problems to be addressed to make the problem setup clearer, e.g., what exactly is known to the buyer when the mechanism is announced (see below). Based on the above I recommend weak accept, and I'm happy to adjust the score if my concerns are addressed.

Other Comments Or Suggestions: Some comments on writing.
1. In terms of writing, the main paper only really considers the entropy setting, while Section 2 discusses settings with general cost functions. The characterization of valid valuation functions is interesting, but for streamlining the paper I feel the main mechanism design problem should maybe be described first in a self-contained way.
2. In Example 2 the derivation of the variance is missing; I assume the computation of the posterior is omitted? Line 275 notably. Notation $g^n$ in Theorem 4.1 needs to be explained near the theorem.

Questions For Authors: A few clarifying questions.
1. What is the corresponding utility function for the valuation function in Theorem 3.1?
2. In the limited customization setting, are the design points also announced to the buyer?
3. In Section 2.1 it says "This will be useful for selling DPPs since it means the value of a DPP can be quantified without seeing the realized data". But in Section 4 the seller has a variance function $\sigma$. So is $\sigma$ known to the buyer?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your kind remarks and questions; we answer them below in logical order. We first address your major questions, and then clarify a few minor comments. We are happy to engage with any further questions.

**Re Corresponding Utility Function in Thm 3.1**: In Theorem 3.1, the corresponding utility function reflects the confidence in the estimation, which is captured by the entropy. As our confidence in the estimation increases, the corresponding entropy decreases.

**Re Design Matrix in the Limited Customization Setting**: In the limited customization setting, we assume that the design matrix $X$ is public knowledge, but the buyer does not observe the corresponding responses $Y$. In practice, for example in clinical trials, information about the subject group is usually known, but the evaluation of the drug’s effectiveness needs to be purchased.

**Re Realized Data and Noise**: By "realized data," we refer to the realized response $Y$, while the variance function $\sigma$ is assumed to be known. In real-world scenarios, sellers typically disclose the measurement error or the specifications of the experimental instruments, such as their precision. When the buyer has no information about data quality, pricing in our setting becomes infeasible.

**Re Other Comments Or Suggestions:**
1. Thanks for your suggestion. Our idea is to first introduce the characteristics of valid valuations (Sections 2 and 3), and then present a concrete application from game theory, namely mechanism design (Section 4). We'll include a roadmap in the final version to outline the content of each section.
2. Due to the page limit, we have deferred the calculation of the posterior distribution and variance to the appendix. Please refer to Lines 667–677.
3. Thank you for pointing out the typo. We will correct it in the camera-ready version.
4.
We introduced the definition of $g^n[x]$ in Lines 334–343; it produces $n$ responses for the buyer’s type $x$. Thank you for your suggestion — we will highlight its definition again around Theorem 4.1.

We hope our rebuttal has clarified your concerns, and if so, we would greatly appreciate a re-evaluation of our work's merits.
Summary: This paper introduces a framework for quantifying the instrumental value of data production processes (DPPs) under a Bayesian linear model. The authors focus on how much additional benefit (or marginal contribution) new data brings to a decision-maker’s task. The proposed data value is mathematically equivalent to information gain. They then leverage this valuation to study optimal pricing in data markets. Two key selling scenarios are explored: (a) perfect customization—where the seller can generate data tailored exactly to the buyer’s needs, allowing for full (first-best) surplus extraction; and (b) limited customization—where the seller can only curate data from an existing pool, yielding a mechanism that, while not fully optimal, achieves revenue within a bounded regret.

Claims And Evidence:

**Claims:**

Instrumental Value Framework: It argues that the value of data should be measured in terms of its marginal improvement on a decision-maker’s utility rather than by an average contribution (as in Data Shapley).

Microfoundations via Bayesian Decision-Making: The authors claim that by grounding the valuation in a contextual Bayesian decision-making problem, one can rigorously justify the use of Bregman divergence to capture data’s value.

Optimal Pricing Mechanisms: For perfect customization, they claim that there exists an incentive-compatible and individually rational mechanism that extracts full surplus (zero regret). Under limited customization, they provide a mechanism based on singular value decomposition (SVD) that achieves revenue nearly as high as the first-best benchmark, with regret bounded by a term related to the condition number of the data matrix.

**Evidence:**

The evidence provided is largely theoretical, under strong assumptions such as the Bayesian linear model.
The authors back their claims through a series of definitions, propositions, and theorems (e.g., Theorem 3.1, Theorem 4.1, and Theorem 4.3), complete with mathematical derivations. Although the paper includes references to numerical experiments in Appendix B.3.1, the investigation mainly shows that the method is faster and close to Data Shapley, while there are existing works that speed up Shapley value calculation (e.g., https://arxiv.org/abs/2107.07436) which the authors did not consider. I find it odd that the only numerical evaluation demonstrates the instrumental value is close to Data Shapley, and there is no numerical evidence that the instrumental value is a better metric.

Methods And Evaluation Criteria: The methodology is primarily theoretical:
* Framework Development: The authors define a data production process (DPP) and introduce a valuation function that depends on the buyer’s prior beliefs and decision context.
* Microfoundations: By invoking a contextual Bayesian decision-making framework, the paper ties data valuation to improvements in expected utility. I have concerns about the strong assumptions of the Bayesian linear model and the DPP. I find Data Shapley's framework, which relies on the realized data, much easier to understand. In practice, it is quite difficult for the data buyer to formulate their prior belief and how that belief will be updated after acquiring the data. If the practical scenario is more complex than a simple Bayesian linear regression update, how should I apply this paper's method, and why is it better than Data Shapley? I think the authors should include more realistic examples to strengthen the paper.
* The numerical experiments in B.3.1 only show the instrumental value is comparable to Data Shapley, which I don't think brings much value to the paper. Ideally I expect to see how the proposed method should be used in practice and why it is better than Data Shapley.
Theoretical Claims: Overall the proofs seem correct. I have some issues:
* It seems the authors are treating the data utility and the customer valuation as the same thing, but I do not think they are equivalent. The utility describes how much the data can improve the posterior update, but the valuation describes how much the customer is willing to pay in monetary value. How can the authors find this mapping in practice? The paper should also be significantly changed in this regard.
* I think the whole theoretical development up to Theorem 3.1 is a re-invention of Bayesian active learning in the name of ``instrumental value''. The value of data is commonly defined as information gain in the active learning literature. There is nothing new here. The paper also did not cite any active learning papers.

Experimental Designs Or Analyses: See my comments on numerical analysis above.

Supplementary Material: Mostly Appendix B.

Relation To Broader Scientific Literature: The paper is closely related to Data Shapley and data attribution methods. I think most methods in the literature focus on the instrumental value, so the novelty claim is weak. The claim that the data buyer can update prior knowledge is, I feel, widely studied in theoretical work such as information sharing between retailers with a similar theoretical framework that the authors did not discuss (e.g., https://pubsonline.informs.org/doi/10.1287/msom.2020.0915).

Essential References Not Discussed: The authors did not cite most data attribution methods after 2022, which seems to neglect many related works. Also, I think most of these works study values similar to the ``instrumental value'' that consider additional knowledge gains (https://arxiv.org/abs/2405.13954). Active learning is a widely studied field that measures the value of additional data. It is well established that information gain can be used to quantify the additional ``instrumental value'' of new data.
The paper did not cite active learning papers.

Other Strengths And Weaknesses: NA.

Other Comments Or Suggestions: NA.

Questions For Authors: NA.

Ethical Review Concerns: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We thank the reviewer for the detailed and constructive comments. We address your questions in logical order.

**Comparison with Other Data Valuation Methods**: Compared to the fairness-based Shapley value, our instrumental value accounts for sequential data settings where Shapley can underestimate value (cf. Example 4). Theoretically, instrumental value avoids the exponential complexity of Shapley, making it more scalable for large ML datasets. The acceleration methods you mentioned merely approximate the Shapley value, and similar ideas may apply to instrumental value. Our experiments show the conceptually significant advantages of instrumental value in terms of computation and sequential settings. Defining a universally better metric is unrealistic and not our intention. Meanwhile, the Shapley value in fact also requires a suitable choice of valuation function, while we, under the Bayesian framework, rigorously characterize which valuations are valid for decision making (Sec 2). As for https://arxiv.org/abs/2405.13954, it also stems from Shapley's inefficiency but takes a different angle—focusing on gradient-based LLMs. In contrast, we take an economics perspective (hence the game theory track), aiming for decision-oriented valuation (Sec 2.2) with theoretical guarantees, which they do not pursue. Thanks for your suggestions—we’ll discuss these works in the final version.

**Discussing Our Assumptions**: Regarding the Bayesian assumption, we can use either strong or uninformative beliefs (see right, lines 120–130), so as to cover the general case. Bayesian updating is also very common—recent work (https://arxiv.org/pdf/2503.04722) even shows that LLMs exhibit Bayesian behavior. The paper you mentioned (https://pubsonline.informs.org/doi/10.1287/msom.2020.0915) also adopts a standard Bayesian setting, but we focus on what constitutes a valid valuation under this framework (Sec 2), while it assumes a specific utility function.
We then explore the mechanism design problem under valid valuations, which is also entirely new. Regarding linearity, some studies show that data valuation is transferable—the value order under linear models often holds under non-linear ones as well (see right side, lines 253–260). Moreover, when the true model is unknown, linear models are the most robust (lines 65–70). Finally, linearity enables tractable theoretical results, which help us analyze the underlying economic intuition. As for the DPP assumption, one example is mechanism design in clinical trials, where experimenters actively select samples and often update beliefs in a Bayesian manner—see *Bayesian Clinical Trials* published in *Nature* (https://www.nature.com/articles/nrd1927). Building an experimental platform based on instrumental value is a promising direction with high scientific and economic potential.

**Utility vs Valuation**: Our data valuation measures how much utility the posterior distribution (after observing data) brings over the prior—for example, more precise estimates (with lower variance or entropy) yield higher monetary revenue. This mapping is common in practice; in clinical trials, for instance, pharmaceutical companies often have clear estimates of the value of improved precision.

**Difference from Active Learning**: While the data sequence affects both our instrumental data value and active learning, the two areas are completely different. Active learning studies how to select sequential data and train models to maximize **accuracy**, whereas our paper studies how to define a valid data valuation function that captures downstream users’ utilities. We approach this study from a utilitarian perspective, and our value corresponds to information gain only in a very canonical special case. We will clarify this point in the camera-ready version.
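For concreteness, here is a minimal numerical sketch of that canonical special case (this is our own illustration with hypothetical prior and noise scales, not the paper's exact model): in Gaussian Bayesian linear regression, the entropy-based value of a design reduces to the information gain between prior and posterior, and adding data can only increase it.

```python
import numpy as np

# Hypothetical illustration: prior w ~ N(0, tau^2 I), noise N(0, sigma^2).
# The entropy-based value of a design X is the information gain
# 0.5 * (log det(prior covariance) - log det(posterior covariance)).
def information_gain(X, tau=1.0, sigma=0.5):
    d = X.shape[1]
    prior_cov = tau ** 2 * np.eye(d)
    post_cov = np.linalg.inv(X.T @ X / sigma ** 2 + np.eye(d) / tau ** 2)
    return 0.5 * (np.linalg.slogdet(prior_cov)[1] - np.linalg.slogdet(post_cov)[1])

rng = np.random.default_rng(0)
X_small = rng.normal(size=(5, 3))
X_large = np.vstack([X_small, rng.normal(size=(20, 3))])
# Extra rows add a PSD term to the posterior precision, so the value only grows.
assert information_gain(X_large) >= information_gain(X_small) >= 0.0
```

Note that this value depends only on the design $X$ and the noise scale, not on the realized responses $Y$, which is exactly the property that makes valuation of a DPP possible before data is revealed.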
We hope our rebuttal has addressed the reviewer’s concerns, and we respectfully hope that the reviewer would consider re-evaluating the merit of our work accordingly.
Fishers for Free? Approximating the Fisher Information Matrix by Recycling the Squared Gradient Accumulator
Accept (spotlight poster)
Summary:
- In various contexts where a method is motivated using the Fisher Information Matrix (e.g. EWC, Fisher Pruning, etc.), the paper proposes to replace the sum of squared gradients, which can sometimes be cumbersome to compute, by the exponential moving average of squared minibatch gradients as computed in e.g. Adam, since this quantity is already computed during training. The method is called the "Squisher".
- The proposal is motivated by means of the joint empirical Fisher proposed in Lin et al. 2024.
- Empirical validation is performed on a variety of settings that employ an approximate diagonal Fisher information, which shows that there is no substantial difference (sometimes slightly better, sometimes slightly worse) in downstream performance when replacing the diagonal second moment of the gradient by the Squisher.

Claims And Evidence: The claim is rather conservative (i.e. downstream performance is sometimes better, sometimes not significantly worse), so the empirical results correctly support the claim.

Methods And Evaluation Criteria: My main concern regarding this paper is that it seems rather limited to only study diagonal approximations of the FIM, and I don't agree with the motivating argument that it is still difficult nowadays to obtain per-example gradients, e.g. using BackPACK or other libraries easily found on GitHub. Spontaneously, I would have benchmarked full-fledged (i.e. not diagonal) methods using the true FIM, as can be achieved using (E)KFAC approximations, e.g. [Ritter et al. 2018] for continual learning. My feeling after reading the paper (but I might be wrong) is that it is trying to solve a problem which is no longer so relevant to the 2025 deep learning community. Another concern is the use of the concept of "Fisher", where the Squisher is never actually directly compared to a ground-truth diagonal Fisher.
From the content of the paper alone, the claimed empirical results could be a side effect of some desirable property of your new method that has not much to do with true Fisher information. Even the title of the paper might be misleading.

Theoretical Claims: No new theoretical claims are introduced in the paper as far as I can tell; everything is inherited from Lin et al. 2024.

Experimental Designs Or Analyses: A fundamental flaw in my opinion is that many benchmarked methods claim to be using Fisher information, but actually end up making at least two stages of approximation:
- diagonal (whereas there are efficient approximate methods nowadays);
- gradients w.r.t. training-set targets, whereas the actual FIM would require gradients w.r.t. sampled targets (a closed form exists).

Supplementary Material: I did not review it.

Relation To Broader Scientific Literature: The literature regarding diagonal approximate methods, applied to various settings (i.e. optimization, pruning, model merging, etc.), is correctly discussed. A possible improvement would be to discuss methods that use the actual FIM, and approximate methods that scale its use to large nets.

Essential References Not Discussed:

For computing per-example gradients (methods):
- Goodfellow, I. (2015). Efficient per-example gradient computations. arXiv preprint arXiv:1510.01799.
- Rochette, G., Manoel, A., & Tramel, E. W. (2020, November). Efficient Per-Example Gradient Computations in Convolutional Neural Networks. In Workshop on Theory and Practice of Differential Privacy (TPDP).

Closed-form FIM that does not require sampling pseudo target classes:
- Pascanu, R., & Bengio, Y. (2013). Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584.

Other Strengths And Weaknesses: A main strength of the paper is that many different setups are covered in the experiments.
Another strength is that previous material, as well as the new method, is clearly presented.

Other Comments Or Suggestions: Why refer to the FIM's diagonal as the "Fisher"? If accepted, it might be misleading for future readers, since Fisher information corresponds to the full matrix. From the body only, please clarify what the new contribution is compared to Lin et al. Moreover, the current paper inherits some ambiguity from Lin et al.'s paper:
- why motivate a method using an expectation over sampled vectors y, since this expectation is never actually computed?
- why would it be useful to consider the Riemannian metric defined by the joint empirical FIM?

Questions For Authors: Why would a data scientist or researcher use the Squisher instead of more accurate approximate methods available through GitHub libraries?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your thoughtful feedback, questions, and suggested references. We have updated our manuscript in accordance with your suggestions. Please find our responses below. Let us know if you have any follow-up questions, we'd be happy to discuss further. --- > Why would a data scientist or researcher use the Squisher **Cost:** The main advantage of Squisher is that it’s completely free when using an adaptive optimizer that accumulates the squared gradient (e.g., Adam(W)), which is ubiquitous in deep learning. Since this information is already stored in the optimizer’s state dict, Squisher can be obtained without any additional computation or access to training data. In contrast, computing the Fisher requires both extra computation and access to training data. **Universal support:** You are right that there exist libraries like BackPACK that compute the empirical Fisher diagonal more efficiently than a for loop. However, these alternative approaches still introduce additional cost, require modifications in the code, and do not support all architectures -- for example, BackPACK does not support layer normalization. As a result, the for-loop approach remains common in existing Fisher-based methods, including those considered in our paper. Since Squisher simply recycles the already available squared gradient accumulator, it is universally supported without these limitations. --- > it seems rather limited to only study diagonal approximation of the FIM While recent work has explored Fisher approximations beyond the Fisher diagonal, such work has primarily targeted optimization applications (e.g. the natural gradient) as opposed to settings where the Fisher is used as a notion of parameter importance. Additionally, such approximations are actually *less* relevant in modern deep learning due to scale. 
Computing, processing, or storing anything larger than the Fisher diagonal (which has as many entries as model parameters) is generally infeasible with today's large-scale models, which might have billions of parameters. Consequently, the Fisher diagonal remains especially relevant in today's research. Given the increasing popularity of non-diagonal adaptive optimization algorithms like Shampoo/SOAP, we anticipate that their non-diagonal gradient second moments could be used to replace Kronecker-factorized curvature approximations of the Fisher, such as (E)KFAC. While we leave this exploration as future work, we believe it actually strengthens our argument for using gradient second moments from adaptive optimizers as Fisher proxies for free.

---

> the claimed empirical results could be a side-effect of some desirable property of your new method

While we agree that the empirical and true Fisher can meaningfully differ (as pointed out by other works, e.g. https://arxiv.org/pdf/1905.12558), most work using the Fisher as a notion of parameter importance uses the empirical Fisher because it produces good downstream performance and is more straightforward and efficient to compute than the true Fisher. Please also see our response to the second quoted note in Reviewer WT93.

---

> Why refer to the FIM's diagonal as the "Fisher"?

Please see our response to the same question in Reviewer WT93 (third quoted note).

---

> [...] please clarify what the new contribution is compared to Lin et al.

The work of Lin et al. serves as one motivation for our work and provides perspective on Fisher approximations that first sum, then square gradients. Our study aims primarily to empirically validate the connection between the Squisher and Fisher; we do not make or claim any substantial theoretical contributions. Note that Lin et al. primarily focus on optimization and the role of the square root in the update of adaptive optimizers for neural network training.
This is different from our setups, and their empirical findings do not overlap with ours.

---

> - why motivate a method using an expectation over sampled vectors y, since this expectation is never actually computed?
> - why would it be useful to consider the Riemannian metric defined by the joint empirical FIM?

The motivation for considering the joint Fisher, which stems from considering a vector of labels, is that it gives rise to a Fisher that first sums, then squares the gradient. This pattern is similar to the statistics of adaptive optimizers like Adam that accumulate the square of summed gradients. As pointed out by Reviewer WT93, the joint and standard Fishers coincide, i.e. they approximate the same underlying "true" Fisher. Therefore, the standard empirical Fisher and the joint empirical Fisher can be regarded as two different empirical approximations of the same underlying Fisher. Since the standard empirical Fisher is used in various problems in practice, our paper studies whether the Squisher, which approximates the joint empirical Fisher, can serve as a "free" drop-in replacement.
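As a concrete illustration of the "completely free" claim in the rebuttal above, the sketch below shows how a Squisher-style importance score could be read out of an Adam-style optimizer state. The function name and dict layout (mirroring the `step`/`exp_avg_sq` keys that PyTorch's Adam keeps per parameter) and the exact rescaling by dataset size (the reviews mention an N-scaling in the paper's Eq. 15) are illustrative assumptions based on the discussion here, not the paper's actual code.

```python
import numpy as np

def squisher_from_adam_state(state, dataset_size, beta2=0.999):
    """Recycle an Adam-style second-moment accumulator as a parameter-importance score.

    `state` mimics the per-parameter layout of PyTorch's Adam state
    ({"step": int, "exp_avg_sq": array}); the rescaling by `dataset_size`
    follows the N-scaling the reviews attribute to the paper's Eq. (15).
    """
    v = np.asarray(state["exp_avg_sq"], dtype=float)
    step = int(state["step"])
    v_hat = v / (1.0 - beta2**step)  # Adam's standard bias correction
    return dataset_size * v_hat      # one importance score per parameter

# Usage with a hand-built state (no training loop needed for the illustration):
state = {"step": 1000, "exp_avg_sq": np.array([4e-3, 1e-6, 9e-4])}
scores = squisher_from_adam_state(state, dataset_size=256)
print(scores)  # larger score = parameter treated as more important
```

The point is that everything this function touches already sits in the optimizer's state dict after training, so no extra gradient passes or data access are needed.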
Summary: This paper introduces "Squisher," a method that repurposes the squared gradient accumulator from adaptive optimizers (such as Adam) to approximate the Fisher Information Matrix (FIM) without additional computational cost. The authors provide theoretical analysis connecting the squared gradient accumulator to the FIM and evaluate the approach across six settings:
- Fisher Merging
- Uncertainty-Based Gradient Matching (UBGM)
- Fisher Pruning
- FISH Mask (sparse training)
- Task Embedding with Task2Vec
- Elastic Weight Consolidation (continual learning)

The main finding is that Squisher consistently performs similarly to the traditional Fisher while outperforming Fisher-free baselines, making Fisher-based methods more practical by eliminating their computational overhead.

Claims And Evidence: The claims are well-supported through:
- Theoretical analysis connecting squared gradient accumulators to the Fisher, particularly through the "joint Fisher" formulation
- Comprehensive empirical evaluation across six different applications
- Ablation studies examining the impact of specific approximations (normalization and EMA coefficient)
- Honest reporting of both positive and negative results

The experimental results convincingly demonstrate that Squisher can effectively replace the Fisher in various applications without significant performance degradation.

Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate:
- The authors replicate established experimental setups from prior work for each application
- For each setting, they compare against both the original Fisher implementation and a Fisher-free baseline
- The evaluation metrics are standard for each application (accuracy, mean reciprocal rank, etc.)
- Multiple random seeds are used for stochastic elements

Theoretical Claims: I checked the theoretical derivations connecting the squared gradient accumulator to the Fisher Information Matrix.
The connections drawn between different formulations (particularly the joint empirical Fisher and its relationship to the accumulator in Equations 11-15) appear sound. The scaling factor of N needed to align the Squisher with the Fisher (Equation 15) is correctly identified and explained.

Experimental Designs Or Analyses: The experimental design is sound:
- The authors base their implementations on prior work for direct comparability
- They use multiple random seeds where appropriate (e.g., five seeds for pruning experiments)
- Both positive and negative results are reported transparently
- The ablation studies in Section 4 isolate specific factors affecting performance

Supplementary Material: No supplementary material was provided for this paper.

Relation To Broader Scientific Literature: The paper positions its contributions well within multiple research areas:
- Natural Gradient Descent and second-order optimization methods
- Adaptive gradient methods (Adam, RMSProp, etc.)
- Applications of the Fisher diagonal as a parameter importance measure
- The practical challenges of computing the Fisher in existing frameworks

The authors correctly identify that computational costs have hindered the adoption of Fisher-based methods, motivating their approach.

Essential References Not Discussed: The paper covers most relevant prior work. However, it could benefit from more discussion of recent second-order optimizers: the paper does not mention some other recent second-order optimizers such as AdaFisher [1], which could be relevant in the context of adaptive second-order methods. Including these in the discussion would provide a more comprehensive overview of related work.

[1] Gomes, D. M., Zhang, Y., Belilovsky, E., Wolf, G., & Hosseini, M. S. (2025). AdaFisher: Adaptive second-order optimization via Fisher information. In The Thirteenth International Conference on Learning Representations.
Other Strengths And Weaknesses:

Strengths:
- Practical impact: Eliminates the computational overhead of Fisher-based methods
- Broad evaluation: Demonstrates robustness across diverse applications
- Clear theoretical grounding: Well-explained connection to the Fisher
- Honest reporting: Presents both positive and negative results

Weaknesses:
- Limited insight into performance differences: More analysis could explain why Squisher outperforms Fisher in some settings but underperforms in others
- Hyperparameter sensitivity: The ablation study shows dependence on the EMA coefficient, but provides limited guidance on optimal settings
- Some implementation details could be clearer, particularly for mini-batch scaling

Other Comments Or Suggestions: The paper is well-written and clearly structured. No significant typos or errors were noted.

Questions For Authors:
1. In Fisher Merging experiments, Squisher significantly outperforms the original Fisher. Do you have insights into why this is the case, rather than just performing similarly? Does the Squisher have properties particularly beneficial for model merging?
2. How sensitive is Squisher to optimizer hyperparameters, particularly the EMA coefficient? Given these are typically tuned for optimization rather than Fisher approximation, do you have recommendations for non-optimal settings?
3. How does mini-batch size affect the quality of the Squisher approximation? Since the scaling factor is related to dataset size N, are there special considerations for very large datasets with small mini-batches?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your thoughtful feedback and suggestions. We will make sure to mention AdaFisher as an adaptive method that uses a diagonal Kronecker-factored approximation of the Fisher. Please find our responses below and let us know if you have follow-up questions.

---

> Limited insight into performance differences: More analysis could explain why Squisher outperforms Fisher in some settings but underperforms in others.
> Q1- In Fisher Merging experiments, Squisher significantly outperforms the original Fisher. Do you have insights into why this is the case, rather than just performing similarly? Does the Squisher have properties particularly beneficial for model merging?

Since both the diagonal Fisher and Squisher rely on multiple approximation steps, it is difficult to isolate a specific factor responsible for their divergence. We want to emphasize that our goal is not to claim that Squisher is inherently superior or inferior to Fisher. The performance differences are problem-dependent and influenced by training trajectories, but overall, the variation remains small relative to baseline performance, and our experimental results thoroughly validate the Squisher as a zero-cost drop-in replacement for the Fisher. Let us illustrate this point for the two settings with the largest difference:
- For merging, both Fisher and Squisher exhibit high variance in performance depending on the checkpoint used. We will describe these variances in more detail in the text and quantify them with error bars.
- For pruning, we will add results under varying training durations. The table below shows that decreasing the number of epochs slightly increases the Squisher's performance.
| Method | Epochs trained | Accuracy | Std dev |
| -------- | -------- | -------- | -------- |
| Fisher | 10 | 64.0 | 0.7 |
| Fisher | 15 | 64.8 | 0.1 |
| Squisher | 10 | 64.3 | 0.3 |
| Squisher | 15 | 63.9 | 0.6 |

---

> Hyperparameter sensitivity: The ablation study shows dependence on the EMA coefficient, but provides limited guidance on optimal settings
> Q2- How sensitive is Squisher to optimizer hyperparameters, particularly the EMA coefficient? Given these are typically tuned for optimization rather than Fisher approximation, do you have recommendations for non-optimal settings?

The ablation results show the effect of varying the β₂ hyperparameter. We found that lowering β₂ hurt performance, highlighting the importance of accumulating gradient information over time. The optimal setting aligns with the default AdamW value of 0.999, while performance degradation was observed at 0.95. Notably, even with an unconventionally low β₂, the Squisher estimate still outperformed the baseline, indicating that while β₂ impacts performance, it remains usable if conventional values are used during training. We will clarify this in the final report, and we can provide a more comprehensive plot of β₂ values vs accuracy if necessary.

---

> Some implementation details could be clearer, particularly for mini-batch scaling

Thanks for bringing this to our attention. We will make sure to update Figure 1 to incorporate an additional stage where mini-batching is introduced. Let us know if this addresses your concern or if there are specific details you’d like us to clarify further.

---

> Q3- How does mini-batch size affect the quality of the Squisher approximation? Since the scaling factor is related to dataset size N, are there special considerations for very large datasets with small mini-batches?

This is a theoretically interesting question and we will try to add some more experiments on the EWC setting with varying batch sizes.
Practically though, we believe that the batch size should not be considered a hyperparameter of Squisher, since changing it also impacts the optimization algorithm. Its choice should primarily be driven by the optimization algorithm and the hardware. Generally speaking, our results did not exhibit a major dependence on batch size, and we anticipate that the Squisher should provide reasonable performance for typical batch sizes.
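To make the β₂ discussion in the rebuttal above more tangible: an exponential moving average with coefficient β₂ has an effective averaging horizon of roughly 1/(1 - β₂) steps (about 1000 for the AdamW default of 0.999, about 20 for 0.95). The toy sketch below is our own illustration, not the paper's experiment; it shows how a low-β₂ accumulator forgets the large early-training gradients that a high-β₂ accumulator still retains.

```python
import numpy as np

def ema_of_squares(grads, beta2):
    """Bias-corrected exponential moving average of squared values, as in Adam's v_t."""
    v, out = 0.0, []
    for t, g in enumerate(grads, start=1):
        v = beta2 * v + (1.0 - beta2) * g**2
        out.append(v / (1.0 - beta2**t))
    return np.array(out)

# A gradient signal whose magnitude drops after step 500: large gradients
# early in training, near-converged small gradients afterwards.
grads = np.concatenate([np.full(500, 1.0), np.full(500, 0.1)])

slow = ema_of_squares(grads, beta2=0.999)  # horizon ~ 1/(1 - 0.999) = 1000 steps
fast = ema_of_squares(grads, beta2=0.95)   # horizon ~ 20 steps

# By the final step, the fast EMA has essentially forgotten the early large
# gradients, while the slow EMA still retains a sizeable share of them.
print(slow[-1], fast[-1])
```

This matches the rebuttal's observation that the accumulator "contains contributions from an entire trajectory": β₂ controls how much of that trajectory the Squisher remembers.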
Summary: This paper proposes reusing the squared gradient estimator of adaptive gradient methods as an approximation to the Fisher information matrix, called the 'Squisher'. Through extensive evaluation in model merging, model pruning, sparse fine-tuning, task embedding, and continual learning, the authors demonstrate that the Squisher performs on par with the empirical Fisher without needing additional gradient computations.

## Update after the rebuttal
The authors addressed my remaining concerns. I retain my recommendation of acceptance.

Claims And Evidence: The paper's claims, mostly of a performance-centric nature, are well-supported. In particular, the paper claims that the Squisher approximator is on par with the empirical Fisher on a wide range of tasks, _not_ that it is a good approximation of either the empirical Fisher or even the true Fisher (which would require further theoretical analysis of the quality of the approximations, possibly even needing distributional assumptions).

Methods And Evaluation Criteria: The authors evaluate the Squisher against both the empirical Fisher and a Fisher-free baseline. The downstream tasks include model merging and pruning, sparse fine-tuning, and continual learning. These applications all require a notion of parameter importance, making them appropriate downstream tasks. The datasets and architectures are all large-scale.

Theoretical Claims: No such theoretical claims were made in the submission.

Experimental Designs Or Analyses: I reviewed the experimental design in detail, in particular how the diagonal Fisher approximations are used in the downstream tasks and on what datasets and architectures the evaluation happens. This work uses T5 and BERT Transformer models for text-based tasks, and a VGG-13 for Fisher Pruning. I am satisfied with these choices. The exact model used in the EWC experiment is unspecified.
Supplementary Material: I reviewed the appendix in detail, which contains more details about the individual tasks in Fig. 2. I observed no discrepancies between Fig. 2 and the tables in the appendix. Relation To Broader Scientific Literature: The Fisher Information Matrix, proposed by Fisher (1922), is a natural (degenerate) Riemannian metric on the parameter space of neural networks (Amari, 1998). While Kunstner et al. (2019) argue against the use of the _empirical_ Fisher as a _curvature estimate_, it still gained widespread use as an indicator of _parameter importance_. This work makes use of existing buffers in adaptive gradient methods to provide a cheap approximation to the Fisher. Lin et al. (2024) use this approximation to motivate the removal of the square root in adaptive gradient methods and show that it does not suffer from the problems of the empirical Fisher outlined in (Kunstner et al., 2019). This work uses this "free" quantity in downstream tasks ranging from continual learning to model merging. Essential References Not Discussed: I am unaware of _essential_ references that the paper does not consider. Other Strengths And Weaknesses: A clear strength of this paper is its empirical evaluation. Numerous diverse downstream tasks considered where the Squisher performs on par with the empirical Fisher. Further, the proposed method recycles gradient estimators inherent to adaptive gradient methods, imposing virtually no overhead for calculating the Squisher. This insight, combined with the poor baseline performance on downstream tasks, motivates the proposed method well. Lastly, the paper is enjoyable to read and has a clear structure. A weakness of this paper is the lack of motivation on why the empirical version of the "new Fisher matrix" of Lin et al. (2024) is a sensible alternative object to describe parameter importance. 
This is a conceptually simple fix: one can show that the underlying "true" Fisher remains the same and only the nature of the approximation differs. In detail, for $n$ i.i.d. input random variables (RVs) $X$ and output RVs $Y$, distributed according to some parametric process $p(x, y \mid \theta) = p(y \mid x, \theta)p(x)$, the "true" Fisher is written as $$\begin{align}\mathbb{E}\_{p(X, Y \mid \theta)}\left[{\nabla}\_{\theta} \log p(Y \mid X, \theta){\nabla}\_{\theta} \log p(Y \mid X, \theta)^\top\right].\end{align}$$ Using the additivity of the Fisher information for i.i.d. random variables and a discrete set of inputs {$x_1, \dots, x_N$} to approximate $p(x)$, we obtain $$\begin{align}\sum_n \mathbb{E}\_{p(y \mid x_n, \theta)}\left[\nabla\_\theta \log p(y \mid x_n, \theta) \nabla\_\theta \log p(y \mid x_n, \theta)^\top\right].\end{align}$$ The standard empirical Fisher approximation substitutes labels {$y_1, \dots, y_N$} from the true generative model: $$\begin{align}\sum_n \nabla\_\theta \log p(y_n \mid x_n, \theta) \nabla\_\theta \log p(y_n \mid x_n, \theta)^\top.\end{align}$$ Instead, the formula the authors use can be derived from the "true" Fisher by first Monte Carlo approximating $X$ with a single set of inputs $\mathcal{X} =$ {$x_1, \dots, x_N$}, resulting in: $$\begin{align}\mathbb{E}\_{p(Y \mid \mathcal{X}, \theta)}\left[{\nabla}\_{\theta} \log p(Y \mid \mathcal{X}, \theta){\nabla}\_{\theta} \log p(Y \mid \mathcal{X}, \theta)^\top\right],\end{align}$$ then substituting in the ground-truth labels $\mathcal{Y} = $ {$y_1, \dots, y_N$}: $${\nabla}\_{\theta} \log p(\mathcal{Y} \mid \mathcal{X}, \theta){\nabla}\_{\theta} \log p(\mathcal{Y} \mid \mathcal{X}, \theta)^\top.$$ It would also be insightful to see the performance of the true (MC-approximated) Fisher on the downstream task for comparison, as the empirical approximation loses theoretically desirable properties of the Fisher. 
Other Comments Or Suggestions:
- I prefer retaining the term "Fisher" for the Fisher information matrix and keeping the "empirical" adjective for cases when the ground-truth labels are used. These terms are often used interchangeably in the literature, and simplifying the naming in this paper could potentially add to the confusion.
- Eq. (10): $X_n$ should be $X$. The same holds for the Joint Fisher in Fig. 1, except there, the $\frac{1}{N}$ scaling is also incorrect.
- Eq. (8): I suggest using $B$ instead of $N$ to prevent confusion about mini-batching vs. full-batch training.
- Eq. (15): I suggest adding ", where squaring is performed element-wise" after the equation. Likewise in Eqs. (15, 16) for division and multiplication.
- L303: "section 5.3" -> "Section 5.3"
- L293: "section 4.1" -> "Section 4.1"
- L295: "uses <the> FISH mask"
- The first paragraph of Section 3.5 might be hard to understand for readers unfamiliar with Task2Vec. I propose the following clarification: "Task2Vec embeddings are computed using the Fisher of a model trained on a given task. ~~and averaging~~ The values are averaged across parameter ``groups'' (e.g. weight matrices), leading to a smaller-dimensional representation than the number of weights." In the Setup paragraph, consider elaborating on the notion of "intermediate task". It might be instructive to state the full pipeline (pretraining -> intermediate task -> downstream task).
- L330: "dataset into five contexts"

Questions For Authors:
1. In the paragraph of L292, the authors discuss the fine-tuning setting described in Section 4.1 of Sung et al. (2021). According to this description, it uses the FISH mask to reset parameters. I have not found this in Section 4.1 of Sung et al. (2021); could the authors clarify what they mean by resetting parameters?
2. "Another experiment is to maintain a moving average of Fishers": Do the authors refer to the full Fisher information matrix as the Fisher here instead of the diagonal?
Otherwise, Adam already calculates this quantity. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks a lot for your thorough review and the various suggestions that we will make sure to incorporate into the manuscript. We are specifically grateful to the reviewer for pointing out the equivalence between the standard and joint Fisher, which strengthens our motivation to view the standard empirical Fisher and Squisher as two different empirical approximations of the same underlying Fisher. Please find our response to your remaining concerns below. Let us know if you have follow-up questions or suggestions. --- > The exact model used in the EWC experiment is unspecified. We used a 5-layer CNN with 393,088 parameters for the incremental learning task on CIFAR-100. We will add details of its architecture to our draft. Please let us know if you would like further clarification. --- > It would also be insightful to see the performance of the true (MC-approximated) Fisher on the downstream task for comparison, as the empirical approximation loses theoretically desirable properties of the Fisher. For the EWC setting, we actually computed the true diagonal Fisher for the results presented in the paper. As shown, Squisher provided similar performance to the true diagonal Fisher. We used the true Fisher in this case because it was part of the original implementation. We agree that these additional experiments would be interesting for other settings. We will explore their feasibility, though some may require a large number of MC samples, making them impractically expensive. Note that this does not affect the main contributions of our paper, which is proposing a cheap replacement for the empirical Fisher diagonal, which is widely used in the tasks we study, specifically at extreme scales. --- > - I prefer retaining the term "Fisher" for the Fisher information matrix and keeping the "empirical" adjective for cases when the ground-truth labels are used. 
> These terms are often used interchangeably in the literature, and simplifying the naming in this paper could potentially add to the confusion.

We agree that, given the frequent confusion of Fisher and empirical Fisher in the community, we should make the terminology less ambiguous. We will do so by introducing acronyms for the attributes **diagonal** and **empirical** to specify the Fishers used in the text more clearly.

---

> Q1. In the paragraph of L292, the authors discuss the fine-tuning setting described in Section 4.1 of Sung et al. (2021). According to this description, it uses the FISH mask to reset parameters. I have not found this in Section 4.1 of Sung et al. (2021); could the authors clarify what they mean by resetting parameters?

For the FISH mask setting, the diagonal Fisher is used to determine which parameters to mask by identifying those with the lowest estimated importance. The remaining parameters are retained and fine-tuned on the new task. By "resetting parameters," we refer to resetting the masked weights to their original pre-trained values rather than keeping their fine-tuned values. We will clarify this in the final paper. Please let us know if you need further details.

---

> Q2. "Another experiment is to maintain a moving average of Fishers": Do the authors refer to the full Fisher information matrix as the Fisher here instead of the diagonal? Otherwise, Adam already calculates this quantity.

Apologies for the confusion. Typically, the diagonal Fisher is computed at a single point in time, after training for a specific number of epochs. The ablation we propose is to maintain a moving average over these values, which is similar to the exponential moving average in adaptive optimizers.
The key distinction between this approach and what Adam computes lies in the order of operations: Adam maintains a moving average of squared gradients (i.e., sum then square), whereas the Fisher is derived from the squared expectation of gradients (i.e., square then sum), as discussed in the background section.
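The order-of-operations distinction described above can be checked numerically. The following minimal numpy sketch is our illustration, not the paper's code; it contrasts the two quantities for a single batch. The per-coordinate gap between "square then sum" and N times "sum then square" equals N times the variance of the per-example gradients, so the two coincide only when all per-example gradients agree.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 4
per_example_grads = rng.normal(size=(N, D))  # one gradient row per example

# "Square then sum": the standard empirical Fisher diagonal.
fisher_diag = (per_example_grads**2).sum(axis=0)

# "Sum then square": what an Adam-style accumulator sees for this batch,
# since the mini-batch gradient is the mean of the per-example gradients.
batch_grad = per_example_grads.mean(axis=0)
sum_then_square = batch_grad**2

# Per coordinate: fisher_diag - N * sum_then_square == N * variance of the
# per-example gradients, which is always >= 0.
print(fisher_diag)
print(N * sum_then_square)
```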
Summary: This paper explores the idea of approximating the Fisher Information Matrix by using the squared gradient accumulator that is already computed in optimizers like Adam. The authors have done an excellent job of testing this approximation in six different applications where the empirical Fisher is used and show that in most of the settings, this cheap approximation is good enough.

Claims And Evidence: Yes. The authors were very careful in not claiming that the proposed approximation is consistently better than using the computationally more expensive Fisher. There is no overclaiming.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no proofs in the paper.

Experimental Designs Or Analyses: Yes. No issues with the experimental design.

Supplementary Material: No.

Relation To Broader Scientific Literature: The authors have done a good job of positioning their work. They took a really simple idea of coming up with a cheap approximation for the Fisher but then did an extensive empirical analysis to understand if the approximation was reasonable.

Essential References Not Discussed: No. I appreciate the authors' effort in giving credit to Adadelta, while most papers cite RMSProp for the accumulator mechanism.

Other Strengths And Weaknesses:

Strength:
* The paper considers a computationally cheap approximation for the diagonal Fisher and demonstrates that it works on par with or better than the computationally costlier methods.

Other Comments Or Suggestions:
* Page 2: the Adam citation is not correct.
* In all the results sections, while you mention that Squisher is better than or on par with Fisher, you should also highlight that it is actually much cheaper to compute Squisher. This is implicit, but you can highlight this better in your experimental sections.

Questions For Authors:
* Q1: Page 4: In the last paragraph of Section 2, you talk about the effect of having $\beta_2 = 0.999$ and mention that it results in a biased estimate. But then the discussion is not complete. Is it good or bad that the estimate is biased?
* Q2: For all the applications, the results section just mentions whether Squisher underperforms or overperforms Fisher. But you do not reason why in every application. It might be useful to add this discussion.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your effort and valuable feedback. We fixed the Adam citation in the manuscript, thanks for pointing that out. Please find our responses below and let us know if you have any follow-up questions.

---

> [...] while you mention that squisher is better than or on par with Fisher, you should also highlight that it is actually much cheaper to compute squisher.

Thanks for this actionable suggestion! We will make sure to highlight this more clearly, and also include a small table in the main text that contrasts the computation times of the empirical Fisher with that of the Squisher (essentially zero).

---

> Q1: page 4: In the last paragraph of section 2, you talk about the effect of having $\beta_2$ = 0.999 and mention that it results in a biased estimate. But then the discussion is not complete. Is it good or bad that the estimate is biased?

Thanks for bringing this to our attention; let us clarify (we will do so in the text, too): There are two different types of bias in the Squisher. The first stems from approximating the Fisher with the empirical distribution. The paragraph tries to highlight the second one, which stems from the optimization algorithm's exponential moving average heuristic (using $\beta_2$). While the empirical Fisher is computed at one point in parameter space, the Squisher contains contributions from an entire trajectory. We can only hypothesize whether these averages are good or bad (e.g. for optimization they definitely seem to help, as the default value for $\beta_2$ in Adam is $0.999$). Instead, we accept the presence of exponential moving averages as given, since our goal is to investigate whether the optimizer's statistics can be used as drop-ins for the Fisher. In our ablation experiments, we found that the Squisher still works reasonably (i.e. better than the Fisher-free baseline) even if we vary $\beta_2$, but the default value seems to work best in most cases.
--- > Q2: For all the applications, the results section just mentions whether squisher underperforms or overperforms Fisher. But you do not reason why in every application. It might be useful to add this discussion. Please see our response to the same question in Reviewer D2a9 (first quoted note).
DiLQR: Differentiable Iterative Linear Quadratic Regulator via Implicit Differentiation
Accept (poster)
Summary: This paper introduces a framework that facilitates differentiation through iLQR, which provides the gradient of an iLQR controller through implicit differentiation.

Claims And Evidence: The authors theoretically prove the effectiveness of their method in Section 5 and provide experimental results in the same section to support their claims.

Methods And Evaluation Criteria: They perform experiments on CartPole and Inverted Pendulum and evaluate the results based on backward time, loss, and computation time, which makes sense for the problem.

Theoretical Claims: I've checked Proposition 4.1.

Experimental Designs Or Analyses: For the comparison of model loss between diffMPC and DiLQR, why is there a sudden drop with high standard deviation at the end for DiLQR?

Supplementary Material: I've checked the proof and experimental details.

Relation To Broader Scientific Literature: This paper provides an analytical solution for the gradient of an iLQR controller through implicit differentiation.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

**Strengths**
- The experimental results on speedup and performance improvement are compelling.
- An ablation study is provided.

**Weaknesses**
- The logic of the proof needs to be polished. For example, there is no theorem proving that the proposed algorithm guarantees a speedup.
- Experiments on more complicated tasks, such as Atari games, could be considered in future work.

Other Comments Or Suggestions:
- The font size inside the figures should generally match the font size of the main text. Currently, some figures have text that is too small and blurry.
- Figure 4(a) is missing.
- In-text citations are all enclosed in parentheses. However, in some cases this is not appropriate. For example, at the beginning of Section 4.4, the citation should be without parentheses.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers' insightful comments and constructive feedback, which have helped improve our paper. Below we provide detailed responses to each point. --- **Q1: No Theorem Proving Speedup** We thank the reviewer for this important question. Our work is situated in the field of differentiable control, which aims to expose model structure to learning and improve sample and computational efficiency. In this area, it is commonly understood—though not typically accompanied by formal theorems—that incorporating control priors and structured solvers leads to faster learning. Foundational works such as: - *Revisiting Implicit Differentiation for Learning Problems in Optimal Control* (NeurIPS 2023) - *Pontryagin Differentiable Programming* (NeurIPS 2020) - *Infinite-Horizon Differentiable MPC* (ICLR 2020) - *Differentiable MPC for End-to-End Planning and Control* (NeurIPS 2018) share a similar focus on demonstrating empirical improvements, without formal speedup theorems. Such approaches have seen success in robotics, e.g., - *Reaching the Limit in Autonomous Racing: Optimal Control vs Reinforcement Learning* (Science Robotics 2023) - *Actor-Critic Model Predictive Control* (ICRA 2024) - *Guiding RL with Incomplete Model Information* (IROS 2024) demonstrating practical benefits in learning efficiency. In addition, our method improves computational efficiency: by leveraging analytic gradients, we avoid differentiating through rollouts, reducing complexity to $ \mathcal{O}(1)$ per iteration—a property determined by algorithm design. --- **Q2: Experiments on More Complicated Tasks** As noted in our discussion section, many prior works (**Amos et al., 2018; Watter et al., 2015; Xu et al., 2024a; Jin et al., 2020**) also use **simplified environments** to illustrate core ideas. 
Differentiable control methods are built upon **model-based control** and therefore require access to **structural priors**, which makes them less "plug-and-play" than general-purpose RL. However, the benefit is that such structure greatly reduces the **sample complexity**. To go beyond toy tasks, we further evaluate our method on a challenging **rocket control task with a 13-dimensional state space**—the most complex benchmark among recent analytical differentiable control works. Videos are available on our project page: [https://sites.google.com/view/dilqr/](https://sites.google.com/view/dilqr/) . Additional results are currently being prepared and will be shared shortly. The rocket model and controller code are also open-sourced in our codebase. --- **Q3: Sudden Drop and High Standard Deviation for Model Loss** The sudden drop corresponds to learning progress in reducing model loss. The observed standard deviation (~0.17) is reasonable, especially considering the mean gap between methods is 0.47, indicating that the improvement is both significant and consistent. --- **Q4: Typographical Errors** We sincerely thank the reviewer for catching these formatting issues. We will carefully review the entire manuscript and correct all typographical/formatting errors in the final version.
Summary: This paper proposes DiLQR, which derives the analytic gradient of a given scalar loss function with respect to the parameters in the iLQR system (e.g., parameters of the dynamics or cost functions) through the use of the implicit function theorem. Parallelization and the sparsity of the problem are exploited to improve the speed of computing this analytic gradient. Experiments are performed on a state-based cartpole problem, showing the proposed method outperforms a baseline that just uses an ordinary neural network as the policy. It is also shown to achieve a lower parameter estimation loss compared to the closely related prior work of Amos et al. The method is also briefly demonstrated to work with images for an inverted pendulum task. Claims And Evidence: I think the claims made in the paper are generally true, given the limited set of baselines it is compared to. However, I feel the paper is lacking comparison to several more recent methods and frameworks that tackle almost the same problem: - Jin et al., Revisiting Implicit Differentiation for Learning Problems in Optimal Control, NeurIPS 2023 - Jin et al., Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework, NeurIPS 2020 The above two papers solve almost the exact same class of problems, i.e., differentiable trajectory optimization (see Eq. 2 in the first paper and Eq. 1 in the second paper), as defined in this paper, although they are not necessarily using iLQR as the solver. The lack of comparison to these more recent methods somewhat weakens the claim on the effectiveness of the proposed method. - Pineda et al., Theseus: A Library for Differentiable Nonlinear Optimization This paper provides a very general framework to differentiate through non-linear optimization problems (which includes trajectory optimization, though this paper handles the constraints in a soft way by adding the constraints as losses). 
Many computational and engineering optimization efforts are implemented in this paper to improve the computation speed, stability, and quality of the obtained gradient (e.g., Theseus supports many different ways of differentiating through the non-linear optimization, such as implicit differentiation as used in DiLQR, and other highly optimized ways like truncated backward). I think a discussion of this paper, and of how the proposed method differs from the methods already implemented in Theseus, is needed. - Wan et al., DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning, NeurIPS 2024 This paper shows that differentiable trajectory optimization can be scaled to high-dimensional image and point cloud observations on many common robotics benchmarks. Since it is claimed that DiLQR can be applied to image inputs as well, it would be good to discuss and maybe compare to this baseline to see how effective DiLQR is in such settings. Methods And Evaluation Criteria: The method and evaluation criteria make sense. Theoretical Claims: I went through the theoretical claims and did not find any obvious issues. Experimental Designs Or Analyses: Overall the experiment design seems to follow that in Amos et al. and is valid. I have the following questions regarding the experiments: - How is the imitation learning loss for the SysID baseline computed, given that it does not learn the actions? - What would be the method’s performance if the dynamics and cost functions do not assume pre-defined structures? The NN baseline does not assume access to such prior information. - As mentioned above, I think it would make the paper much stronger and more convincing if DiLQR were compared to more recent and advanced baselines, including: 1. Jin et al., Revisiting Implicit Differentiation for Learning Problems in Optimal Control, NeurIPS 2023 2. Jin et al., Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework, NeurIPS 2020 3. 
Wan et al., DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning, NeurIPS 2024 - Section 5.4 – what is the actual performance of the method on the inverted pendulum problem? Does it always perfectly solve the problem? Supplementary Material: I briefly went through Appendix A.1. Relation To Broader Scientific Literature: This paper extends the work by Amos et al. to differentiate through iLQR, along with some computational improvements. I feel this contribution alone, and the lack of comparison to more recent differentiable trajectory optimization methods, is not enough to warrant acceptance at ICML. Essential References Not Discussed: Already mentioned above: the lack of comparison to, and discussion of, some closely related works. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: - Figure 1 is not referenced in the main text. - Line 228: “Likewise, θθX⋆ and θθU⋆ are the Jacobians” typos? Questions For Authors: Already covered above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful comments and constructive feedback. Below we provide point-by-point responses to the raised questions. --- **Q1: Comparison with DiffTORI** We thank the reviewer for raising this point. **DiffTORI** ([Wan et al., NeurIPS 2024]) is a pioneering work that bridges differentiable control and RL, especially in high-dimensional settings. We fully agree that its **generality** (e.g., model-agnostic design, multi-modal rewards) offers broader applicability for perception-driven tasks. **Our work** focuses on **structured control domains** where iLQR is the canonical solver. By deriving *exact gradients* for iLQR, we achieve: 1. **Data efficiency**: Sub-2000 environment steps suffice to achieve the results in our paper, leveraging prior knowledge of dynamics. 2. **O(1) backward complexity** (vs. autodiff's linear cost, **Fig. 2**), enabled by analytic gradient computation. This aligns with the **PDP/DiffMPC lineage** ([Jin et al., NeurIPS 2020; Amos et al., NeurIPS 2018]), emphasizing precision for structured control. Crucially, our method and DiffTORI **address non-overlapping challenges**: - **DiffTORI**: General-purpose RL-control synergy with perception. - **Ours**: Analytic acceleration for iLQR-based pipelines with structured dynamics. We will add more discussion and clarify this complementarity in the final version. --- **Q2: Differences with Theseus** Our method provides exact gradients for iLQR, which **Theseus cannot compute automatically**. While Theseus (similarly to JAXopt) can differentiate through fixed-point equations ($x^*=f(x^*)$), iLQR's unique challenge is that its **dynamics (D) and cost (C) depend on x*** itself, creating a recursive relationship $x^*=f_{x^*}(x^*)$. This cannot be resolved through auto-implicit-differentiation alone - it requires **reformulating LQR as a pure fixed-point problem**, which our paper implements theoretically (not just at the coding level). 
This is not a small step. --- **Q3: Relation to DiffMPC** We clarify that DiffMPC approximates gradients by treating D/C as independent of x* (i.e. x*≈f(x*)), while our method properly accounts for their **interdependence through implicit differentiation**. Our **key innovation** lies in embedding Amos's results into a correct fixed-point formulation that yields exact analytical gradients, representing a high-level advancement beyond prior work. --- **Q3b: Comparison with (Safe)PDP and IDOC** As noted in our related work (L147), these prior methods compute gradients under the **assumption of optimal forward solutions**, which creates mathematical inconsistencies when the forward pass is suboptimal. In contrast, our approach maintains **alignment between forward and backward passes** throughout iLQR optimization. This key difference is evidenced in our cartpole experiments: - **IDOC**: 4×10⁻² error - **PDP**: 2×10⁻² error - **Ours**: 1×10⁻⁴ error Additional results will be available on our [project page](https://sites.google.com/view/dilqr/). We acknowledge that (Safe)PDP and IDOC offer **greater generality** in handling diverse solvers and constraints - an important contribution to the field. --- **Q4: Losses for SysID** The **SysID** approach follows a two-stage process: 1. Estimates model parameters from data 2. Uses the estimated model in MPC for trajectory prediction Compared to end-to-end learning, this represents a **modular strategy** that decouples system identification from control. --- **Q5: NN Baseline Comparison** This is exactly the key point we want to demonstrate! Our method essentially transforms the neural network's black-box approach into a white-box solution, achieving significant improvements in sample efficiency through structured prior knowledge. --- **Q6: Visual Control Task** We employ a basic **autoencoder (AE) framework**, which may exhibit sensitivity limitations. 
This work is primarily **conceptual**, with the visual control section demonstrating that: - iLQR can function as a differentiable controller - It successfully integrates with vision modules We believe that combining our method with DiffTORI could lead to significant improvements in handling multi-modal inputs. --- Rebuttal Comment 1.1: Comment: I truly appreciate the authors for the detailed response. Regarding Q5, I guess my question is more like, what if you remove the assumption that the dynamics and cost function structure are known, but just represent it as a neural network (e.g., a MLP), and use Diff-iLQR to it, would 1) Diff-iLQR still work for optimizing a network with maybe thousands of parameters and 2) would it still be able to achieve a low loss. In terms for the visual control task -- if this is purely conceptual and preliminary results, I would just put it into appendix and not in the main paper, since the experiments and results are not rigorous enough. I am now on the fence for this paper. I am keeping my original score, but I feel with the changes and suggestions from all reviewers incorporated, the paper should have a good chance of being accepted at a next venue. --- Reply to Comment 1.1.1: Comment: Thank you again for your follow-up. Upon review, we realized that our original response to Q5 may have introduced ambiguity between two aspects: (1) the use of structured vs. neural representations of dynamics and cost, and (2) the design of white-box vs. black-box controllers. To clarify: DiLQR is not limited to structured models—it supports any differentiable parameterization, including neural networks. However, our focus in this work is to **maximize sample efficiency by leveraging prior knowledge wherever possible**, which is particularly relevant in robotics and control tasks where structured models are common and practical. We understand the reviewer’s interest in the relationship between **parameter size** and performance. 
In our experiments, we include a SysID baseline that shares the **same structural parameterization as DiLQR**. DiLQR consistently outperforms this baseline, highlighting the contribution of our algorithmic design—especially the end-to-end optimization and analytical gradient computation. While using neural representations may increase generality, it often leads to less efficient training. We view this as a **research preference**: some methods prioritize flexibility, whereas our work emphasizes precision, efficiency, and interpretability through structure. Extending DiLQR to more general settings remains a valuable future direction. We appreciate the opportunity to clarify this point.
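The fixed-point reformulation discussed in Q2 above can be illustrated with a minimal scalar sketch (the toy map below is an assumption for illustration, not the paper's iLQR code): once a fixed point x* = f(x*, θ) is found, the implicit function theorem gives dx*/dθ = (I − ∂f/∂x)⁻¹ ∂f/∂θ evaluated at x*, independently of how many solver iterations were run.

```python
import math

# Hypothetical scalar toy map (an assumption for illustration, not the
# paper's dynamics): a contraction, so fixed-point iteration
# x_{k+1} = f(x_k, theta) converges to x* with f(x*, theta) = x*.
def f(x, theta):
    return 0.5 * math.tanh(theta * x) + 0.1

def solve_fixed_point(theta, iters=200):
    x = 0.0
    for _ in range(iters):
        x = f(x, theta)
    return x

def implicit_grad(theta, eps=1e-6):
    # Implicit function theorem at the fixed point:
    # dx*/dtheta = (1 - df/dx)^{-1} * df/dtheta, evaluated at x*.
    x = solve_fixed_point(theta)
    df_dx = (f(x + eps, theta) - f(x - eps, theta)) / (2 * eps)
    df_dth = (f(x, theta + eps) - f(x, theta - eps)) / (2 * eps)
    return df_dth / (1.0 - df_dx)

# Sanity check against differentiating "through the solver" numerically.
theta, eps = 1.3, 1e-5
fd = (solve_fixed_point(theta + eps) - solve_fixed_point(theta - eps)) / (2 * eps)
assert abs(implicit_grad(theta) - fd) < 1e-4
```

The same structure carries over to the vector case, where (1 − ∂f/∂x) becomes a matrix to solve against; the rebuttal's point is that for iLQR this reformulation must be carried out at the level of the fixed-point equations themselves, since D and C depend on x*.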
Summary: This paper introduces a differentiable iLQR controller, DiLQR, to enable scaling iLQR to longer time horizons and iteration counts. DiLQR leverages implicit differentiation at the underlying fixed point to recover analytic gradient updates, thereby reducing computation cost in the backward pass and bypassing the need for explicit unrolling of the optimization. The approach is tested on simple pendulum and cart-pole swing tasks and demonstrates improved performance over the considered baselines as measured by computation cost and prediction accuracy. Claims And Evidence: The claims are well-motivated and are supported by experimental results, while experiments currently focus on toy examples. It would be great to see the evaluation extended to tasks with higher degrees of freedom. Methods And Evaluation Criteria: The evaluations highlight reduced compute time and increased accuracy over the considered baselines on the tasks considered. Theoretical Claims: The theoretical claims appear sound and are well-motivated. Experimental Designs Or Analyses: The experiments are well-motivated and borrow designs from prior work, while the paper would be significantly strengthened by showing successful application beyond low-dimensional toy domains. What challenges would the approach face when scaling up to more realistic tasks, particularly in light of the mentioned extensions to reinforcement learning and real-world applications? Where will it become computationally infeasible? Supplementary Material: The supplementary material provides an additional proof, experimental details, and architecture details, which are helpful. The title of the appendix should be updated. Relation To Broader Scientific Literature: The comparison to the literature is mostly well-motivated. Discussion and experimental comparison with DiffMPC could be expanded, i.e., why not include it in Figure 3 and some of the compute time comparisons? 
Essential References Not Discussed: The paper adequately cites many key related works. Other Strengths And Weaknesses: - Figure 7 could be more informative by focusing on a single example, providing 4 context images, and comparing predictions to the ground-truth rollout (e.g. based on an optimal trajectory). In the single image case, how does the model know whether to swing left or right? - Several typos below should be corrected Other Comments Or Suggestions: - The method abbreviation diLQR should be more clearly introduced (first mention in experiments, “We evaluated two variations of our method: diLQR.dx: Assumes that the cost of the controller is known (…)”) - Notation should be unified DiLQR vs. diLQR vs. dilqr - Lines 228-229: typo in Jacobians - Line 306: “we compare our approach with Neural Network” - Line 605: Appendix A title “You can have an appendix here” Questions For Authors: - The discussion mentions application to more advanced RL settings while being mindful regarding availability of first/second-order derivatives of the dynamics. Given that many recent RL results move towards dual-arm manipulation or 50+ degrees of freedom humanoid control, where do you see the limit of feasibility? Additional discussion would benefit the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable feedback. Below we provide responses to each question. --- **Q1: Application to Higher Dimensional RL Tasks** The field of differentiable control remains relatively young. Current real-world applications primarily focus on autonomous vehicles and drones, as evidenced by recent works: [1] **Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning** (Science Robotics 2023) [2] **Actor-Critic Model Predictive Control** (ICRA 2024) [3] **Guiding RL with Incomplete Model Information** (IROS 2024) In simulation environments, our 6-DoF rocket control task (along with quadrotor control) represents the most complex benchmark among recent works in this field. **Key Challenges**: The primary difficulty lies not in computation, but in handling locomotion tasks where contact forces violate Lipschitz continuity and complicate gradient computation. Potential solutions include: - Soft contact models - Neural contact approximations --- **Q2: Extension Beyond Toy Examples** Our rocket control task with 13-dimensional state space currently represents the most challenging benchmark in analytical differentiable control, based on peer-reviewed literature: [1] Revisiting Implicit Differentiation for Learning Problems in Optimal Control (NeurIPS 2023) [2] Pontryagin Differentiable Programming (NeurIPS 2020) Initial results are available on our [project website](https://sites.google.com/view/dilqr/), with additional experiments underway. Rocket model and control file can also be found in the codebase. --- **Q3: DiffMPC Comparison in Figure 3** While DiffMPC remains an excellent benchmark in the field, our method demonstrates approximately 5-7% improvement over it. This difference appears modest compared to the 1e4 magnitude improvement over SysID and NN baselines, making visual comparison challenging in the same plot. 
However, our approach provides superior parameter estimation accuracy - a crucial factor for industrial applications where physical interpretability matters, even when iLQR's robustness can compensate for parameter inaccuracies. --- **Q4: Determining Rolling Direction** As detailed in Page 8, Line 414: "We stack four compressed images as input channels to capture velocity information and determine the pendulum's swing direction." This temporal stacking approach enables the system to infer motion dynamics from visual input. **Q5: Typographical Errors** We sincerely appreciate the reviewer's careful reading and for bringing these typographical issues to our attention. We have carefully reviewed the manuscript and corrected all identified formatting and typographical errors in the final version. These corrections include: 1. Fixed notation inconsistencies in equations (e.g., θX⋆ → ∂X⋆/∂θ) 2. Corrected the Appendix A title 3. Addressed all minor formatting issues throughout the text
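As a rough sketch of the frame-stacking idea from Q4 (the frame size and moving-dot encoding below are assumptions for illustration, not the paper's pipeline), stacking consecutive frames as input channels exposes velocity information that a single frame cannot provide:

```python
import numpy as np

# Hypothetical example: a bright pixel moving right across four frames.
H = W = 32
frames = []
for t in range(4):
    img = np.zeros((H, W), dtype=np.float32)
    img[H // 2, 10 + 2 * t] = 1.0   # dot shifts 2 pixels per frame
    frames.append(img)

obs = np.stack(frames, axis=0)      # shape (4, H, W): CNN input channels
assert obs.shape == (4, H, W)

# Inter-frame differences reveal the motion direction; a single frame
# would leave the sign of the velocity ambiguous.
col_t = [int(np.argmax(frames[t][H // 2])) for t in range(4)]
velocity = np.diff(col_t)           # pixels moved per step
assert (velocity > 0).all()         # moving right
```

This is why a stack of frames, rather than one image, lets the encoder infer which way the pendulum is swinging.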
Summary: This paper presents a method for differentiating through iLQR. Naively autodifferentiating through a trajectory optimization problem backpropagates through the iterative optimization, incurring a growing computational burden. As prior works such as DiffMPC show, smarter ways of computing the requisite objects, such as analytic expressions, can immensely reduce the computational and memory burden. This paper proposes an implicit differentiation approach to compute the exact iLQR gradient without unrolling the entire optimization trajectory. The efficiency of this approach is demonstrated on various standard control tasks, demonstrating large improvements in efficiency and generalization compared to standard prior approaches. Claims And Evidence: The main claims of this paper are that the derivatives at a fixed point of the iLQR process can be analytically computed through implicit differentiation as a function of various objects that can themselves be computed efficiently without unrolling all the iLQR iterations. This is supported by the explicit mathematical derivations and numerical experiments demonstrating the essentially constant backward computation time as a function of iLQR iterations (Figure 2), where naive autodifferentiation incurs a visibly linear trend. The exactness of the computation is demonstrated by its improved performance on various benchmarks, including DiffMPC, which is derived from a similar principle. Methods And Evaluation Criteria: The point of this paper is to provide exact derivatives of the iLQR problem without a computation cost that grows with optimization iterations. The method is evaluated on well-known control benchmarks and the main criteria displayed are computational effort and incurred loss, matching the proposed benefits. Theoretical Claims: The main theoretical content of this paper is in the derivation of the analytical derivatives. 
These are largely applications of matrix calculus, and seem to be correct by my verification. It should be noted that a lot of subtlety can be hidden in these formulas, and that experimental evaluation is likely a better diagnosis of correctness, which this paper provides. Experimental Designs Or Analyses: Additional experiment information is contained in Appendix A.2 and A.3, and the relevant code is provided in an anonymized github. The benchmark methods are tuned appropriately. Supplementary Material: I have read the additional derivations and experimental information in the Appendix. I have also quickly looked through the linked codebase. Relation To Broader Scientific Literature: As aforementioned, I think analytic expressions that avoid unrolling the iLQR trajectory optimization are an important contribution to the controls community. This concept is not necessarily novel, but this paper introduces some interesting improvements and derivations that are likely interesting to the community. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: As aforementioned, this paper presents some interesting improvements to iLQR that demonstrate increased performance relative to naive autodifferentiation and DiffMPC. The analytic formulas in this paper are likely interesting to the community, especially if they are drop-in improvements to existing iLQR applications. I have some questions to clarify how scalable these solutions in a below section. Other Comments Or Suggestions: Minor comments: - The title of Appendix A seems to be unchanged from the ICML default. - Line 228/229 first column: $\theta_\theta$ should be $\nabla_\theta$ - Line 264/265 second column: should clarify exactly what it means to refer $\frac{\partial D_t}{\partial \theta}$ as $\nabla_\theta D_t$, since in eq (14) they both show up. Questions For Authors: It might be helpful to give computation (e.g. flop) estimates for the proposed method (e.g. 
diLQR.dx and diLQR.cost). For example, iLQR requires second-order derivatives for the quadratic cost estimate. What is an estimate of the computational complexity of deriving the requisite derivatives given a generic $d_\theta$-dimensional parameter space? How does this compare to standard autodiff? These estimates will be helpful to corroborate the numerical results in Figure 2. Additionally, what do the authors anticipate is the main barrier to scaling this method to larger-scale experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the positive evaluation and constructive suggestions. Below we address the specific questions raised: --- **1. Computational Complexity Estimates (Q1):** We clarify the computational costs with corrected exponents and iteration factors: - **Forward pass (iLQR):** O(IT(n² + m²)) for T timesteps and I iterations (standard LQR complexity) - **Backward pass (diLQR.dx/cost):** O(T(n² + m²)) per parameter dimension (Theorem 3.1) *No dependence on I due to analytic formulation* - **Autodiff baseline:** O(IT(n² + m²)) where I = unrolled iterations This explains Figure 2's constant backward cost vs. autodiff's linear scaling with I. We will add a FLOP count table in revision. --- **2. Scaling Barriers (Q2):** The primary challenge lies in obtaining second-order matrices in Section 4.6: - Our codebase dedicates ~500 lines specifically to obtain these matrices element-by-element - Current engineering overhead includes: • Hessian calculations for dynamics/cost • KKT matrix constructions - While substantial, we believe future code optimizations can reduce this overhead ( such as pre-calculating and storing analytical solutions for key matrix operations, and developing automated routines to substitute corresponding values) We have also implemented our method on a challenging rocket control task with 13-dimensional state space, which currently represents the most advanced benchmark in analytical differentiable control literature ( from PDP and IDOC paper). Demo and code are available on our project page: [Project Website](https://sites.google.com/view/dilqr/) --- **3. Typos/Clarifications:** We will fix: - Appendix A title - Line 228/229 notation (θX⋆ → ∂X⋆/∂θ) - Line 264/265 clarification of Eq (14) dependencies All corrections will be highlighted in the camera-ready version. --- **Acknowledgements** We thank the reviewer for catching these important technical nuances, which have improved our paper's precision. 
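The complexity claim in Q1 can be illustrated on a scalar stand-in (the affine map below is an assumption for illustration, not iLQR itself): backpropagating through the unrolled solver touches every iteration, while the implicit gradient is a fixed amount of work at the solution.

```python
# Toy scalar stand-in (an assumed affine map, not the paper's iLQR):
# x_{k+1} = a * x_k + b converges to x* = b / (1 - a) for |a| < 1.
a, b = 0.6, 0.3

def unrolled_grad_wrt_b(iters):
    # Backprop through the unrolled recursion touches every iteration:
    # d x_I / d b = sum_{k=0}^{I-1} a^k  ->  O(I) work.
    g = 0.0
    for _ in range(iters):
        g = a * g + 1.0
    return g

# Implicit gradient at the fixed point: (1 - df/dx)^{-1} * df/db,
# a constant amount of work independent of the iteration count I.
implicit = 1.0 / (1.0 - a)

assert abs(unrolled_grad_wrt_b(200) - implicit) < 1e-12
```

Once the solver has converged, both gradients agree, which mirrors the constant backward cost reported in Figure 2 versus autodiff's linear growth in the number of iterations.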
--- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. The additional information is appreciated. --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful review and kind support. Since the rebuttal, we have conducted additional experiments comparing **DiLQR vs. DiffMPC** in both **cost** and **dynamics learning**, across **low-data and higher-data regimes** (train=50 and train=100). These results were **requested by other reviewers**, but also reflect our own desire to further strengthen the work (which we believe you'll also find insightful). One key observation is that **DiffMPC frequently produces physically invalid parameters** (e.g., negative Jacobians), while **DiLQR consistently avoids such issues**, particularly in the train=100 setting. This is clearly shown in the summary table: | Train Size | DiLQR | DiffMPC | |----------------|---------------------|---------------| | dx (train=100) | **0.0% bad values** | 16.7% | | dx (train=50) | **8.3%** | 16.7% | All core experiments and results discussed in the rebuttal — including full comparisons between DiLQR and DiffMPC — **were finalized and uploaded before the author response deadline.** Full learning curves (with variance), per-dimension plots, and additional metrics are available here: 🔗 [https://sites.google.com/view/dilqr](https://sites.google.com/view/dilqr) If you find the current experiments useful, we would greatly appreciate it if you could help surface them to the AC and other reviewers. *** As a final note, we may continue uploading visualizations related to other baseline comparisons (e.g., PDP and IDOC) to the project page during the decision period. These would supplement the existing numerical analysis already discussed in the rebuttal and will not introduce new claims or arguments.
Online Robust Reinforcement Learning Through Monte-Carlo Planning
Accept (poster)
Summary: The paper presents a robust variant of Monte Carlo Tree Search (MCTS) aimed at addressing the discrepancies between simulated and real-world environments, focusing on ambiguities in transition dynamics and reward distributions. The authors claim that their method offers a robust approach, supported by both theoretical analysis and empirical results. Sorry, I am not well-versed in this specific field so I can't give a reasonable review. My review should be considered as a general overview. ## update after rebuttal I have raised my score to a weak accept. Claims And Evidence: The paper claims that their robust variant of Monte Carlo Tree Search (MCTS) can handle the discrepancies between simulated and real-world environments, specifically targeting ambiguities in transition dynamics and reward distributions. The theoretical analysis of the method, along with the empirical results, appear to support the claim that their method offers a robust approach. However, I must admit that I do not have the expertise to fully assess the correctness of the theoretical foundations. From a high-level view, the evidence provided seems reasonable, but I would advise experts in the area to review the technical details more closely. Methods And Evaluation Criteria: The proposed method, which involves incorporating robust power mean backups and exploration bonuses into the MCTS framework, seems appropriate for tackling the problem of robust reinforcement learning. The evaluation criteria, based on empirical tests in environments like the Gambler’s Problem and Frozen Lake, show promising results. However, I lack sufficient background to verify the soundness of the proposed method in depth. That being said, the methodology appears to be well thought out. Theoretical Claims: I am not confident in my ability to evaluate the correctness of the theoretical claims and proofs in the paper, particularly the ones related to non-asymptotic convergence rates for robust MCTS. 
The paper presents some complex mathematical formulations and claims about convergence rates that I cannot verify due to my limited understanding of the related theory. I suggest that this part of the paper be carefully reviewed by experts in the field of robust reinforcement learning. Experimental Designs Or Analyses: The experimental designs seem sound at a high level, as the paper compares the performance of the robust MCTS algorithm with traditional methods like the Stochastic-Power-UCT baseline. However, I cannot assess the statistical validity or appropriateness of the experimental setup in detail. From the description, the results indicate that the proposed method performs better under model mismatch conditions, but I would recommend someone with more expertise in reinforcement learning to verify the robustness of these findings. Supplementary Material: I have not reviewed the supplementary material in detail due to my limited understanding of the domain. If there are specific aspects that I should pay attention to, I would appreciate more guidance from experts in the field. Relation To Broader Scientific Literature: The paper introduces a novel approach by applying MCTS to robust reinforcement learning, addressing simulation-to-reality gaps. This relates to existing work in robust Markov Decision Processes (RMDPs) and distributionally robust optimization, as seen in the works of Iyengar (2005), Nilim and El Ghaoui (2005), and more recent studies like Zhou et al. (2021) and Wang et al. (2024b). The novelty of the paper lies in combining MCTS with robustness principles, which has not been extensively explored in the literature. While previous research focused on model-based dynamic programming or value iteration methods for robust RL, the integration with MCTS opens up new avenues for planning under uncertainty in large-scale environments. 
The results presented seem to confirm the potential of this approach, particularly in environments with model mismatches, though further exploration is needed in more complex real-world tasks. Essential References Not Discussed: The paper does a decent job of referencing related work in robust reinforcement learning, particularly in the context of MCTS and distributional robust optimization. However, there may be other recent works that could be relevant but aren't cited, especially those that focus on the practical deployment of robust RL methods in real-world settings. It would be helpful to check if all relevant literature has been included. Other Strengths And Weaknesses: While I lack the expertise to fully assess the paper's theoretical and experimental contributions, the algorithm appears to be a promising approach to addressing real-world challenges in reinforcement learning. The novelty of incorporating model ambiguity directly into MCTS is noteworthy. The empirical results seem to suggest that the proposed method can outperform existing algorithms in certain environments, which could be a valuable contribution. However, I would recommend that the authors clarify the scalability of their method and explore its performance in more complex or higher-dimensional environments. Other Comments Or Suggestions: The writing is generally clear, but due to the complexity of the mathematical formulations, it could benefit from further explanation for readers who are not specialists in the area. Simplifying some of the concepts or providing additional intuitive explanations could make the paper more accessible. Questions For Authors: 1. Could you provide further clarification on how your algorithm scales to larger or more complex environments? Are there any practical limitations to consider when deploying this method in real-world applications? 2. How does the robustness of your algorithm compare in environments with higher-dimensional state or action spaces? 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review of our paper. We will revise our paper based on these comments. **Computation efficiency** We want to emphasize that the robust value under all ambiguity balls with radius $\rho_T$ can be computed in at most $O(S \log(S))$ time, thus requiring only marginally more computation than the $O(S)$ cost of standard Bellman operators. It is an open research direction to design and showcase computational efficiency for other kinds of robust ambiguity sets; we have only considered some popular ones from the $f$-divergence class and the Wasserstein ball. Our contribution in this work has been to adopt robust optimization tools for the standard MCTS planning that has seen historical successes in many sequential decision-making applications. **Scaling to higher dimensions** As a next step for high-dimensional settings, we want to test our robust MCTS approach for Go. We recently found that there has been some interest [R1] in discovering adversarial plays for Go and algorithms to mitigate such plays. We believe our approach can handle such adversarial Go strategies. This is our future focus. We think, as further next steps, our robust MCTS approaches can resolve some parameter uncertainty in real-world robotic problems [R2]. We are excited for all these next steps from this initial contribution we have made to bring robust optimization to the traditional MCTS approach. [R1] Tseng, T., McLean, E., Pelrine, K., Wang, T. T., & Gleave, A. (2024). Can Go AIs be adversarially robust?. arXiv preprint arXiv:2406.12843. [R2] Dam, T., Chalvatzaki, G., Peters, J., & Pajarinen, J. (2022). Monte-carlo robot path planning. IEEE Robotics and Automation Letters, 7(4), 11213-11220. --- We look forward to hearing from you and providing any other clarifications during the discussion period (up to Apr 8). Thank you!
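To make the rebuttal's complexity claim concrete, here is a hedged sketch of how a robust backup can reach $O(S \log S)$ for a total-variation ambiguity ball (one of the $f$-divergence sets the rebuttal mentions). The function name and interface are illustrative, not the paper's actual implementation: the worst-case expected value is obtained by shifting up to $\rho$ probability mass from the highest-value successor states onto the lowest-value one, so a single sort dominates the cost.

```python
def robust_tv_backup(p_hat, v, rho):
    """Worst-case expected value min_p sum_s p[s] * v[s] over the
    total-variation ball {p in simplex : 0.5 * ||p - p_hat||_1 <= rho}
    around the nominal model p_hat. Shifts up to rho probability mass
    from the highest-value successor states onto the single lowest-value
    state; the sort makes this O(S log S), versus O(S) for the standard
    backup sum_s p_hat[s] * v[s]."""
    p = list(p_hat)
    lo = min(range(len(v)), key=lambda s: v[s])   # mass destination
    budget = rho
    for s in sorted(range(len(v)), key=lambda s: v[s], reverse=True):
        if budget <= 0 or s == lo:
            continue                              # nothing left to move
        shift = min(p[s], budget)                 # take mass away from s
        p[s] -= shift
        p[lo] += shift
        budget -= shift
    return sum(ps * vs for ps, vs in zip(p, v))
```

With $\rho = 0$ this reduces to the nominal Bellman backup, consistent with the rebuttal's point that robustness costs only a logarithmic extra factor.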
Summary: The authors address the problem of model mismatch in Monte Carlo Tree Search (MCTS)-based algorithms. They formulate their approach as a robust optimization problem under the framework of robust Markov decision processes (RMDPs) and provide both an algorithm that solves the online robust reinforcement learning (RL) problem, as well as non-asymptotic convergence guarantees. The algorithm builds upon a robust Bellman backup operator and facilitates safe exploration over prescribed ambiguity transition and reward sets. The authors provide an empirical evaluation of their approach using two well-known RL benchmarks, for different levels of mismatch between the model used for planning in MCTS and the actual application environment. Claims And Evidence: From my point of view, the description of the algorithm lacks clarity (comment below in Section 5 of the paper) and there are some open questions on the empirical results (also addressed in comment below). Therefore, I would be hesitant to confirm that all claims have been supported by clear and convincing evidence. Methods And Evaluation Criteria: The test environments are well-known benchmarks in RL literature and are able to capture the underlying robustness/uncertainty problem the authors are trying to address. Even though they are not as complex as other benchmark suites (e.g., a version of the game of go or Atari benchmarks) due to computational requirements, they still provide the right level of challenge for the authors to highlight the properties of the proposed solution. Theoretical Claims: I did not check any proofs or theoretical claims. The theoretical parts in the main paper seem correct to me. Experimental Designs Or Analyses: The test environments are well-known benchmarks in RL literature, so I was able to follow the results for both experiments. I would have a point/question on the FrozenLake results (e.g., Table 2). 
In general, I would expect that an agent that plans more conservatively (higher $p_{slip}$, e.g., 0.5) does not transition to unfavorable states in MDPs with less stochasticity than anticipated (lower $p_{slip}$, e.g., 0.1). In case the authors have specified a maximum number of steps in the episode, it would be useful if the table included the number of episodes in which the agent failed due to time-out and the number in which it failed by transitioning to a zero-reward terminal state. If there is no time-out, then I would ask the authors to explain in more detail why this behavior is observed. Supplementary Material: I looked into the code provided by the authors. I have a comment there for the authors in the “Other comments or Suggestions” section. Relation To Broader Scientific Literature: The authors provide a good overview and comparison to prior work in section 2. However, there are at least two papers that discuss similar problems as the paper but are not cited or discussed: - Even though the work there is not as extensive or mathematically rigorous as this paper, I would still suggest that the authors include the following paper in the related work section, as it also tries to address aspects of the robust MCTS problem: Rostov, M., & Kaisers, M. (2021). Robust online planning with imperfect models. In Adaptive and Learning Agents Workshop - ALA - Farnaz Kohankhaki, Kiarash Aghakasiri, Hongming Zhang, Ting-Han Wei, Chao Gao, Martin Muller: Monte Carlo Tree Search in the Presence of Transition Uncertainty Essential References Not Discussed: no Other Strengths And Weaknesses: For me, the weakest part of the paper is the presentation of the algorithm in Section 5. There, symbols (e.g., $T_{s_h}$, $v$, etc.) and design choices (e.g., reward bins) are introduced without explanation, while the dense algorithmic mechanism illustrated in Algorithm 1 is not really described as a whole. 
This makes the section hard to follow and the reader must often seek information in the table of notations and other papers referenced in related work to be able to fully understand the provided algorithm. Related to this, it would help the reproducibility of the paper: - if the parameters of Table 1 used in each experiment were provided - if the code was documented in a way that code snippets are matched to specific blocks of Algorithm 1 Other Comments Or Suggestions: Comments/suggestions: - Provided code: - It would help if the authors could provide not only the agent implementations, but also training scripts (and running instructions) that could replicate the experiments of the paper - It would be easier to utilize the provided code, if extensive documentation and comments that connect the code to Algorithm 1 in the paper were provided Questions For Authors: - How are the constants B (bins for reward), $b_i$, $\alpha_i$ and $\beta_i$ selected in Algorithm 1? What are their values in the experiments conducted in the paper? - Also related to one of the points in the experimental section: has the FrozenLake environment a maximum number of time-steps? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive comments on our paper. We are encouraged that they found our paper to *provide a good overview and comparison to prior work* and our test suites to be at the right level of challenge. We'd like to address several important points below. We will revise our paper based on these comments. **FrozenLake results** We thank the reviewer for raising this important point about our FrozenLake results. They identified a pattern that seems counterintuitive at first glance - where more conservative planning doesn't always translate to better performance in less stochastic execution environments. This behavior stems from several key aspects of our experimental setup: **1. Episode Structure:** We do implement a maximum episode length (200) in our FrozenLake environment. This creates two possible failure modes: (a) falling into a hole (terminal state with zero reward) or (b) timing out (exceeding the maximum steps without reaching the goal). **2. Fixed Simulation Budget:** To ensure fair comparison across all experimental conditions, we standardized the number of MCTS rollouts for each planning process. This design choice significantly impacts our results. When planning with higher slip probabilities ($p_{slip} = 0.5$), the search space becomes substantially more stochastic, requiring more simulations to converge to an optimal policy. The key insight for understanding why planning with $p_{slip} = 0.3$ outperforms planning with $p_{slip} = 0.5$ (even when executed in a $p_{slip} = 0.5$ environment) lies in the convergence rate of policy search under different noise levels. In highly stochastic planning environments, the reward signal becomes noisier and the effective search space expands dramatically. 
With our fixed simulation budget, planning with $p_{slip} = 0.3$ benefits from clearer reward signals and a more focused search space, allowing it to discover strategies that balance safety and efficiency. In contrast, planning with $p_{slip} = 0.5$ with limited rollouts often results in overly conservative policies that increase the likelihood of timeout failures, even when executed in the same stochastic environment used for planning. We fundamentally agree with your expectation that "an agent that plans more conservatively (higher $p_{slip}$, e.g., 0.5) does not transition to unfavorable states in MDPs with less stochasticity than anticipated (lower $p_{slip}$, e.g., 0.1)." This principle would likely emerge if we allowed sufficient computational budget for full policy convergence at each slip probability level. **Table 1 parameters used in each experiment** The parameters in Table 1 were chosen specifically to achieve the optimal convergence rate shown in Theorem 3. To obtain the optimal simple regret of $\mathcal{O}(n^{-1/2})$, we need $\frac{b_i}{\beta_i} = \frac{1}{4}$ and $\frac{\alpha_i}{\beta_i} = \frac{1}{2}$, which follows directly from the proof of Theorem 3. With these parameter settings, the exploration bonus term in our action selection rule becomes: $C \cdot \frac{(T_{s_h}(t))^{\frac{1}{4}}}{(T_{s_h,a}(t))^{\frac{1}{2}}}$ This specific form of the exploration term provides the necessary balance between exploring uncertain actions and exploiting promising ones under model ambiguity. We chose the exploration constant C = 2 for all the experiments. We apologize for the ambiguity and will update the details in the revised version. **Improved documentation** We appreciate this valuable suggestion regarding code documentation. 
We will update the provided code with clear docstrings and comments that explicitly connect implementation blocks to their corresponding steps in Algorithm 1, along with a mapping document that shows the direct relationship between notations and implementation. **Related works** We thank the reviewer for pointing us to [R1, R2]. As the reviewer mentions, these are not mathematically rigorous works - but nonetheless, they propose heuristic methods to address environmental uncertainty. We will include these works in our revised paper. [R1] Rostov, M., & Kaisers, M. (2021). Robust online planning with imperfect models. In Adaptive and Learning Agents Workshop [R2] Farnaz Kohankhaki, Kiarash Aghakasiri, Hongming Zhang, Ting-Han Wei, Chao Gao, Martin Muller: Monte Carlo Tree Search in the Presence of Transition Uncertainty --- We look forward to hearing from you and providing any other clarifications during the discussion period (up to Apr 8). Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. All my questions and open points have been addressed and I do not have any further questions at this point. --- Reply to Comment 1.1.1: Comment: We greatly appreciate Reviewer k1UW for their reply. We are delighted to learn that our rebuttal addressed all of the reviewer's questions and concerns. We politely ask that the reviewer consider raising the score if our manuscript and rebuttal match their expectations. Again, we thank you for your efforts in reviewing our work. We are open to providing more clarifications during the discussion period (up to Apr 8). Thank you!
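To connect the rebuttal's parameter discussion to code: with the ratios $\frac{b_i}{\beta_i} = \frac{1}{4}$ and $\frac{\alpha_i}{\beta_i} = \frac{1}{2}$, the exploration bonus reduces to $C \cdot T_{s_h}(t)^{1/4} / T_{s_h,a}(t)^{1/2}$. A minimal sketch follows; the function and argument names are illustrative, not taken from the paper's code:

```python
def exploration_bonus(n_state, n_state_action, C=2.0):
    """Bonus C * N(s)^(1/4) / N(s,a)^(1/2), i.e. the ratios
    b_i/beta_i = 1/4 and alpha_i/beta_i = 1/2 that yield the
    O(n^{-1/2}) simple-regret rate; C = 2 per the rebuttal."""
    return C * n_state ** 0.25 / n_state_action ** 0.5

def select_action(robust_values, visit_counts, n_state, C=2.0):
    """Pick the action maximizing (robust value estimate + bonus),
    so rarely tried actions get a larger optimism term."""
    return max(
        robust_values,
        key=lambda a: robust_values[a]
        + exploration_bonus(n_state, visit_counts[a], C),
    )
```

The bonus shrinks as an action's visit count grows, shifting the search from exploration toward exploitation of the robust value estimates.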
Summary: Reinforcement learning utilizes Monte Carlo search for planning with use of a model. However, MC search can fail to perform well in situations where the model does not accurately represent the transition or reward dynamics. This may be due to inaccuracies in the model used for training an agent, non-stationarity of the environment dynamics and several other reasons. This paper focuses on the robustness of agents to deployment in problems where the planning model does not accurately represent the environment dynamics. While robustness has been studied in RL, this paper explicitly introduces a robust version of MCTS for planning. The authors propose a backup update rule and a tree policy for MCTS that take into account the expected worst-case returns to learn robust policies. They also theoretically show that their approach converges to the optimum at a rate commensurate with that of standard MCTS. They also evaluate their proposed algorithm against an MCTS algorithm. ## Update after rebuttal I raised my score to a weak accept. Claims And Evidence: 1. Their approach solves the online robust RL problem. - A very strong claim that is not backed by evidence. 2. Their approach has theoretical performance on par with standard MCTS. - They provide theoretical proof. 3. Their approach performs better than standard MCTS on problems with simulation-to-real model discrepancies - This claim is only partially supported by empirical evidence. Methods And Evaluation Criteria: The variation of MCTS the authors propose seems reasonable for the problem being motivated. Updating values based on expected worst-case dynamics makes sense for having a measure of the robustness of actions. The evaluation criteria make sense because they are explicitly measuring the performance of an RL agent by the reward it accumulates. 
However, I think there are two issues: - Not enough baselines - Given the number of relevant approaches mentioned in the literature review, there were a number of different approaches to the same problem that could have been used for comparisons. - The one baseline they use is not appropriate on its own - The authors selected a baseline that does not seem to be used for robust RL. While the authors' proposed approach seems to be an extension of this baseline, it does not seem surprising that they outperform it. - Ideally, for me, the evaluation would answer the question: "Does my proposed algorithm have any benefits over other algorithms used to address the problem I want to solve?" Theoretical Claims: The authors make the claim that the finite-time error decreases at a rate of at most $\mathcal{O}(n^{-1/2})$. - I cannot verify the correctness of these claims; the proofs lie in the supplementary materials and are too long for me to go through thoroughly in a reasonable amount of time. However, the approach the authors take seems sound. Experimental Designs Or Analyses: - The evaluation domains selected are reasonable. Although, experiments on a larger domain would strengthen the results. - In the Gambler's Problem, the authors state that Robust-Power-UCT (RP) has superior performance. This may be supported by the results when the execution $p_h \leq 0.5$, but Stochastic-Power-UCT (SP) performs better otherwise. I still think these results are interesting, but I would not agree with the claim that their approach is more robust --- I would say that this demonstrates that their approach makes the agent more cautious. - The authors also state that their results are over 100 randomly seeded runs; however, they do not state (or I missed where they do) what measures of central tendency they report, nor do they show confidence intervals/errors. - Similarly, I do not know how strongly the results in Frozen Lake support the claim that RP improves robustness, per se. 
Planning with $p=0.3$ has a success rate of 20% in execution with $p=0.5$, while planning with $p=0.5$ has a success rate of 10%. Why would it perform *worse* on the problem on which it's trained? Supplementary Material: I did not read the whole of the supplementary material. I only read the additional descriptions of the domains and the experimental setup. Relation To Broader Scientific Literature: I found the paper overall interesting and I believe the problems focused on are of interest to the community in robust MDPs, POMDPs, RL with cautious agents, continual learning and planning in non-stationary environments. Essential References Not Discussed: I do not have anything in mind. Other Strengths And Weaknesses: ## Novelty The paper applies an existing approach to a framework in which it has not been applied. ## Clarity The paper, for the most part, was clear and easy to understand. Other Comments Or Suggestions: - The introduction was very long and I do not know if that space could have been put to better use. - Similarly, the pseudocode does not help with understanding the algorithm and maybe that space could have been used for theoretical analyses. Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. **Baseline comparisons** Our work focuses specifically on robust online planning using MCTS, and to the best of our knowledge, this work is the first to provide theoretical guarantees for robust online planning using MCTS, with convergence bounds matching those of standard MCTS while providing robustness to model misspecification. Our method differs fundamentally from offline learning approaches like robust value iteration or Q-learning. This distinction is critical for several reasons: 1. Online vs. Offline Paradigm: MCTS performs dynamic tree expansion from the current state during execution, while methods like robust value iteration [1,2] require pre-computing policies across the entire state space. This fundamental difference makes direct comparisons methodologically problematic. 2. Controlled Comparison: By comparing Robust-Power-UCT against Stochastic-Power-UCT, we isolate the specific impact of our robustness modifications while controlling for all other algorithmic components, providing a clear assessment of our contribution. 3. Principled Exploration: MCTS naturally incorporates optimism-under-uncertainty for exploration, adapting to each state encountered during execution. This contrasts with offline methods that require separate exploration strategies or complete model knowledge. 4. Integration with Deep Learning: Recent breakthroughs like AlphaZero [3] and MuZero [4] have demonstrated the power of integrating MCTS with neural networks. This would allow us to leverage value functions learned through offline methods (like robust value iteration) to guide online planning while maintaining robustness guarantees. This integration represents a promising direction where our work complements, rather than competes with, offline robust RL methods - creating a hybrid approach that benefits from both paradigms. 
The empirical results across both environments consistently demonstrate that Robust-Power-UCT significantly outperforms non-robust approaches under model mismatch conditions. [1] Nilim, A. and El Ghaoui, L. (2005). Robust control of Markov decision processes with uncertain transition matrices. [2] Iyengar, G.N. (2005). Robust dynamic programming. Mathematics of Operations Research. [3] Silver, D. et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. [4] Schrittwieser, J. et al. (2020). Mastering Atari, Go, chess and shogi by planning with a learned model. **Gambler's problem performance** We agree that our phrasing could be improved for clarity. When discussing "superior performance," we should have been more precise. What we intended to highlight was our algorithm's ability to maintain consistent performance across different execution environments, particularly when the execution probability is lower than the planning probability. This consistency under model mismatch is what defines robustness in our context, rather than achieving the highest possible reward in matched conditions. We'll clarify this important distinction between optimality and robustness in the revised paper to avoid any confusion. **Confidence intervals** We report success rates as the proportion of successful episodes over 100 independent runs, which directly measures the algorithm's ability to complete the task. Success rate is a binary outcome metric (success/failure), so traditional error bars aren't applicable - the reported percentage itself represents the statistical performance. We chose this metric over average reward with confidence intervals as it provides a more interpretable and direct evaluation of performance in environments with sparse rewards. However, if preferred, we can supplement our analysis with mean reward and standard deviation in our revised paper. 
**Frozen Lake performance** The reviewer's observation about planning with different slip probabilities highlights an important point that we should clarify. When planning with lower slip probabilities (e.g., $p=0.3$), the environment is more deterministic, allowing the algorithm to find more reliable paths within a fixed computational budget. The Frozen Lake environment has a particular characteristic where reducing slippage improves performance up to a certain point - with too little slippage, the agent might fail to consider important failure modes. With the same number of rollouts, planning in a lower-noise environment naturally leads to better policies. As the planning slip probability increases (e.g., $p=0.5$), the higher stochasticity requires significantly more rollouts to converge to optimal paths since the signal-to-noise ratio decreases. This explains why planning with $p=0.3$ can outperform planning with $p=0.5$ even when executed in matching environments. --- We look forward to hearing from you and providing any other clarifications during the discussion period (up to Apr 8). Thank you! --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses. They assuage most of my concerns and I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to consider our rebuttal and for increasing your score. We greatly appreciate your constructive feedback throughout the review process, which will help us improve our paper.
Summary: This paper presents Robust-Power-UCT, a variant of Monte Carlo Tree Search (MCTS) designed for Robust Markov Decision Processes (RMDPs). The key assumption is that exact transition and reward models are unknown, but approximate models exist with bounded uncertainty captured in an ambiguity set. This is particularly relevant for sim-to-real policy transfer, where learned policies must perform reliably in real-world settings despite discrepancies in simulated training environments. The paper provides empirical evaluation on two domains: Gambler’s Problem and Frozen Lake, where Robust-Power-UCT outperforms Stochastic-Power-UCT, especially in the Frozen Lake domain. Claims And Evidence: The paper is generally well-written and technically sound. However, I have a few concerns: - Title and Framing Issue: The title refers to "Reinforcement Learning," but no actual learning (policy updates or value function approximation) is involved. The paper focuses on robust planning using MCTS, which is a search-based algorithm rather than an RL method in the typical sense. A more precise title would avoid potential misinterpretation. While this is a minor issue, it could be easily improved for clarity. - Experimental Support for Claims: The paper claims that Robust-Power-UCT achieves robust performance in the presence of model uncertainty, but the empirical results are somewhat limited: - The performance gains are not always significant: In the Gambler’s Problem, robust planning leads to a conservative policy, which does not necessarily result in better performance. - There is no clear trade-off analysis: The robustness comes from planning for the worst-case, but it is unclear whether this conservatism hurts performance in cases where the real-world dynamics are less adversarial than assumed. - Lack of ablation studies: The paper does not explore how different ambiguity set formulations (Total Variation, Chi-squared, Wasserstein) affect performance trade-offs. 
Methods And Evaluation Criteria: - While Gambler’s Problem and Frozen Lake serve as controlled testbeds, they do not fully capture the real-world challenges of sim-to-real transfer. - More realistic domains, such as robotics or autonomous driving, would better demonstrate how Robust-Power-UCT performs in complex, high-dimensional environments. - The evaluation criteria focus primarily on success rate, but additional metrics (e.g., computational cost, robustness vs. performance trade-offs) would strengthen the empirical results. Theoretical Claims: The theoretical results appear correct and well-structured. The proofs provide clear convergence guarantees, and the analysis aligns with prior work on robust RL. I did not find any obvious flaws in the derivations. Experimental Designs Or Analyses: See above. Supplementary Material: N.A. Relation To Broader Scientific Literature: This paper contributes to robust RL and planning under uncertainty, addressing a key issue in sim-to-real transfer. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: Q1: Impact of Conservative Planning - Have you analyzed cases where worst-case planning leads to overly conservative decisions that might reduce performance in non-adversarial settings? - How does the performance of Robust-Power-UCT compare when the true model lies within the ambiguity set but is not necessarily the worst-case model? Q2: Scalability and Computational Cost - What is the computational overhead of Robust-Power-UCT compared to standard MCTS? - Could the robust backup operator be optimized for efficiency in large-scale applications? Q3: Broader Application Domains - Do you have plans to test Robust-Power-UCT in high-dimensional real-world domains, such as robotics or self-driving cars? How would the method handle partial observability in real-world scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive comments on our paper. We are encouraged that they found our paper *well-written* and *technically sound*, and that our analysis *aligns with prior work on robust RL*. We'd like to address several important points below. We will revise our paper based on these comments. **Robustness and Conservative Policies** The reviewer correctly notes that robust planning can lead to more conservative policies. This is indeed an intentional feature rather than a limitation. In many high-stakes applications (autonomous vehicles, medical robotics, financial systems), conservative policies that prioritize worst-case performance are precisely what is needed when the cost of failure is high. Our approach enables this safety-oriented planning while maintaining theoretical guarantees. **Lack of ablation studies** While we appreciate the concern about ablation studies, we believe there may be a misunderstanding of our experimental approach. In our paper, we already provide explicit comparisons between different ambiguity set formulations (Total Variation, Chi-squared, and Wasserstein) across all experiments, with Tables 2 and 4 showing comprehensive results for each formulation under various planning-execution scenarios. In particular, the robust policies across Total Variation, Chi-squared, and Wasserstein are trained separately, and none of them affects the others. If the reviewer is suggesting a different form of ablation beyond comparing these different uncertainty sets, we would appreciate clarification on what specific ablation studies would be most valuable to include in our revised paper. **Performance Trade-offs** We appreciate the observation about trade-offs between robustness and performance. Section E.3 in the supplementary material provides a detailed analysis of this trade-off. The ambiguity set radius parameter ρ provides direct control over this trade-off. 
When ρ = 0, our approach reduces exactly to standard Power-UCT, while increasing ρ progressively enhances robustness at the potential cost of optimality under matched conditions. **Evaluation Environments** As mentioned by reviewer k1UW, the selected environments are well-known benchmarks in the RL literature that effectively capture the robustness/uncertainty problems we're addressing. While not as complex as some benchmark suites, they provide the right level of challenge to highlight our solution's properties while remaining computationally tractable for robust planning experiments. **Scalability and Computational Cost** The robust value under all ambiguity balls with radius $\rho_T$ can be computed in at most $O(S \log(S))$ time, thus requiring only marginally more computation than the $O(S)$ cost of standard Bellman operators. It is an open research direction to design and showcase computational efficiency for other kinds of robust ambiguity sets; we have only considered some popular ones from the $f$-divergence class and the Wasserstein ball. **Broader Application Domains** Thanks for this constructive review. As a next step for high-dimensional settings, we want to test our robust MCTS approach for Go. We recently found that there has been some interest [R1] in discovering adversarial plays for Go and algorithms to mitigate such plays. We believe our approach can handle such adversarial Go strategies. This is our future focus. We think, as further next steps, our robust MCTS approaches can resolve some parameter uncertainty in real-world robotic problems [R2]. We are excited for all these next steps from this initial contribution we have made to bring robust optimization to the traditional MCTS approach. [R1] Tseng, T., McLean, E., Pelrine, K., Wang, T. T., & Gleave, A. (2024). Can Go AIs be adversarially robust?. arXiv preprint arXiv:2406.12843. [R2] Dam, T., Chalvatzaki, G., Peters, J., & Pajarinen, J. (2022). Monte-carlo robot path planning. 
IEEE Robotics and Automation Letters, 7(4), 11213-11220. ----- We look forward to hearing from you and providing any other clarifications during the discussion period (up to Apr 8). Thank you! --- Rebuttal Comment 1.1: Comment: I thank the authors for providing detailed responses to my questions. I have gone through the responses and still feel that while the approach is promising, the experiment domains do not capture the complexities of sim-to-real policy transfer. For that reason, I am going to retain my original score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate Reviewer ZeKp's reply and the opportunity to further clarify our work's standing. We emphasize that our current design primarily aims to provide theoretical intuition for practical algorithm development, supported by a few standard benchmark experiments (used by the prior theoretical robust RL community) to highlight its future potential. We do agree that more experimental validation on diverse and complex practical applications is important; this is a very interesting future direction, as we outlined in our next steps in the rebuttal above. However, our focus and contribution in this manuscript have been to incorporate a principled methodology from robust optimization into the celebrated MCTS algorithm. This discussion has helped us improve the placement of this work, which we plan to incorporate into our introduction section to further clarify the contributions. We are grateful for the reviewer's already positive support. Other than the practicality issue mentioned in this follow-up, we hope our rebuttal addresses all other reviewers' questions and concerns. We eagerly anticipate the possibility of receiving further feedback and support from you. Again, we thank you for your efforts in reviewing our work. We are open to discussing further until the end of the discussion period (up to Apr 8). Thank you!
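As a concrete illustration of the $O(S \log(S))$ claim in the scalability paragraph above: for the total-variation ball specifically, the worst-case expected value can be computed by sorting states once and greedily shifting probability mass toward the lowest-value state. This is our sketch under that assumption, not the authors' implementation; the function name and the greedy formulation are ours.

```python
import numpy as np

def tv_robust_value(p, v, rho):
    """Worst-case expected value inf_q q.v over the total-variation ball
    {q : 0.5 * ||q - p||_1 <= rho}: greedily move up to rho probability
    mass from the highest-value states to the lowest-value state."""
    p = np.asarray(p, dtype=float).copy()
    v = np.asarray(v, dtype=float)
    i_min = int(np.argmin(v))
    budget = rho
    for i in np.argsort(-v):  # the sort is the O(S log S) step
        if i == i_min or budget <= 0:
            continue
        moved = min(p[i], budget)
        p[i] -= moved
        p[i_min] += moved
        budget -= moved
    return float(p @ v)
```

With `rho = 0` this returns the standard expected value, mirroring the reduction to Power-UCT noted in the rebuttal.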
Learning Multi-Level Features with Matryoshka Sparse Autoencoders
Accept (poster)
Summary: This paper introduces a simple but novel approach for training SAEs with a nested structure in the feature space. As a consequence of this training, the authors present results suggesting that Matryoshka SAEs are more adept at overcoming feature splitting and feature absorption issues currently facing SAEs. The nested feature space concept of Matryoshka SAE is inspired by a previous work which introduces Matryoshka representations. Claims And Evidence: The claims made in the submission seem well supported by the experiments. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are sound. Theoretical Claims: N/A Experimental Designs Or Analyses: Overall, the experimental design and analysis is sound. Supplementary Material: The supplementary material contains useful and extensive ablations. The authors also include interesting experiments on board game models for additional comparison to previous work. Relation To Broader Scientific Literature: This work connects well with the existing literature. Feature splitting and feature absorption are known limitations of SAEs; this work presents a method which overcomes these issues while drawing inspiration from previous works, namely the SAE and Matryoshka representation learning literature. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - Paper is well written and has clear language - The set of experiments presented seems relevant and extensive. This includes presenting results for which their method is not necessarily the best (e.g., reconstruction and game board experiments) Other Comments Or Suggestions: None Questions For Authors: Overall, the paper looks good, but I do have the following inquiry about the experiments: The large-scale experiments only extend to Gemma-2-2B.
I understand that restrictions on compute may prohibit running experiments on additional models of similar or larger size, but evaluation on a single, relatively small language model is a limitation itself. I'm wondering if the authors have any insight on how the results may change as a function of model scale, architecture, or pretraining data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review of our paper and thoughtful feedback! We are grateful for your recognition that our work is well-written with clear language, presents relevant and extensive experiments, and effectively addresses known limitations of SAEs. Regarding different model scales and architectures: While computational constraints limited testing on a wider range of the largest models, we have reasons to believe the benefits of Matryoshka SAEs will generalize and potentially even increase with model scale. Feature absorption and splitting are fundamental challenges in SAEs rather than model-specific issues. Based on theoretical considerations, larger models with more complex feature hierarchies might actually benefit more from our approach, as they contain richer hierarchical structures that Matryoshka SAEs are designed to preserve. Different architectures might show varying degrees of improvement, with models having stronger hierarchical representations potentially showing more pronounced benefits. We acknowledge the current experimental scope as a limitation and will add a note to this effect in the camera-ready version. Thank you again for your positive assessment and valuable question. We hope our response reinforces your support of our paper.
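For readers unfamiliar with the nested objective discussed in this thread, here is a minimal sketch of a Matryoshka-style reconstruction loss: each prefix of the latent dimensions must reconstruct the input on its own, and the per-prefix losses are summed. The ReLU encoder, MSE loss, and variable names are our simplifying assumptions, not the paper's exact architecture.

```python
import numpy as np

def matryoshka_loss(x, W_enc, b_enc, W_dec, b_dec, prefixes):
    """Nested reconstruction objective: for each prefix length m, the
    first m latents alone must reconstruct x; the losses are summed,
    so early latents are pushed toward coarse, general features."""
    z = np.maximum(0.0, x @ W_enc + b_enc)  # latent activations (ReLU)
    total = 0.0
    for m in prefixes:
        x_hat = z[:, :m] @ W_dec[:m] + b_dec  # decode from first m latents
        total += np.mean((x - x_hat) ** 2)
    return total
```

Because the summed objective includes the full dictionary as its last prefix, the nested loss always upper-bounds the plain single-SAE loss on the same weights.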
Summary: The authors suggest a novel training objective for sparse autoencoders to address the issues of feature splitting, feature absorption and feature composition. They test this idea on a toy, synthetic dataset explicitly designed to demonstrate improvements, then a 4-layer transformer-based language model trained on TinyStories, then Gemma-2-2B checkpoints (an open-source language model from Google). They demonstrate strong improvements through a variety of experiments, assuming well-tuned baselines. Claims And Evidence: The authors claim that despite worse reconstruction performance, Matryoshka SAEs better achieve the original goals of SAEs. These claims are well-supported by their experiments in Section 4.3. Methods And Evaluation Criteria: The methods (synthetic data, TinyStories, and Gemma-2-2B) and evaluation criteria all make sense. Ideally there would be human evaluation as well, but the authors mention this as a limitation and have a public SAE latent viewer for TinyStories. I am very satisfied with the evaluation. It would be interesting to see the effects of steering with the different latent subgroups. But I'm happy for that to be future work. Theoretical Claims: There appear to be no theoretical claims. Experimental Designs Or Analyses: The experimental design is excellent. Baselines appear to be well-tuned. I did not find any issues in the soundness or validity of the experimental designs. Supplementary Material: The supplementary material includes plenty of ablations and additional details. I would like some additional cherry-picked examples if possible. Not necessarily demonstrating the differences between Matryoshka and existing work, but simply demonstrating that Matryoshka SAEs learn interesting features. Relation To Broader Scientific Literature: This work fits in with other novel SAE architectures/objectives like TopK, JumpReLU, etc. It explains its position within the broader literature well.
Essential References Not Discussed: There are no essential references not discussed. Other Strengths And Weaknesses: The simplicity of the idea is phenomenal. Most SAE modifications are in the activation layer. It's exciting to see novelty in the training objective, and I would love to see how this objective composes with other SOTA activation layers like JumpReLU. Other Comments Or Suggestions: * There's a fair amount of whitespace. Can you use that whitespace to make bigger (taller) figures? Questions For Authors: * Can you explain why the PCA baseline has the highest variance explained and the lowest CE loss (figure 5)? Is it because PCA doesn't have a sparsity constraint? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your very positive review of our paper! We were happy to read your comment about the experimental design being excellent and your appreciation for the simplicity and novelty of the idea. Furthermore, we appreciate your suggestion to include a few feature examples, which we will do in the camera-ready version. We will also make sure to minimize the whitespace in this final version. Regarding the PCA baseline: Your intuition is exactly correct. The high variance explained and low CE loss are indeed due to the lack of a sparsity constraint, allowing it to use its full capacity for reconstruction unlike the SAEs. We will add a sentence clarifying this in the camera-ready version. Thank you again for your strong support and enthusiastic review! --- Rebuttal Comment 1.1: Comment: Thank you for the clarification.
Summary: This paper presents Matryoshka SAEs, inspired by Matryoshka representation learning, which learn a nested series of SAEs simultaneously to address issues such as feature splitting and feature absorption. Comparisons with multiple well-established baseline SAEs demonstrate Matryoshka SAEs’ superior ability to overcome these problems, as well as better concept isolation performance. Though their reconstruction performance is worse than the baselines’, Matryoshka SAEs show better automatic interpretability metrics and better scalability. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The Matryoshka structure is well-suited to address the feature splitting/absorption problems, and the evaluation metrics used as well as the choice of baseline SAEs all follow the community standard. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. All designs are valid, although I have the following concerns related to Section 4.1, Toy Model Demonstration of Feature Absorption. - Here, different from real-world scenarios, the authors are not training sparse or overcomplete autoencoders; instead, the number of latents is set equal to the number of ground-truth features. According to the superposition hypothesis (e.g., [1]), an SAE is trained to approximate the high-dimensional hypothetical disentangled model using low-dimensional features from the observed model, which translates into d < L in this setup. Will the Matryoshka SAE maintain its superior performance under this condition? - As detailed in Appendix E, there are two important Matryoshka-specific components. First is an “adaptive sparsity control targeting the ground-truth average l_0 of 1.2338”. Could the authors be more specific about what this means? How is this ground-truth information incorporated into the training process? The second component is the “l_1 sparsity penalty on normalized activations”. Does this imply that the vanilla SAE lacks an l_1 penalty?
If so, then it would be beneficial to include ablations on vanilla SAEs that also incorporate these biases. - In this setup, all ground-truth features are orthogonal to each other; however, in a more realistic setting, one would expect (at least) the parent feature and a child feature to naturally have a higher cosine similarity score compared to inter-group cosine similarity. I have this concern because I’ve spotted a correlation between the parent feature and the child features in the Matryoshka SAE as depicted in Figure 3, which is unexpected given the goal of avoiding absorption. Will this effect be amplified in a non-orthogonal setup? [1] Toy Models of Superposition. Section link: https://transformer-circuits.pub/2022/toy_model/index.html#motivation-superposition Supplementary Material: Yes, Sections A, B and E. Relation To Broader Scientific Literature: The proposed idea of incorporating Matryoshka representation learning into SAEs helps address several crucial problems in the current SAE community, which improves the interpretability of the family of sparse autoencoders. Essential References Not Discussed: I have not found essential missing references. Other Strengths And Weaknesses: The paper is clearly written. Incorporating the Matryoshka learning idea into SAEs is novel. It shows the potential to address problems in large-scale SAEs trained on LLMs. Other Comments Or Suggestions: I would suggest the authors mention the ablation studies in Appendix B (for example, in one sentence) in the main text, especially regarding loss weighting. It is noteworthy that average weighting works the best in this context. Questions For Authors: I have the following questions for authors.
- In the first-letter experiment, you first discover a direction via probing, and then “find the corresponding latents and measure when these SAE latents fail to activate on tokens starting with that letter and get absorbed into specific token-aligned latents.” How is this finding process performed? Is it through cosine similarity comparison? If this is the case, have you checked the token distribution for a specific letter, to avoid potential issues that lead to identifying a local feature/token feature with “holes” due to an imbalanced distribution (for example, a corpus where tokens starting with L are dominated by “Lily”)? - In the sparse probing experiment, the performance drops at higher sparsity, both in Figure 7 and Figure 10. This looks like a warning sign that, when the dictionary size is increased, the Matryoshka SAE loses to some extent the ability (which it has when scaling is not performed) to isolate the target concept. Do you have an explanation or discussion of this phenomenon? - I wonder how the batch size setup in BatchTopK would affect the result. I suspect you chose 2048 to maintain consistency with the baselines, but it would be beneficial if a stability result on this hyperparameter could be provided. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review and insightful questions. We appreciate your assessment that our paper is clearly written and the idea is novel. We address your key questions below: **Regarding the toy model demonstration:** 1. **On d < L setup:** You correctly note the toy model uses a non-overcomplete set-up, for simplicity and illustrative purposes. The toy model is not meant to be a realistic depiction of features in LLMs, but rather a simplified demonstration to illustrate a specific scenario where vanilla SAEs struggle with feature absorption while Matryoshka SAEs excel. It provides a controlled environment to highlight the core mechanism of our approach. We chose to use a small number of latents to make it easier to visualize and show the resulting learned features, but expect that similar results would hold for toy models with more ground-truth features. 2. **On adaptive l1 penalty:** Thanks for pointing this out, we should have been clearer here. The adaptive sparsity control is not a Matryoshka SAE-specific component, but rather a component we use in both SAEs. The only difference between the two SAEs is the nested reconstruction objective of the Matryoshka SAE. We will update this in the camera-ready version. 3. **On orthogonal features:** You raise an important point about the correlation between parent and child features. Although the parent and child features are orthogonal, the resulting combined activations of the "parent + child" are not orthogonal to each other, due to the shared parent component. This is why we observe a small correlation between parent and child features in the Matryoshka SAE (Figure 3), as the disentanglement is, although close, not perfect. Given that there is some earlier work by Park et al [1] that suggests that in real-world settings, parent and child features are often orthogonal, we think it is reasonable to use orthogonal features in our toy model as well. 
**Regarding the first-letter experiment:** To find the relevant latents for the first-letter task, we use a train and test set of single token inputs sampled directly from the tokenizer, unweighted by token frequency. We first train logistic regression probes on residual stream activations to define the ground truth direction for each letter. We then use k-sparse probing on the SAE latents to identify the main latent(s) representing that feature. Specifically, we start with k=1 to find the primary latent, then incrementally increase k, adding additional latents to our "main latents" set when they improve the F1 score by more than a threshold (τ=0.03). For each token, we then measure when these main SAE latents fail to fully activate on tokens starting with that letter (their projection onto the ground truth direction is less than the model's projection) while other latents (the absorbing latents) compensate on these tokens. Since we use the SAEBench implementation for our evaluation, we refer to their paper for more details [2]. **Regarding sparse probing performance:** The performance drop at higher sparsity in Figure 7 is indeed interesting, but it's important to note that this phenomenon occurs across many other baseline architectures as well, not just in Matryoshka SAEs. This appears to be a general characteristic of SAEs rather than a limitation specific to our approach. We want to note that the performance drop in sparse probing when scaling up the dictionary size from 16k to 65k (Figure 10) is very minor (0.775 -> 0.772) and we don't believe this is a significant effect. **Regarding BatchTopK batch size:** We chose a batch size of 2048 to maintain consistency with baselines as you suspected. While we didn't perform extensive ablations on this hyperparameter, we suspect that for reasonably large batch sizes it is not a critical hyperparameter, and that standard BatchTopK SAEs and Matryoshka BatchTopK SAEs will be similarly affected by changes in batch size. 
We hope these explanations address your questions. We would appreciate your consideration if these clarifications strengthen your support of the paper. [1] Park, Kiho, et al. "The geometry of categorical and hierarchical concepts in large language models." arXiv preprint arXiv:2406.01506 (2024). [2] https://www.neuronpedia.org/sae-bench/info#feature-absorption
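The greedy k-sparse probing procedure described in this rebuttal (grow the set of main latents one at a time, keeping an addition only when F1 improves by more than τ = 0.03) can be sketched as follows. The mean-difference ranking and the simple midpoint-threshold probe are our simplifications for illustration; the actual evaluation uses the SAEBench implementation cited above.

```python
import numpy as np

def f1(y_true, y_pred):
    # Plain F1 score, computed from true/false positives and false negatives.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def probe_f1(acts, labels, idx):
    # Crude stand-in for a k-sparse probe: threshold the summed activations
    # of the chosen latents at the midpoint of the two class means.
    s = acts[:, idx].sum(axis=1)
    thr = 0.5 * (s[labels == 1].mean() + s[labels == 0].mean())
    return f1(labels, (s > thr).astype(int))

def find_main_latents(acts, labels, max_k=5, tau=0.03):
    """Greedily grow the 'main latents' set: add a candidate latent only
    if it improves the probe's F1 by more than the threshold tau."""
    scores = np.abs(acts[labels == 1].mean(0) - acts[labels == 0].mean(0))
    ranked = np.argsort(-scores)
    main = [int(ranked[0])]
    best = probe_f1(acts, labels, main)
    for cand in ranked[1:max_k]:
        cand_f1 = probe_f1(acts, labels, main + [int(cand)])
        if cand_f1 - best > tau:
            main.append(int(cand))
            best = cand_f1
    return main, best
```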
Summary: This paper aims to improve concept learning in sparse autoencoders (SAEs), which are models that have recently become popular as a means to disentangle features from large deep models, particularly LLMs, into a sparse set of disentangled concepts. This work focuses on the problem with existing SAEs of choosing the optimal size of the latent dimension, where large sizes may help capture more concepts, but cause them to have irregular and inconsistent granularities. Specifically, it looks into the problems of feature absorption, feature splitting, and feature composition. It proposes an alternative training objective inspired by Matryoshka representation learning, where the SAE is forced to reconstruct the input feature independently using varying latent sizes, enforcing some latents to encode coarse concepts and others to encode more specialized, fine-grained concepts. Experimental evaluation is performed to show that the proposed Matryoshka SAEs are performant and interpretable, while also avoiding the aforementioned problems with existing SAEs. ## Update after rebuttal Thank you for your detailed response. I agree that this paper should be accepted. Claims And Evidence: The claims are generally supported by evidence. Some of the experimental evaluation appears to be limited to specific tasks (e.g. first letter recognition) as discussed in the Weaknesses section below, and it would help to expand the evaluation to more tasks. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The experimental design and analyses appear to be sound. It would help to have a more diverse set of experiments however, as discussed in the Weaknesses section below. Supplementary Material: I briefly skimmed the supplementary material, particularly the parts referred to in the main text, but did not read it in detail. 
Relation To Broader Scientific Literature: This work builds upon existing literature that develops SAEs as a mechanistic interpretability tool to disentangle concepts learnt by deep models such as LLMs. While most prior research on improving SAEs has focused on improving sparsity of SAEs (e.g. TopK SAE, Gao et al. 2024), this work looks at improving the interpretability of SAEs, and to avoid the recently reported problems of feature absorption, splitting, and composition. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: ## Strengths 1. This work deals with an important problem—while many attempts have been made to Pareto-improve the reconstruction error and sparsity tradeoff, this work focuses on improving the interpretability of SAEs and addresses issues caused by scale, which is a valuable contribution. 2. Compared to the baselines, Matryoshka SAE performance improves with scale, which is promising. 3. The idea of learning concepts at different granularities makes intuitive sense and the proposed approach generally appears to be sound. 4. The paper is well written and easy to follow. ## Weaknesses 1. The training objective of the Matryoshka SAE as per Equation 5 only enforces that latent subsets from $0:m$ for all $m\in\mathcal M$ must independently be able to reconstruct the features. In particular, given $m_1<m_2$, there is no constraint that any latent outside of the first $m_1$ latents but within the first $m_2$ latents must activate at all. As a result, why should the SAE learn any specialized concepts at all? Is it simply that learning such concepts helps performance? An analysis of this, with respect to SAE size and reconstruction error, would be interesting to have, both in the toy setup and with real data. 2. 
In addition to the previous point, it might be interesting to see if it helps to enforce specific values of $K$ for different latent granularities, to control how many coarse and fine-grained concepts the model learns. 3. Currently there doesn't seem to be any evaluation to check how parent and child concepts relate, except in the toy setup. It would be useful to see if similar parent-child trees could also be constructed in the more realistic setups used in Sections 4.2 and 4.3. 4. Generally, the evaluations in Sections 4.2 and 4.3 are observation-based and limited to very specific tasks, such as first letter recognition. While this is promising, it would be helpful to also extend to at least a few more tasks, to assess the generality of the method. 5. The "truncated Pareto distribution" described in L210 is unclear, and there is no cited reference. A clarification would be helpful. 6. In the experiments in Section 4.2, it would help to also show qualitative results with Matryoshka SAEs. L284-291 claims it avoids feature absorption, but without a qualitative figure. Additionally, what happens if $K$ is varied in this experiment? 7. The metrics used in Figure 6 need to be defined precisely; they are currently unclear. Other Comments Or Suggestions: None. Questions For Authors: Please refer to the Weaknesses section above. Overall I believe this work provides a valuable contribution and should be accepted. However, it would be helpful to have clarifications on the concerns raised in the rebuttal. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and valuable suggestions. We are particularly encouraged that you found the paper well-written, addressing an important problem, and believe it is a valuable contribution that should be accepted. We address your key questions below: 1. You asked why specialized concepts emerge without explicit constraints. As you suggest, the driving force is indeed the reconstruction loss. The overall objective (summed over all dictionary sizes) incentivizes the full SAE to minimize reconstruction error. Specialized latents emerge naturally because they capture finer details present in some inputs that cannot be adequately represented by the limited number of coarser features alone, thus minimizing the total reconstruction error across the dataset. Our experiments (Appendix C) confirm that later latents, while activating less frequently, significantly contribute to reconstruction quality. 2. We appreciate this suggestion! Explicitly enforcing the number of active latents per group might indeed further improve granularity control. We'll include this in our discussion as a promising future research direction. 3. Thanks for the suggestion! We agree that it would be beneficial to include qualitative examples of parent-child relationships found in Matryoshka SAEs. We will include these in the camera-ready version. 4. You noted the absorption metric relies on the first-letter task. This is correct and follows the methodology established by SAE Bench [1] and Chanin et al. [2], benchmarks widely used in the community. While we acknowledge that broadening the tasks for measuring absorption could be beneficial (and will note this in our limitations), developing improved absorption metrics is beyond the scope of this work. Crucially, however, we want to emphasize that our other key evaluations, such as sparse probing and concept isolation, do utilize a diverse range of classification tasks. 5. Thanks for pointing out this omission. 
We sample the prefix lengths from a discrete distribution inspired by the Pareto distribution. Specifically, each possible prefix length $l \in \{1, \ldots, m\}$, where $m$ is the total number of latents, is assigned a probability proportional to: $$ P(l) \propto 1 - \left( \frac{l}{m} \right)^{\alpha} $$ where $\alpha > 0$ controls how heavily the distribution favors shorter prefixes (we use $\alpha = 0.5$ in our experiments). This yields a monotonically decreasing probability distribution over prefix lengths. We then normalize this distribution and sample prefix lengths without replacement, always including the full prefix length $m$ to ensure the complete SAE is trained. We will add this to the camera-ready version. 6. Figure 4 shows absorption in vanilla SAEs. As requested, we will add a corresponding figure for the Matryoshka SAE in the camera-ready version, demonstrating how the same feature (e.g., "female tokens") remains distinct (see https://sparselatents.com/tree_view?loc=residual+pre-attn&layer=3&sae_type=S%2F2&latent=65 for the example). We did not experiment with varying $K$ in this experiment, but we extensively tested this quantitatively in section 4.3. 7. Thank you for pointing out that the metrics in Figure 6 are not clearly defined. The metrics in Figure 6 are calculated the same as in SAEBench [1]. We will clarify them in the camera-ready version. The metrics are defined as follows: - The splitting metric counts how many latents are needed to represent a single feature. We measure this by training k-sparse probes and detecting when increasing k by one causes a jump in F1 score by more than threshold τ = 0.03. This indicates that additional latents contain significant information about the feature, suggesting the feature has been split across multiple latents. 
- The absorption rate measures the fraction of tokens where a latent corresponding to a first-letter feature fails to activate despite the token starting with that letter due to the feature being absorbed by another latent. First, we find false-negative tokens where the main SAE latents fail to fully activate on tokens starting with that letter (their projection onto the ground truth direction is less than the model's projection) while other latents (the absorbing latents) compensate on these tokens. Since we use the SAEBench implementation for our evaluation, we refer to their paper for more details [1]. Thanks again for your valuable suggestions. We hope these clarifications and our planned revisions address your concerns. Given your positive assessment of our work's contribution and importance, we would be grateful if you would consider strengthening your support. [1] https://www.neuronpedia.org/sae-bench/info#feature-absorption [2] Chanin, David, et al. "A is for absorption: Studying feature splitting and absorption in sparse autoencoders." arXiv preprint arXiv:2409.14507 (2024).
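The prefix-length sampling described in point 5 of this rebuttal can be written out directly; the function name and the `n_samples` parameter are our additions for illustration.

```python
import numpy as np

def sample_prefix_lengths(m, n_samples, alpha=0.5, seed=None):
    """Sample prefix lengths l in {1, ..., m} with probability proportional
    to 1 - (l/m)**alpha (monotonically decreasing, favoring short prefixes),
    without replacement, always including the full length m."""
    rng = np.random.default_rng(seed)
    lengths = np.arange(1, m + 1)
    weights = 1.0 - (lengths / m) ** alpha
    probs = weights / weights.sum()  # P(m) = 0, so m is added separately
    drawn = rng.choice(lengths, size=n_samples - 1, replace=False, p=probs)
    return sorted(set(drawn.tolist()) | {m})
```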
A Bregman Proximal Viewpoint on Neural Operators
Accept (poster)
Summary: This paper considers the problem of efficient PDE solutions and operator learning. This paper shows that the neural operator architecture can be interpreted as the minimizer of a Bregman regularization problem, and further designs a novel architecture that includes an inverse activation function. This general framework is then applied to the Fourier neural operator and gives better empirical results. This paper also proves a universal approximation result for Bregman neural operators. ## update after rebuttal Thank you for your response! I will keep my score. Claims And Evidence: The claims look well-supported; however, I have no background in this area and cannot verify the claims and proofs. Methods And Evaluation Criteria: Yes. Theoretical Claims: I didn't check the proofs. Experimental Designs Or Analyses: This paper compares different operators on various PDE benchmarks and shows that the Bregman Fourier operator gives the best results on most benchmarks. Supplementary Material: No. Relation To Broader Scientific Literature: This paper gives a general framework to characterize neural operators as the minimizers of certain Bregman regularization problems. Classical neural operators can be interpreted as special cases of this framework. Essential References Not Discussed: No. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the review and the positive feedback. Should the reviewer require any further clarification, we would be delighted to provide it.
Summary: This paper proposes a novel perspective on neural operators based on Bregman proximity operators, where the action of operator layers is interpreted as the minimizer of a Bregman-regularized optimization problem. By defining the Bregman distance through Legendre functions, activation operators are characterized as Bregman proximity operators mapping from the dual space to the primal space. To this end, the authors design a novel operator termed BFNO, and numerical experiments show better performance compared to classical FNO and F-FNO. ## update after rebuttal After the rebuttal I opt to maintain my positive score. Claims And Evidence: The claim that Bregman Neural Operators (BFNOs) enable deeper architectures with improved performance (Abstract, Section 3.3) is partially supported by Figure 4 (Page 7), where BFNO errors decrease as depth increases up to $T=64$, unlike FNO/ResFNO. However, the paper does not rigorously explain *why* the Bregman formulation stabilizes deep networks. While Remark 3.6 suggests the identity-preserving property helps, no ablation studies isolate this effect from residual connections. Additionally, Figure 6 (Page 8) shows marginal improvements (e.g., BFNO vs. FNO on 2D Darcy: ~1% relative error reduction), but statistical significance is not tested. Methods And Evaluation Criteria: The evaluation uses standard PDE benchmarks (Navier-Stokes, Burgers, Darcy) from PDEBench (Section 5.1, Page 7), which are appropriate for operator learning. But there remains room for improvement: - Computational costs (training time, memory) of BFNO are not compared to baselines. - The choice of SoftPlus as an invertible ReLU proxy (Section 5.1) is pragmatic but underdiscussed; no experiments validate whether this approximation introduces biases or artifacts. Theoretical Claims: Theorem 4.1 (Page 6) asserts universal approximation for BFNOs with sigmoidal activations.
The proof (Appendix B) adapts Cybenko’s approach but assumes $\sigma$ is a homeomorphism (e.g., sigmoid). However, the experiments use SoftPlus ($\approx$ReLU), which is not sigmoidal. Experimental Designs Or Analyses: The experimental framework is broad and systematic, covering multiple PDE benchmarks (1D/2D Navier-Stokes, Burgers, Darcy) from PDEBench (Section 5.1, Page 7), which are widely accepted in the operator learning literature. The comparison to strong baselines (FNO, ResFNO, F-FNO) and ablation studies (e.g., activation functions in Appendix D.4) demonstrates rigor. **Areas for Improvement** Layer-wise analysis: Figure 5 (Page 8) shows BFNO weights concentrate near zero, suggesting implicit regularization. However, no causal link is established between this distribution and improved generalization. Activation choice: Table 5 (Appendix D.4) claims BFNO with ReLU matches SoftPlus performance, but this is only tested on 2D Navier-Stokes. ReLU violates Theorem 4.1’s assumptions, yet no theoretical justification is given for its empirical viability. Supplementary Material: I reviewed the supplementary material, primarily for additional results. Relation To Broader Scientific Literature: Numerical solution of PDEs by deep learning, such as Fourier Neural Operators and its variants. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: The writing of the abstract and contribution section is quite opaque. It seems difficult for the general conference audience to grasp the major contributions of this work. To improve readability, the authors should clarify whether the core contribution of this paper is a novel theoretical framework or the introduction of BFNO as a new state-of-the-art (SOTA) neural operator.
Questions For Authors: 1. In Section 2.2, the statement "we observe that this property allows training deeper and more accurate models" suggests that the architectural design resembles skip connections, which enable deeper neural networks. Could you clarify the specific source of the BFNO improvement? 2. The improvement in test error in Fig. 6 appears marginal. 3. While BFNO demonstrates improved stability in deep architectures, does requiring invertible activation functions (e.g., SoftPlus) limit its applicability in scenarios where non-monotonic or non-smooth activations (e.g., ReLU, GELU) are preferred? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time and thoughtful comments and appreciate the overall positive feedback. First, we would like to make it clear that the main objective of our contribution is to provide a novel theoretical framework that allows the development of founded and effective models, which we will clarify in the abstract and introduction. # Questions **Q1:** Our theoretical framework formulates BFNO layers as solutions to Bregman-regularized optimization problems. When all weights are zero, they naturally reduce exactly to the identity mapping (Remark 3.6), creating an implicit regularization effect distinct from standard residual connections. We created a novel architecture, ResFNO, precisely to perform an ablation study with ResNet-style connections, confirming this distinction. While ResFNO still degrades with increased depth, BFNO maintains or improves performance. This difference manifests in Figure 5, where BFNO exhibits a distinctive Laplace-like weight distribution sharply peaked at zero, contrasting with the Gaussian distributions in other models. From an ODE perspective (see *Add. insights*), the difference becomes clearer: BFNO applies its linear operator outside the activation function, $\frac{dz(t)}{dt} = K(t)\sigma(z(t))$, while residual networks place it inside, $\frac{dv(t)}{dt} = \sigma(K(t)v(t))$, creating different gradient flow dynamics. Overall, BFNO's improvement can be analyzed as a combination of these two elements: an implicit regularization and a different gradient flow. While a rigorous layer-wise analysis establishing a causal link between weight distribution and improved generalization would be valuable, such analysis remains an open challenge in the neural operator literature. **Q2:** The primary goal of our experiments was to validate our theoretical framework and demonstrate that the properties derived from our Bregman proximal viewpoint translate into measurable performance advantages.
While focusing on the theoretical foundations, we ensured rigorous experimental comparisons to establish practical relevance (even considering WNO models in Appendix D.2). To ensure a fair comparison, we conducted thorough validation through multiple runs with four fixed seeds and cross-validation of learning rates for all models. To assess statistical significance, we performed non-parametric one-tailed Wilcoxon signed-rank tests. Compared to FNO, BFNO is superior on 5 datasets of Figure 6 with a p-value of 0.0625, and p = 0.125 for the remaining one, 2D NS $(10^{-3})$, which shows that improvements over FNO are consistent across diverse PDE types. Similarly, BFNO achieves the same level of superiority over ResFNO on 4 datasets (Burgers, Darcy, NS $10^{-4}$ and $10^{-8}$), and on 3 datasets over F-FNO (Burgers, NS $10^{-4}$ and $10^{-8}$). **Q3:** From a theoretical perspective, expressing activation functions as proximity operators imposes constraints: **Non-monotonic functions** like GELU are incompatible, as they cannot be represented as proximity operators (page 5). For **Classical Neural Operators** (Sec 3.2), activation functions must be monotonic, making ReLU suitable despite its non-smoothness, as it can be expressed as the proximity operator $prox_g$ of $g=\imath_{[0,+\infty[}$. In contrast, **Bregman Neural Operators** (Sec 3.3) impose strict monotonicity, theoretically excluding ReLU. However, our implementation provides significant practical flexibility. As noted in Remark 3.7, our architecture doesn't require computing inverse activations when composing layers, since they cancel out with previous activations (enabled by adding an activation before the first operator layer). Therefore, any activation function can be used in practice.
Results below, completing Appendix D.4, confirm this flexibility, showing BFNO works effectively with both ReLU and GELU, performing comparably to SoftPlus, which serves as a common smooth approximation to ReLU in the literature when differentiability is needed. | Architecture | 4 layers | 8 layers | 16 layers | |-|-|-|-| | FNO (GELU) | 13.7 ± 0.1 | 12.9 ± 0.2 | 12.9 ± 0.1 | | BFNO (GELU) | **13.3 ± 0.1** | **12.4 ± 0.2** | **12.1 ± 0.1** | As suggested by the results, extending our theoretical framework to non-invertible and non-monotonic activation functions is an interesting research direction. # Remarks **On computational costs:** BFNO shares the same parameter count and memory footprint as FNO by design. Therefore, the training time per epoch is of the same order across all tasks. This will be clarified in the final version. **On universal approximation:** We indeed acknowledge this limitation in our conclusion. The theorem serves as a first theoretical result for our framework and represents, to the best of our knowledge, one of the first universal approximation results for neural operators, alongside **Kovachki et al. (2023)**. As for the other remarks, we tried to address them in the answers above while staying within the allowed limit. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response from the authors, which addresses part of my concerns. I would like to maintain my positive score.
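As a concrete illustration of the Q2 statistics above, the exact one-tailed Wilcoxon signed-rank p-value for four paired runs can be reproduced by brute-force enumeration. The per-seed errors below are hypothetical placeholders, not the reported values; the helper `signed_rank_p` is an illustrative sketch assuming no zero differences and no ties in the absolute differences.

```python
from itertools import product

def signed_rank_p(diffs):
    """Exact one-tailed Wilcoxon signed-rank p-value (H1: diffs < 0).

    Assumes no zero differences and no ties among |diffs|."""
    ranks = [sorted(abs(d) for d in diffs).index(abs(d)) + 1 for d in diffs]
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    # Enumerate the null distribution: each rank is positive with probability 1/2.
    hits = sum(1 for signs in product((0, 1), repeat=len(ranks))
               if sum(r * s for r, s in zip(ranks, signs)) <= w_plus)
    return hits / 2 ** len(ranks)

# Hypothetical per-seed relative errors (4 seeds): BFNO below FNO in every run.
bfno = [12.1, 12.3, 12.0, 12.2]
fno  = [12.9, 12.8, 12.6, 13.1]
p = signed_rank_p([b - f for b, f in zip(bfno, fno)])
print(p)  # 0.0625: the smallest attainable one-tailed p with 4 paired runs
```

With only four paired runs, p = 1/16 = 0.0625 is the best achievable one-tailed significance level, which matches the value quoted for the datasets where BFNO wins in every run.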
Summary: In this paper, the authors proposed a new type of neural operator for solving PDE problems. The idea is to set the neural operator to be the solution of a functional optimization problem, and in particular, they choose the operator to be a Bregman proximal operator with respect to some Legendre function. This formulation recovers many classical operators and extends to the coined Bregman neural operators. The authors conducted experiments showcasing the power of the new operator formulation. Claims And Evidence: yes. Methods And Evaluation Criteria: yes. Theoretical Claims: I checked the proofs in the main body. Experimental Designs Or Analyses: The authors conducted fair experiments on the benchmark datasets and compared them with many prior neural operators. Supplementary Material: no Relation To Broader Scientific Literature: This work extends the extensive research on designing neural operators and using neural nets to solve PDE equations. It formulated a new type of neural operator based on prior art. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think this paper is clearly written with much useful intuition. Unfortunately, I do not have an extensive background knowledge of using neural nets to solve PDEs and designing neural operators, so I cannot provide a precise evaluation of the novelty and significance of this work compared to prior works. However, using Bregman divergence and Bregman operator seems to be an interesting idea, and the experiments showcased the superiority of this method. Other Comments Or Suggestions: N/A Questions For Authors: 1. Normally, in the context of optimization, the Bregman divergence needs to be a strongly convex function so that it represents a notion of distance. However, here it seems it only requires it to be strictly convex. I am wondering why strictly convex functions suffices? 2.
I am wondering what is the intuition of choosing the neural operator to be a solution of an optimization problem? In the context of optimization, proximal operators are often used for non-smooth problems since it has an explicit smoothing phenomenon. Is this one of the considerations for choosing a Bregman proximal operator? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. We provide below our answers to the two questions mentioned in the review. 1. **Normally, in the context of optimization, the Bregman divergence needs to be a strongly convex function so that it represents a notion of distance. However, here it seems it only requires it to be strictly convex. I am wondering why strictly convex functions suffices?** We thank the reviewer for the thoughtful remark. Indeed, strict convexity of the Legendre function $\psi$ is sufficient to define a Bregman divergence $D_\psi(x, y)$, as it ensures that $D_\psi(x, y) \geq 0$ and that $D_\psi(x, y) = 0$ if and only if $x = y$. These properties hold as long as $\psi$ is strictly convex and differentiable on the interior of its domain, which is the standard requirement in the general theory of Bregman divergences. That being said, we would like to emphasize that in our setting (as shown in Table 1), **all Legendre functions considered are actually 1-strongly convex** on their respective domains. This ensures that the associated Bregman divergences are lower bounded by a quadratic form, i.e., $$ D_\psi(x, y) \geq \frac{1}{2} \|x - y\|^2, $$ which provides metric-like behavior. We have clarified this point in the revised version. 2. **I am wondering what is the intuition of choosing the neural operator to be a solution of an optimization problem? In the context of optimization, proximal operators are often used for non-smooth problems since it has an explicit smoothing phenomenon. Is this one of the considerations for choosing a Bregman proximal operator?** We appreciate the reviewer's insightful question. While it is true that proximal operators are classically used to handle non-smooth terms, especially in composite minimization settings, they are not restricted to non-smooth functions.
In fact, proximal maps of smooth (even strongly convex) functions are well-defined and have been studied in the context of regularization and smoothing. In our setting, the motivation for casting the neural operator as the solution to an optimization problem is not solely driven by the smoothing property of proximal operators (although this can be beneficial), but more importantly by their connection to activation functions. It follows that, from our perspective, each layer performs a structured update that balances proximity to the previous state with the minimization of an energy functional. This interpretation introduces a variational inductive bias into the architecture, which can encode prior knowledge and improve interpretability and performance. **Using Bregman divergences further allows this trade-off to be geometry-aware.** We hope this novel viewpoint will be leveraged by practitioners to encode structured prior knowledge into the architecture of neural operators through principled, optimization-based layers.
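For completeness, the quadratic lower bound in the answer to the first question follows in one line from strong convexity (a standard argument; $\mu$ denotes the strong-convexity modulus): since a $\mu$-strongly convex $\psi$ satisfies $\psi(x) \geq \psi(y) + \langle \nabla\psi(y), x - y\rangle + \frac{\mu}{2}\|x - y\|^2$, the definition of the divergence gives

$$ D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y \rangle \;\geq\; \frac{\mu}{2}\, \|x - y\|^2, $$

which recovers the displayed bound with $\mu = 1$.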
Grokking in the Wild: Data Augmentation for Real-World Multi-Hop Reasoning with Transformers
Accept (poster)
Summary: This paper examines the problem of learning multi-hop reasoning over knowledge graph facts. Prior works have shown that this is a challenging problem, particularly without chain-of-thought or externalized reasoning. However, recently, synthetic experiments have revealed that such multi-hop reasoning can be learned by models in a grokking regime -- provided that there exists a sufficient concentration of inferred facts in the dataset. This paper examines whether such a grokking phenomenon occurs in a real fact-learning setting. They first show theoretically that there exists a lower bound on the knowledge graph branching factor which determines whether there are "naturally" a sufficient number of inferred facts to enable reliably learning multi-hop reasoning. In real-world settings based on WikiData, the authors demonstrate that there does not exist a sufficient concentration of inferred facts, and they describe possibilities of generating additional data to boost the formation of multi-hop reasoning. Empirically, they show that training on this augmented data does in fact induce grokking dynamics in real transformers. Impressively, with very small models they achieve performance comparable to the largest models. Claims And Evidence: The main claim of this paper is that grokking can emerge in real settings (esp. for multi-hop fact reasoning settings). This is tested via a set of experiments on Wikipedia-based knowledge graph settings. They have two settings: a structured knowledge setting in which facts are represented as triples. In particular, this is where they observe the grokking phenomenon primarily. However, I believe that the structured setting eliminates much of the complexity and noise present in real fine-tuning datasets. On the other hand, they also consider an unstructured setting. In this setup, however, the results do not appear to confirm grokking as far as I can tell.
Thus, I am not sure that this claim is necessarily supported. Methods And Evaluation Criteria: The 2-Wiki-Multi-Hop dataset appears to be a reasonable setting to test, and the model size is reasonable -- albeit perhaps on the smaller end for real tasks. It would be interesting if the authors had some predictions about the impact of model size on their results. I also found the discussion of in-distribution/out-of-distribution a bit confusing. The authors state that in-distribution performance measures entity pairs seen during training. If that is the case, how does this differ from the training loss? Theoretical Claims: I have briefly reviewed the theoretical sections, but not the proofs. The theory appears to be predicated on the necessity of observing a given number of multi-hop paths. The main theoretical result proves a lower bound on the average branching factor of the knowledge graph necessary to achieve generalization. I think this analysis is generally ok, but it does assume the existence of some "known" generalization ratio for each relation. It would be better if the authors could justify why it is ok to model this generalization ratio as depending only on the relation. Could it not also depend on the coverage of the entities in the seen multi-hop examples? Experimental Designs Or Analyses: I did not have specific concerns about the validity of the experimental design, with the exception of my question about in-distribution/out-of-distribution accuracy above. Also, for clarity, how exactly was the grokked GPT-2-small trained? Was it trained on the unstructured data or the structured data? Additional details of the prompts tried for the comparison to GPT-4/o1 would be quite helpful to better contextualize the performance of the proprietary models. Supplementary Material: I reviewed the supplementary material corresponding to the proof of Lemma A.3.
Relation To Broader Scientific Literature: This paper appears to borrow heavily from the formulation/problem setup, etc., of [1]. However, they extend the simulated/toy settings addressed by [1] and instead consider real knowledge graph data -- sourced from WikiData. Their claimed key contribution is that they show the phenomena introduced by [1] can in fact happen in practice. I will note that in the structured setting (the place where they see the most direct example of grokking) the problem is actually not so different from the toy settings examined by [1]. [1] Wang, B., Yue, X., Su, Y., and Sun, H. Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization, May 2024. URL http://arxiv.org/abs/2405.15071. arXiv:2405.15071 [cs]. Essential References Not Discussed: There is not one that I am aware of. Other Strengths And Weaknesses: I believe that this paper could benefit significantly from better presentation. Overall, the problem studied, experiments, and theoretical arguments are interesting in my opinion, but they are hard to parse. In the theory section, the authors introduce a large amount of formalism and notation. I find that this overwhelms me with many details (for example, they introduce a norm over edges, but this norm isn't used later in the text as far as I can tell). I think the authors could make this paper significantly easier to follow by culling the notation and excessive detail in the theory section and deferring some notions to the supplementary material. Other Comments Or Suggestions: Line 192: "you sure you wanted to change that to a ”not exists” because" -- Is this a typo? Questions For Authors: 1) Please answer the questions in "Experimental Design or Analysis" Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and detailed feedback, which has helped us refine both the presentation and scope of the paper. Below, we address each of the raised concerns. **A. On theoretical presentation and unused formalisms:** We appreciate your observation regarding the mathematical complexity and notation density in the theory section. In response, we have carefully reviewed all definitions and formal constructs. A dependency analysis shows that each definition contributes to Lemma 1 or 2, either directly or through intermediate derivations. While we agree that the presentation could be simplified, we found that deferring key definitions to the appendix would compromise the readability and cohesion of the main theoretical argument. We will, however, streamline redundant notation (e.g., the unused edge norm) and improve clarity in the final version. Thank you also for catching the typo on line 192; we have corrected it. **B. On in-distribution vs. training data distinction:** Thank you for highlighting the confusion here. We will clarify this distinction in the final version. In short: the training data consists of atomic facts and associated inferred facts used in specific combinations. In-distribution (ID) evaluation involves novel combinations of *seen atomic facts*, not seen together during training. For example, if the model saw the atomic facts for "Avignon Rocher des Doms" and "Paris Louvre Museum" in training, an ID question might involve "Avignon Rocher des Doms" and the "Eiffel Tower", a new combination of familiar elements. This contrasts with out-of-distribution (OOD) examples, which include entirely new atomic facts. **C. On structured vs. unstructured settings and realism:** We agree that structured settings simplify linguistic complexity. 
Our core claim is not that we have solved the problem for all real-world unstructured data, but that the grokking phenomenon (previously limited to toy datasets) can occur with real-world entities and linguistic structure, provided key conditions are met (notably a sufficiently high $\phi_r$). The structured format allowed us to isolate this effect. In the unstructured setting, generalization circuits are harder to induce due to syntactic variability and ambiguity. We view extending grokking to unstructured data as an important avenue for future work. Our current contribution lies in demonstrating that grokking is not limited to synthetic data, and that it can emerge from real-world facts when structured and augmented appropriately. **D. On relation-specific generalization ratios:** The assumption that $\phi_r$ can be modeled as relation-specific follows from prior work [1], which demonstrates that transformer-based generalization circuits rely primarily on relation-specific atomic facts. This behavior has been empirically confirmed in our own replications of their experiments. While entity frequency affects the number of available paths, it is the *relation* that governs the type and reusability of inference patterns, making $\phi_r$ effectively relation-dependent. We clarify this distinction in Lemma 1 and Appendix A.1, where we show that while the number of unique paths grows with the number of entities $|\mathcal{V}|$, the asymptotic behavior is dominated by the branching factor $b^{n-1}$, not by entity coverage. **E. On model size effects:** We are currently conducting experiments with varying GPT-2 sizes (124M–1.5B) to investigate the impact of scale on generalization dynamics. Early results suggest that larger models reach generalization faster, but the critical $\phi_r$ threshold remains stable. For additional details, please see our response to Reviewer hZJa (Section A). **F. On prompt transparency and training details:** Thank you for raising this point. 
We will include full prompt templates, sampling parameters, and training details for both structured and unstructured settings in the final version. Briefly, we trained two separate GPT-2 small models from scratch—one for structured data (triple format) and one for unstructured data (natural language questions). Neither model used pretraining; both were trained solely on task-specific data. **G. On circuit flexibility and multi-hop reasoning:** The generalization circuits observed in our setting are relatively rigid and rely on deterministic retrieval and composition of atomic facts. For 2-hop reasoning, this involves a single inference operation; for $n$-hop paths, multiple sequential inference steps are required. Based on our theoretical framework and ongoing experiments, we are confident that $n$-hop grokking is feasible without fundamental changes to the architecture -- provided the augmented dataset supports sufficient multi-hop path coverage. We again thank the reviewer for the constructive feedback and will incorporate these improvements in the final submission.
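To make the branching-factor argument in point D concrete, a toy sketch follows. The 8-entity graph below is arbitrary and purely illustrative (not the WikiData construction): with a fixed out-degree $b$, the number of $n$-hop walks from each entity is exactly $b^n$, so the total count is driven by the branching factor rather than by entity coverage.

```python
def count_walks(adj, node, hops):
    """Number of directed walks of exactly `hops` edges starting at `node`."""
    if hops == 0:
        return 1
    return sum(count_walks(adj, nxt, hops - 1) for nxt in adj[node])

# Synthetic toy KG: 8 entities, each with out-degree (branching factor) b = 2.
b, n_entities = 2, 8
adj = {v: [(v + 1) % n_entities, (v + 3) % n_entities] for v in range(n_entities)}

for hops in (1, 2, 3):
    total = sum(count_walks(adj, v, hops) for v in adj)
    print(hops, total)  # total == n_entities * b**hops: 16, 32, 64
```

Doubling $b$ while keeping the entity count fixed multiplies the 3-hop count eightfold, which is the exponential dominance the lemma captures.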
Summary: The paper investigates the application of grokking—a phenomenon where neural networks transition from memorization to generalization after prolonged training—to real-world multi-hop reasoning tasks. The authors propose augmenting sparse knowledge graphs (KGs) with synthetic data to increase the ratio of inferred-to-atomic facts ($\phi_r$), enabling Transformers to form internal reasoning circuits. Experiments on the 2WikiMultiHopQA benchmark show that their approach achieves near-perfect accuracy (95–100%) on comparison tasks, outperforming baselines and matching state-of-the-art models. The paper also provides mechanistic insights into how increasing $\phi_r$ drives circuit formation in Transformers. Claims And Evidence: 1. Synthetic data augmentation raises $\phi_r$ above a threshold, enabling grokking in real-world KGs. Supported by experiments showing that increasing $\phi_r$ via synthetic multi-hop paths correlates with late-phase generalization (Figure 3). 2. Even factually incorrect synthetic data can improve reasoning by forcing reliance on relational structure. Partially supported by qualitative analysis showing improved OOD accuracy despite synthetic noise. However, no explicit ablation studies compare factual vs. non-factual synthetic data. 3. Transformers trained with this method achieve SOTA results on 2WikiMultiHopQA. Validated by results in Table 3, where their GPT-2-small model outperforms GPT-4o and o1-mini on structured tasks. Methods And Evaluation Criteria: **Methods**: - **Data Synthesis**: Augment KGs with LLM-generated atomic/inferred facts to boost $\phi_r$. - **Training**: Train GPT-2-style Transformers with prolonged optimization (300k steps) to induce grokking. **Evaluation**: - **Benchmark**: 2WikiMultiHopQA (structured/unstructured subsets). - **Metrics**: In-distribution (ID) vs. out-of-distribution (OOD) accuracy.
**Strengths**: - The focus on $\phi_r$ as a key lever for grokking is novel and grounded in prior theoretical work. - Evaluation across both structured (triplet-based) and unstructured (paragraph-based) settings adds robustness. **Weaknesses**: - No comparison to retrieval-augmented methods, which are common in multi-hop QA. Theoretical Claims: The paper builds on grokking theory [1] but does not introduce new theoretical proofs. Instead, it adapts existing concepts (e.g., generalization thresholds for $\phi_r$) to real-world KGs. The lemmas on asymptotic bounds for multi-hop paths (Appendix A.2–A.4) are intuitive but lack formal proofs. [1] Grokking: Generalization beyond overfitting on small algorithmic datasets Experimental Designs Or Analyses: **Strengths**: - Clear ID/OOD splits to measure generalization. - Multiple seeds and architecture details ensure reproducibility. **Weaknesses**: - **Synthetic Data Generation**: The process relies on LLM prompting but lacks transparency (e.g., prompts not fully disclosed). - **Baselines**: Comparisons to GPT-4o are unclear due to potential data leakage (GPT-4 may have seen 2WikiMultiHopQA during pretraining). Supplementary Material: The appendix includes: - Proof sketches for lemmas (informal). - Qualitative failure cases (e.g., ambiguous entity resolution). - Data synthesis prompts (partial). **Missing**: Full prompts, hyperparameter details, and code for reproducibility. Relation To Broader Scientific Literature: The work bridges grokking [1] and multi-hop QA [2,3]. It extends grokking beyond synthetic tasks, aligning with efforts to study emergent reasoning in Transformers [4]. However, it does not discuss connections to neuro-symbolic methods (e.g., logical reasoning modules).
- [1] Grokking: Generalization beyond overfitting on small algorithmic datasets - [2] HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering - [3] Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps. - [4] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization, Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths**: - First to apply grokking to real-world KGs. - Mechanistic analysis of circuit formation. **Weaknesses**: - Limited scalability analysis (e.g., larger KGs or models). - Risks of synthetic hallucinations are acknowledged but not quantified. Other Comments Or Suggestions: N/A Questions For Authors: How does the required $\phi_r$ threshold vary with model size? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and constructive feedback. **A. On transparency and reproducibility:** We appreciate the emphasis on methodological clarity. In the final version, we will provide full details of the training procedure, including the exact prompts used for structured and unstructured data synthesis, temperature and sampling parameters, and other relevant hyperparameters. We will also include the target responses used for comparison and make code and data available to support reproducibility. **B. On comparisons with large pre-trained models:** We agree that comparing against models such as GPT-4o presents challenges due to potential training data leakage, especially with benchmarks like 2WikiMultiHopQA, which overlap with common pretraining corpora such as Wikipedia. We note that this limitation affects most publicly available models and motivates our decision not to report IID/OOD splits for those baselines. Nonetheless, the fact that our grokked GPT-2-small model surpasses these models in structured comparison tasks underscores the potential of $\phi_r$-driven grokking as a viable, lightweight alternative for reasoning over factual knowledge. **C. On model scaling and $\phi_r$ thresholds:** We are currently conducting experiments across a range of GPT2 model sizes (124M–1.5B parameters) to evaluate the effect of scaling on generalization dynamics. Preliminary results suggest that while grokking behavior remains consistent, larger models tend to reach generalization more rapidly. Notably, the critical $\phi_r$ threshold required for generalization appears stable across model sizes. Please also refer to our response to Reviewer hZJa (Section A) for more details. **D. On theoretical rigor:** We are in the process of finalizing complete versions of the theoretical results, including clearly stated assumptions and formal proofs to replace the current sketches in Appendix A.2–A.4. 
These revisions will be included in the final version.
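To illustrate the inferred-to-atomic ratio discussed throughout this thread, a minimal sketch on hypothetical triples (the entities and relations are made up, and the ratio is pooled rather than per-relation, as a simplification for illustration): 2-hop chains are composed from atomic facts and the ratio of inferred to atomic facts is taken.

```python
# Hypothetical atomic facts (head, relation, tail); all names are made up.
atomic = {("a", "r1", "b"), ("b", "r2", "c"), ("b", "r2", "d"), ("e", "r1", "b")}

# Index facts by head so 2-hop chains (h -r1-> m -r2-> t) can be composed.
by_head = {}
for h, r, t in atomic:
    by_head.setdefault(h, []).append((r, t))

inferred = {(h, f"{r1}/{r2}", t2)
            for h, r1, m in atomic
            for r2, t2 in by_head.get(m, [])}

phi = len(inferred) / len(atomic)
print(sorted(inferred), phi)  # 4 inferred 2-hop facts over 4 atomic facts -> phi = 1.0
```

Adding more outgoing edges to the bridge entity "b" raises the inferred count without adding proportionally many atomic facts, which is the mechanism by which augmentation pushes the ratio past the generalization threshold.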
Summary: This paper explores extending the phenomenon of "grokking"—where neural networks transition from memorization to generalization after prolonged training—from synthetic tasks to real-world factual reasoning. The authors address the challenge of dataset sparsity in real-world knowledge graphs by proposing a data augmentation strategy to increase the ratio ($\phi$) of inferred facts to atomic facts beyond the threshold required for grokking to emerge. This augmentation process includes the addition of both factually correct and, in some cases, incorrect synthetic data, with the latter aiming to encourage reliance on relational structures rather than memorization. The paper evaluates its approach on the 2WikiMultiHopQA benchmark, focusing on multi-hop reasoning tasks. Results show that a GPT2-small model achieves up to 95-100% accuracy on comparison tasks after grokking occurs. The paper also provides a formal theoretical framework to define necessary conditions for knowledge graphs to be "generalizable," focusing on the branching factor of relations. These findings suggest that enhancing the inferred-to-atomic fact ratio is a critical factor for enabling robust generalization circuits in transformers. The authors acknowledge that while the benchmarks demonstrate improvements, further investigation is needed to evaluate the applicability of these methods to other domains and real-world scientific discovery tasks. Claims And Evidence: The primary claims -- that grokking can be extended from synthetic to real-world factual reasoning, that the $\phi$ ratio threshold provides a figure of merit for grokking, and the theoretical bounds necessary for generalization -- are well supported by the work. The late-phase out-of-distribution jump demonstrated in Figure 3-b illustrates the general premise, and considerable attention is given to the formalism and bounds. 
Methods And Evaluation Criteria: The paper uses data augmentation to increase the ratio of inferred-to-atomic facts in knowledge graphs, testing on 2WikiMultiHopQA across structured/unstructured formats. In-distribution and out-of-distribution evaluations are used to quantitatively assess generalization capabilities, and experimental evidence aligns with theoretical results. Theoretical Claims: The paper presents two key theoretical claims: 1) an asymptotic bound on the number of n-hop paths in knowledge graphs as a function of entities, branching factor, and edge probability; and 2) necessary conditions for a knowledge graph to be "generalizable" based on relation-specific branching factors. The sketched proofs seem like they could be made more rigorous with more clearly articulated assumptions, but they provide sufficiently compelling intuition for the paper -- I'm not sure whether "Lemma" is the appropriate word for the results. Experimental Designs Or Analyses: The experimental design effectively isolates the impact of the $\phi$ ratio on grokking and provides valuable comparisons across structured/unstructured formats. Minor weaknesses include testing only on GPT2-small and the lack of statistical significance testing. Supplementary Material: Yes, I reviewed the lemma calculations. Relation To Broader Scientific Literature: I see no missing material on the relation of the paper's results to broader scientific literature. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None that are fair to ask, as I think they are follow-on work to this paper -- there are a lot of interesting questions about the $\phi$ scaling with architecture and model size that I hope to see the authors pull the thread on in the future. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. **A. On model size and generalization:** We appreciate the suggestion regarding scaling effects. We are currently conducting additional experiments across GPT2 model sizes (124M to 1.5B parameters). Preliminary results indicate that the grokking behavior is qualitatively consistent across sizes, especially on structured data. However, larger models tend to reach generalization more quickly. We will include these findings, along with supporting graphs and analysis, in the final version. Notably, the critical ratio $\phi$ remains stable across model sizes. **B. On applicability to other domains:** We agree that extending this work to broader domains is an important next step. We are actively exploring applications beyond fact-based reasoning, including logic-intensive tasks such as mathematical problem solving, and real-world scientific domains. These follow-up studies aim to evaluate the generality of $\phi$-driven generalization. We believe this line of inquiry could inform the design of data-efficient training strategies in broader AI applications. **C. On theoretical rigor:** Thank you for the feedback on the formal results. We are currently refining the theoretical section to provide complete, rigorous proofs with clearly stated assumptions. The final version will replace the current sketches with detailed derivations, and we will revise the terminology (e.g., replacing “Lemma” if more appropriate) to ensure precision. We will also discuss the limitations and scope of the theoretical claims to better contextualize the results.
Code-Generated Graph Representations Using Multiple LLM Agents for Material Properties Prediction
Accept (poster)
Summary: This paper presents Rep-CodeGen, a framework using multiple LLM agents to generate, evolve, and evaluate code for obtaining graph representations of crystal structures that follow physical constraints. The representation obtained thereby is tested for constraint satisfaction and performance in materials property prediction, in comparison to baseline methods. ## update after rebuttal The authors have answered my questions well. I've raised my score accordingly; however, my concern remains that proposing a new representation is not a task for which an LLM could be very valuable. The contribution to materials science is limited. Claims And Evidence: The main claims, which are related to the 3 key questions in Sec. 5, are well-supported. However, my concerns are about the validity of the 6 constraints and the usefulness of the Rep-CodeGen method (see Other S/W and Questions). Methods And Evaluation Criteria: The methods for comparing materials property prediction performance and testing constraint satisfaction make sense. Theoretical Claims: The proofs of constraint satisfaction look correct. However, whether all constraints in Sec. 3.2 are necessary is questionable (see Questions). Experimental Designs Or Analyses: Experimental design and analyses are sound. The comparison of evolution ability (Sec. 5.3) needs a little clarification (see Questions). Supplementary Material: I reviewed Appendices A, C, and E. Relation To Broader Scientific Literature: Creating new representations of materials structures that satisfy *necessary* physical constraints could help ML-based materials modeling and design. Essential References Not Discussed: Related works to my knowledge are comprehensively discussed. Other Strengths And Weaknesses: This work uses an LLM to "create" a new graph representation through code generation. However, designing this new representation is not beyond human capability, given that the goal specifies which constraints to satisfy.
Once a function for generating such a representation is written, it can be reused for all materials. There are not many constraints of this kind, so this "representation creation" process will not need to be repeated so many times that it requires an LLM to automate (not to mention that using LLMs decreases trustworthiness). Given these, I don't see an advantage of using an LLM here. Other Comments Or Suggestions: None. Questions For Authors: * Materials representations need to trade off conciseness and expressiveness. Sec. 3.2 lists 6 types of constraints, but are they all necessary for representing materials? - Enforcing reflection equivariance (rather than invariance) helps distinguish chirality, which has little or no impact on the properties tested in this study but increases the complexity of the representation. - For crystals with known lattice type and atomic radii, a cutoff radius for formulating graphs could be reasonably chosen, whereas enforcing Lipschitz continuity may result in different structures with a small distortion (e.g., Jahn–Teller) not being well distinguished. * Lots of different code pieces are generated during the evolution process; some may contain certain merits that others miss. Is there a systematic way to combine them? * In Sec. 5.3, is the "base" code provided by the user or generated by each LLM from the prompt? * (Just for curiosity) Why choose to generate code instead of a natural language description? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time and thoughtful evaluation, as well as your recognition of our methodological design, theoretical foundations, and experimental framework. We are particularly grateful for your recognition of our motivation, "Creating new representations of materials structures that satisfy necessary physical constraints could help ML-based materials modeling and design". Below, we will address your concerns point by point. Q1: Materials representations need to trade off conciseness and expressiveness. Are all six types of constraints necessary? R1: We agree that representations should balance simplicity and expressiveness. At the same time, the importance of the six types of constraints proposed in this paper has been substantiated in corresponding studies (e.g., ComFormer, PerCNet, and PDD). These constraints help capture the structural information of crystals and enhance the accuracy of property prediction. Improving the prediction accuracy of ML-based methods can benefit materials science in various aspects. Most importantly, our framework offers fully customizable constraints for crystal representation. Domain experts can freely modify these constraints, such as physical constraints, representational simplicity constraints, and other specific constraints, based on application needs, enabling the creation of tailored crystal characterization codes for various scenarios. We selected six representative constraints to demonstrate our framework's capability to generate crystal representations that simultaneously satisfy multiple constraints that existing algorithms fail to meet collectively. Q2: How to combine merits from different code pieces? Is the 'base' code provided by the user? R2: In the initial iteration, we manually provide the 'base' code to Agent_A. This code is designed to meet fundamental properties, including permutation, rotation, and translation invariance.
It is worth noting that the base code can alternatively be generated by LLMs. The merits of the generated codes are integrated throughout our framework's process. We not only retain the codes (produced by Agent_B) that meet specific constraints but also document the reasoning (merits and demerits) by Agent_C. Then, the parent codes with merits can be effectively selected through Formulas 1 and 2. Subsequently, the codes and their corresponding reasoning are sent back to Agent_A. Agent_A evaluates the codes and their rationale, devising an improved plan accordingly. Q3: Why choose to generate code instead of a natural language description? (Using LLMs decreases trustworthiness.) R3: This question helps shed light on our motivation. The primary advantage of code lies in its superior clarity and interpretability. Its structured nature allows humans to easily understand the reasoning behind the LLM's outputs, thereby boosting trust in its results. In contrast, natural language descriptions of crystal structures lack the same level of transparency, making it harder to discern the rationale behind the LLM's conclusions and potentially undermining confidence. Q4: "Given these, I don't see an advantage of using LLM here." R4: It is worth noting that our goal is to assist human experts in accelerating materials research instead of completely replacing them. This acceleration manifests in two aspects. First, while human experts excel in their specialized fields, they often invest significant effort in acquiring cross-disciplinary skills, such as coding or understanding graph learning. Rep-CodeGen enables materials science professionals to concentrate on their core expertise by reducing the need to master complex computer science concepts, thereby saving both time and resources. Second, the generated representations can be viewed as a novel source of knowledge that can inspire researchers. Rep-CodeGen shows the capability to rapidly generate a diverse array of solutions.
For instance, to ensure that representations satisfy periodic equivariance (i.e., changes in lattice coordinates should be reflected in the representation), current graph representation methods typically rely on obtaining sufficiently large cutoffs. By contrast, Rep-CodeGen introduces innovative strategies beyond the traditional cutoff method, such as converting periodic data into length and angle metrics, incorporating angles between lattice and edge vectors, and proposing supercell configurations extending unit cells along periodic dimensions. --- Rebuttal Comment 1.1: Comment: I appreciate the Authors' time and effort. My concerns regarding clarity are well addressed; I see a good point in generating code instead of natural language. My concern remains that proposing a new representation is not a repetitive/laborious task for which an LLM could be very valuable. I'll change my rating accordingly.
Summary: This paper introduces a novel framework named Rep-CodeGen, which leverages multiple Large Language Model (LLM) agents to automatically generate code for obtaining graph representations of material properties. The primary contributions of this work are threefold. First, the paper proposes an interpretable framework capable of automatically generating code to derive graph representations, particularly effective when addressing new constraints. This represents the first framework of its kind to automate the generation of code for graph representations. Second, through the generated code, the paper achieves a graph representation that satisfies six distinct constraints, including permutation invariance, rotation invariance, reflection equivariance, Lipschitz continuity, periodicity equivariance, and translation invariance. Lastly, extensive experiments on two real-world material datasets demonstrate that the material property prediction method based on this graph representation achieves state-of-the-art performance across multiple tasks. ## update after rebuttal After reviewing the authors’ rebuttal, my questions regarding experimental details, the evolution process, and constraint handling have been satisfactorily addressed. I have no further concerns and maintain my recommendation of Accept. Claims And Evidence: Yes. Specifically, the claims made in the paper can be divided into four parts. First, the paper asserts that Rep-CodeGen is the first framework for automatically generating codes to obtain representations that can be used when facing new constraints. This claim is supported by the fact that no prior work has adopted this approach for generating crystal representations, as demonstrated in Section 2.1. Second, the paper claims that the framework is indeed capable of generating representations that satisfy new constraints. This claim is substantiated by the experimental results presented in Sections 5.1 and 5.3. 
Third, the paper proposes a representation that satisfies six constraints using the Rep-CodeGen framework. This claim is validated in Section 5.1 and further supported by the code provided in Appendix Code 1 and the proof section. Finally, the paper claims that the obtained representations achieve state-of-the-art (SOTA) performance in crystal property prediction tasks, which is evidenced by the results of Experiment 2. Methods And Evaluation Criteria: Yes. The authors' approach of using a multi-agent framework to generate crystal representation codes is well-founded and logical. Additionally, the experimental design is appropriate and aligns with the research objectives. The datasets and evaluation metrics employed for the crystal property prediction task are publicly available and widely recognized in the field, further validating the suitability of the proposed methods and criteria. Theoretical Claims: Yes. The authors provide proofs in the appendix demonstrating that the representations generated by the Rep-CodeGen framework satisfy the six proposed constraints. No significant issues were identified in the reasoning or validity of the proofs. Experimental Designs Or Analyses: Yes, the experimental designs and analyses presented by the authors are sound and valid. Specifically: 1. Experiment 1: The authors introduce the generation results of the Rep-CodeGen framework, compare the generated representations with other representations, and provide theoretical proofs demonstrating that the generated representations satisfy the proposed constraints. 2. Experiment 2: The authors evaluate the performance of the generated representations in crystal property prediction tasks, demonstrating their effectiveness. 3. Experiment 3: The authors compare the generation results of three different large language models (LLMs) when used independently versus in combination with the proposed framework, highlighting the advantages of their approach. 
Overall, the experiments are well-designed, and the analyses are thorough and logically consistent. Supplementary Material: Yes, I have reviewed all the context in the appendix. Relation To Broader Scientific Literature: The key contributions of this paper are closely tied to the broader scientific literature on crystal property prediction. Contributions to Literature: 1. This paper introduces a novel solution by proposing a multi-agent framework for automatically generating crystal representations that inherently satisfy the desired constraints. This approach represents a departure from traditional manual design methods. 2. The proposed framework generates representations that differ from existing ones and, for the first time, satisfies six specific constraints, addressing a gap in prior research. 3. The experimental results demonstrate that the generated representations outperform existing methods in crystal property prediction tasks, achieving higher accuracy. Essential References Not Discussed: This paper have discussed all the essential references. Other Strengths And Weaknesses: Strengths: 1. Originality: The approach to solving the problem is novel. The paper utilizes the multi-agent framework Rep-CodeGen to generate codes for crystal representations, which, compared to previous manual design methods, has the capability to satisfy unknown new constraints. Moreover, the paper is the first to propose a representation that satisfies six constraints using the Rep-CodeGen framework. 2. Experimental Design: The experimental setup is reasonable, and the results demonstrate that the proposed representation achieves state-of-the-art (SOTA) performance in crystal property prediction tasks. Weaknesses: Some experimental details should be more clearly specified. See suggestions and questions. Other Comments Or Suggestions: Suggestions: 1. It is recommended to provide the initial code to help readers better understand the extent of changes during the evolution process. 
2. It is recommended to provide the complete prompts, particularly the descriptions of the constraints, to enhance reproducibility and clarity. Questions For Authors: Questions: 1. Could there be situations where constraints conflict with each other, meaning that satisfying some constraints might make it impossible to satisfy others? If such cases arise, how does the proposed framework handle them? A clear explanation of this scenario and the resolution strategy would significantly impact the evaluation of the framework's robustness. 2. In the conclusion, the authors mention that the proposed representation method could also be applied to crystal generation. Could the authors provide more details on how this representation would be utilized in crystal generation tasks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your time and for recognizing the originality of our work, the contributions to the problem, and the experimental design. Below, we will address your suggestions and questions point by point. Q1: It is recommended to provide the initial code to help readers better understand the extent of changes during the evolution process. R1: Due to space limitations, we are unable to present the complete code here. The initial code will be provided in the appendix. In terms of design philosophy, the initial code determines neighbors of atoms within the unit cell based on a distance cutoff, and the features are limited to the distances between neighbors. However, the representation code that satisfies all conditions after the framework evolution differs in both the neighbor determination method and the features. Q2: It is recommended to provide the complete prompts, particularly the descriptions of the constraints, to enhance reproducibility and clarity. R2: We will include the complete prompts in the appendix. Below is our full description of the constraints: Permutation_invariance: Changing the atomic indices should not alter the graph representation. Rotation_invariance: Rotating the atomic coordinates should not change the graph representation. Reflection_equivariance: Performing a mirror symmetry on the lattice and atomic coordinates should result in a change in the graph representation. Lipschitz_continuity: If the coordinates of the crystal undergo continuous changes, the corresponding graph representation should also change continuously. This means the neighbor relationships of each atom (i.e., which atom is a neighbor) should not change, but the attributes of the corresponding neighbors (such as distance or angle) should vary accordingly.
Periodicity_equivariance: The graph representation should implicitly incorporate lattice information, meaning that any modification to the lattice coordinates (e.g., scaling the lattice by 1.5 times) should result in a change in the graph representation. Translation_invariance: Translating the atomic coordinates should not affect the graph representation. Q3: Could there be situations where constraints conflict with each other, meaning that satisfying some constraints might make it impossible to satisfy others? R3: Theoretically, this situation is highly unlikely to occur because these physical constraints are grounded in real-world conditions, making conflicts almost improbable. Moreover, even if conflicts were to arise, the framework would evolve in a direction that satisfies the majority of the constraints. Q4: In the conclusion, the authors mention that the proposed representation method could also be applied to crystal generation. Could the authors provide more details on how this representation would be utilized in crystal generation tasks? R4: First, taking state-of-the-art algorithms in crystal generation tasks, such as CDVAE[1], DiffCSP[2] and DiffCSP++[3], as examples, the graph representation we propose can serve as input to denoising models and be directly applied to crystal generation tasks. Second, the “constraints” that Rep-CodeGen can solve, are not limited to physical constraints but can also include arbitrary requirements from human experts. Therefore, our framework can generate new crystal representation codes that meet the needs of various scenarios, such as sequence-based crystal representations required by LLMs. [1]T. Xie, X. Fu, O.-E. Ganea, R. Barzilay, T. Jaakkola, Crystal diffusion variational autoencoder for periodic material generation, arXiv preprint arXiv:2110.06197 (2021). [2]R. Jiao, W. Huang, P. Lin, J. Han, P. Chen, Y. Lu, Y. 
Liu, Crystal structure prediction by joint equivariant diffusion, Advances in Neural Information Processing Systems 36 (2024). [3]Jiao, Rui, et al. "Space Group Constrained Crystal Generation." The Twelfth International Conference on Learning Representations. --- Rebuttal Comment 1.1: Comment: The author has answered my questions well. I don't have any more questions. I will keep my score as Accept.
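The constraint descriptions in the rebuttal above translate naturally into executable checks, which is the core idea of treating constraints as test sets. A minimal sketch (toy 2D representation and hypothetical helper names, not the paper's actual code) of testing permutation, rotation, and translation invariance:

```python
import math
import random

def toy_representation(coords):
    """A deliberately simple representation: sorted pairwise distances
    between 2D points, invariant to all three transforms by construction."""
    dists = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            dists.append(math.hypot(coords[i][0] - coords[j][0],
                                    coords[i][1] - coords[j][1]))
    return sorted(dists)

def close(a, b, tol=1e-9):
    return len(a) == len(b) and all(abs(x - y) < tol for x, y in zip(a, b))

def check_invariances(rep_fn, coords, seed=0):
    """Apply a random permutation, rotation, and translation and compare."""
    rng = random.Random(seed)
    base = rep_fn(coords)
    perm = coords[:]
    rng.shuffle(perm)
    th = rng.uniform(0, 2 * math.pi)
    rot = [(x * math.cos(th) - y * math.sin(th),
            x * math.sin(th) + y * math.cos(th)) for x, y in coords]
    dx, dy = rng.uniform(-5, 5), rng.uniform(-5, 5)
    shifted = [(x + dx, y + dy) for x, y in coords]
    return {
        "permutation": close(base, rep_fn(perm)),
        "rotation": close(base, rep_fn(rot)),
        "translation": close(base, rep_fn(shifted)),
    }
```

A candidate representation code fails the test set as soon as any entry comes back False, which is how unsatisfied constraints would be detected and fed back into the evolution loop.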
Summary: This paper proposes a novel code generation framework for material property prediction. LLM agents are employed to replace human experts and automatically generate code to process CIF files for GNN-based models. After this processing, the obtained input vectors are called representations. Due to the symmetry of crystals, the representation should satisfy a few constraints. This paper selects six widely used constraints as the target of the generated representations. Note that the code used from the beginning satisfies three types of constraints. As the framework evolves, the generated representations can satisfy six types of constraints, which did not exist before. This ability is summarized as "obtain representations that can be used when facing new constraints" by the authors. The experiments show how the generated codes process the CIF files. The main difference is the neighborhood selection compared to the expert-designed methods. Furthermore, the generated representations with an existing GNN-based property model can achieve SOTA performance in property prediction tasks. Claims And Evidence: Yes. There are two important claims in this paper. First, the authors claimed they found a new graph representation for crystalline materials. Second, the authors claimed the proposed code-generation framework can satisfy new constraints. The two claims are supported well by experimental results. For the first claim, the new representation is shown in Fig. 3 compared with two methods. For the second claim, the results in Table 2 show the evolution results among different LLMs. Methods And Evaluation Criteria: Yes. The proposed framework is tested on two widely used material datasets, namely JARVIS and the Materials Project. The LLM agents are tested on GPT-3.5, DeepSeek, and QianWen. Theoretical Claims: Yes, I have checked the proofs in Appendix E. The proofs show why the generated representation can satisfy the six constraints.
Most methods based on graph representations (e.g., CGCNN, ALIGNN, ComFormer) do not satisfy reflection equivariance and Lipschitz continuity. How is "too large" defined for Lipschitz continuity? Experimental Designs Or Analyses: Yes. The experiments are designed in accordance with the work on predicting crystal properties. Supplementary Material: Yes. I have reviewed Appendices A, C, and E in detail. Relation To Broader Scientific Literature: 1. The idea of this work holds significant importance for both materials and computer science. The test sets bridge the two fields. Normally, researchers need to investigate how to satisfy the constraints through sophisticated design. By contrast, this work converts the constraints into test sets and discards the representations that fail those test sets. In other words, this framework can not only solve the problem in materials science but can also deal with problems in proteins, given suitably designed test sets. 2. A novel graph-based representation that satisfies six constraints is found by the proposed framework, thereby achieving enhanced accuracy in crystal property prediction. Essential References Not Discussed: No important references are found to be missing. Other Strengths And Weaknesses: The main strengths of this paper are as follows: - Most existing studies based on LLMs focus on directly using these models to predict crystal structures, i.e., generating new structures. In contrast, this work explores how to generate representations of crystal structures using LLMs, which is both novel and fundamental. - The framework is effective in generating new codes for the representation, and the results are interesting and impressive. In particular, the results shown in Figure 3 are especially noteworthy, as they may inspire human experts with their unique representation. The weaknesses of this paper are as follows: - Section 5.3 is not very convincing.
The main focus of the proposed framework is to generate codes (representations) satisfying six constraints, but the experiments are stopped at five due to resource limitations. - Although the generated codes are interpretable, the paper does not provide detailed insights into understanding the logic and structure of these codes. There is also no discussion on how to further optimize these codes to improve performance. Other Comments Or Suggestions: typos: line 014 "crystalline material data..." Questions For Authors: 1. Why the results of GPT+Rep-CodeGen and Seek+Rep-CodeGen are 0 for five constraints in Table 2? Does this mean that the framework is only effective for QianWen? 2. The experiments used specific network architectures of PerCNet. Can the impact of network choices on the results be further discussed? 3. The test set is constructed manually, will it be better to employ the LLMs to generate the test set? 4. Will the framework get stuck in local optima when facing more complex constraints? If so, how can mechanisms be designed to avoid this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much, reviewers, for your time and for recognizing the innovation of our method and its contributions to the field. We will revise the paper carefully. Below, we summarize your reviews and provide a response to each point separately. Q1: The experiments are stopped at five due to resource limitations. Why are the results of GPT+Rep-CodeGen and Seek+Rep-CodeGen 0 for five constraints in Table 2? Does this mean that the framework is only effective for QianWen? R1: Although five constraints were chosen due to resource limits, this setting still provides a clear comparison between LLMs with and without our framework. The zero results for GPT+Rep-CodeGen and Seek+Rep-CodeGen occur because we capped the experiment at 1,000 programs to ensure a fair comparison. The results beyond three constraints also reflect the evolutionary capability of our framework. Thus, the framework is not limited to QianWen and can also be applied to other LLMs. Q2: Although the generated codes are interpretable, the paper does not provide detailed insights into understanding the logic and structure of these codes. There is also no discussion on how to further optimize these codes to improve performance. R2: In this paper, the framework's goal is to identify crystal representations that satisfy all constraints, and thus we treat the entire evolutionary process as a black-box procedure. Our framework enables better utilization of the interdisciplinary knowledge of LLMs while guiding them to optimize and generate code effectively. Since the entire process is automated by the LLM, not interpreting the intermediate codes does not affect the final outcome. Q3: The experiments used the specific network architecture of PerCNet. Can the impact of network choices on the results be further discussed? R3: We use PerCNet because it is currently the only network architecture in this field capable of handling dihedral angles.
We employ this network to ensure a fair comparison between the representations obtained by our framework and those of PerCNet. In the future, we plan to use LLMs to simultaneously generate both the representations and the corresponding network architectures, rather than solely relying on network architectures from other algorithms. Q4: The test set is constructed manually, will it be better to employ the LLMs to generate the test set? R4: Using LLMs to generate the test set is possible, but in terms of effectiveness, a manually designed test set would be better. Due to the hallucination issue of LLMs, they might generate answers that appear correct but are actually wrong. Since the test set determines the direction of evolution, a manually designed test set would be more reliable and effective. Q5: Will the framework get stuck in local optima when facing more complex constraints? If so, how can mechanisms be designed to avoid this? R5: It is unlikely to fall into local optima because our program selection method (i.e., Equation 1 in the paper) ensures that the two parent programs have a higher probability of satisfying different constraints, thereby maintaining the diversity of the parent programs.
Summary: This paper introduces Rep-CodeGen, a framework that uses multiple LLM agents to autonomously generate graph representations for material property prediction. Unlike traditional methods, Rep-CodeGen iteratively refines representations through crossover generation, evaluation summary, and parent selection, ensuring adaptability. The framework optimizes graphs to satisfy six material constraints, improving prediction accuracy. Experimental results demonstrate its superiority over conventional approaches. By integrating LLMs for automated representation learning, this work reduces reliance on domain expertise, offering a promising solution for AI-driven materials science. ## update after rebuttal During the rebuttal period, I interacted with the authors. The authors have answered my questions well, so I decided to raise my score for the paper. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I reviewed Appendices A, B, C and E. Relation To Broader Scientific Literature: The 'new' representations can benefit materials virtual screening. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strength:** The generated codes are more interpretable, allowing humans to read and edit them easily. This AI knowledge may further inspire human researchers. This paper shows that a good representation can improve the prediction results with the same model. **Weakness:** Please see the questions. Other Comments Or Suggestions: I keep up with the literature in this area. Questions For Authors: 1. The neighborhood selection is the main difference between the 'new' representation and the existing methods. Why does this change affect the property predictions in Table 1? 2. The LLMs may be trained on the methods and their codes in Table 3. Thus, the 'new' representation may be a combination of the existing codes.
Then, how can the framework satisfy unseen constraints? 3. This framework costs a lot of tokens to generate a target code. How does the cost compare, in both time and money, to DFT calculations? Besides, what is the size of the LLMs in Table 2? As LLMs become more and more powerful, a new version of GPT or DeepSeek may accomplish this task well. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable time and your recognition of our work's motivation, novel contributions, and experimental design. We provide point-to-point responses to your questions as follows. Q1: The neighborhood selection is the main difference between the 'new' representation and the existing methods. Why does this change affect the property predictions in Table 1? R1: The neighbor selection method in the generated representation enhances the graph's capability to capture long-range atomic interactions. As a result, the energy-related properties are improved. Specifically, the interactions between atoms are not limited to adjacent atoms, and long-range interactions can extend to distant atoms, which play a key role in the stability of crystals. Taking ionic crystals as an example, the Coulomb force between ions is a type of long-range interaction with a relatively large range of action. Cations and anions are attracted to each other through the long-range Coulomb force, which maintains the structural stability of the crystal. In our representation, neighbors are determined based on periodic regions instead of a cutoff value, which can cover the atoms over a longer distance. Besides, the long-range interactions also affect the electrical and thermal properties of crystal materials. Q2: The LLMs may train on methods and their codes in Table 3. Thus, the 'new' representation may be a combination of the existing codes. Then, how can the framework satisfy the unseen constraints? R2: LLMs can be trained not only on materials science research papers but also on works from other fields, such as protein design and drug discovery. With all these training resources, LLMs have a better understanding of atom interactions inside the materials. Therefore, we aim to guide the LLMs in generating the desired representations. 
To the best of our knowledge, the neighborhood selection for the new representation, which is based on periodic regions, has not appeared in existing methods. As shown in Table 2, the representations satisfying four or five constraints can be viewed as satisfying unseen constraints. Among these representations, we also observed 'new' representations that differ from existing methods. Q3. This framework costs a lot of tokens to generate a target code. How does the cost compare to DFT calculations in terms of both time and money? Besides, what is the size of the LLMs in Table 2? As LLMs become more and more powerful, a new version of GPT or DeepSeek may accomplish this task well. R3: The cost of Rep-CodeGen is minimal compared to DFT calculations. We utilize the Qwen2.5-Coder-7B model to develop three agents, opting for the 7B version over Qwen2.5-Coder-32B primarily due to cost considerations. While the Qwen2.5-Coder models are open source, the operational expense of the 32B model is significantly higher than that of the 7B model. Additionally, the 7B model offers considerably faster inference. Our aim is to create a framework accessible to a broad spectrum of researchers. Specifically, we use only a single NVIDIA RTX 3090 (24 GB) GPU for all experiments. A larger LLM can benefit the framework, but a single one may not accomplish the task well, as shown in Table 3. GPT-3.5 (gpt-3.5-turbo in our experiments) and DeepSeek (DeepSeek-Coder-v2 in our experiments) represent a larger general-purpose LLM and a larger code-generation LLM, respectively. Nonetheless, their performance remains unsatisfactory.
Faster Rates for Private Adversarial Bandits
Accept (poster)
Summary: This work studies the adversarial bandit problem where the reward function can vary across stages and considers a differential privacy guarantee. The author proposes a novel algorithm within a batched learning framework, adding Laplace noise to the average reward in each batch to ensure differential privacy. Compared with previous work, this approach achieves a better dependency on the differential privacy parameter $\epsilon$ while still maintaining a $\sqrt{T}$ regret. Claims And Evidence: The author provides a clear claim of the result in the theorems and includes a proof sketch to outline the key steps of the theoretical analysis. Methods And Evaluation Criteria: The main contribution of this work focuses on the theoretical analysis of regret, with the simulation serving as an auxiliary tool to support its efficiency, which is reasonable and sufficient. Theoretical Claims: The author provides a clear proof sketch outlining the key steps of the theoretical analysis. The critical step involves transitioning from the Non-Private to the Private setting. With the batched learning framework, it becomes reasonable to reduce the magnitude of the Laplace noise added to the average reward function, saving a factor of $\epsilon$. On the other hand, the batched updating in an adversarial environment introduces an additional dependency of $\sqrt{\epsilon}$. Therefore, the improvement in the dependency on $\epsilon$ appears reasonable. However, a concern regarding correctness arises with the logarithmic dependency on the number of rounds $T$. Based on the proof sketch, the only stated improvement concerns $\epsilon$, and it remains unclear whether the batch learning framework with a batch size of $1/\epsilon$ can effectively remove the logarithmic dependency on $T$ compared to previous work. 
Experimental Designs Or Analyses: The main contribution of this work focuses on the theoretical analysis of regret, with the simulation serving as an auxiliary tool to support its efficiency, which is reasonable and sufficient. Therefore, there is no need to evaluate the soundness or validity of the experimental designs or analyses. Supplementary Material: No, due to time limitations, I only reviewed the main paper and did not check the supplementary material. Relation To Broader Scientific Literature: This work mainly focuses on providing a differential privacy guarantee for the adversarial bandit problem, which may be related to other works on bandit analysis. Essential References Not Discussed: This paper provides a comprehensive discussion of related work in adversarial bandits and private bandits. Other Strengths And Weaknesses: Compared with previous works in the adversarial bandit problem, this work relies on a stronger assumption of an oblivious adversary, where all reward functions are chosen at the beginning and cannot adapt based on the agent's previous actions. Therefore, it is not entirely fair to directly compare the results with prior work, as the improvement may stem from this assumption. It would be beneficial to explicitly highlight this difference in the introduction and include it in the comparison table. Other Comments Or Suggestions: The definition of differential privacy should be refined. For instance, in Theorem 4.5, the algorithm provides a $(\epsilon, \delta)$ privacy guarantee, but the earlier definition only involves $\epsilon$, making it unclear what $\delta$ represents. Additionally, in lines 236-237, there is a contradiction between the stated upper bound and the previously established lower bound. This discrepancy appears to stem from different definitions of privacy (local vs. non-local differential privacy). 
However, the paper does not clearly define local differential privacy or explain what specifically accounts for the gap in the regret bound. Clarifying these aspects would strengthen the theoretical consistency of the work. Questions For Authors: The results for Bandits with Experts are somewhat confusing, and several questions arise: 1. While the Generic Conversion method is well-motivated, it is unclear which of the three different results for Bandits with Experts is the best, and what specific advantages the other algorithms provide. A clearer comparison of these results would be helpful. 2. Based on the proof sketch, the proposed method appears to be a general framework applicable to any algorithm. However, it is unclear why this method fails to achieve a $1/\sqrt{\epsilon}$ dependency in the second and third regret guarantees. A more detailed explanation of this limitation would be beneficial. 3. Regarding the Adversarial Bandit result and the first result for Bandits with Experts, the proof sketch suggests that the only stated improvement concerns $\epsilon$. However, it remains unclear whether the batch learning framework with a batch size of $1/\epsilon$ can effectively remove the logarithmic dependency on $T$ compared to previous work. Clarifying this aspect would improve the theoretical discussion. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address the reviewer's main concern below and hope that they will reevaluate their score accordingly. > Therefore, it is not entirely fair to directly compare the results with prior work, as the improvement may stem from this assumption. While adaptive adversaries are well-studied in non-private bandit literature, oblivious adversaries have been the main focus in the private bandit literature. In particular, there are only two previous works that study private adversarial bandits, namely Tossou \& Dimitrakakis (2017) and Agarwal \& Singh (2017). In the last two paragraphs of Section 2.1, Tossou \& Dimitrakakis (2017) make explicit that they first consider an oblivious adversary. They then note that their guarantees can be applied to an $m$-bounded adaptive adversary. As for Agarwal \& Singh (2017), Theorem 4.1 makes it clear that they only consider an oblivious adversary as the sequence of loss vectors is fixed before the game begins. Nevertheless, if one considers an adaptive adversary, then the lower bound from Asi et al. (2023) shows that sublinear regret is not possible under pure differential privacy even for a constant $\epsilon$. Moreover, for approximate differential privacy, the lower bounds from Asi et al. (2023) show that sublinear regret is not possible if $\epsilon \leq \frac{1}{\sqrt{T}}.$ Thus, the adaptive adversary case is not as interesting because existing algorithms are already optimal (up to log factors). > The definition of differential privacy should be refined... Our definitions of differential privacy (see Definitions 2.3 and 2.4) do include $\delta$, and this is the standard $\delta$ which people use to capture failure probability. > Additionally, in lines 236-237, there is a contradiction between the stated upper bound and the previously established lower bound. Our algorithm obtaining the upper bound in Corollary 3.2 satisfies the notion of central differential privacy. 
Therefore, the lower bound from Basu et al. (2019) does **not** apply here as it holds for the stricter notion of local differential privacy. We will clarify this in the paper. We will also make sure to include an explicit definition of local differential privacy in the camera-ready version. > (Q1) While the Generic Conversion method is well-motivated, it is unclear which of the three different results for Bandits with Experts is the best, and what specific advantages the other algorithms provide. None of the regret guarantees of our three algorithms strictly dominates the others. Below, we highlight three regimes and the corresponding algorithm that obtains the best regret bound in that regime. We will make sure to include this detailed comparison in the final version. **Low-dimension and High Privacy ($N \leq K$):** In this regime, our upper bound $O(\frac{\sqrt{NT}}{\sqrt{\epsilon}})$ is superior. **High-dimension and Low Privacy ($N \geq K$ and $\epsilon \geq \frac{K}{N}$):** In this regime, our second upper bound $O(\frac{\sqrt{KT \log N} \log(KT)}{\epsilon})$ is superior. **High-dimension and High Privacy ($N \geq K$ and $\epsilon \leq \frac{1}{\sqrt{T}}$):** In this regime, our third upper bound $O(\frac{N^{1/6}K^{1/2}T^{2/3}\log(NT)}{\epsilon^{1/3}} + \frac{N^{1/2}\log(NT)}{\epsilon})$ is superior. These are all regimes that are interesting in practice, so it's important to get the best rates for each of them. > (Q2) However, it is unclear why this method fails to achieve a $1/\sqrt{\epsilon}$ dependency in the second and third regret guarantees. The second guarantee (Theorem 4.4) actually does not use the batching method. Instead, it simply adds Laplace noise to each loss vector. Thus, one should not expect to get a $\frac{1}{\sqrt{\epsilon}}$ here, as this approach actually gives a stronger, local DP guarantee. We will make this more explicit in the camera-ready version. 
As for the third guarantee, this is more subtle and occurs because more noise needs to be added to privatize the batched expert losses. We will include a more complete discussion in the camera-ready version. > (Q3) However, it remains unclear whether the batch learning framework with a batch size of $1/\epsilon$ can effectively remove the logarithmic dependency on $T$ compared to previous work... Our technique allows us to improve upon existing regret bounds in terms of both $\epsilon$ and $T$. As the reviewer noted, our batching with noise technique allows us to improve the dependence on $\epsilon$ from $1/\epsilon$ to $1/\sqrt{\epsilon}.$ However, our technique also allows us to *expand* the set of bandit algorithms that we can privatize beyond EXP3. In particular, we are now able to privatize *heavy-tailed* bandit algorithms which do not have $\log{T}$ terms in their regret bounds, one of which is the HTINF algorithm from Huang et al. (2022). Because our reduction carries over the regret bound from the base bandit algorithm, we are able to remove a log factor in $T.$ --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal and it addresses my concern regarding the definitions of local differential privacy and centralized differential privacy. I will maintain my positive score.
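The batching-plus-noise conversion discussed in this rebuttal thread can be sketched in a few lines. The batch size and noise calibration below are illustrative assumptions, not the paper's exact parameters, and `private_batched_feedback` is a hypothetical name:

```python
import math
import random

def private_batched_feedback(losses, base_update, epsilon, rng=None):
    """Illustrative sketch of the batch-and-add-Laplace-noise conversion:
    rounds are grouped into batches of size ~1/epsilon, the losses observed
    in each batch are averaged, Laplace noise is added, and the noisy
    average is fed once per batch to the base (non-private) bandit update."""
    rng = rng or random.Random(0)
    batch_size = max(1, math.ceil(1.0 / epsilon))
    # Changing one round's loss (bounded in [0, 1]) shifts the batch average
    # by at most 1/batch_size, so Laplace noise of scale 1/(batch_size * epsilon)
    # privatizes the batch -- an assumed, simplified calibration.
    scale = 1.0 / (batch_size * epsilon)
    buffer = []
    for loss in losses:
        buffer.append(loss)
        if len(buffer) == batch_size:
            avg = sum(buffer) / batch_size
            # Laplace(0, scale) as a difference of two i.i.d. exponentials.
            noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
            base_update(avg + noise)  # base algorithm must handle unbounded/negative losses
            buffer.clear()
```

Note that the noisy averages can be unbounded and negative, which is why the reduction targets base algorithms (such as heavy-tailed ones like HTINF) that tolerate such losses.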
Summary: The paper presents novel differentially private algorithms for adversarial bandits and bandits with expert advice. The primary contribution is an efficient conversion method that transforms any non-private bandit algorithm into a differentially private one, leading to improved regret bounds. The proposed algorithms achieve regret upper bounds of $O(\sqrt{KT/\epsilon})$ (the SOTA had an additional log term) for adversarial bandits, which improves on existing results. Additionally, for bandits with expert advice, the paper introduces the first differentially private algorithms with various regret bounds that cater to different settings of actions, experts, and privacy parameters. ## update after rebuttal I have read the discussions and I keep my assessment unchanged Claims And Evidence: The paper claims an improvement over the previous best regret bound. This improvement is significant because it ensures sublinear regret even when $\epsilon \leq 1/\sqrt{T}$, which was not previously achieved. The claim is supported by mathematical proofs demonstrating regret upper bounds under different adversarial conditions. The study establishes a fundamental separation between central and local differential privacy in adversarial bandits, proving that sublinear regret is possible for $\epsilon \in o(1/\sqrt{T})$ under central differential privacy, whereas this is not achievable under local differential privacy. Three different private algorithms are introduced, achieving different regret bounds with various techniques. Methods And Evaluation Criteria: The paper primarily employs theoretical analysis to evaluate the effectiveness of the proposed algorithms. No numerical experiments are reported. Theoretical Claims: Generic Conversion Framework: The paper establishes a (very simple) framework to convert any non-private bandit algorithm into a private one while maintaining sublinear regret. The core technique involves batching rounds and adding Laplace noise to the observed losses. 
Lower Bounds and Limits: The authors prove that it is difficult to achieve an additive separation between $\epsilon$ and $T$ in the adversarial bandit setting, unlike in full-information settings. They also show that standard EXP3-based approaches are challenging to privatize effectively due to compounding privacy loss over multiple rounds. Experimental Designs Or Analyses: nothing here Supplementary Material: The supplementary material includes the proofs for theoretical results, a detailed analysis of privacy-preserving mechanisms, a discussion on fundamental barriers to achieving even faster rates for private adversarial bandits, and algorithmic pseudocode for the proposed differentially private bandit algorithms. Relation To Broader Scientific Literature: The paper builds upon and extends several foundational works in adversarial bandits and differential privacy, with no remarkably original mechanism or proof scheme. Essential References Not Discussed: The interest of the contribution is justified by the application of bandits to "online advertising, medical trials, and recommendation systems" but the relevance of adversarial bandits in these settings would deserve references. Other Strengths And Weaknesses: + The proposed framework (Alg 1) applies to any non-private bandit algorithm; it is very simple and natural. + The introduction of batching and noise addition significantly improves private bandit performance. - No experimental results are presented to support theoretical findings. - The paper focuses purely on theoretical settings without discussing real-world implementations. Other Comments Or Suggestions: It would be useful to contrast the results with private stochastic bandits to understand trade-offs. Questions For Authors: How do the proposed algorithms compare experimentally to the SOTA? Could alternative privacy mechanisms improve regret rates? How does the proposed approach generalize to contextual bandits? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewers for their comments. We address their concerns below and hope they will reevaluate their score accordingly. > No experiments. We acknowledge the concern regarding the lack of experiments. However, our work is intentionally theoretical, aimed at understanding fundamental rates and limits of private adversarial bandits. Theoretical contributions often provide key insights that guide future empirical research. While experimental validation is certainly valuable, we believe that our results are meaningful in their own right and align with the norms of theoretical research in this area. > (Q1) How do the proposed algorithms compare experimentally to the SOTA? Our paper is mainly theoretical in nature. That said, we agree with the reviewer that this is an important direction of future work. > Could alternative privacy mechanisms improve regret rates? This is a great question. We believe that one way to improve our regret bounds is through the use of the Binary Tree Mechanism, which has been used to obtain private regret guarantees in the full-information setting. However, our current attempts at doing so have been unsuccessful. In particular, some subtleties arise in the bandit setting as one needs to deal with unbiased estimates of the true loss, whose sensitivity can be massive. We will make sure to discuss this in the camera-ready version. > How does the proposed approach generalize to contextual bandits? The bandits with expert advice setting can be viewed as the contextual version of the traditional adversarial bandit setting [1]. Accordingly, our results in Section 4 show that our techniques do extend to the contextual setting. Table 1 summarizes the exact regret bounds we achieve for the bandits with expert advice setting. [1] Auer, Peter, et al. "The nonstochastic multiarmed bandit problem." SIAM Journal on Computing 32.1 (2002): 48-77.
Summary: This paper presents novel differentially private (DP) adversarial bandit algorithms with improved dependency on the DP parameter $\epsilon$. It improves the regret bound from $O(\sqrt{KT\log KT}/\epsilon)$ to $O(\sqrt{KT/\epsilon})$, ensuring no-regret even when $\epsilon \sim 1/\sqrt{T}$. Moreover, the paper's construction is general in that it provides a framework for constructing DP adversarial bandit algorithms from a baseline adversarial bandit algorithm with unbounded and negative losses. Claims And Evidence: This paper is mainly theoretical and its claims are supported by proofs. Methods And Evaluation Criteria: N/A Theoretical Claims: I could not find any specific issues with their theorems, particularly their main results, Theorem 3.1 and Theorem 4.2, which provide a general conversion from a class of bandit algorithms to differentially private ones. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper extends the line of research on DP online learning, particularly DP adversarial bandit algorithms. Their results demonstrate that no-regret learning is achievable for any privacy parameter $\epsilon \in \omega(1/T)$ in the oblivious adversary setting under the central DP model. This setting is optimal in the sense that it is known that no-regret learning is impossible for $\epsilon \in o(1/\sqrt{T})$ in either the adaptive adversary setting or the local DP setting. Their improved dependence on $\epsilon$ implies a better trade-off between privacy and utility, advancing the frontier of privacy-preserving algorithms. Essential References Not Discussed: All essential references are discussed to the best of my knowledge. Other Strengths And Weaknesses: **Strengths** - This paper provides a simple conversion technique from adversarial bandit algorithms with unbounded losses over the reals to a DP adversarial bandit algorithm with improved rates. 
This could have a broader impact due to its simplicity and its independence from the details of the baseline (host) algorithm. - They provide a lower bound of $\Omega(\sqrt{T/\epsilon})$ for a class of algorithms, including EXP3 and its batched variants. **Weakness** - While they have shown, through experiments and quantitative arguments, that Theorem 3.3 of Tossou & Dimitrakakis (2017) is problematic, it is unclear whether Lemma 5.1 in this paper definitively rules out one of Tossou & Dimitrakakis (2017)'s algorithms claiming a regret of $\widetilde{O}\left(\frac{T^{2/3}\sqrt{K\log K}}{\epsilon^{1/3}}\right)$ (This is also my question: Does Lemma 5.1 rule it out?). Other Comments Or Suggestions: In **Line 1850**, shouldn't "...$M_t:[K]^{i-1}$..." be "...$M_t:[K]^{t-1}$..."? Questions For Authors: Please refer to the strengths & weaknesses section above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and for noting that our work advances the frontier of privacy-preserving algorithms and could have a broader impact. We address the reviewer's main concern below and hope they reevaluate their score accordingly. > Does Lemma 5.1 rule it out? The reviewer is correct in that Lemma 5.1 does not rule out the possibility that the algorithm from Tossou \& Dimitrakakis (2017) obtains a regret of $O(\frac{T^{2/3} \sqrt{K \log K}}{\epsilon^{1/3}})$ as this upper bound is larger than our lower bound $\Omega(\sqrt{T/\epsilon})$ in Lemma 5.1. Nevertheless, even if one considers the upper bound $O(\frac{T^{2/3} \sqrt{K \log K}}{\epsilon^{1/3}})$, our upper bound of $O(\sqrt{KT/\epsilon})$ in Corollary 3.2 is strictly better for all $\epsilon \leq 1$. > In Line 1850, shouldn't...? Yes, this is a typo and the reviewer is correct. We will make sure to fix this in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. After consideration, I have raised my score. I believe achieving a better privacy-accuracy trade-off through a relatively simpler method is a valuable contribution to the community.
Summary: This paper studies adversarial bandit problems and bandit problems with expert advice, and introduces a differentially private algorithm that achieves better regret bounds than previous approaches. Claims And Evidence: The writing is not clear enough. See the weakness part for details. Methods And Evaluation Criteria: They make sense. Theoretical Claims: I didn't check the details of the proofs. Most of them seem true. Experimental Designs Or Analyses: There is no experiment. Supplementary Material: I went over the appendix but did not check the details. Relation To Broader Scientific Literature: Their results improve upon previous regret bounds. Essential References Not Discussed: The paper investigated most of the related work. Other Strengths And Weaknesses: Strengths: The problem in the paper is interesting. And it is good to give the lower bound. Weaknesses: 1. The writing is not clear enough. For example, in lines 43-44, "Motivated by this gap", which gap? In lines 71-73, "it is well known that this is not the case for local differential privacy." Then what is your contribution? What is the logic of the sentence? 2. There are no experimental results. 3. See the Questions part for more details. Other Comments Or Suggestions: See the Questions part for more details. Questions For Authors: 1. The regret of $O(\sqrt{KT\log K}/\epsilon)$ is for the local differential privacy model. You consider classical DP in your paper, i.e., central differential privacy. How can you compare your CDP results with LDP since they are different DP models? 2. You give 3 different DP private bandit algorithms and get 3 regret bounds. What are the same points and differences among the three algorithms? What is your motivation to design them? 3. You define Definition 2.3 on history $\{\mathcal{H}_t\}$. What is the exact information you want to protect? Loss function or action? Which part is sensitive information? 4. 
Why did you choose pure differential privacy, not approximate DP, for your work? 5. Compare your results with related work. For example, compare the result of Corollary 3.2 with non-private HTINF and explain the effect of privacy. 6. There is a gap of a factor of $\sqrt{K}$ between your upper bound and lower bound. Could you please explain it? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address their concerns below and hope they will reevaluate their score accordingly. > in line 43-44, "Motivated by this gap", which gap? By this phrase, we are referring to our comment in the previous sentence on lines 38-40: "it was not known how large $\epsilon$ needs to be to obtain sublinear expected worst-case regret." We will clarify this in the final version. > "it is well known that this is not the case for local differential privacy." then what is your contribution? In this paper, we study the weaker notion of *central differential privacy*. Our results (Corollary 3.2) show that under central differential privacy constraints, sub-linear regret is possible even when $\epsilon \leq \frac{1}{\sqrt{T}}.$ This is not the case under the strict notion of *local differential privacy*, where sublinear regret is **not** possible if $\epsilon \leq \frac{1}{\sqrt{T}}.$ Hence, our results establish a separation between central and local differential privacy in terms of when sublinear regret can be achieved. > No experiments We acknowledge the reviewer's concern. However, our work is intentionally theoretical, aimed at understanding fundamental rates and limits of private adversarial bandits. > (Q1) The regret of $O(\sqrt{KT \log K}/\epsilon)$ is for the local differential privacy model... Since local differential privacy is stronger than central differential privacy, any $\epsilon$-locally differentially private algorithm is also an $\epsilon$-centrally differentially private algorithm. Accordingly, comparing our regret bounds against $O(\sqrt{KT \log K}/\epsilon)$ *is still meaningful since this is the only known regret bound for adversarial bandits under central differential privacy*. That is, we don't intend to compare our CDP results with LDP; it just happens to be the case that the previous best known regret bound for adversarial bandits under CDP is $O(\sqrt{KT \log K}/\epsilon)$. 
> (Q2) What are the same points and differences among the three algorithms? For bandits *with expert advice*, we give three different algorithms/regret bounds to cover different combinations of small and large $K, N$ and $\epsilon.$ In particular, amongst the three algorithms we give, none of their regret bounds strictly dominates the others -- there are certain choices of $K, N$ and $\epsilon$ where the regret bound of one algorithm is better than the others. Below, we highlight three regimes and the corresponding algorithm that obtains the best regret bound in that regime. We will include this detailed comparison in the final version. **Low-dimension and High Privacy ($N \leq K$):** Our upper bound $O(\frac{\sqrt{NT}}{\sqrt{\epsilon}})$ is superior. **High-dimension and Low Privacy ($N \geq K$ and $\epsilon \geq \frac{K}{N}$):** Our second upper bound $O(\frac{\sqrt{KT \log N} \log(KT)}{\epsilon})$ is superior. **High-dimension and High Privacy ($N \geq K$ and $\epsilon \leq \frac{1}{\sqrt{T}}$):** Our third upper bound $O(\frac{N^{1/6}K^{1/2}T^{2/3}\log(NT)}{\epsilon^{1/3}} + \frac{N^{1/2}\log(NT)}{\epsilon})$ is superior. These regimes are all interesting in practice, so it's important to get the best rates for each one. With regard to the algorithms, two out of the three, namely those obtaining the guarantees in Corollary 4.3 and Theorem 4.5, use the batching plus noise technique. The last algorithm, the one obtaining the guarantee in Theorem 4.4, does not batch and adds noise to each loss vector. > (Q3) What is the exact information you want to protect? We are taking *only* the sequence of loss functions as the sensitive information. This is made explicit in our definition of differential privacy in Definition 2.4 and is standard in the private bandit literature. > (Q4) Why did you choose pure differential privacy? Pure differential privacy is a stronger privacy guarantee and often easier to derive minimax rates for. 
Surprisingly, and unlike under full-information online learning, we found that even under pure differential privacy, the minimax rates for adversarial bandits were not known. This motivated our study of pure differential privacy. > (Q5) Compare your results with related work. We will make sure to include a comparison in the final version. For HTINF, the non-private version achieves a regret bound of $O(\sqrt{TK})$ whereas our private version achieves a regret bound of $O(\frac{\sqrt{TK}}{\sqrt{\epsilon}} + \frac{1}{\epsilon}).$ So, the blow-up is by a factor of $O(\frac{1}{\sqrt{\epsilon}}).$ > (Q6) There is a gap of the factor of $\sqrt{K}$ between your upper bound and lower bound. This gap comes from the fact that in our lower bound we only consider a setting with two arms. It should be possible to upgrade our lower bound with a factor of $K$ using the standard strategy for proving lower bounds for adversarial bandits, but we leave this for future work.
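To make the three-regime comparison from this rebuttal concrete, a hypothetical helper can evaluate the stated bounds up to constants (natural logarithms and the regime labels are illustrative assumptions) and report which is smallest:

```python
import math

def expert_bandit_bounds(K, N, T, eps):
    """Evaluate the three regret bounds stated in the rebuttal, up to
    constants (illustrative only; log bases and labels are assumptions)."""
    b1 = math.sqrt(N * T / eps)                                   # O(sqrt(NT)/sqrt(eps))
    b2 = math.sqrt(K * T * math.log(N)) * math.log(K * T) / eps   # O(sqrt(KT log N) log(KT)/eps)
    b3 = (N ** (1 / 6) * math.sqrt(K) * T ** (2 / 3) * math.log(N * T) / eps ** (1 / 3)
          + math.sqrt(N) * math.log(N * T) / eps)
    return {"low-dim": b1, "per-loss-noise": b2, "high-dim-high-privacy": b3}
```

For instance, with few experts ($N \leq K$) the first bound wins, while with many experts and mild privacy the per-loss-noise bound takes over, matching the regime split in the rebuttal.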
CAN: Leveraging Clients As Navigators for Generative Replay in Federated Continual Learning
Accept (poster)
Summary: This paper introduces the Clients as Navigators (CAN) method, which tackles catastrophic forgetting in FCL tasks due to heterogeneous client data. CAN introduces a novel Generative Replay strategy that differs from existing methods by selecting teaching clients for generator training and using an adaptive data distillation process to combat forgetting, dynamically adjusting to the client's needs. ## update after rebuttal The author responses match my expectations. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. The methods are well evaluated. Theoretical Claims: No specific theoretical claims are made. Experimental Designs Or Analyses: Yes, the experimental design is comprehensive, and the authors effectively validate the method's effectiveness through extensive experiments. Supplementary Material: Yes, I have reviewed the code submitted by the authors. It is well-organized and facilitates a clear understanding of the proposed method. Relation To Broader Scientific Literature: The client-centric approach proposed by the authors offers valuable insights for the Federated Continual Learning (FCL) field, as previous research has rarely explored this perspective. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths 1. This paper is well-motivated and provides a thorough analysis of why clients' expert knowledge is critical. The work has potential value across multiple related fields. 2. The paper's figures are color-coordinated and well-constructed, making it very easy for me to understand the content. 3. The experiments are thorough, and the experimental results sufficiently demonstrate the effectiveness of the method. Weaknesses 1. The challenges associated with FCL mentioned in the article are very interesting. To address these challenges, the authors proposed CAN, which has been shown to perform well through extensive experimentation. 
However, the methodological improvements of CAN do not seem substantial compared to the baseline LANDER. I am quite curious about the specific methodological improvements. 2. Although the author has already provided the settings for some of the hyperparameters in the experiments, I would like to know the settings for other important parameters, such as the number of communication rounds, the number of clients, and so on. Other Comments Or Suggestions: None Questions For Authors: I am curious whether the buffer size affects the allocation algorithm mentioned in the article. When the buffer size is small enough, is there a possibility that the accuracy of some labels is too low to allow for proper allocation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer TT97: We appreciate your engagement with our work and the thoughtful observations you made. We aim to address your concerns in our detailed responses below, hoping to provide clarity and demonstrate the effectiveness of our proposed approach. ### Response to Weaknesses > **W1:** The challenges associated with FCL mentioned in the article are very interesting. To address these challenges, the authors proposed CAN, which has been shown to perform well through extensive experimentation. However, the improvements in methodology that CAN seems to be not substantial compared to the baseline LANDER. I am quite curious about the specific methodological improvements. **Our Response:** Thank you for your interest in the methodological contributions of CAN. A similar concern regarding the distinction between CAN and prior approaches such as LANDER was raised by Reviewer aG9E. To avoid redundancy, we have provided a detailed explanation in our response to Reviewer aG9E, outlining the key innovations of CAN in terms of client-side expert supervision and adaptive buffer replay based on forgetting patterns. We kindly refer you to that response for the complete clarification. > **W2:** Although the author has already provided the settings for some of the hyperparameters in the experiments, I would like to know about the settings for other important parameters, such as the number of communication rounds, the number of clients, and so on. **Our Response:** Thank you for your suggestion. To clarify the experimental setup, we provide the key hyperparameter configurations in the table below. These settings were consistently applied across all experiments unless otherwise specified. 
| Hyperparameters | Value | | :------------------: | :---: | | Communication Rounds | 100 | | Number of Clients | 5 | | Local Epochs | 2 | | Local Batch Size | 128 | | Synthesis Batch Size | 256 | ### Response to Questions > **Q1:** I am curious whether the buffer size affects the allocation algorithm mentioned in the article. When the buffer size is small enough, is there a possibility that the accuracy of some labels is too low to allow for proper allocation? **Our Response:** Thank you for raising this important question. We agree that in scenarios with small buffer sizes, the accuracy on certain labels or tasks might be very low, which could potentially lead to unstable or skewed buffer allocations. To mitigate this issue, we introduce a predefined lower-bound threshold $ \epsilon $ in the calculation of the forgetting weight (Equation 7). Specifically, we define: $$ w_c^t = \frac{1}{\max(\text{Acc}_c^t, \epsilon)} $$ This ensures that when the classification accuracy $ \text{Acc}_c^t $ is too low, the forgetting weight is capped to avoid extreme values. By preventing division by near-zero accuracies, this threshold stabilizes the allocation algorithm and ensures a reasonable distribution of buffer space even under small-buffer regimes. The value of $ \epsilon $ is empirically chosen (set to 0.1 in our experiments) and does not require tuning across datasets.
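To make the role of the threshold concrete, the capped forgetting weight can be sketched in a few lines of Python (the function name and accuracy values below are illustrative, not taken from our released code):

```python
def forgetting_weight(acc: float, eps: float = 0.1) -> float:
    """Forgetting weight w_c^t = 1 / max(Acc_c^t, eps).

    The lower bound eps prevents division by a near-zero accuracy,
    capping the weight at 1 / eps so that a single badly forgotten
    class cannot dominate the buffer allocation.
    """
    return 1.0 / max(acc, eps)

# A class with 50% accuracy gets weight 2.0; a class whose accuracy
# has collapsed to 2% is capped at 1 / 0.1 = 10.0.
print(forgetting_weight(0.5))   # 2.0
print(forgetting_weight(0.02))  # 10.0
```

With $\epsilon = 0.1$ the weight never exceeds 10, which is what keeps the allocation stable in the small-buffer regime.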
Summary: CAN addresses the challenges imposed by non-IID data in Federated Continual Learning (FCL) and introduces a novel approach that leverages the specialized expertise of individual clients. By employing Expert-Driven Data Synthesis, CAN enhances the quality and representativeness of the generated data, ensuring effective retention of client-specific knowledge. Moreover, the integration of Adaptive Replay optimizes the utilization of synthesized data, improving memory efficiency and model stability over time. Collectively, these components enable CAN to improve continual learning performance within the Federated Learning framework, effectively addressing key issues in knowledge retention and adaptation across heterogeneous clients. Claims And Evidence: Yes Methods And Evaluation Criteria: The methods are logically structured, and the proposed approach effectively mitigates the problem of catastrophic forgetting in federated continual learning. Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: Experiments are sound; extensive experiments are conducted across three datasets (CIFAR100, TinyImageNet, ImageNet100) with varying non-IID degrees. Supplementary Material: Reviewed Appendix A (hyperparameters) and B (cost analysis). Code structure in the supplement is clean. Relation To Broader Scientific Literature: Builds on FCL works like TARGET and LANDER but uniquely integrates client expertise. Essential References Not Discussed: None Other Strengths And Weaknesses: Pros: 1. The article is thoroughly composed, featuring well-organized figures and tables that effectively support the main findings. 2. The proposed approach is particularly compelling, as leveraging client-specific expertise offers a novel perspective and valuable insights for the research community. 3. Experimental results demonstrate that CAN achieves state-of-the-art (SOTA) performance across various datasets and task configurations. Cons: 1. 
The reference model Π is described as being trained on synthetic data; however, its architecture and training parameters (e.g., number of epochs, optimizer details) are not provided, which complicates a complete understanding of this section. 2. Given that Federated Learning emphasizes communication efficiency, quantifying aspects such as the number of communication rounds and the computational cost associated with generative replay would enhance the study’s practical relevance. The paper currently provides only a descriptive account without experimental evidence to substantiate these claims. Other Comments Or Suggestions: It could be worth exploring how to apply such methods in Diffusion-based approaches within FCL. Questions For Authors: Please refer to Strengths And Weaknesses. No extra questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer z1gv: Thank you for your encouraging remarks and the critical questions you posed. We have reflected thoroughly on your feedback and provide our detailed responses below to further clarify our method and contributions. ### Response to Weaknesses > **W1**: The reference model Π is described as being trained on synthetic data; however, its architecture and training parameters (e.g., number of epochs, optimizer details) are not provided, which complicates a complete understanding of this section. **Our Response:** Thank you for the suggestion. A similar question regarding the reference model $ \Pi $ was raised by Reviewer aG9E. To avoid redundancy, we have provided a detailed response under that review, including training configuration, architecture, and computational overhead. We kindly refer you to our response to Reviewer aG9E for the complete explanation. > **W2:** Given that Federated Learning emphasizes communication efficiency, quantifying aspects such as the number of communication rounds and the computational cost associated with generative replay would enhance the study’s practical relevance. The paper currently provides only a descriptive account without experimental evidence to substantiate these claims. **Our Response:** Thank you for the suggestion! All experiments in our paper are conducted under a consistent setting of 100 communication rounds. To support our claims on computational efficiency, we provide a detailed comparison with DDDR (a diffusion-based method) and LANDER. As shown in the table below, CAN reduces overall computation cost by over 75% compared to DDDR (97 vs. 406 minutes). This highlights the advantage of avoiding multi-step diffusion generation, which is computationally intensive and less practical in federated settings. Compared to LANDER, CAN incurs the same client-side training cost (57 minutes), ensuring fairness in edge device usage. 
Although CAN introduces slightly more server-side cost due to expert-guided processing, we argue that this overhead is minor and acceptable, especially given that most computation in FL occurs on the client side. | | Local Training (min) | Image Generation (min) | Overall (min) | | :----: | :------------: | :--------------: | :-----: | | DDDR | 186 | 220 | 406 | | LANDER | 57 | 28 | 85 | | CAN | 57 | 40 | 97 | ### Response to Suggestions > **S1:** It could be worth exploring how to apply such methods in Diffusion-based approaches within FCL. **Our Response:** Thank you for the thoughtful suggestion. Integrating Diffusion-based approaches into the FCL setting is indeed a valuable direction. Compared to GANs, diffusion models are known to generate higher-quality and more diverse samples, which could further improve replay effectiveness in non-IID settings. However, applying diffusion in a federated and continual learning context also brings new challenges, such as communication efficiency and privacy preservation. We believe this is a promising extension and plan to explore how our expert-guided replay strategy could be adapted to this setting in future work.
Summary: The paper, focusing on the Federated Continual Learning (FCL) scenario, highlights two key observations: Client Expertise Superiority and Client Forgetting Variance. The study shifts attention from the server to the client and proposes using the unique capabilities of client-side knowledge to improve Generative Replay. This new perspective explores how Federated Learning can reduce forgetting in FCL tasks. Claims And Evidence: Yes, the claims made in this paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed method “CAN” is practical, easy to follow, and makes sense for the problem or application at hand. Theoretical Claims: This paper doesn’t include theoretical claims, just the necessary formulas. Experimental Designs Or Analyses: The experimental design is well-structured, and the tests were conducted on three widely used datasets: CIFAR100, TinyImageNet, and ImageNet100. However, the study falls short in including experiments with broader generalizability; see details in Weaknesses. Supplementary Material: Yes, the authors have submitted the code. Relation To Broader Scientific Literature: No. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and clearly organized, making it easy to understand. 2. Comparative experiments demonstrate that the proposed method achieves notable effectiveness in terms of computational cost and reduction of forgetting rates. 3. The proposed method does not compromise the privacy-preserving capabilities of the federated learning paradigm. Weaknesses Personally, I don’t see any big problems with the method described in the paper. What I’m really curious about are some of the experimental details. 1. Could more experiments be conducted under additional buffer sizes, such as 2560 and 5120? I’d like to see whether CAN can still maintain the same level of performance under larger buffer size conditions. 2. 
The paper does not mention the number of clients in experiments, which is a critical factor in federated learning. Could experiments be conducted in scenarios with a larger number of clients? 3. The "V phenomenon" mentioned in Sec. 5.3 suggests a trade-off between buffer size and data quality. The authors should discuss how to optimize this balance in practice. Other Comments Or Suggestions: Typo: Table 3 should refer to Sec. 5.4, not Sec. 5.3. Questions For Authors: I was wondering which dataset was used for the experiments in Table 3? It doesn’t seem to be mentioned in the paper. No other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer czny: We are truly grateful for your insightful review and the constructive feedback provided. Your suggestions helped us identify areas for clarification, and we respond to each point below with careful consideration. ### Response to Weaknesses > **W1:** Could more experiments be conducted under additional buffer sizes, such as 2560 and 5120? I’d like to see whether CAN can still maintain the same level of performance under larger buffer size conditions. **Our Response:** Thanks for your valuable insights! Following your advice, we conducted additional experiments under larger buffer sizes (2560 and 5120). As shown in the table below, CAN consistently outperforms LANDER across all settings, including under larger buffer budgets. Notably, even when the buffer is doubled (5120), CAN maintains a clear performance advantage under both IID and non-IID scenarios. These results further confirm the scalability and robustness of our method across different buffer regimes. | | LANDER(2560) | CAN(2560) | LANDER(5120) | CAN(5120) | | :-------: | :----------: | :-------: | :----------: | :-------: | | IID | 47.97 | **49.36** | 49.19 | **51.81** | | NIID(1.0) | 46.33 | **48.31** | 47.82 | **50.86** | | NIID(0.5) | 43.30 | **46.30** | 45.43 | **47.72** | | NIID(0.1) | 40.01 | **40.98** | 42.14 | **43.72** | > **W2:** The paper does not mention the number of clients in experiments, which is a critical factor in federated learning. Could experiments be conducted in scenarios with a larger number of clients? **Our Response:** We appreciate the suggestion. In our main experiments, we used 5 clients by default. To evaluate scalability under more realistic federated settings, we conducted additional experiments with 10 and 15 clients. As shown in the table below, CAN consistently outperforms LANDER across all client counts, especially under non-IID scenarios. 
These results demonstrate that CAN maintains its effectiveness even as the number of clients increases, confirming its robustness in more complex and distributed environments. | | LANDER (10) | CAN (10) | LANDER(15) | CAN(15) | | :-------: | :---------: | :-------: | :--------: | :-------: | | IID | 45.53 | **46.39** | 45.32 | **45.68** | | NIID(1.0) | 43.00 | **44.64** | 41.79 | **42.65** | | NIID(0.5) | 41.53 | **42.81** | 40.02 | **40.25** | | NIID(0.1) | 27.89 | **28.86** | 22.73 | **24.28** | > **W3:** The "V phenomenon" mentioned in Sec. 5.3 suggests a trade-off between buffer size and data quality. The authors should discuss how to optimize this balance in practice. **Our Response:** We appreciate the reviewer’s insightful observation. This behavior arises from the interplay between buffer size and adaptive allocation efficiency. When the buffer is small, the system allocates limited replay slots more selectively to the most forgotten tasks, resulting in efficient knowledge retention. As the buffer increases, this selective effect weakens, and additional samples contribute less to mitigating forgetting, leading to a temporary dip in performance. In practice, we suggest starting from a moderate buffer budget and gradually increasing it while monitoring key metrics such as retention accuracy on early tasks and per-task replay effectiveness. If performance plateaus or degrades, it may indicate that further buffer increase offers diminishing returns. Such observations can guide the choice of buffer size under memory or communication constraints. ### Response to Suggestions > **S1:** Typo: Table 3 should refer to Sec. 5.4, not Sec. 5.3. **Our Response:** Thank you for pointing this out. We acknowledge the typo and will revise the reference from Sec. 5.3 to Sec. 5.4 in the final version of the paper. ### Response to Questions > **Q1:** I was wondering that which dataset was used for the experiments in Table 3? It doesn’t seem to be mentioned in the paper. 
No other questions. **Our Response:** Thanks for pointing this out! The experiments presented in Table 3 were conducted on the CIFAR-100 dataset. This table reports the final accuracy on the first task after the completion of all tasks, which serves as an important metric to evaluate catastrophic forgetting. We will clarify it in the final version of the paper.
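As a concrete illustration of the adaptive allocation discussed in our response to W3 above, a proportional scheme over the forgetting weights could look like the following sketch (the function, the rounding policy, and the numbers are illustrative assumptions, not our released implementation):

```python
def allocate_buffer(accuracies, budget, eps=0.1):
    """Split a replay-buffer budget across tasks in proportion to
    their forgetting weights w_t = 1 / max(Acc_t, eps): the more a
    task has been forgotten, the more replay slots it receives."""
    weights = [1.0 / max(a, eps) for a in accuracies]
    total = sum(weights)
    slots = [int(budget * w / total) for w in weights]
    # Hand out slots lost to integer rounding, most-forgotten tasks first.
    remainder = budget - sum(slots)
    order = sorted(range(len(slots)), key=lambda i: -weights[i])
    for i in order[:remainder]:
        slots[i] += 1
    return slots

# Three tasks: well retained, moderately forgotten, badly forgotten.
print(allocate_buffer([0.8, 0.4, 0.05], budget=1280))
```

Under a small budget this scheme concentrates replay on the worst-forgotten tasks, which matches the selective-allocation behaviour we describe for the left side of the "V".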
Summary: This paper examines Federated Continual Learning, specifically exploring how client expertise aids in generating replay data and adjusting replay buffer sizes based on the forgetting variance among clients. The CAN approach first identifies accurate expert clients, utilizing their predictions to train the generator. By promoting discrepancies between the server model (S) and reference model (Π), it extracts fine-grained features. Additionally, the replay buffer is dynamically managed during training to minimize further forgetting. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The authors provide multiple formulas to explain their method, and the theoretical claims are correct. Experimental Designs Or Analyses: The experimental results are comprehensive, with the authors conducting various experiments under different settings to ensure thorough evaluation. Supplementary Material: The authors provide the algorithm code, which is highly beneficial for understanding the method’s pipeline and the details of related approaches. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths 1. The idea of the method is intuitive and easy to follow. 2. The authors provide a clear motivation and a well-structured discussion of related work. 3. The experiments conducted across different settings effectively demonstrate the strategy’s effectiveness and generalizability. Weaknesses 1. The paper mentions that CAN is similar to TARGET [1] and LANDER [2]. However, as far as I know, both methods fall under the category of Data-Free Knowledge Distillation (DFKD). The authors should provide a more detailed distinction and explanation regarding how their approach differs from these methods. 2. The explanation of the Synthetic Data Initialization section is unclear, which raises concerns. The authors should provide a more detailed and precise clarification of this part. 3. 
The origin of the Reference Model is not clearly explained. Does it require training from scratch? What is the computational overhead? How many training iterations are needed, and how are the training parameters set? These aspects should be addressed in more detail. [1] Zhang, J., Chen, C., Zhuang, W., and Lyu, L. Target: Federated class-continual learning via exemplar-free distillation. In CVPR, pp. 4782–4793, 2023. [2] Tran, M.-T., Le, T., Le, X.-M., Harandi, M., and Phung, D. Text-enhanced data-free approach for federated class-incremental learning. In CVPR, pp. 23870–23880, 2024. Other Comments Or Suggestions: The **Expert-Driven Data Synthesis** section lacks a clear focus, making it difficult to grasp the key points. The authors need to improve the logical flow of this section. Questions For Authors: Please refer to the Strengths and Weaknesses section for specific details. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer aG9E: We greatly value the time and expertise you invested in reviewing our submission. Your feedback has been instrumental in helping us improve the clarity of our work. We address your comments in detail below. ### Response to Weaknesses > **W1:** The paper mentions that CAN is similar to TARGET and LANDER. However, as far as I know, both methods fall under the category of Data-Free Knowledge Distillation (DFKD). The authors should provide a more detailed distinction and explanation regarding how their approach differs from these methods. **Our Response:** We appreciate your point regarding the similarity between CAN and existing Data-Free Knowledge Distillation (DFKD) methods. While all three methods adopt a generative replay framework, CAN introduces two key innovations that fundamentally distinguish it from TARGET and LANDER: 1. **Client Expertise Superiority**: Unlike TARGET and LANDER, which rely solely on the global server model for guiding data synthesis, CAN explicitly identifies and leverages *client-side experts* to guide generation. Specifically, our *expertise map* selects the best-performing client for each class, whose model is then used to supervise the generator through an *Expert Navigation Loss*. This enables class-specific fine-grained guidance that cannot be captured by a centralized server alone. 2. **Client Forgetting Variance**: Prior methods use uniform replay buffers across clients, overlooking the heterogeneous forgetting behavior induced by non-IID data. In contrast, CAN profiles each client’s unique forgetting patterns and adapts the buffer allocation accordingly, improving both performance and efficiency. > **W2:** The explanation of the Synthetic Data Initialization section is unclear, which raises concerns. The authors should provide a more detailed and precise clarification of this part. **Our Response:** Thank you for pointing this out. 
We agree that the Synthetic Data Initialization section could benefit from more detailed clarification. Below we provide a clearer description of the process. The initialization begins by sampling class-conditional latent codes $ z_y \sim \mathcal{N}(0, I) $, where $ y $ denotes the target class. These latent codes are passed through the generator $ G $ to obtain synthetic images $ x_{\text{syn}} = G(z_y) $. We then use the server model to perform inference on each $ x_{\text{syn}} $, obtaining predicted class probabilities. To ensure that the generated samples are aligned with their intended class labels, we compute an entropy-based loss between the prediction and the target label $ y $. This loss encourages the generator to produce more discriminative and class-consistent samples. Once the initial set of synthetic data is generated, we train the reference model $ \Pi $ from scratch on this data. The purpose of $ \Pi $ is to capture fine-grained features potentially missed by the server model. We then compare the predictions of $ \Pi $ and the server on synthetic samples and apply a KL divergence loss (described as $ \mathcal{L}_{\text{gap}} $) to encourage the generator to produce samples that expose the discrepancy between the two models. This results in more informative and transferable data for replay. We will revise Section 4.2.1 in the final version to clearly explain this. > **W3:** The origin of the Reference Model is not clearly explained. Does it require training from scratch? What is the computational overhead? How many training iterations are needed, and how are the training parameters set? These aspects should be addressed in more detail. **Our Response:** Thank you for the helpful comment! Please allow us to explain it. The purpose of the reference model $ \Pi $ is to guide the generator to produce more diverse samples. $ \Pi $ is trained from scratch on synthetic data and shares the same architecture as the server model (a ResNet-based classifier). 
This helps encourage the generation of informative, boundary-sensitive synthetic data, rather than overfitting to overly confident or less representative examples. We train it for 40 epochs using SGD with a learning rate of 0.1, momentum 0.9, and weight decay 0.0002. In terms of computational overhead, $ \Pi $ is only used during the training phase to guide the generator and is lightweight and isolated from client participation. It introduces less than 5% additional training time, and does not affect inference or deployment, thus incurring minimal cost in practice. These implementation details will be included in the appendix. ### Response to Suggestions > **S1:** The **Expert-Driven Data Synthesis** section lacks a clear focus, making it difficult to grasp the key points. The authors need to improve the logical flow of this section. **Our Response:** Thanks for your valuable suggestions to improve the presentation of our paper! We will revise the *Expert-Driven Data Synthesis* section to improve its clarity and logical flow in the final version.
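To make the discrepancy objective $ \mathcal{L}_{\text{gap}} $ from our response to W2 concrete, here is a minimal pure-Python sketch (the logits, function names, and the exact sign convention for the generator's objective are illustrative assumptions; our implementation operates on framework tensors):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two categorical distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def gap_loss(server_logits, reference_logits):
    """Discrepancy between the server model S and the reference model Pi
    on a synthetic sample. The generator is trained to increase this
    divergence, so we return its negative as a loss to be minimized,
    pushing synthesis toward samples on which the two models disagree."""
    p = softmax(server_logits)
    q = softmax(reference_logits)
    return -kl_divergence(p, q)

# Identical predictions give zero gap; disagreement yields a negative loss.
print(gap_loss([2.0, 0.5, 0.1], [2.0, 0.5, 0.1]))
print(gap_loss([2.0, 0.5, 0.1], [0.1, 2.5, 0.3]))
```

Minimizing this negative divergence is what steers the generator toward boundary-sensitive samples rather than overly confident ones.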
On the Clean Generalization and Robust Overfitting in Adversarial Training from Two Theoretical Views: Representation Complexity and Training Dynamics
Accept (poster)
Summary: I happen to review this paper again. This paper has almost no changes when compared to the previous version. This paper studies the Clean Generalization and Robust Overfitting (CGRO) of neural networks under adversarial training. It studies CGRO from two views: representation complexity and training dynamics. A CGRO classifier is defined as one whose clean test error is small and robust training error is small, but whose robust test error is large. Representation complexity (Section 4): the main result shows that a clean classifier requires $poly(D)$ parameters and a CGRO classifier requires $poly(D + ND)$ parameters, while a robust classifier requires $\exp(D)$ parameters. It shows the separation between clean/robust classifiers from approximation theory, i.e., Clean Classifier ($poly(D)$) ≲ CGRO Classifier($poly(D)+ND$) ≪ Robust Classifier (Ω($\exp(D)$)) In training dynamics, the results show that adversarial training makes the neural network conduct partial true feature learning and exactly memorize spurious features. That means, after adversarial training, the network correctly classifies unseen clean data with high probability (Theorem E.12), but fails to classify the adversarial examples generated from unseen clean data with probability at least 1/2. Claims And Evidence: I have the following two concerns: Clean Classifier ($poly(D)$) ≲ CGRO Classifier($poly(D)+ND$) ≪ Robust Classifier (Ω($\exp(D)$)) However, the results heavily depend on Assumption 4.3 rather than being proven, as mentioned by two reviewers as well. Though it may be observed from empirical results, it requires a careful theoretical demonstration. Regarding the training dynamics, the statement is incomplete as the comparison to a classifier under standard training is missing. In this case, it is unclear to us whether a model trained via standard (non-adversarial) training would only learn the true feature and ignore the spurious feature. 
Establishing this distinction is important, as it would suggest that CGRO arises from the nature of adversarial training itself rather than from the artificial construction of the data model. Methods And Evaluation Criteria: This is a theoretical paper and the numerical experiments can support the theoretical findings. Theoretical Claims: Yes, I checked the proof and mentioned it in the previous part on "Claims and Evidence". Experimental Designs Or Analyses: N/A Supplementary Material: Yes, I checked this supplementary material at a high level. Relation To Broader Scientific Literature: Yes, this topic is important to the machine learning community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive support and valuable feedback! We greatly appreciate the insightful review, and the recognition of highlighting the significance of our contribution and solidity of our theory, as well as the clarity of our writing. We are very glad to address the questions and suggestions raised by the reviewer, which we believe will help further refine our work. Below are our responses to the questions and suggestions raised by the reviewer. >**[C1]** However, the results heavily depend on Assumption 4.3 rather than proven, as mentioned by two reviewers as well. Though it may be observed from empirical results, it requires a careful theoretical demonstration. **[A1]** We sincerely thank the reviewer for the valuable and thoughtful suggestion. As the reviewer mentioned, we would like to clarify that the main theorem relies on Assumption 4.3, where we apply a teacher-student learning framework. This framework assumes the existence of a ground-truth neural network of moderate size that can achieve perfect clean classification. This teacher-student setup is widely used in deep learning theory (e.g., [1][2][3]). We also emphasize that the polynomial-size assumption arises from empirical observations, where mildly over-parameterized networks achieve good clean classification performance but poor robust classification performance (e.g., [4][5][6]), rather than from a rigorous mathematical proof. Furthermore, providing a lower bound for the representation complexity required to achieve robust generalization under Assumption 4.3 is highly non-trivial due to the complex decision boundary induced by the ground-truth polynomial-size network. To address this, we build on the technique from [7] to establish an exponential lower bound in the worst case. >**[C2]** Regarding the training dynamics, the statement is incomplete as the comparison to classifiers under standard training is missing. 
In this case, it is unclear to us whether a model trained via standard (non-adversarial) training would only learn the true feature and ignore the spurious feature? **[A2]** We sincerely thank the reviewer for the insightful suggestion. As the reviewer pointed out, we can prove that a model trained using standard (non-adversarial) training will only learn the true feature and ignore the spurious feature. We will include this statement in the revision of our paper. **Reference** [1] Allen-Zhu, Z., Li, Y., & Liang, Y. (2019). Learning and generalization in overparameterized neural networks, going beyond two layers. Advances in neural information processing systems, 32. [2] Lv, B., & Zhu, Z. (2022). Implicit bias of adversarial training for deep neural networks. In International Conference on Learning Representations. [3] Allen-Zhu, Z., & Li, Y. (2023, July). Backward feature correction: How deep learning performs deep (hierarchical) learning. In The Thirty Sixth Annual Conference on Learning Theory (pp. 4598-4598). PMLR. [4] Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., & Roli, F. (2013). Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13. Springer. [5] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. [6] Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. [7] Li, B., Jin, J., Zhong, H., Hopcroft, J., & Wang, L. (2022). Why robust generalization in deep learning is difficult: Perspective of expressive power. Advances in Neural Information Processing Systems, 35, 4370-4384.
Summary: This paper investigates the Clean Generalization and Robust Overfitting (CGRO) problem – defined as “robust overfitting and high clean test accuracy” (without clean overfitting/memorisation) – from perspectives of representation complexity and training dynamics. On the one hand, they show that under data assumptions of boundedness and well-separation, clean classifiers require $poly(D)$ complexity (assumption 4.3), CGRO classifiers require $poly(D) + \tilde{O}(ND)$, and adversarially robust classifiers require $exp(D)$. On the other hand, they demonstrate a three-stage phase transition during learning, to understand how a convolutional classifier (on structured data) converges to robust memorisation and CGRO under adversarial training. Claims And Evidence: The first claim compares the representation complexity for clean vs. CGRO vs. robust classifiers: Clean($poly(D)$) $\lesssim$ CGRO($poly(D) + \tilde{O}(ND)$) $\ll$ Robust($\Omega(exp(D))$). This claim relies on assumptions of bounded input, well-separated data for classification, and the existence of a ReLU clean classifier with $poly(D)$ complexity. I find these assumptions empirically reasonable for the experimental setups of CIFAR and MNIST, though I am unfamiliar with results on polynomial-sized ReLU networks for approximating clean classifiers. The second claim concerns the dynamics of adversarial training, that the network will partially learn the true feature for well-separated classes and will exactly memorise the spurious features of specific training data. They argue that the former demonstrates clean generalisation while the latter is a case of robust overfitting (since the data-wise random noise is used for memorisation). Besides the training regime, the authors additionally connect claim 2 to CGRO in the test data regime, where the clean test error and robust training error are both small while the robust test error is significant. From what I understand, this analysis appears sound. 
Methods And Evaluation Criteria: This is a predominantly theoretical work, where theoretical insights are cross-checked in practice. Experiments are conducted on the image modality (boundedness), on simple vision datasets of CIFAR-10 and MNIST (well-separated), with sufficiently expressive convolutional models of WideResNet-34 and LeNet-5, under standard $l_\infty$ PGD attacks. The clean and robust accuracies are recorded for training and unseen testing examples.

Theoretical Claims: Referencing the above section on "claims and evidence", I examined whether the assumptions of claim 1 (4.1-4.3) are reasonable in practice; followed the proof sketch of claims 1 (Section 4) and 2 (Section 5) in detail; and did a summary review of the claims' full proofs, comprising supplement sections D, E, F.

Experimental Designs Or Analyses: To verify the complexity of CGRO and robust classifiers, the authors vary the model size and record the resultant changes in robust training loss and robust generalization gap. To examine the dynamics under adversarial training, they additionally test on synthetic, structured data, replicating the three-stage phase transition and the phenomenon of CGRO. The experiments are well-aligned with the theory and provide numerical support.

Supplementary Material: I have reviewed the supplement, including section A, additional robust memorisation experiments on CIFAR-10 and MNIST; section B, outlining a robust generalisation bound based on the global flatness of the loss landscape; section C, lemmas and the tensor power method; section D, introduction to feature learning; section E, full proof for adversarial training dynamics and convergence to CGRO; section F, proof of Thm 4.4 and 4.7; section G, proof for section B's generalisation bound.

Relation To Broader Scientific Literature: This work contrasts the representation complexity of clean vs. CGRO vs. robust classifiers, which enriches our understanding of the properties and demands of adversarial robustness.
Furthermore, this work discusses the learning dynamics of networks under adversarial training, outlining the processes of partial true feature learning (which results in clean generalisation) and spurious memorisation of noisy, data-wise components (which results in robust overfitting). Together, this analysis sheds light on why CGRO eventually occurs, which is a long-standing open problem in understanding adversarial training.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses:
1. **Strength (Soundness) -** This paper analyses CGRO in adversarial training from the refreshing perspectives of representation complexity and training dynamics analysis. In my judgement, the analysis is technically sound and contributes to a better understanding of the challenges (exponential complexity) and processes (signal and noise) underlying adversarially robust classification.
2. **Weakness (Significance) -** This paper considers adversarial training and robust classifiers under highly structured data settings. It may be challenging to determine, for an arbitrary data modality, dataset and task, whether the assumptions hold and whether the theoretical insights apply.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. How would practitioners efficiently verify that a given task satisfies Assumptions 4.1-4.3 in order to leverage insights from claim 1 to construct their model and vary its capacity?
2. Can claim 2 (eventual CGRO and the three-stage phase transition) be demonstrated for a real, existing dataset without special construction?
3. Have the authors considered experimentally comparing against a standardly trained model baseline in Section 6.2 on the synthetically constructed dataset? This would serve as a relevant control to understand the unique dynamics of adversarial training.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback! We greatly appreciate the recognition of the novelty and significance of our contribution to the topic of adversarial robustness in the deep learning community, as well as the positive remarks on the clarity of our writing. We are very glad to address the questions and suggestions raised by the reviewer, which we believe will help further refine our work. Below are our responses to the questions and suggestions raised by the reviewer. >**[Q1]** How would practitioners efficiently verify that a given task satisfies assumptions 4.1-4.3 in order to leverage insights from claim 1 to construct their model and vary its capacity? **[A1]** For Assumption 4.1, due to the normalization (or centralization) of the data, we assume that Assumption 4.1 holds for general deep learning problems. For Assumption 4.2, we compute the $\ell_{p}$ distance between data from different classes in the training or test set, as done in the empirical work [1], to validate Assumption 4.2. For Assumption 4.3, we train a suitably-sized ReLU neural network (with network width equal to $\operatorname{poly}(D)$ and network depth $L$ as a constant, resulting in a total parameter count of $\operatorname{poly}(D)$) as a clean classifier. We then test its clean test accuracy and robust test accuracy to verify whether Assumption 4.3 holds. We thank the reviewer for this insightful suggestion, and we will add the above discussion in the revision of our paper. >**[Q2]** Can claim 2 (eventual CRGO and the three stage phase transition) be demonstrated for a real, existing dataset without special construction? **[A2]** We would like to clarify that the patch structure we use can be seen as a simplification of real-world vision-recognition datasets. 
Specifically, images are divided into signal patches that are meaningful for classification, such as the whisker of a cat or the nose of a dog, and noisy patches, like the uninformative background of a photo. Our assumption about patch data can also be generalized to situations where there exists a set of meaningful patches. However, analyzing the learning process in such cases would complicate our explanation and obscure the main idea we wish to present. Therefore, we focus on the case of a single meaningful patch in our work. We would like to point out that, for real data and real models, it is difficult to rigorously define true feature learning and spurious feature learning. As a result, verifying the phase transition in real-world experiments is challenging, and this issue is commonly encountered in feature learning theory papers, such as [2][3][4][5]. We also believe that validating this on real data is an important and promising direction for future research.

>**[Q3]** Have the authors considered experimentally comparing against a standardly trained model baseline in Section 6.2 on the synthetically constructed dataset? This would serve as a relevant control to understand unique dynamics of adversarial training.

**[A3]** We thank the reviewer for the valuable suggestion. We have added a standardly trained model baseline on the synthetically constructed dataset. The experiment results are presented as follows:

| | Train | Test |
|----------------------|-------|------|
| Clean Acc | 100.0 | 100.0 |
| Robust Acc | \ | 1.5 |

We will include this experiment in the revision of our paper.

**Reference**

[1] Yang, Y. Y., Rashtchian, C., Zhang, H., Salakhutdinov, R. R., & Chaudhuri, K. (2020). A closer look at accuracy vs. robustness. Advances in Neural Information Processing Systems, 33, 8588-8601.

[2] Allen-Zhu, Z. and Li, Y. (2023b). Towards understanding ensemble, knowledge distillation and self-distillation in deep learning.
In The Eleventh International Conference on Learning Representations. [3] Chidambaram, M., Wang, X., Wu, C. and Ge, R. (2023). Provably learning diverse features in multi view data with midpoint mixup. In International Conference on Machine Learning. PMLR. [4] Chen, Z., Deng, Y., Wu, Y., Gu, Q., & Li, Y. (2022). Towards understanding mixture of experts in deep learning. arXiv preprint arXiv:2208.02813. [5] Zou, D., Cao, Y., Li, Y., & Gu, Q. (2023, July). The benefits of mixup for feature learning. In International Conference on Machine Learning (pp. 43423-43479). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. The authors have addressed my concerns regarding whether assumptions 4.1-4.3 could be satisfied in practice; the proposed sanity check for 4.3 is reasonable, as is the explanation given for A2. I find this paper to be a valuable addition to adversarial training, especially since it injects a fresh perspective (from approximation theory and feature learning) to CRGO and robust generalisation gap problems. I find it important to also articulate these insights in empirical terms and look forward to the extended discussion on how assumptions are satisfied in practice / how the theory can inform practitioners' design choices (e.g. when attempting to construct and train a robust model). To eliminate ambiguity, I raise my score from a 3 (borderline/weak accept) -> 4 (accept). --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the thoughtful and encouraging feedback. We're glad that the clarifications regarding Assumptions 4.1–4.3 and the sanity check for Assumption 4.3 were helpful. We also appreciate your recognition of the theoretical perspective introduced in our work and its relevance to CGRO and the robust generalization gap. 
In the revision, we will make sure to elaborate further on how these assumptions are practically satisfied, and discuss how the theoretical insights can guide robust model design in real-world settings. Our goal is to make the theoretical framework not only rigorous but also actionable for practitioners. We truly appreciate your support and are encouraged by the increased score!
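As a concrete illustration of the empirical check proposed in [A1] for Assumption 4.2 (well-separation), the minimal inter-class $\ell_p$ distance can be estimated directly from the training set. The function below is a hypothetical sketch, not code from the paper:

```python
import numpy as np

def min_interclass_distance(X, y, ord=np.inf):
    """Smallest l_p distance between examples of different classes.

    A small value relative to the attack radius would suggest the
    well-separation assumption is violated for that dataset.
    X: (N, d) data matrix; y: (N,) labels; ord: norm order for l_p.
    """
    best = np.inf
    for c in np.unique(y):
        other = X[y != c]
        for a in X[y == c]:
            d = np.linalg.norm(other - a, ord=ord, axis=1).min()
            best = min(best, float(d))
    return best
```

For the $\ell_\infty$ threat model, well-separation roughly asks this quantity to exceed twice the perturbation radius, so that adversarial balls around examples of different classes cannot overlap.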
Summary: This study focuses on the phenomenon of clean generalization and adversarial overfitting. The authors theoretically formulate this phenomenon and analyze it from the perspectives of representation complexity and learning dynamics. First, they derive the complexity required to learn CGRO models and robust models, showing that robust models require more complexity than CGRO models. Second, the authors study the learning dynamics to discover three stages in training.

Claims And Evidence: Remark 4.8 states that "which may lead the classifier trained by adversarial training to the CGRO regime." However, the theoretical analysis is agnostic to adversarial training and thus cannot explain the effects of adversarial training.

Methods And Evaluation Criteria:
- The main result of this paper, Theorem 4.4, is based on Assumption 4.3, but Assumption 4.3 is not proven or justified. In particular, the restriction to $poly(D)$ requires justification.
- Why is the function in Lemma 4.5 a CGRO model? While a CGRO classifier needs to satisfy three conditions in Definition 3.4, the function $f_S$ in Lemma 4.5 is not proven to satisfy the third one, i.e., $L^{p,\delta}_D (f) = \Omega (1)$.
- The analysis from the perspective of representation complexity fails to explain the mechanism of adversarial training. Moreover, many methods (e.g., (Wu et al., 2020)) have been proposed to improve the adversarial generalization of DNNs, and this study fails to explain the effectiveness of these methods. Wu et al., Adversarial Weight Perturbation Helps Robust Generalization, in NeurIPS 2020.
- The analysis from the perspective of learning dynamics is limited to a specific dataset and a two-layer network with a pre-defined parameter structure. Its scalability to real datasets and complex architectures is questionable.

Theoretical Claims: Assumption 4.3 is not justified. Whether the function in Lemma 4.5 belongs to the class of CGRO classifiers needs clarification.
Experimental Designs Or Analyses: I'm concerned about how the experiments in Table 1 and Figure 2 support the theoretical analysis. I expect a comparison between models with only linear complexity and models with exponential complexity.

Supplementary Material: Yes, although I did not carefully check all the proofs.

Relation To Broader Scientific Literature: This study provides two perspectives to understand the previously discovered adversarial overfitting phenomenon.

Essential References Not Discussed: Liu et al. (2023) also studied the relationship between network architecture (weight sparsity) and adversarial generalization. Liu et al., Exploring the Relationship between Architectural Design and Adversarially Robust Generalization, in CVPR 2023.

Other Strengths And Weaknesses: No.

Other Comments Or Suggestions: Previous studies have discovered that adversarially robust generalization requires more data. It would be better to use the theoretical analysis in this study to explain this discovery, since the theoretical analysis here is also related to the dataset size.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful feedback. We are very glad to address the questions and suggestions raised by the reviewer, which we believe will help further refine our work. Below are our responses.

**Response to claims and evidence:**
>See our response to **[Q3-1]**.

**Response to methods and evaluation criteria:**
>**[Q1]** As the reviewer mentioned, we would like to clarify that the main theorem relies on Assumption 4.3, where we apply a teacher-student learning framework. This framework assumes the existence of a ground-truth neural network of moderate size that can achieve perfect clean classification. This teacher-student setup is widely used in deep learning theory (e.g., [1][2][3]). We also emphasize that the polynomial-size assumption arises from empirical observations, where mildly over-parameterized networks achieve good clean classification performance but poor robust classification performance (e.g., [4][5][6]), rather than from a rigorous mathematical proof. Furthermore, providing a lower bound for the representation complexity required to achieve robust generalization under Assumption 4.3 is highly non-trivial due to the complex decision boundary induced by the ground-truth polynomial-size network. To address this, we build on the technique from [7] to establish an exponential lower bound in the worst case.

>**[Q2]** We sincerely thank the reviewer for the valuable and thoughtful suggestion. Indeed, if the data distribution $\mathcal{D}$ satisfies that the covering number of the supporting set is exponential in the data input dimension $D$, we can rigorously prove the third condition, i.e., $L_{\mathcal{D}}^{p,\delta}(f) = \Omega(1)$, which completes the proof of Lemma 4.5. We will include this in the revision of our paper.
>**[Q3-1]** We would like to clarify that, from the perspective of expressive power, our analysis demonstrates that CGRO classifiers can be achieved with only polynomial representation complexity (Theorem 4.4), whereas an exactly robust classifier necessitates representation complexity that is exponential in the worst case (Theorem 4.7). Given the simplicity bias inherent in neural network training (e.g., [8][9][10][11]), this fundamental complexity gap may explain the implicit bias observed in adversarial training. >**[Q3-2]** We would like to clarify that our paper focuses on the underlying mechanism behind CGRO, and explaining the effectiveness of robustness improvement methods (such as [12]) is beyond the scope of our paper. >**[Q4]** We simplify real-world vision-recognition datasets by dividing images into meaningful patches (e.g., a cat's whisker or a dog's nose) and noisy ones (e.g., an uninformative background). While this can be generalized to scenarios with multiple meaningful patches, analyzing such cases would complicate our explanation. Hence, we focus on a single meaningful patch. Defining true versus spurious feature learning in real data and models is difficult, making phase transition verification challenging, as seen in feature learning theory papers. We believe validating this on real data is a promising direction for future research. **Response to experimental designs and analyses:** > We would like to emphasize that our exponentially large lower bound for representation complexity holds only in the worst case. On the other hand, the computational cost of training neural networks with exponentially large input dimensions is unacceptable in practice. Therefore, as an alternative, we conducted experiments on MNIST and CIFAR10 by appropriately scaling up the network, and the phenomena observed are consistent with our theory. **Response to essential references not discussed:** > We thank the reviewer for pointing out the related work [17]. 
We will include it in the related work section in the revised version of our paper. **Response to other comments and suggestions:** > We thank the reviewer for the valuable suggestion. Indeed, when the training data is sufficiently large, i.e., when $N = \Omega(exp(D))$ (at which point Lemma 4.5 no longer holds, as seen in our response to **[Q2]**), the upper bound of the CGRO classifier's complexity matches the lower bound of the robust classifier's complexity. This also indicates that adversarial robustness requires more data. We will include this part in the revision of our paper. **Reference (Arxiv Index)** [1] 1811.04918 [2] 2001.04413 [3] 2102.06701 [4] 1708.06131 [5] 1312.6199 [6] 1412.6572 [7] 2205.13863 [8] 1901.06523 [9] 2201.07395 [10] 2110.13905 [11] 2410.10322 [12] 2004.05884 [13] 2012.09816 [14] 2210.13512 [15] 2208.02813 [16] 2303.08433 [17] 2209.14105
Summary: The authors explain the common Clean Generalization and Robust Overfitting (CGRO) phenomenon in adversarial training through a theoretical analysis. The authors first prove that a two-layer ReLU net will achieve CGRO with a small number of extra parameters, while an ideal robust classifier requires exponentially many parameters. Then, as the main contribution, the authors prove that the network goes through a three-stage phase transition during training. The network learns some "true features" in the first stage, and the noise component increases in the second stage; at last, the network memorises the data-specific noise, which results in CGRO.

Claims And Evidence: Yes, all the claims are supported by the theoretical analysis.

Methods And Evaluation Criteria: Yes, the evaluation criteria make sense to me for all the experiments.

Theoretical Claims: Yes, I have checked all the theoretical claims and their proofs, including the Representation Complexity part and the Learning Process part. I didn't find any significant errors, but I can't guarantee that everything is completely correct.

Experimental Designs Or Analyses: Yes, I have checked all the experimental designs and analyses. All the experiments seem fine; however, there are some concerns: (1) In Table 1 and Figure 2, the intervals of model sizes seem too coarse to fully reflect model robustness; e.g., there are no results covering the accuracy surge in MNIST, and no surge phenomenon appears in CIFAR-10. (2) The definition of the smallest/largest/average noise memorization is unclear.

Supplementary Material: Yes, I have reviewed all the supplementary material, including the Additional Experiments, the relationship between Robust Generalization and Global Flatness, and the Proof part.

Relation To Broader Scientific Literature: The main results of this paper focus on explaining the CGRO phenomenon from the perspectives of Representation Complexity and the Learning Process on Structured Data.
There are several prior works showing that robust generalization requires more data and larger models, which may somewhat diminish the first contribution of the paper. However, the analysis of the Learning Process seems interesting and inspiring, and may provide new insights for future research.

Essential References Not Discussed: No

Other Strengths And Weaknesses: The quality of the paper is good, presenting a reasonable motivation and methodology. The paper is generally well-written and easy to follow. The paper theoretically explains the CGRO phenomenon and the learning process in adversarial training, which makes the paper inspiring. However, there are some concerns. For example, the intervals do not seem convincing, and there is confusion about the smallest/largest/average noise memorization. On the other hand, even though the authors explain the learning process, they do not give a solution for CGRO, which may slightly diminish the contribution.

Other Comments Or Suggestions: No

Questions For Authors: As mentioned above, I have some concerns about this paper: (1) The intervals in Table 1 and the figure need modification. There are no results between 8 and 12 in MNIST, which makes readers wonder how exactly accuracy changed during this surge. On CIFAR-10, the robust test accuracy didn't spike with robust training accuracy as it did in MNIST. Could the authors extend the range of model size factors (e.g., less than 1 and greater than 10) or provide an explanation? (2) It is not clear what the smallest/largest/average noise memorization in Figure 2(c) means. (3) From Figure 2(c) and the Phase III analysis, it seems that true feature learning does not decrease during training, so how do the authors explain the general phenomenon that robust test accuracy decreases after some point in the training phase?
(4) Similarly, why does true feature learning not suffer catastrophic forgetting, given that the "signal component is now mostly dominated by the noise component," as claimed in the paper?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and insightful feedback, and for highlighting the strength of our theoretical contributions and the clarity of our writing. We are very glad to address the questions and suggestions raised by the reviewer, which we believe will help further refine our work. Below are our responses to the questions raised by the reviewer.

>**[Q1]** the intervals in Table 1 and the figure need modification.

**[A1]** We thank the reviewer for the valuable suggestion.

- For MNIST, we have added experiments with model size factors of 9, 10, and 11. The experimental results are presented in the following table.

| Model Size Factor | 1 | 2 | 8 | 9 | 10 | 11 | 12 | 16 |
|-------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| **Clean Test Acc** | 11.35 | 11.35 | 11.35 | 11.35 | 11.35 | 95.24 | 95.06 | 94.85 |
| **Robust Test Acc** | 11.35 | 11.35 | 11.35 | 11.35 | 11.35 | 73.22 | 77.96 | 83.43 |
| **Robust Train Acc** | 11.70 | 11.70 | 11.70 | 11.70 | 11.70 | 95.50 | 99.30 | 99.50 |

We can observe that when the model size factor is less than or equal to 11, the adversarially trained LeNet model fails to learn non-trivial classifiers. However, when the model size factor is greater than or equal to 12, there is a sudden improvement in model performance. This phenomenon has also been mentioned in previous empirical work [1], and studying the theoretical mechanism behind it is an interesting future direction.

- For CIFAR10, we did not observe the robust test accuracy spike phenomenon (we used the WideResNet architecture here, and a model size factor smaller than 1 results in an inability to achieve a clean classifier, even with clean training). We speculate that this is due to the inherent complexity of the CIFAR10 dataset, which is much higher than that of the MNIST dataset.
We also acknowledge that theoretically analyzing the occurrence or absence of the spike phenomenon is an interesting and important direction for future research. >**[Q2]** It is not clear about the smallest/largest/average noise memorization in figure 2(c). **[A2]** We would like to clarify that, for our two-layer neural network and a given data point $(\boldsymbol{X}, y)$, we define the noise memorization as $\mathcal{V} := \sum_{r=1}^{m}\sum_{j \in [P] \setminus \operatorname{signal}(\boldsymbol{X})} \langle \boldsymbol{w}_r^{(T)}, y \boldsymbol{X}[j] \rangle^q$, which is mentioned in lines 291-296 (right page). Then, for the $N$-size training dataset, we obtain a total of $N$ noise memorizations $\mathcal{V}_1, \mathcal{V}_2, \dots, \mathcal{V}_N$. We define the smallest, largest, and average values of these as the smallest, largest, and average noise memorization, respectively. We thank the reviewer for pointing this out, and we will include the relevant explanation in the revision of our paper. >**[Q3,4]** From Figure 2(c) and Phase III analysis, it seems the true feature learning would not decrease during the training, so how do the authors explain the general phenomenon that robust test accuracy will decrease after some point in the training phase? Similarly, why does the true feature learning not suffer catastrophic forgetting since the “signal component is now mostly dominated by the noise component” as claimed in the paper? **[A3,4]** As the reviewer mentioned, we would like to clarify that during training, the true feature learning does not decrease. However, its increment is dominated by the noise component (as seen in Lemma 5.12). Thus, our theory cannot directly explain the observation that robust test accuracy decreases after some point in the training phase and that true feature learning suffers catastrophic forgetting, which implies a gap between theory and observation. 
We believe that theoretically explaining this gap is an interesting and important future direction. **Reference** [1] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
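To make the statistic defined in [A2] concrete, the per-example noise memorization and its smallest/largest/average summaries (as reported in Figure 2(c)) can be computed as below. This is an illustrative sketch of the stated formula only: it raises the raw inner products to the power $q$ and ignores any activation details of the actual two-layer model.

```python
import numpy as np

def noise_memorization(W, X, y, signal_idx, q=3):
    """V = sum_r sum_{j != signal(X)} <w_r, y * X[j]>^q.

    W: (m, d) first-layer weights; X: (P, d) patch matrix; y in {-1, +1};
    signal_idx: index of the single signal patch of this example.
    """
    inner = W @ (y * X).T                         # (m, P): <w_r, y X[j]>
    noise = np.delete(inner, signal_idx, axis=1)  # drop the signal patch
    return float((noise ** q).sum())

def noise_memorization_stats(W, data, q=3):
    """Smallest/largest/average V over a dataset of (X, y, signal_idx) triples."""
    V = [noise_memorization(W, X, y, s, q) for X, y, s in data]
    return min(V), max(V), sum(V) / len(V)
```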
UltraTWD: Optimizing Ultrametric Trees for Tree-Wasserstein Distance
Accept (poster)
Summary: This paper introduces an unsupervised approach to constructing an ultrametric tree for the Tree-Wasserstein Distance (TWD) that approximates the Wasserstein distance.

Claims And Evidence:
- The method relies on a tree-based representation, yet the paper does not clearly define what constitutes the tree (e.g., whether it is strictly binary or can be flexible) or how the structure is updated.
- The optimization goal is related to metric multidimensional scaling, but there is no discussion of it.
- Although the authors claim an efficient approximation of the Wasserstein distance, the complexity analysis (especially the O(n³) cost for UltraTWD-IP) does not clearly indicate a substantial improvement over the O(n³ log n) complexity of traditional optimal transport methods.

Methods And Evaluation Criteria: The proposed methods appear generally appropriate for document data applications and the use of Wasserstein distance approximations. The experimental evaluation is a strength; however, since the paper positions the supervised method as its main competing work, it would be better to also include results from the supervised tree-Wasserstein distance of Takezawa et al. and discuss this related work. In addition, since the paper claims an efficient approximation of the Wasserstein distance and the application is document data, empirical results for WMD should be included in the comparison.

Theoretical Claims: The complexity claim for UltraTWD-IP is stated as O(n³). The improvement over classical optimal transport (O(n³ log n)) is not significant enough to be a clear theoretical advantage.

Experimental Designs Or Analyses:
- Document datasets are appropriate, and the empirical evaluation is comprehensive regarding performance metrics.
- The absence of experiments comparing with supervised tree-Wasserstein distance and WMD limits our understanding of the method's benefits.
- There is no clear explanation of how the tree structure is updated.
Supplementary Material: I reviewed the supplementary material for extended experimental results and the background of related methods.

Relation To Broader Scientific Literature: The paper contributes to the literature on efficient approximations of the Wasserstein distance for document data.

Essential References Not Discussed:
- Triplet constraints or triplet relations were explored in previous work, e.g., An Improved Cost Function for Hierarchical Cluster Trees. Although they have not yet been applied within iterative projection frameworks, it would be valuable to reference this earlier research on triplet relations in trees.
- Related work not discussed:
  - Distance-Based Tree-Sliced Wasserstein Distance
  - Tree-Wasserstein Distance for High Dimensional Data with a Latent Feature Hierarchy
  - Projection Optimal Transport on Tree-Ordered Lines

Other Strengths And Weaknesses:
Strengths:
- The paper is well-written and easy to follow.
- Extensive experimental results on document datasets.

Weaknesses:
- Minor issues include undefined symbols (e.g., $n$ in Section 1, the norm in Eq. (7), and the height in Eq. (8)).

Other Comments Or Suggestions: NA

Questions For Authors:
- Could you provide a clear definition of the tree structure used in your method? Specifically, is the method limited to binary trees, or does it accommodate more flexible tree structures?
- How is the tree structure updated during the iterative projection process? Does the update involve regrouping leaves or modifying the overall tree topology?
- Can you elaborate on the practical benefits of the O(n³) complexity for UltraTWD-IP, especially when compared to the O(n³ log n) complexity of traditional optimal transport methods?
- When reporting the computation time in Figures 7 and 8 in Appendix C.3, which algorithm was referred to in UltraTWD?
- Could you include empirical comparisons with supervised tree-Wasserstein distance methods and standard WMD, given that your method claims to be an efficient approximation?
- Could you provide a visualization of the constructed tree from the proposed methods?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable questions. **We will revise the paper accordingly and add more discussion on related work.**

---

**Question 1.** *Could you provide a clear definition of the tree structure used in your method? Specifically, is the method limited to binary trees, or does it accommodate more flexible tree structures?*

**Answer 1.** Our method is based on ultrametric trees, which are rooted but not necessarily binary. While we empirically followed [1] and used the hierarchical minimum spanning tree procedure to construct a binary ultrametric tree, the method itself is not limited to binary structures. Non-binary ultrametric trees can also be constructed using algorithms like UPGMA. We plan to explore more flexible tree structures in future work.

[1] Chen, S., Tabaghi, P., & Wang, Y. Learning ultrametric trees for optimal transport regression. AAAI, 2024.

---

**Question 2.** *How is the tree structure updated during the iterative projection process? Does the update involve regrouping leaves or modifying the overall tree topology?*

**Answer 2.** Great question. During the iterative projection process, we update the distance matrix $D_T \in \mathbb{R}^{n \times n}$ to satisfy the strong triangle inequalities. This projection operates directly on the matrix entries and does not explicitly modify the tree structure or regroup leaves. The tree topology is constructed only once, in the final step of Algorithm 2, using a minimum spanning tree algorithm applied to the projected matrix.

---

**Question 3.** *Could you provide a visualization of the constructed tree from the proposed methods?*

**Answer 3.** Please see the [figure](https://anonymous.4open.science/r/rebuttal_twd/visualization.pdf). We will include it in the revision.
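To illustrate the kind of constraint enforced in Answer 2, one classical (non-iterative) way to obtain a matrix satisfying all strong triangle inequalities $d_{ij} \le \max(d_{ik}, d_{jk})$ is the subdominant ultrametric, computable with a Floyd-Warshall-style min-max closure. This is only a hedged illustration of the constraint set, not the UltraTWD-IP projection itself:

```python
import numpy as np

def subdominant_ultrametric(D):
    """Largest ultrametric matrix that is entrywise <= D.

    Min-max closure: U[i, j] becomes the minimax path cost between i and j,
    i.e. Floyd-Warshall over the (min, max) semiring.
    """
    U = np.asarray(D, dtype=float).copy()
    n = U.shape[0]
    for k in range(n):
        U = np.minimum(U, np.maximum(U[:, k:k + 1], U[k:k + 1, :]))
    return U
```

The resulting matrix violates no triplet constraint, and a tree topology can then be read off with a minimum spanning tree (as in the final step of Algorithm 2 mentioned above), since minimax path costs along the MST reproduce the closure exactly.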
---

**Question 4.** *Can you elaborate on the practical benefits of the $O(n^3)$ complexity for UltraTWD-IP, especially when compared to the $O(n^3 \log n)$ complexity of traditional optimal transport methods?*

**Answer 4.** We would like to clarify that the $O(n^3)$ complexity of UltraTWD-IP refers to a one-time cost for learning the ultrametric tree. After this step, **each pairwise distance $W_T(\mu, \nu)$ can be computed in just $O(n)$ time**, much faster than the $O(n^3 \log n)$ cost of computing exact $W_1(\mu, \nu)$. As discussed in Section 3.6, the total time complexity for computing $W \in \mathbb{R}^{N \times N}$ is:

- **$O(N^2 \cdot n + n^3)$** for UltraTWD-IP
- **$O(N^2 \cdot n + n^2)$** for UltraTWD-GD
- **$O(N^2 \cdot n^3 \log n)$** for traditional $W_1$.

For example, on the BBCSport dataset with 3,000 valid words per distribution, computing $W_T \in \mathbb{R}^{100 \times 100}$ using UltraTWD-IP takes only **0.9 hours** (including 0.3 hours for tree learning), compared to **2.3 hours** with traditional optimal transport.

---

**Question 5.** *When reporting the computation time in Figures 7 and 8, which algorithm was referred to in UltraTWD?*

**Answer 5.** Apologies for the confusion. Since all three UltraTWD variants use the same type of ultrametric tree, their computation times are nearly identical. We reported the average time across the three algorithms and will clarify this in the revision.

---

**Question 6.** *Could you include empirical comparisons with supervised tree-Wasserstein distance methods and standard WMD, given that your method claims to be an efficient approximation?*

**Answer 6.** As suggested, we will include the following comparisons:

- **Supervised methods:** We compare with STW [1] and UltraTree [2] (UltraTree is discussed in the main text, Line 367, page 7). STW learns a new distance via contrastive loss rather than approximating $W_1$, resulting in large errors and poor precision.
STW training is also time-consuming, taking **3.4 hours** on BBCSport, whereas UltraTWD-GD requires only **34 seconds** to train. As shown in Table R1, our methods consistently outperform both STW and UltraTree in accuracy and efficiency.

**Table R1. Performance comparison on the BBCSport dataset.**

|Metric|RE-W$\downarrow$|Precision$\uparrow$|Total Time (hour)|
-|-|-|-
STW|0.643|0.335|3.5
UltraTree|0.022|0.842|0.6
UltraTWD-GD|0.016|0.868|**0.1**
UltraTWD-IP|**0.014**|**0.885**|0.5

[1] Takezawa, Y., Sato, R., & Yamada, M. Supervised tree-wasserstein distance. ICML, 2021.
[2] Chen, S., Tabaghi, P., and Wang, Y. Learning ultrametric trees for optimal transport regression. AAAI, 2024.

- **WMD:** Standard WMD is used as the ground-truth $W_1$ in the main text. Figures 7 and 8 (page 16) show our methods are significantly faster, even when including tree learning time (see also **Answer 4**). Table R2 compares SVM classification accuracy, where WMD yields the highest accuracy due to exact optimization, while our methods remain competitive at much lower cost.

**Table R2. Comparison of SVM classification accuracy.**

|Dataset|BBCSport|Reuters
-|-|-
WMD|**0.874**|**0.919**
UltraTWD-GD|0.838|0.900
UltraTWD-IP|0.839|0.905

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response. Most of my concerns were addressed. Could you please elaborate on how you plan to incorporate and discuss the related work in your manuscript?

---

Reply to Comment 1.1.1: Comment: We're very glad to see that most concerns have been addressed. To provide a comprehensive discussion, we summarize the related works in the following three categories:

- **Comparison with Triplet-Based Methods:** Both UltraTWD and prior methods [1, 2] leverage triplet information for tree construction, but differ in motivation and formulation. [1] introduces a ratio-cost function based on lowest common ancestor (LCA) relations to recover hierarchical clustering structures.
[2] extends this idea by defining a Hyperbolic Diffusion LCA (HD-LCA) to capture latent feature hierarchies, followed by computing a tree-Wasserstein distance on the resulting tree. In contrast, UltraTWD directly optimizes ultrametric trees by enforcing the triplet constraints $d_{ij}^T \le \max(d_{ik}^T, d_{jk}^T)$ and minimizing $||D_T - D||_F^2$ via iterative optimization. Rather than pursuing hierarchical consistency, UltraTWD focuses on accurately approximating the Wasserstein distance through principled ultrametric learning.

[1] Wang, D., & Wang, Y. An improved cost function for hierarchical cluster trees. Journal of Computational Geometry, 2020.
[2] Lin, Y. W. E., Coifman, R. R., Mishne, G., & Talmon, R. Tree-Wasserstein distance for high dimensional data with a latent feature hierarchy. ICLR, 2025.

---

- **Comparison with Line-Based Methods:** Recent works [3, 4] approximate the Wasserstein distance by projecting measures onto structured line systems, termed tree systems. These methods generalize the sliced-Wasserstein (SW) distance by replacing single lines with connected line systems metrized by tree distances, making them interpretable as tree-based SW variants. When the system includes only one line, they reduce to standard SW. In contrast, UltraTWD learns a single ultrametric tree that closely approximates the cost matrix, providing a more accurate representation of the Wasserstein geometry. As shown in Table R3, UltraTWD achieves significantly lower approximation error than standard SW [5] with 1,000 random projections, making it more suitable for high-precision tasks. *(We may not have time to include [3, 4] in the current experiments but will incorporate them in the revision.)*

**Table R3.
Performance comparison on the BBCSport dataset.**

|Metric|RE-W$\downarrow$|Precision$\uparrow$|MRR$\uparrow$|ACC$\uparrow$|Total Time (min)
-|-|-|-|-|-
SW|0.567|0.466|0.421|0.800|10
UltraTWD-GD|0.016|0.868|0.921|0.838|28
UltraTWD-IP|**0.014**|**0.885**|**0.924**|**0.839**|**9**

[3] Tran, H. V., Pham, H. T., Huu, T. T., Nguyen-Nhat, M. K., Chu, T., Le, T., & Nguyen, T. M. Projection optimal transport on tree-ordered lines. 2025.
[4] Tran, H. V., Nguyen-Nhat, M. K., Pham, H. T., Chu, T., Le, T., & Nguyen, T. M. Distance-based tree-sliced Wasserstein distance. ICLR, 2025.
[5] Bonneel, N., Rabin, J., Peyré, G., & Pfister, H. Sliced and radon wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 2015.

---

- **Comparison with Supervised Methods:** Both STW [6] and UltraTree [7] are supervised methods for learning tree-based distances but differ in their objectives. STW trains with a contrastive loss to learn task-specific distances without approximating $W_1$, resulting in higher errors and costly training. UltraTree learns to regress tree distances to precomputed Wasserstein distances but requires expensive supervision. In contrast, UltraTWD learns ultrametric trees without supervision and achieves superior performance with high efficiency. For example, on the BBCSport dataset, STW takes 3.4 hours to train, whereas UltraTWD-GD completes in only 34 seconds.

**Table R4. Performance comparison on the BBCSport dataset.**

|Metric|RE-W$\downarrow$|Precision$\uparrow$|Total Time (hour)|
-|-|-|-
STW|0.643|0.335|3.5
UltraTree|0.022|0.842|0.6
UltraTWD-GD|0.016|0.868|**0.1**
UltraTWD-IP|**0.014**|**0.885**|0.5

[6] Takezawa, Y., Sato, R., & Yamada, M. Supervised tree-wasserstein distance. ICML, 2021.
[7] Chen, S., Tabaghi, P., and Wang, Y. Learning ultrametric trees for optimal transport regression. AAAI, 2024.

---

We will include a dedicated section discussing related works.
If space is limited, we will either re-organize the content or move the discussion to the appendix. If any part of the above discussion seems inappropriate or unclear, please feel free to let us know or edit the original review—we will revise accordingly. **If you find our response helpful, a higher score would be greatly appreciated. Thank you again for your valuable feedback.**
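As a companion to the SW numbers reported in Table R3 above, the vanilla sliced $W_1$ between two equal-size uniform samples reduces to sorting 1D projections. A minimal Monte-Carlo sketch (a generic illustration with our own function name, not the implementation of [5]):

```python
import numpy as np

def sliced_w1(X, Y, n_proj=1000, seed=0):
    """Monte-Carlo sliced 1-Wasserstein between two (n, d) uniform samples.

    In 1D, W_1 between equal-size uniform samples is the mean absolute
    difference of the sorted values; SW averages this over random
    directions on the unit sphere.
    """
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px = np.sort(X @ theta.T, axis=0)  # (n, n_proj) sorted 1D projections
    py = np.sort(Y @ theta.T, axis=0)
    return float(np.abs(px - py).mean())
```

Shifting a point cloud by a vector $s$ shifts every projection by $\theta^\top s$, so the resulting sliced distance is at most $\|s\|$; this kind of sanity check is a quick way to validate the estimator.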
Summary: The Wasserstein distance is a well-known metric for comparing distributions and has been used as a loss function in many ML models. To improve the efficiency of computing the Wasserstein distance, researchers have considered embedding the distributions into a tree metric and computing the Wasserstein distance over the tree embedding, which can be done in linear time. This paper proposes to embed the distributions into a tree that has ultrametric distances, and provides three algorithms for computing ultrametric tree distances that are close to the underlying metric. More specifically, the authors formulate the nearness of a tree metric to the ground metric in two ways, namely the $\ell_\infty$ norm and the Frobenius norm of the difference between the tree distance and the ground distance. For the $\ell_\infty$ norm, they present a simple algorithm that computes the tree ultrametric using minimum spanning trees. For the Frobenius norm, they present an IPM-based algorithm and a gradient-descent-based approach to compute the ultrametric. They show in experiments on text datasets that their proposed tree-Wasserstein distance (UltraTWD) has a lower approximation error as well as better performance in document retrieval, ranking, and classification tasks compared to other variants of the tree-Wasserstein distance in the literature.

## Update after rebuttal

The authors resolved my concerns. They provided additional experimental results and promised to add the new experimental results as well as the missing citations. I increased my score from 2 to 4.

Claims And Evidence: Yes, most claims are well-supported. I have raised some concerns about theoretical and experimental results below.

Methods And Evaluation Criteria: Yes, the datasets are suitable for the problem.

Theoretical Claims: I have a concern with one of the claims made in Section 3.6: UltraTWD (or any other tree-Wasserstein distance) provides an approximation of the Wasserstein distance.
Therefore, comparing the running time of $W_T$ with the exact computation of $W_1$ is unfair. UltraTWD and an approximation algorithm such as Sinkhorn would have roughly the same execution time (although it is not clear how many iterations the GD algorithm requires).

Experimental Designs Or Analyses: I have a concern with the experiments. Specifically, I am concerned that the experimental setup for the competing distances might not be set optimally. For instance, for the Sinkhorn algorithm, the authors set $\lambda=1$ and the maximum number of iterations to 100. Can the authors explain how they chose these parameters? I expect to see better performance by setting, for instance, $\lambda=0.01$ and the maximum number of iterations to 300. I also would like to know how the authors chose the regularization parameter $\lambda=0.001$ for the weight-optimized methods and the number of trees (3) in the sliced methods.

Supplementary Material: Yes, I reviewed most parts of the supplementary material.

Relation To Broader Scientific Literature: The problem of computing the Wasserstein distance is well studied, and designing fast algorithms that approximate the Wasserstein distance is of high importance. Approximating the ground distances using a tree metric can help speed up the computation of the Wasserstein distance, as the Wasserstein distance on a tree metric has a simple closed-form formula that can be computed in linear time.

Essential References Not Discussed: The use of trees and hierarchical partitionings for approximating the 1-Wasserstein distance is extensive in the literature, and I believe the following papers can be cited in this work:

Tree-based Algorithms:
* P. Indyk. "A near linear time constant factor approximation for Euclidean bichromatic matching (cost)." SODA 2007.
* P. K. Agarwal, S. Raghvendra, P. Shirzadian, and R. Sowle. "A higher precision algorithm for computing the 1-Wasserstein distance." ICLR 2023.
MWU-based approaches for boosting the accuracy of greedy tree algorithms:
* A. Khesin, A. Nikolov, and D. Paramonov. "Preconditioning for the Geometric Transportation Problem." SOCG 2019.
* E. Fox and J. Lu. "A deterministic near-linear time approximation scheme for geometric transportation." FOCS 2023.
* P. K. Agarwal, S. Raghvendra, P. Shirzadian, and K. Yao. "Fast and accurate approximations of the optimal transport in semi-discrete and discrete settings." SODA 2024.

Streaming Algorithm:
* X. Chen, R. Jayaram, A. Levi, and E. Waingarten. "New streaming algorithms for high dimensional EMD and MST." STOC 2022.

Other Strengths And Weaknesses:

**Strengths**
* The paper is well-written and easy to follow.

**Weaknesses**
* There are no theoretical guarantees on the convergence rate of the proposed IPM and SGD algorithms, and hence it is hard to judge whether the proposed methods have a running time comparable to that of previous methods.
* The experimental results only compare the proposed methods with the existing TWDs and not with other approximations of the Wasserstein distance, such as the sliced Wasserstein distance (the SWD, not the sliced TWD).

Other Comments Or Suggestions: -

Questions For Authors: -

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We truly appreciate your positive feedback. Your valuable suggestions will help us improve the work.

---

**Comment 1.** *Comparing the running time of $W_T$ with the exact computation of $W_1$ is unfair. It is not clear how many iterations the GD algorithm requires.*

**Response 1.** To ensure a fair comparison, we consider the total time complexity required to compute $W \in \mathbb{R}^{N \times N}$.

- For exact $W_1$: $O(N^2 \cdot n^3 \log n)$.
- For UltraTWD (including the tree learning time):
  - $O(N^2 \cdot n + n^3)$ for UltraTWD-IP
  - $O(N^2 \cdot n + n^2)$ for UltraTWD-GD

As described in Algorithms 2 and 3, we use **1 iteration** for UltraTWD-IP and **10 iterations** for UltraTWD-GD. On the BBCSport dataset with 3,000 valid words:

- UltraTWD-GD learns the tree in 34s and computes each $W_T(\mu, \nu)$ in 0.4s using $O(n)$ time,
- Each $W_1(\mu, \nu)$ takes 1.7s with $O(n^3 \log n)$ time.

To compute $W \in \mathbb{R}^{100 \times 100}$, the total time is **0.9h (IP)** and **0.6h (GD)**, both significantly faster than **2.3h for exact $W_1$**. We will revise this comparison to be more rigorous.

---

**Comment 2.** *For the Sinkhorn algorithm, the authors set $\lambda=1$ and the maximum number of iterations to 100. Can the authors explain how they chose these parameters? I also would like to know how the authors have chosen the regularization parameter $\lambda=0.001$ for the weight-optimized methods and the number 3 of trees in the sliced methods.*

**Response 2.** We would like to clarify:

- For Sinkhorn, we followed [1], using $\lambda=1$ and 100 iterations to balance accuracy and runtime. Smaller $\lambda$ and more iterations may improve accuracy but greatly increase computation time.
- For the weight-optimized and sliced methods, we followed [2], using 3 trees. We selected $\lambda=0.001$ based on the highest Pearson correlation with the true Wasserstein distance.

As suggested, we test additional $\lambda$ values.
As shown in Table R2, larger $\lambda$ improves RE-W but often lowers retrieval performance. UltraTWD methods consistently outperform these baselines across metrics. We will include these results in the revision.

**Table R2. Performance comparison on the BBCSport dataset.**

|Metric|RE-W$\downarrow$|Precision$\uparrow$|MRR$\uparrow$|ACC$\uparrow$|
-|-|-|-|-
Sliced_qTWD (3 trees), $\lambda=0.01$|0.142|0.809|0.859|0.821
Sliced_qTWD (3 trees), $\lambda=0.1$|0.118|0.798|0.852|0.815
Sliced_cTWD (3 trees), $\lambda=0.01$|0.108|0.867|0.898|0.820
Sliced_cTWD (3 trees), $\lambda=0.1$|0.036|0.823|0.870|0.832
UltraTWD-GD|0.016|0.868|0.921|0.838
UltraTWD-IP|**0.014**|**0.885**|**0.924**|**0.839**

[1] Chen, S., Tabaghi, P., and Wang, Y. Learning ultrametric trees for optimal transport regression. AAAI, 2024.
[2] Yamada, M., Takezawa, Y., Sato, R., Bao, H., Kozareva, Z., & Ravi, S. Approximating 1-wasserstein distance with trees. TMLR, 2022.

---

**Comment 3.** *I believe the following papers can be cited in this work.*

**Response 3.** We will cite this relevant literature in the revision.

---

**Comment 4.** *There are no theoretical guarantees on the convergence rate of their proposed IP and GD algorithms, and hence, it is hard to judge whether the proposed methods have a comparable time as the previous methods or not.*

**Response 4.** We acknowledge the concern. While a formal convergence-rate analysis is challenging due to the non-convexity of the problem, our algorithms are designed to be lightweight and stable in practice. As shown in Figure 4 (Section 4.4), both IP and GD exhibit fast empirical convergence within several iterations. Moreover, Table 4 demonstrates that UltraTWD-GD achieves favorable runtime–performance trade-offs, learning trees in just **18–91 seconds** across datasets while outperforming baselines.
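For reference, the Sinkhorn baseline discussed in Response 2 follows the standard Sinkhorn-Knopp iteration. A minimal dense sketch (a textbook version shown for illustration, with our own function name, not the exact configuration of [1]):

```python
import numpy as np

def sinkhorn_cost(a, b, C, lam=0.05, n_iter=300):
    """Entropy-regularized OT cost via Sinkhorn-Knopp scaling.

    Smaller lam approaches the exact W_1 but needs more iterations
    (and eventually log-domain stabilization, omitted here).
    """
    a, b, C = (np.asarray(x, dtype=float) for x in (a, b, C))
    K = np.exp(-C / lam)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # match column marginals
        u = a / (K @ v)                   # match row marginals
    P = u[:, None] * K * v[None, :]       # transport plan
    return float((P * C).sum())
```

For the two-point cost matrix $C = \begin{pmatrix}0&1\\1&0\end{pmatrix}$ with $a=(0.25, 0.75)$ and $b=(0.75, 0.25)$, the exact $W_1$ is $0.5$ (move $0.5$ mass at unit cost), and the Sinkhorn cost approaches this value as $\lambda$ shrinks.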
---

**Comment 5.** *The experimental results only compare the proposed methods with the existing TWDs and not other approximations of the Wasserstein distance, such as sliced Wasserstein distance (the SWD and not sliced TWD).*

**Response 5.** As noted in Section 1, *our work focuses on advancing the TWD framework*. While the sliced Wasserstein distance and its variants require equal-sized point clouds and **are not directly applicable** to our bag-of-words distributions, we reformulate the text data into weighted point clouds by treating each word as a point. We then compare with two sliced OT methods that support unequal masses—**SOPT [1] and SPOT [2]**. As shown in Table R3, **our methods achieve significantly better performance across all metrics**.

**Table R3. Performance comparison on the BBCSport dataset.**

|Metric|RE-W$\downarrow$|Precision$\uparrow$|MRR$\uparrow$|ACC$\uparrow$|
-|-|-|-|-
SOPT|113.6|0.082|0.079|0.357
SPOT|0.666|0.064|0.063|0.231
UltraTWD-GD|0.016|0.868|0.921|0.838
UltraTWD-IP|**0.014**|**0.885**|**0.924**|**0.839**

[1] Bai, Y., Schmitzer, B., Thorpe, M., & Kolouri, S. Sliced optimal partial transport. CVPR, 2023.
[2] Bonneel, N., & Coeurjolly, D. Spot: sliced partial optimal transport. ACM TOG, 2019.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their thorough response. I would like to follow up on a few of my earlier comments:

Comment 1: My concern remains unresolved. My objection is that UltraTWD does not compute the exact Wasserstein distance. Therefore, comparing its running time with that of an exact computation is not particularly informative. A more appropriate comparison would be between UltraTWD and an approximation method such as Sinkhorn. UltraTWD takes $O(N^2 n^2)$ or $O(N^2 n^3)$ time under the assumption that the distributions have disjoint supports, while the Sinkhorn algorithm also runs in $O(N^2 n^2)$ time.
Comment 2: Could you please revisit the comparison of running times between Sinkhorn (with smaller $\lambda$ values and a higher number of iterations) and your method? I believe Sinkhorn can remain fast even with significantly more than 100 iterations, and the regularization parameter should not drastically affect the running time. The fact that Sinkhorn yields higher error than Quadtree in your experiments strongly suggests that the parameters may not have been set correctly.

Comment 5: It seems the problem formulation or the chosen parameters in your experiments might need adjustment, as an accuracy of 8% is unexpectedly low. For example, you could consider sampling from the bag-of-words distributions and computing the sliced Wasserstein distance directly, rather than using a partial version.

---

Reply to Comment 1.1.1: Comment: **Response 1.** Thank you for the follow-up, but we respectfully disagree with the premise of the objection. The family of tree-Wasserstein distances is NOT designed for arbitrary distributions with disjoint supports. Instead, it targets scenarios with a **fixed total support** (e.g., a shared vocabulary of $n_\text{all}$ word embeddings). Thus, **a single tree is learned once over the shared support and reused across all distribution pairs**. This enables each $W_T(\mu, \nu)$ to be computed in $O(n_\text{all})$ time. By contrast, Sinkhorn handles disjoint supports but incurs $O(n^2)$ cost per pair.

We clarify this with a new experiment: $N=100$ distributions, each with $n$ random words sampled from a fixed vocabulary ($n_\text{all}=1000$). We compare the total runtime to compute $W \in \mathbb{R}^{N\times N}$:

- **Sinkhorn:** $O(N^2 \cdot n^2)$
- **UltraTWD-IP/GD:** $O(N^2 \cdot n_\text{all} + n_\text{all}^3)$ or $O(N^2 \cdot n_\text{all} + n_\text{all}^2)$. A single tree is learned once, enabling linear-time computation.
- **UltraTWD-IP/GD-pairwise:** $O(N^2 \cdot (n^3+n))$ or $O(N^2 \cdot (n^2+n))$. A separate tree is learned for each distribution pair; **however, this setup is not the intended use case of TWD.**

**Table R4. Total time (second) comparison.**

|Support size $n$|100|200|500|800|
-|-|-|-|-
Sinkhorn ($\lambda=0.01$, 300 iterations)|54|79|217|492
UltraTWD-IP (learns a tree in 6s)|**8**|**10**|**14**|**17**
UltraTWD-GD (learns a tree in 1s)|**4**|**6**|**10**|**13**
UltraTWD-IP-pairwise|299|1657|12277|25180
UltraTWD-GD-pairwise|562|1261|3518|5212

**Conclusion:**

- UltraTWD is highly efficient **when a fixed support is available**, computing all pairwise distances in **linear time** per pair.
- Sinkhorn is more flexible for **arbitrary supports**, but incurs significantly higher cost when the support is fixed.
- We acknowledge that UltraTWD is not suitable for fully disjoint supports—but that is **not its design goal.**

---

**Response 2.** As suggested, we tested the Sinkhorn algorithm with smaller $\lambda$ and more iterations. By definition, as $\lambda \to 0$, the Sinkhorn distance converges to the exact $W_1$, at substantial computational cost. As shown in Table R5, with $\lambda=0.01$ and 300 iterations, Sinkhorn achieves lower RE-W, but its runtime becomes almost twice as long as that of UltraTWD. This cost will scale poorly as the dataset size grows. **UltraTWD is specifically designed for large-scale computation with shared supports**, where it offers:

- **Faster runtime**, especially when the number of distributions $N$ is large.
- **Competitive accuracy** with orders-of-magnitude smaller complexity.

We will revise the experimental section to clarify the trade-offs and ensure the Sinkhorn baseline is treated more rigorously.

**Table R5.
Updated comparison on the Recipe dataset.**

|Metric|RE-W|SVM Accuracy|Total Time (hour)|
-|-|-|-
Sinkhorn ($\lambda=1$, 100 iterations)|0.244|0.381|0.1
Sinkhorn ($\lambda=0.01$, 300 iterations)|0.002|0.498|2.2
UltraTWD-IP|0.026|0.495|1.2
UltraTWD-GD|0.023|0.495|1.1

---

**Response 5.** As suggested, we re-evaluated sliced Wasserstein variants. For each document, we sampled 100 points from its bag-of-words distribution. We then followed the official implementation of [1] to test SW [2], MaxSW [3], and KSW [4] using default settings (e.g., 1,000 random projections). It is important to note that **sliced Wasserstein distances are designed for computational efficiency—not for accurately approximating $W_1$**. As a result, they produce large errors and low retrieval performance, as confirmed in Table R6. Although SW and KSW achieve reasonable SVM accuracy (~0.80), **UltraTWD consistently outperforms them by over 3%**. This is because UltraTWD emphasizes structure-aware accuracy, not just speed. It better captures the true Wasserstein geometry, making it more suitable for high-precision tasks.

**Table R6. Performance comparison on the BBCSport dataset.**

|Metric|RE-W$\downarrow$|Precision$\uparrow$|MRR$\uparrow$|ACC$\uparrow$|Total Time (min)
-|-|-|-|-|-
SW|0.567|0.466|0.421|0.800|10
MaxSW|21.79|0.163|0.123|0.711|**4**
KSW|0.560|0.469|0.433|0.809|33
UltraTWD-GD|0.016|0.868|0.921|0.838|28
UltraTWD-IP|**0.014**|**0.885**|**0.924**|**0.839**|9

[1] Markovian sliced Wasserstein distances: Beyond independent projections. NeurIPS, 2023.
[2] Sliced and radon wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 2015.
[3] Max-sliced wasserstein distance and its use for gans. CVPR, 2019.
[4] Orthogonal estimation of Wasserstein distances. AISTATS, 2019.

---

**We’ve provided a thorough comparison and will revise accordingly.
We hope the reviewer can focus on the core contribution—*tree-Wasserstein distance*—where we demonstrate clear strengths in unsupervised learning, joint optimization, and the balance between accuracy and efficiency. If you find it valuable, a higher score would mean a lot. Thank you for your feedback.**
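For concreteness, the linear-time evaluation of $W_T$ that underlies the runtimes in this thread uses the standard closed form $W_T(\mu, \nu) = \sum_{e} w_e \, |\mu(\Gamma_e) - \nu(\Gamma_e)|$, where $\Gamma_e$ is the subtree below edge $e$. A minimal sketch on a fixed tree (a generic illustration with our own naming conventions, not the authors' code):

```python
import numpy as np

def tree_wasserstein(parent, weight, mu, nu):
    """W_T(mu, nu) = sum over edges e of w_e * |mu(subtree_e) - nu(subtree_e)|.

    Nodes are 0..n-1 with parent[i] < i (node 0 is the root, parent[0] = -1);
    weight[i] is the length of the edge (i, parent[i]); mu and nu are mass
    vectors over all n nodes.  Runs in O(n) per distribution pair.
    """
    diff = np.asarray(mu, dtype=float) - np.asarray(nu, dtype=float)
    total = 0.0
    for i in range(len(parent) - 1, 0, -1):  # visit children before parents
        total += weight[i] * abs(diff[i])    # mass crossing edge (i, parent[i])
        diff[parent[i]] += diff[i]           # accumulate subtree mass upward
    return total

# Path 0-1-2-3 with unit edge weights: moving all mass from node 0 to
# node 3 crosses three unit edges, so W_T = 3.
parent, weight = [-1, 0, 1, 2], [0.0, 1.0, 1.0, 1.0]
```

Once the tree is fixed, this single pass is all that is needed per pair, which is where the $O(n)$ per-pair cost cited in the rebuttal comes from.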
Summary:

1. Wasserstein distance has been applied to many tasks. This paper mainly focuses on how to learn an optimal tree-Wasserstein distance, which is not limited to a specific task.
2. The primary motivation of this paper is to address the suboptimal tree structures and inadequately tuned edge weights in the traditional tree-Wasserstein distance. The proposed new framework (UltraTWD) is the first unsupervised framework to simultaneously optimize both tree structure and edge weights by leveraging the ultrametric property.
3. The authors formulate ultrametric nearness problems to optimize trees equipped with the nearest ultrametric to a cost matrix and propose efficient algorithms to address them. These algorithms are based on minimum spanning trees, gradient descent, and an accurate method using iterative projection, all of which deliver high-quality solutions.
4. The proposed new framework achieves the lowest estimation errors compared to both unsupervised methods and the state-of-the-art supervised method across four benchmark datasets. Additionally, it demonstrates exceptional performance in document retrieval, ranking, and classification, showcasing its practicality for Wasserstein-distance-based applications.

Claims And Evidence: Yes. The claims in the submission are supported by clear and convincing evidence, such as the complexity analysis and experiments.

Methods And Evaluation Criteria: I think that the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand.

Theoretical Claims: All the authors' theoretical analysis comes from other references, so I think there is no need to check the correctness of any proofs for theoretical claims.

Experimental Designs Or Analyses: The authors' experiments are relatively simple, and I don't think there are any significant problems in these experiments.
Supplementary Material: The supplementary material mainly involves the code of the article, which I have not verified yet.

Relation To Broader Scientific Literature: Compared with other related literature, the key contribution of this paper is to improve the accuracy of traditional methods. I think this paper has strong applicability and may be applied to other fields of natural science, such as image analysis and pattern recognition.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
1. The motivation of this paper is clear. The authors mainly address the limitations of suboptimal tree structures and inadequately tuned edge weights in traditional methods. The studied problem is meaningful.
2. This paper is well organized, logically clear, and written fluently, making it easier for readers to understand.
3. In this paper, the authors address a key problem: how to bridge the gap between $W_T$ and $W_1$. A series of algorithms is proposed to solve this problem. The authors calculated the complexity of their method and showed that it has lower complexity. Combining minimum spanning trees, iterative projection, and gradient descent provides a robust and innovative approach to optimizing tree structures and edge weights.
4. The current experiments are OK.

Weaknesses:
1. The rationale for the core idea is not fully explained. The authors formulated the tree-metric nearness problem to closely approximate $D$. Specifically, the authors use the Wasserstein distance as a constraint to control this tree metric. This idea is good. The authors aim to make the TWD distance as close as possible to a certain distance to improve its accuracy. There is a problem here: the authors use approximation strategies throughout the paper but do not analyze the difference between this approximation and the real Wasserstein distance. Is it the case that the closer your distance is to the Wasserstein distance, the better the performance of your new distance will be?
If so, it stands to reason that the traditional distance performs better than your method. If not, what is the significance of this approximation? Your experiments have verified that your distance is better than the traditional Wasserstein distance. How do you explain this?
2. Theoretical analysis is not sufficient. 1) Although the authors provide some algorithms to solve the proposed problem, the paper does not provide strong theoretical guarantees on the convergence or optimality of the proposed algorithms. A more rigorous theoretical analysis would enhance the credibility of the framework. 2) How close the proposed distance is to the traditional distance has not been analyzed theoretically. For example, there is no theoretical analysis of the error between your new method and the Wasserstein distance. 3) The generalization analysis of the proposed new method is also lacking.

Other Comments Or Suggestions: No

Questions For Authors: See Weaknesses

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful review, and we sincerely appreciate your recognition of the strengths of our work, including the clear motivation, innovative algorithmic design, and strong applicability. We will revise the paper accordingly.

---

**Comment 1.** *Is the closer your distance is to the Wasserstein distance, the better the performance of your new distance will be. If so, it stands to reason that the traditional distance effect is better than your method. If not, what is the significance of this approximation? Your experiments have verified that your distance is better than the traditional Wasserstein distance. How to explain this?*

**Response 1.** Thank you for raising this important point. We apologize for any confusion and would like to clarify: in general, the closer a distance is to the true Wasserstein distance, the better it preserves the underlying geometric structure, which is beneficial for downstream tasks. Our method aims to approximate $W_1$ more accurately than existing tree-based approaches, and it outperforms other approximations. However, **it still performs slightly worse than the true Wasserstein distance**. In the main text, we did not include comparisons with the true $W_1$ due to computational cost, but we have now conducted these experiments. As shown in Table R1, the true Wasserstein distance achieves the highest classification accuracy, while our method ranks second. We will include these results in the revised version.

**Table R1.
Comparison of document classification accuracy.**

|Dataset|BBCSport|Reuters|Ohsumed|Recipe|
-|-|-|-|-
Best baseline|0.826|0.902|0.423|0.495
Our distance|0.839|0.905|0.428|0.495
Traditional distance|**0.874**|**0.919**|**0.498**|**0.499**

---

**Comment 2.** *Although the authors provide some algorithms to solve the proposed problem, the paper does not provide strong theoretical guarantees on the convergence or optimality of the proposed algorithms.*

**Response 2.** Theoretical convergence guarantees are extremely challenging to establish due to the non-convex and NP-hard nature of the problem. Similar to our work, prior studies [1, 2] employing gradient-based approaches also acknowledge the absence of such guarantees. For instance, [1] states: "*While we provide no theoretical guarantee to find the global optimum...*" Despite this, our methods demonstrate **empirical convergence**, as shown in Figure 4. Moreover, they are grounded in well-established principles: projection theory for UltraTWD-IP and gradient descent for UltraTWD-GD, both of which exhibit robust performance across multiple datasets.

[1] Chierchia, G. and Perret, B. Ultrametric fitting by gradient descent. NeurIPS, 2019.
[2] Chen, S., Tabaghi, P., and Wang, Y. Learning ultrametric trees for optimal transport regression. AAAI, 2024.

---

**Comment 3.** *How close the proposed distance is to the traditional distance has not been analyzed theoretically.*

**Response 3.** Thank you for pointing this out. Our analysis focuses on empirical approximation quality, reported via the relative error metrics RE-D and RE-W (Table 3). Theoretically, we now clarify that the approximation error $|W_T(\mu,\nu)-W_1(\mu,\nu)|$ can be bounded by the difference between the cost matrices.
Specifically, we have:
\begin{equation} |W_T(\mu, \nu) - W_1(\mu, \nu)| \le 2||D||\_\infty + ||D_T - D||\_\infty \le 2||D||\_\infty + ||D_T - D||\_F \end{equation}
This result shows that the closer our learned ultrametric $D_T$ is to the cost matrix $D$, the closer the corresponding tree-Wasserstein distance is to the 1-Wasserstein distance. Since our optimization directly minimizes $||D_T-D||_\infty$ or $||D_T - D||_F^2$, the bound ensures that $W_T$ serves as a meaningful and controlled approximation to $W_1$. We will include this analysis in the revised version.

---

**Comment 4.** *The generalization analysis of the proposed new method is also lacking.*

**Response 4.** As an unsupervised method, UltraTWD does not rely on labeled training data and is inherently designed to generalize across domains without retraining. **We provide an empirical generalization analysis in Appendix C.4 (Table 7, page 16).** In this analysis, we build the tree using the BBCSport vocabulary and evaluate its generalization by testing on 100 randomly generated distribution pairs with varying sparsity levels, where each distribution contains $n$ valid words (sparsity = 1 - # valid words / # total words). As shown in Table R2, both UltraTWD-GD and UltraTWD-IP consistently achieve low approximation errors (measured by RE-W), demonstrating strong generalization across different sparsity levels and values of $n$.

**Table R2. Approximation error (RE-W) under different test sparsity.**

|# valid words ($n$)|1000|2000|3000|4000|5000|
-|-|-|-|-|-
Sparsity|17%|33%|50%|66%|83%
Best baseline|0.116|0.150|0.167|0.176|0.176
UltraTWD-GD|0.088|0.111|0.122|0.127|0.125
UltraTWD-IP|**0.065**|**0.073**|**0.076**|**0.076**|**0.072**
Summary: The paper proposes a method to compute an ultrametric tree-Wasserstein distance. The method is based on minimizing a distance over trees satisfying certain ultrametric conditions. Algorithm 2 is proposed to find the solution via projections, while Algorithm 3 tries to reduce the computation by avoiding the pairwise distances between leaves and instead working with the node heights. The second-to-last section is devoted to testing the proposed method on several datasets. ## update after rebuttal: I keep my score, since the authors mostly explained the difficulties of the problem. Claims And Evidence: A new framework for tree-Wasserstein distance exploits the ultrametric property. The problem is formulated as an ultrametric optimization problem, and algorithms are proposed to solve it. Empirical evidence is shown to demonstrate the effectiveness of the methods. No guarantee for the convergence of the proposed algorithms. Methods And Evaluation Criteria: The proposed methods sound reasonable. The metrics are fine. Theoretical Claims: All theoretical results are standard in the theory of tree-Wasserstein distances. The authors do not propose any new theoretical result. Algorithm 2 would be costly because the number of projections is too large. For Algorithm 3, is $D_T$ theoretically determined uniquely by $H_T$, since the number of entries reduces significantly? Experimental Designs Or Analyses: According to Table 2, UltraTWD-IP and UltraTWD-GD are competitive with each other; neither dominates the other on the RE-W and Precision metrics. Is there any explanation for that, since UltraTWD-IP appears to be more natural and takes longer to optimize? In the same table, UltraTree also produces quite competitive results; does this mean that the proposed method does not really improve the overall performance significantly enough? Supplementary Material: No Relation To Broader Scientific Literature: Yes, I think it is related to other topics involving tree data. 
Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: Why in Algorithm 2, line 3: the weights are set to be $\frac{1}{t+1}$ and $\frac{t}{t+1}$ in which $t$ is the $t$th iteration? Questions For Authors: Please read the above comments Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We will modify it accordingly. --- **Comment 1.** *No guarantee for the convergence of the proposed algorithms.* **Response 1.** Theoretical convergence is extremely hard to guarantee due to the non-convex and NP-hard nature of the problem. Prior work [1, 2] using similar gradient-based methods also lacks convergence guarantees. As [1] states: "*While we provide no theoretical guarantee to find the global optimum...*". Despite this, our methods show **empirical convergence** (Figure 4), and are built on solid principles: projection theory for UltraTWD-IP and gradient descent for UltraTWD-GD, both demonstrating strong performance across datasets. [1] Chierchia, G. and Perret, B. Ultrametric fitting by gradient descent. NeurIPS, 2019. [2] Chen, S., Tabaghi, P., and Wang, Y. Learning ultrametric trees for optimal transport regression. AAAI, 2024. --- **Comment 2.** *Algorithm 2 would be costly because the number of projections is too large.* **Response 2.** Although Algorithm 2 involves $O(n^3)$ projections, each is a **closed-form, constant-time operation**, and we use **only one iteration** in practice (Section 3.4). This keeps it practical even for moderately large datasets. For better scalability, UltraTWD-GD (Algorithm 3) has lower $O(n^2)$ complexity per iteration and offers a strong trade-off between speed and accuracy (Table 4, Section 4.4). --- **Comment 3.** *For the algorithm 3, theoretically is the $D_T$ determined uniquely by $H_T$? Since the number of entries reduce significantly?* **Response 3.** Yes. Given the rooted tree structure $T$, the node height vector $H_T \in \mathbb{R}^{2n-1}$ uniquely determines $D_T$, where $d_T(i, j) = h(\text{LCA}(l_i, l_j))$ is the height of the least common ancestor (LCA) of leaves $l_i$ and $l_j$ (Equation 8). Since the LCA structure is fixed by $T$, $H_T$ fully specifies $D_T$. 
This reduces the parameter space from $O(n^2)$ in $D_T$ to $O(n)$ in $H_T$, and allows efficient optimization over $H_T$ in Algorithm 3.

---

**Comment 4.** *According to Table 2, UltraTWD-IP and GD are competitive with each other; neither dominates the other on the RE-W and Precision metrics. Is there any explanation for that, since UltraTWD-IP appears to be more natural and takes longer to optimize?*

**Response 4.** UltraTWD-IP and GD solve the **same problem** but use different strategies: UltraTWD-IP uses local projections to enforce ultrametric constraints, while UltraTWD-GD applies gradient descent on node heights. Thus, their results may vary slightly: UltraTWD-IP tends to yield slightly better downstream performance due to precise constraint enforcement, while UltraTWD-GD runs significantly faster. **This trade-off highlights the flexibility of our framework—both methods consistently outperform baselines.**

---

**Comment 5.** *UltraTree also produces quite competitive results; does this mean that the proposed method does not really improve the overall performance significantly enough?*

**Response 5.** While UltraTree is a strong supervised baseline, our UltraTWD methods show clear advantages in performance, generalizability, and efficiency.

- **Performance gap:** UltraTWD-IP and GD consistently outperform UltraTree across tasks. In document retrieval (Precision), UltraTWD-IP significantly improves over UltraTree by **4%–5%** on all datasets (see Table R1).

**Table R1. Precision comparison of document retrieval.**

|Dataset|BBCSport|Reuters|Ohsumed|Recipe|
|-|-|-|-|-|
|Best Unsupervised|0.863|0.849|0.742|0.831|
|UltraTree|0.842|0.834|0.749|0.830|
|UltraTWD-GD|0.868|0.860|0.776|0.848|
|UltraTWD-IP|**0.885**|**0.876**|**0.788**|**0.866**|
|Improvement $\uparrow$|+5.1%|+5.0%|+5.2%|+4.3%|

- **Limitations of UltraTree:** UltraTree requires training data with precomputed Wasserstein distances, which is costly and dataset-specific. 
Its performance also drops when training data sparsity differs from the test set (Table 3, Section 4.2).

- **Advantages of UltraTWD:** Our methods are fully **unsupervised**, robust to sparsity, and optimize $||D_T - D||_F^2$ directly. UltraTWD-GD achieves better performance while being **up to 36$\times$ faster** than UltraTree in tree learning time (Table 4, Section 4.4), making it much more scalable.

**In summary, UltraTWD methods are more accurate, more efficient, and more broadly applicable than UltraTree.**

---

**Comment 6.** *Why in Algorithm 2 are the weights set to $\frac{1}{t+1}$ and $\frac{t}{t+1}$, where $t$ is the iteration index?*

**Response 6.** The weight update in Algorithm 2 follows the HLWB projection scheme (Theorem 6, Section 3.4), where $\sigma_t = \frac{1}{t+1}$ forms a **steering sequence**. This gradually reduces the influence of the initial matrix $D$ while stabilizing the updates ($\sigma_t \to 0$ as $t \to \infty$). While simple, this choice ensures convergence in convex settings and shows stable behavior empirically in our non-convex case (Figure 4, Section 4.4).
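The HLWB steering update described in Response 6 can be illustrated on a toy convex feasibility problem (two half-planes, an illustrative assumption, not the paper's ultrametric constraints): the iterate is anchored to the start point with weight $\frac{1}{t+1}$ and to the projected point with weight $\frac{t}{t+1}$:

```python
import numpy as np

# HLWB-style steered projections on a toy problem: project x0 onto C1 ∩ C2 via
#   x_{t+1} = (1/(t+1)) * x0 + (t/(t+1)) * P_C2(P_C1(x_t)).
def P1(x):  # projection onto C1 = {x : x[0] >= 1}
    return np.array([max(x[0], 1.0), x[1]])

def P2(x):  # projection onto C2 = {x : x[1] >= 1}
    return np.array([x[0], max(x[1], 1.0)])

x0 = np.zeros(2)
x = x0.copy()
for t in range(1, 5000):
    x = x0 / (t + 1) + t / (t + 1) * P2(P1(x))

# The steering weight 1/(t+1) -> 0, so the anchor's influence fades and the
# iterates approach the projection of x0 onto the intersection, here (1, 1).
assert np.allclose(x, [1.0, 1.0], atol=1e-2)
```

The anchoring keeps early iterates close to the initial point while the decaying weight lets them settle into the feasible set, mirroring the "gradually reduces the influence of the initial matrix $D$" behavior described above.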
FeatSharp: Your Vision Model Features, Sharper
Accept (poster)
Summary: This paper introduces a novel upsampling method designed to address the low-resolution feature map limitations of vision encoders, particularly Vision Transformers. The proposed approach builds upon FeatUp, the current state-of-the-art upsampler, by integrating FeatUp’s Joint Bilateral Upsampling (JBU) with a mosaic of tiles, followed by processing through a single local attention block. The authors demonstrate the effectiveness of this method across various dense prediction tasks, including semantic segmentation, object detection, depth estimation, and surface normal prediction. Additionally, the study highlights how incorporating FeatSharp within RADIO training enables low-resolution-only teacher models to generate high-resolution distillation targets, further enhancing model performance. Claims And Evidence: The claims made are supported by evidence. Methods And Evaluation Criteria: Yes the proposed methods/evaluation criteria make sense. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: See strength and weakness. Supplementary Material: Yes, I review all the supplementary material. Relation To Broader Scientific Literature: Related to vision encoder/model distillation. Essential References Not Discussed: See strength and weakness. Other Strengths And Weaknesses: Strengths: 1. Proposes a comprehensive framework for enhancing upsampling in vision encoder feature maps by effectively integrating multiple techniques, including FeatUp’s Joint Bilateral Upsampling (JBU), de-biasing, and local attention. 2. Demonstrates strong experimental results across a diverse range of vision tasks, including semantic segmentation, object detection, depth estimation, and surface normal prediction. Additionally, the study validates the effectiveness of incorporating FeatSharp into the training framework. 3. 
Provides a detailed cost analysis, showing that FeatSharp introduces only a minimal computational overhead compared to FeatUp, as evidenced by time-per-token evaluations. Weaknesses: 1. Limited Novelty: The proposed method primarily builds upon FeatUp, incorporating well-established concepts such as tiling for handling high-resolution images and a simple learnable buffer for de-biasing. The integration of FeatUp, tiling, and attention layers is straightforward, and given these components, the performance improvement over FeatUp is unsurprising. The paper does not introduce fundamentally new techniques or discoveries. 2. Insufficient Baseline Comparisons: The work only compares against FeatUp, assuming it to be the sole relevant competitor for feature map upsampling. This narrow comparison overlooks other state-of-the-art methods, limiting the credibility of the results. A broader evaluation against multiple competitive approaches is necessary to justify the effectiveness of FeatSharp. 3. Limited Generalization and Marginal Gains: The paper evaluates FeatSharp exclusively within the RADIO model as the teacher, without testing its applicability to other architectures. This raises concerns about its generalizability. Moreover, the reported improvement over FeatUp is only 0.39%, which is almost negligible, calling into question the practical significance of the proposed enhancements. Other Comments Or Suggestions: I recommend that the authors compare their method with the following state-of-the-art articles. [1] Yue, Y., Das, A., Engelmann, F., Tang, S., Lenssen, J.E. (2025). Improving 2D Feature Representations by 3D-Aware Fine-Tuning. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. [2] Zhou et. al. A Refreshed Similarity-based Upsampler for Direct High-Ratio Feature Upsampling. In: arXiv July 2024. https://arxiv.org/abs/2407.02283. Questions For Authors: See Strengths and Weakness. 
Ethical Review Concerns: No concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review. > Limited Novelty: The proposed method primarily builds upon FeatUp, incorporating well-established concepts such as tiling for handling high-resolution images and a simple learnable buffer for de-biasing. The integration of FeatUp, tiling, and attention layers is straightforward, and given these components, the performance improvement over FeatUp is unsurprising. The paper does not introduce fundamentally new techniques or discoveries. We disagree that pulling together disparate concepts that are seen in other domains of computer vision itself lends to limited novelty. Tiling is the best example of this. While it is used as a method to work around the inflexibility of most ViTs within the context of VLMs, or implicitly when doing things like sliding-window segmentation, the major insight in our work is that the tiles provide fine-grained guidance to the upsampler. This type of guidance is not present in other upsampler works, as they either rely on the raw pixels (which lack semantic meaning), or they rely on the encoder hierarchy (which isn't applicable to ViTs). ReSFU [2], for example, uses raw pixel guidance. The other critical point about upsampling has to do with the fact that small features are irrecoverable from a combination of raw-pixel + low-res featurizer. Methods like FeatUp's implicit model (>1000 views), and FeatSharp with tiling, however, are able to recover this detail because they're observing the small features. We can see this effect in Figure 13, SigLIP section, left column. These single input visualizations show what happens when you only get a single input source (plus raw pixels). We otherwise use the same model, including the single transformer block. Only the tiles observe enough detail to separate the text lines. 
For the RADIO section, relying on the JBU stack from FeatUp entirely misses the street lamp, blurring it into the background, and bilinear is unable to recover the latticing. Further, as part of our [rebuttal to oKYc](https://openreview.net/forum?id=lioemOcq3H&noteId=ZiLr91BWoe) we further evaluate our upsampler in an object detection setting, with additional comparisons with SAPA and ReSFU, and find that FeatSharp consistently does better. Particularly, we make the largest gains on small objects, directly providing evidence that tiles allow for the introduction of fine-grained detail. 
Further, the RADIO training setting was one of two benchmark evaluations performed in the main body, the other being ADE20k semantic segmentation with six different base featurizers, and in all cases, FeatSharp was shown to be superior. In the appendix, we additionally study upsampling DFN CLIP and RADIO on Probe3d and NYUDv2 benchmarks. For an additional point of clarification, section 4.4 isn’t using an existing RADIO model as a teacher, but rather, we’re using the training protocol, and we’re upsampling DFN CLIP and SigLIP. We further chose SigLIP2-SO400M-512 as an additional featurizer in the rebuttal object detection study to provide even more evidence of broad applicability. > Insufficient Baseline Comparisons: The work only compares against FeatUp, assuming it to be the sole relevant competitor for feature map upsampling. This narrow comparison overlooks other state-of-the-art methods, limiting the credibility of the results. A broader evaluation against multiple competitive approaches is necessary to justify the effectiveness of FeatSharp. Thank you for this feedback. We have included SAPA and ReSFU in the [object detection study in our rebuttal to oKYc](https://openreview.net/forum?id=lioemOcq3H&noteId=ZiLr91BWoe) upon your advice. --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ responses and the additional experiments conducted for SAPA and ReSFU. The improvements in AP small compared to other methods are noteworthy and strengthen the empirical evaluation. However, my concerns regarding the novelty of the approach and the generalization remain. Given these considerations, I am adjusting my score to borderline (weak accept).
Summary: The authors proposed a novel method for efficiently upsampling feature maps of low-resolution ViTs (CLIP) to capture fine-grained details typically lost due to limited resolution. Built upon FeatUp, their method adds de-biasing and tile-fusion modules to incorporate detailed tile features, resulting in higher levels of detail, with extensive experiments demonstrating the effectiveness. Claims And Evidence: They claim FeatSharp can upsample low-resolution feature maps while picking up on fine-grained details, and it is well evidenced by the fidelity plot in Figure 5 and other visualizations of upsampling feature comparisons. Methods And Evaluation Criteria: No. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Throughput analysis and implementation details. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper is easy to follow. 2. FeatSharp can upsample low-resolution feature maps while picking up on fine-grained details, and it is well evidenced by the fidelity plot in Figure 5. 3. The effectiveness is validated through multi-view consistency and semantic segmentation. The performance improvement on ADE20K is convincing. 4. Compared with FeatUp, the PCA visualization of FeatSharp is much closer to Real 4x. Weaknesses: 1. When the number of tokens is larger, the inference cost of FeatSharp is much higher than FeatUp, as shown in Figure 15. Considering its performance improvement evidenced by the fidelity plot and other results, the cost is acceptable, which can be further studied as future work. 2. The paper has some presentation mistakes. One fire sign in Fig. 1 is out of the box, and the colors of the boxes are confusing. The citation in Line 249 is confusing. One captioned data point in Fig. 5 is out of the box. The presentation can be further improved. Other Comments Or Suggestions: Please refer to strengths and weaknesses. 
Questions For Authors: I'm curious about the performance on fine-grained image classification datasets, like CUB-200-2011 [1]. [1] Wah, Catherine, et al. "The caltech-ucsd birds-200-2011 dataset." (2011). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your review. > I'm curious about the performance on fine-grained image classification datasets, like CUB-200-2011 Thank you for your suggestion. Due to time and compute constraints, we were limited to running only one further study comparing upsamplers, and we selected COCO 2017 detection due to its ubiquity in the literature. We have those results in [this rebuttal to reviewer oKYc](https://openreview.net/forum?id=lioemOcq3H&noteId=ZiLr91BWoe). Given the results on COCO, it is plausible that similar effects could be observed with detailed datasets, particularly those with small foreground categories, such as Caltech-UCSD Birds-200-2011, but we are forced to leave that analysis to future work. > The paper has some presentation mistakes. One fire sign in Fig. 1 is out of the box, and the colors of the boxes are confusing. The citation in Line 249 is confusing. One captioned data point in Fig. 5 is out of the box. The presentation can be further improved. Thank you, we will absolutely revise these issues in a prospective camera ready. --- Rebuttal Comment 1.1: Comment: The authors' response has addressed my major concerns, and I'll keep my rating as Accept.
Summary: The paper discusses improving vision model features by refining their sharpness and resolution. It builds on the JBU algorithm to provide more detailed feature maps and studies how to clean features effectively using ViT-Denoiser's methods. The paper also enhances the AM-RADIO framework, achieving better benchmark performance and feature adaptation. Claims And Evidence: Yes, the claims are well supported by their evidence. Methods And Evaluation Criteria: Overall, there are two primary criteria used for evaluating the FeatSharp method. 1. The qualitative results look nice: the upsampled feature maps produced by FeatSharp indeed look "sharper" than the baseline results. 2. However, the quantitative results are not very convincing to me. For example, in the Fig. 7 numerical results on semantic segmentation, why does 3x upsampling generally underperform both 2x and 4x? Intuitively, if the upsampling method is correct, the semantic segmentation task should consistently benefit from the higher-definition feature map. However, the authors observed counterintuitive results but did not provide explanations or analyses. Additionally, in most experiments in the appendix, I also found that FeatSharp does not consistently bring quantitative gains across various visual prediction tasks. These issues raise concerns about the significance and contributions of the methods in this paper: merely looking good in visualizations does not prove the method's effectiveness. The authors need more numerical superiority in the results to demonstrate the specific benefits of the new model for visual tasks. Theoretical Claims: I did not find incorrect theoretical claims in the paper. Experimental Designs Or Analyses: The authors conducted experiments across a variety of visual tasks, which are technically sound. However, as I mentioned in “Methods And Evaluation Criteria”, some results are unsatisfactory and lack explanations and analyses. 
To validate the practical significance of sharp upsampling, I suggest that the authors supplement experiments on vision-language benchmarks, which involve many fine-grained reasoning tasks, such as detecting a small object in a large scene. I believe these tasks could better substantiate the claims of FeatSharp's effectiveness. Supplementary Material: I reviewed the supplementary material. There are additional experimental results helping me understand the method. Relation To Broader Scientific Literature: It has a close relationship to open-vocabulary semantic segmentation tasks, which typically leverage CLIP to perform pixel-level predictions. It might also be a good idea t Essential References Not Discussed: No essential references missing, but I can recommend some papers in open-vocabulary segmentation that might help in evaluating the FeatSharp method: [1] Extract free dense labels from clip (ECCV'22) [2] Clip surgery for better explainability with enhancement in open-vocabulary tasks (arxiv'23) [3] SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference (ECCV'24) [4] Clearclip: Decomposing clip representations for dense vision-language inference (ECCV'24) Other Strengths And Weaknesses: See comments above. Other Comments Or Suggestions: There are some minor issues in the paper which might cause confusion. E.g., what does "2x upsampling" exactly mean? Does it mean scaling up both width and height by 2x, or scaling up the area by 2x? Questions For Authors: No more questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review. > what does "2x upsampling" exactly mean? We increase the width and height by 2x. > However, the quantitative results are not very convincing to me. For example, in the Fig. 7 numerical results on semantic segmentation, why does 3x upsampling generally underperform both 2x and 4x? Intuitively, if the upsampling method is correct, the semantic segmentation task should consistently benefit from the higher-definition feature map. However, the authors observed counterintuitive results but did not provide explanations or analyses. We agree that we're missing the analysis for the 3x case. However, it's not necessarily true that increasing the resolution of the features should improve semantic segmentation benchmarks. Our goal is to produce higher-resolution features that are consistent with the featurizer itself, and that is occasionally at odds with doing well on a particular downstream task. For example, resolution is not the reason that PaliGemma is worse than DINOv2-L, which is worse than RADIOv2.5-L, because they all encode the same number of tokens. Instead, in Figure 7, what we observe is that FeatUp/FeatSharp are cleaning up the noisy models by favoring the view-consistent representations (compared to the baseline horizontal lines). We further observe that FeatSharp consistently does better than FeatUp on the task. An alternative way to look at this is by comparing "Baseline Inpt-1x" against "Baseline Inpt-2x". This gives us a sense of whether it's merely increasing the resolution that yields better segmentation. We see that DINOv2-L and RADIOv2.5-L improve when increasing resolution. However, the noisy models (DFN CLIP, PaliGemma, SigLIP) either see negligible change, or even get a bit worse. We observe a similar effect in Tables 11, 12, and 13, where the optimal choice of upsampler depends on the task and the featurizer itself. 
For DFN CLIP in the NAVI Correspondence task, it appears as though a simple bilateral upsample works the best, with entirely untuned weights, owing to both encoder noise, and also perhaps OOD alignment between the model and task. For table 13, we see that FeatSharp operates similarly to running RADIO at the resolution that matches the number of output tokens, with FeatSharp always being better than the low-res baseline (512px input). > Additionally, in most experiments in the appendix, I also found that FeatSharp does not consistently bring quantitative gains across various visual prediction tasks. These issues raise concerns about the significance and contributions of the methods in this paper: Merely looking good in visualizations does not prove the method's effectiveness. The authors need more numerical superiority in the results to demonstrate the specific benefits of the new model for visual tasks. Thank you for this feedback, as it has motivated us to run another benchmark setting in order to provide further evidence of the efficacy of our method. We have integrated our method, as well as SAPA [3], and ReSFU [4], into Detectron2, and evaluated on COCO 2017. We used Edge [5], which allowed us to create a <featurizer>+<upsampler>+DINO [6] harness. We chose to use RADIOv2.5-L and SigLIP2-SO400M-512 [7] as featurizers, with the latter to further demonstrate versatility. 
### RADIOv2.5-L (512px input)

|Upsampler|Upsample Factor|Fidelity|AP|AP Small|AP Medium|AP Large|
|-|-|-|-|-|-|-|
|Baseline|1||51.38|28.73|56.56|73.72|
|Bilinear|2||51.61|28.43|56.98|74.14|
|SAPA|2|2.81|41.44|15.92|45.08|69.77|
|ReSFU|2|3.69|49.81|26.22|55.37|73.55|
|FeatUp|2|3.71|46.71|21.77|52.01|72.25|
|FeatSharp|2|**5.35**|**54.83**|**34.72**|**59.40**|**74.40**|

### SigLIP2-SO400M-512 (512px input)

|Upsampler|Upsample Factor|Fidelity|AP|AP Small|AP Medium|AP Large|
|-|-|-|-|-|-|-|
|Baseline|1||52.66|30.31|57.94|74.31|
|Bilinear|2||52.69|30.19|57.84|74.16|
|SAPA**|2||||||
|ReSFU|2|1.62|50.84|28.45|56.18|73.69|
|FeatUp|2|1.54|47.42|22.87|53.17|72.80|
|FeatSharp|2|**1.89**|**55.93**|**36.85**|**61.00**|**74.62**|

** *SAPA failed in backprop with a CUDA configuration error, due to the size of the feature map and the channel count of the model.*

While FeatSharp improves the metrics across the board, it makes its largest improvements on AP Small, followed by AP Medium. This makes intuitive sense, as the tiling allows for the detection of small objects that are otherwise missed by any upsampler which doesn't have access to multiple views. FeatSharp provides a holistic method which not only keeps representations consistent as resolution increases, but also allows for the incorporation of new details which are missed by the low-res encoding, which is an important advancement of the literature.

[3] [SAPA](https://arxiv.org/pdf/2209.12866)

[4] [ReSFU](https://arxiv.org/pdf/2407.02283)

[5] [Edge](https://dgcnz.github.io/edge/part2/adapting.html)

[6] [DINO](https://arxiv.org/abs/2203.03605) (*not to be confused with DINOv2*)

[7] [SigLIP2](https://arxiv.org/abs/2502.14786)

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. My concerns are addressed and I raise my score to 3.
Summary: The paper introduces FeatSharp, a method which builds upon FeatUp (specifically its JBU upsampling variant) [1] by incorporating higher-resolution tiled views and combining them with the upsampled feature maps from FeatUp. Additionally, FeatSharp includes a de-biasing module designed to remove fixed-pattern noise from the frozen vision encoder. The authors find that their method outperforms FeatUp-JBU across a number of pretrained vision encoders (e.g., CLIP, DINOv2, SigLIP, SAM, RADIOv2.5) for ADE20K segmentation. Additionally, they utilise their method to improve the training of a pre-existing method, AM-RADIO (a multi-teacher distillation method), across a number of datasets/tasks. [1] Fu, S., Hamilton, M., Brandt, L. E., Feldmann, A., Zhang, Z., and Freeman, W. T. Featup: A model-agnostic framework for features at any resolution. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=GkJiNn2QDF. [2] Ranzinger, M., Heinrich, G., Kautz, J., and Molchanov, P. Am-radio: Agglomerative vision foundation model reduce all domains into one. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12490–12500, June 2024b. Claims And Evidence: The main claim that FeatSharp improves upon FeatUp-JBU is well supported. The fidelity results (Figure 5), qualitative PCA visualizations (Figure 6), and ADE20k semantic segmentation results (Figure 7) all demonstrate improvements across a variety of vision encoders. Additionally, FeatSharp is tested in a multi-teacher distillation/agglomerative model training setup (Section 4.4) on multi-task learning benchmarks (Table 1), where it is also shown to be beneficial. The authors show the FeatSharp-trained RADIO generally performs better than the state-of-the-art RADIO-AMP-L [3]. [3] Heinrich, G., Ranzinger, M., Hongxu, Yin, Lu, Y., Kautz, J., Tao, A., Catanzaro, B., and Molchanov, P. 
Radio amplified: Improved baselines for agglomerative vision foundation models, 2024. URL https://arxiv.org/abs/2412.07679 Methods And Evaluation Criteria: The proposed methods and evaluation are sensible, although only segmentation results on a single dataset (ADE20k) are shown for several of the vision encoders (DINOv2, SigLIP, SAM). More extensive evaluation is done to show the benefit of incorporating FeatSharp within the AM-RADIO method (e.g., classification: ImageNet-1k; zero-shot retrieval: COCO + Flickr30k…), but overall the methods and evaluation are reasonable. Theoretical Claims: Only equation (6), which appears sensible and has experimental results validating it in Appendix E. Experimental Designs Or Analyses: 1. Multi-view consistency / fidelity experiments: This measures the MSE distance between warped-upsampled features and the encoder's low-resolution features. While this ensures consistent alignment, it doesn't guarantee improved semantic detail. One could achieve high fidelity by over-smoothing the features, so these results are generally unconvincing but do provide some additional insight into the remaining experiments. 2. The ablations given in Table 6 rely entirely on the measures of fidelity/smoothness. These ablations would be more insightful if they included results using task-specific performance (e.g., segmentation). Supplementary Material: Yes, sections: A - Radio Results, C - Implementation Details, D - Additional Benchmarks, E - Throughput Analysis Relation To Broader Scientific Literature: The paper relates to prior work in feature upsampling, vision foundation models, and multi-teacher distillation and agglomerative models. It builds directly on FeatUp [1], which introduced a model-agnostic feature upsampling framework using Joint Bilateral Upsampling (JBU) and an implicit upsampler. The authors extended the JBU variant of this work. 
The work is also relevant to Vision Transformers [2] and their resolution limitations due to quadratic complexity. [1] Fu, S., Hamilton, M., Brandt, L. E., Feldmann, A., Zhang, Z., and Freeman, W. T. Featup: A model-agnostic framework for features at any resolution. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=GkJiNn2QDF. [2] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: The paper is generally well-written. The technical contributions of FeatSharp are incremental, but it effectively integrates ideas from prior works and helps to build upon FeatUp and AM-Radio. A major limitation I see is that there are no direct comparisons made with FeatUp using implicit upsampling, which is the better-performing variant of FeatUp. This is understandable to an extent, as it requires training on a per-image basis, which takes ~1 min/sample (Appendix G). If the performance of FeatSharp were shown to be close/comparable to FeatUp with implicit upsampling while greatly reducing training time, this would strengthen the paper for me. The ablations in Table 6 should show the effects of the design choices on some task-specific performance measures (e.g., segmentation accuracy or retrieval scores), but they instead focus on multi-view consistency measures. Other Comments Or Suggestions: None Questions For Authors: My main concern is the lack of comparison against FeatUp (with implicit upsampling). The exclusion of FeatUp's implicit model is understandable given its computational cost (~1 min/sample, Appendix G). 
However, a small empirical comparison—at least in terms of performance and training-time trade-offs—would significantly strengthen the paper's claims. Can you better justify why this is not feasible to do, at least for smaller-scale experiments, and add it to the paper? Can you clarify why RADIO-AMP-L is performing worse than the baseline in Table 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thoughtful review.

> Multi-view consistency / Fidelity Experiments: This measures the mse distance between warped-upsampled features and the encoder’s low-resolution features. While this ensures consistent alignment, it doesn’t guarantee improved semantic detail. One could achieve high fidelity by over-smoothing the features so these results are generally unconvincing but do provide some additional insight to their remaining experiments.

We argue that over-smoothing is actually what's happening with either bilinear upsampling, or even functionally how the JBU-stack is operating. Figure 6 shows how FeatUp 4x (JBU-stack) leads to over-smoothed results, but with edge preservation when there are strong enough edges in RGB space. This smoothness is why we compare FeatSharp against bilinear and FeatUp on fidelity directly in Figure 5, and FeatSharp consistently does better. So it's not that smoothing in the upsampled space doesn't work, but rather that it doesn't work as well as what FeatSharp is doing. Fidelity is telling us how internally consistent the representations are with respect to a particular model, under arbitrary crops and deformations. Over-smoothing would cause issues with fidelity when the crops are small, since nearly all variation will be gone.

> My main concern is the lack of comparison against FeatUp (with implicit upsampling). The exclusion of FeatUp's implicit model is understandable given its computational cost (~1 min/sample, Appendix G). However, a small empirical comparison—at least in terms of performance and training time trade-offs—would significantly strengthen the paper's claims. Can you better justify why this is not feasible to do, at least for smaller scale experiments and add it to the paper?

Upon your feedback, we have looked into this. 
It is ~1 min/sample for only the ViT-S/16 featurizer, but balloons quickly for the larger featurizers, like ~5 min/image for SigLIP2-SO400M-512, or ~4 min/image for RADIOv2.5-L/16 at 512px. However, the official implicit upsampler was also projecting the feature dimension down to 128 using PCA to achieve this speed [1, 2]. For ViT-S/16, it’s about 2x slower to use full features (e.g. 2 min/image). Due to the cost of running this mode, we ran a limited study on 100 images from COCO 2017 validation, and found that the fidelity is 3.17, higher than the 2.42 found with FeatSharp. However, FeatSharp at the same 16x upsample ratio takes 250ms/image (500x faster). These speeds are using a single H100 GPU.

> Can you clarify why RADIO-AMP-L is performing worse than the baseline in Table 1?

We did our best to match the training setting between our reproduction and their claimed results. A key difference seems to come down to the fact that in RADIO-AMP, they bilinear downsample the student to match the teacher in the hi-res-student/low-res-teacher setting, whereas we bilinear upsample the teacher to match the student in our work. We made this change so that we'd have a comparison with a true upsampling baseline.

> The proposed methods and evaluation are sensible, although only segmentation results on a single dataset (ADE20k) are shown for multiple of the vision encoders (DINOv2, SigLIP, SAM). More extensive evaluation is done to show the benefit on incorporating FeatSharp within the AM-Radio method (eg. Classification: imagenet1k - Zero Shot Retrieval: COCO + Flickr30k…) but overall the methods and evaluation are reasonable.

We have also included further benchmarking of our approach on object detection in response to reviewer oKYc below.

[1] https://github.com/mhamilton723/FeatUp/blob/main/featup/train_implicit_upsampler.py#L197
[2] https://github.com/mhamilton723/FeatUp/blob/main/featup/configs/implicit_upsampler.yaml#L26
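For readers unfamiliar with the fidelity metric being debated in this thread: it compares upsampled features, brought back down to the encoder's grid, against the encoder's own low-resolution output. A minimal illustrative sketch with made-up feature maps and average-pool downsampling (not FeatUp's or FeatSharp's actual code):

```python
import numpy as np

def downsample(feat, factor):
    """Average-pool an (H, W, C) feature map by an integer factor
    (a simple stand-in for the downsampling used when comparing
    upsampled features against the encoder's low-res output)."""
    h, w, c = feat.shape
    return feat.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def fidelity_mse(upsampled, low_res):
    """Fidelity as discussed in the review: MSE between the upsampled
    features (re-downsampled to the encoder's grid) and the encoder's
    own low-resolution features. Lower is better."""
    factor = upsampled.shape[0] // low_res.shape[0]
    return float(((downsample(upsampled, factor) - low_res) ** 2).mean())

rng = np.random.default_rng(0)
low_res = rng.normal(size=(16, 16, 8))               # pretend encoder output
nearest = np.repeat(np.repeat(low_res, 4, axis=0), 4, axis=1)  # naive 4x upsample
print(fidelity_mse(nearest, low_res))  # exact by construction -> 0.0
```

As the exchange above notes, a trivially faithful upsample like this adds no semantic detail; the real question is how fidelity behaves under arbitrary crops and deformations, where over-smoothed features lose the remaining variation.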
LBI-FL: Low-Bit Integerized Federated Learning with Temporally Dynamic Bit-Width Allocation
Accept (poster)
Summary: The proposed method quantizes weights, activations, and gradients to lower bit-widths than INT8 to significantly reduce communication and computational costs, and the proposed LBI-FL can reduce communication costs by 8 times compared to full-precision FL. A large number of experiments show that, compared with INT8-precision low-bit training, the proposed LBI-FL reduces the average BitOPs per client by more than 50% with an accuracy loss of less than 2%. Claims And Evidence: yes Methods And Evaluation Criteria: Experiments are conducted on the CIFAR-10 and CIFAR-100 datasets; it is suggested to add experiments on more diverse federated learning scenarios to verify the effectiveness and scalability of the method in different environments. Theoretical Claims: The theory in this paper is proved correct: the convergence of the framework is analyzed and its convergence rate is derived, which is proved to be equivalent to that of the standard FedAvg algorithm. At the same time, the quantization error is analyzed, considering the influence of the noise introduced by gradient quantization on model updating and convergence, and the quantization error is shown not to have a fundamental impact on convergence. Experimental Designs Or Analyses: The experimental design of the paper is basically reasonable; the purpose of the experiments is to verify the effectiveness of the proposed method in reducing communication and computing costs. The CIFAR-10 and CIFAR-100 datasets are selected for verification, and top-1 classification accuracy is adopted. The experiments mainly focus on the image classification task, the datasets are relatively small, and the paper does not compare against some advanced compression methods. 
Supplementary Material: yes Relation To Broader Scientific Literature: The main contributions of this paper lie in three areas: federated learning, model compression, and reinforcement learning. Federated learning is used to protect privacy, but it faces limits on communication and computing resources; therefore, low-bit quantization is applied to federated learning to compress the model and reduce the model size and computing cost. Finally, reinforcement learning is applied to optimize the allocation of bit-widths; reinforcement learning has also opened new avenues in hyperparameter tuning and network architecture search. Essential References Not Discussed: There is prior work combining model compression and federated learning that uses a combination of model pruning and quantization. The method proposed in this paper focuses on applying quantization in federated learning, but compared with those methods combining pruning and quantization, there may be differences in the comprehensiveness and efficiency of model compression, which are not fully discussed in this paper. Other Strengths And Weaknesses: In addition, Figure 1 of the article is not referenced elsewhere in the article. Other Comments Or Suggestions: no Questions For Authors: 1. Can the source code be made public? 2. Low-bit training is well understood; is there any difference from other methods, e.g., in (centralized) deep learning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions; below are our detailed responses to the raised weaknesses and questions.

> **W1: Experiments on different scenarios.**

**Response:** We have made comprehensive evaluations of our LBI-FL, including image classification on CIFAR-10 using diverse architectures (ResNet-18/50/101, MobileNet-V2, and ViT-S), image classification on Tiny-ImageNet using ResNet-18, and image segmentation on DSB2018 using U-Net. In all the evaluations, our LBI-FL achieves acceptable performance (a tolerable 1.5% loss) with over 45% reduction of BitOPs, compared with UI8 training.

| Model | Method | Acc (%) | BitOPs | RR (%) |
| ---- | ---- | ---- | ---- | ---- |
| ResNet50 | FP32 | 85.19 | 258.4G | - |
| | UI4 | 77.90 | 4.04G | 75 |
| | UI8 | 84.93 | 16.15G | 0 |
| | LBI-FL | 83.52 | 8.45G | 47.68 |
| ResNet101 | FP32 | 85.52 | 492.0G | - |
| | UI4 | NaN | 7.69G | 75 |
| | UI8 | 85.05 | 30.75G | 0 |
| | LBI-FL | 83.55 | 16.31G | 46.96 |

> **W2: Experiments on more tasks and datasets and comparison with advanced compression methods.**

**Response:** i) We train U-Net for image segmentation on DSB2018, adopting an image size of 96$\times$96. The table below shows that, compared with UI8 training, we achieve acceptable performance of about a 0.6% loss in Dice similarity coefficient (DSC) with over 50% reduction of BitOPs.

| Dataset | Method | DICE (%) | BitOPs | RR (%) |
| ---- | ---- | ---- | ---- | ---- |
| DSB2018 | FP32 | 89.55 | 19848G | - |
| | UI4 | 87.01 | 310.1G | 75 |
| | UI8 | 89.48 | 1240.5G | 0 |
| | LBI-FL | 88.84 | 604.9G | 51.27 |

ii) We train ResNet-18 for image classification on Tiny-ImageNet. 
The table below shows that our LBI-FL achieves over 50% reduction of BitOPs with less than 0.1% accuracy loss.

| Dataset | Method | Acc (%) | BitOPs | RR (%) |
| ---- | ---- | ---- | ---- | ---- |
| Tiny-ImageNet | FP32 | 35.84 | 457.14G | - |
| | UI4 | 34.64 | 7.14G | 75 |
| | UI8 | 35.21 | 28.57G | 0 |
| | LBI-FL | 35.13 | 14.05G | 50.81 |

iii) We further employ the centralized low-bit training method AMPA [D1] in FL for comparison. AMPA uses layer-wise bit-width allocation based on sensitivity measurement during training. Our LBI-FL with RL agents obtains superior performance with clearly reduced BitOPs in training ResNet-18, MobileNet-V2, and ViT-S on CIFAR-10.

[D1] Li Ding et al. AMPA: Adaptive mixed precision allocation for low-bit integer training. ICML 2024.

| Model | Method | Acc (%) | BitOPs | RR (%) |
| ---- | ---- | ---- | ---- | ---- |
| ResNet-18 | AMPA | 84.10 | 3.49G | 51.11 |
| | LBI-FL | 84.16 | 3.28G | 54.06 |
| ViT-S | AMPA | 72.54 | 73.08G | 38.79 |
| | LBI-FL | 72.55 | 60.3G | 49.51 |
| MobileNet-V2 | AMPA | 87.89 | 9.78G | 46.19 |
| | LBI-FL | 89.02 | 8.83G | 51.39 |

> **W3: Discussion on combining pruning and quantization.**

**Response:** This paper focuses on quantization for low-bit training in FL. Our LBI-FL introduces a temporally dynamic bit-width allocation scheme for weights, activations, and gradients, which evolves along the training trajectory. It achieves higher efficiency than UI8 training with great flexibility in diverse FL scenarios. We will discuss the meaningful topic of combining pruning and quantization, such as [D2], in the final version. However, pruning is more difficult to deploy on hardware than quantization and relies on manually tuned hyperparameters such as pruning ratios and threshold values. [D2] only offers results of LeNet and ResNet-20 on CIFAR-10. We will explore combining pruning with low-bit training (simultaneously quantizing weights, activations, and gradients) in future work.

[D2] Pavana Prakash et al. 
IoT device friendly and communication-efficient federated learning via joint model pruning and quantization. IEEE IoT Journal 2022.

> **W4: Figure 1 not referenced.**

**Response:** Thank you. We will include the reference to Figure 1 in the final version.

> **Q1: Can the source code be made public?**

**Response:** We will release the source code upon acceptance.

> **Q2: Difference with other methods like in deep learning.**

**Response:** While low-bit training has been widely studied in conventional deep learning, its direct application to federated learning (FL) is challenging due to the decentralized and heterogeneous nature of FL. In particular, applying a uniform low-bit strategy across all clients can lead to training instability caused by heterogeneity in data distributions and model dynamics. To address this, we propose a state-aware bit-width adaptation framework based on reinforcement learning. Specifically, we introduce an agent that dynamically allocates bit-widths for each client by considering local states, including the current bit-width, training phase, and quantization-induced loss.

---

Rebuttal Comment 1.1: Comment: Thank you for your improvement.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer KpQQ, We greatly appreciate your constructive suggestions, which have helped us a lot to improve our paper. Thank you again for dedicating your time and effort to reviewing our paper and providing insightful comments. Best regards, The Authors
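For readers unfamiliar with the quantization under discussion, here is a minimal sketch of symmetric uniform quantize-dequantize at the INT4/6/8 bit-widths the agent chooses among. This is an illustrative stand-in, not the paper's actual quantizer (e.g., its clipping strategy may differ):

```python
import numpy as np

def quantize_dequantize(x, bits):
    """Map x onto a signed `bits`-wide integer grid and back.
    The integer tensor `q` is what would be computed with or transmitted;
    the dequantized array exposes the induced error."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax          # assumes x is not all-zero
    q = np.round(x / scale).astype(np.int32)  # values land in [-qmax, qmax]
    return q * scale

rng = np.random.default_rng(1)
grad = rng.normal(scale=0.01, size=10_000)  # stand-in for a gradient tensor
for bits in (4, 6, 8):
    err = float(np.abs(quantize_dequantize(grad, bits) - grad).max())
    print(bits, err)  # worst-case error shrinks as bit-width grows
```

At 4 bits this is the 8x communication reduction relative to FP32 quoted in the reviews (32/4 = 8), at the price of a coarser grid; the RL agent's job is then to pick, per tensor and per training phase, the cheapest bit-width whose error remains tolerable.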
Summary: The paper introduces Low-Bit Integerized Federated Learning (LBI-FL), a framework designed to reduce both communication and computational costs in Federated Learning (FL) by using quantization techniques. Unlike conventional approaches limited to INT8 precision, LBI-FL dynamically adjusts the bit-width of weights, activations, and gradients during training through reinforcement learning. A trained agent optimizes bit-width allocation by considering factors such as current precision, training stage, and quantization loss. The method generalizes well across different network architectures and non-IID datasets. Authors claim that theoretical analysis confirms that gradient quantization maintains the same convergence rate as FedAvg. Experiments show that LBI-FL achieves 8× communication cost reduction and over 50% fewer BitOPs per client with less than 2% accuracy loss compared to INT8-based low-bit training. Claims And Evidence: In this paper, the authors make several claims that may raise concerns. The first statement, found in lines 14-17 on the left side of page 1, reads: *"Existing compression methods for FL cannot simultaneously reduce the up-link and downlink communication cost and mitigate the computation burden on clients."* This claim is problematic, as there is at least one paper—Meinhardt, Georg, et al. (2024), *"Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning"*—that addresses the simultaneous reduction of both communication costs and computational load through client-side pruning. Another concerning claim appears in the contribution section: *"To our best knowledge, this is the first successful attempt to achieve low-bit training FL that evidently reduces the communication overhead and computation cost compared to full-precision and INT8 training."* This assertion should be carefully justified, as there is at least one paper addressing low-bit training in distributed setups, which is not referenced here. 
Therefore, calling it the "first successful attempt" seems unsupported. Mishchenko, Konstantin, et al. "IntSGD: Adaptive floatless compression of stochastic gradients." arXiv preprint arXiv:2102.08374 (2021). A further troubling statement is: *"We demonstrate in theory that federated learning with gradient quantization achieves an equivalent convergence rate to the standard FedAvg algorithm with sufficiently large number of communication rounds and further empirically verify the convergence rate."* This analysis, found in the contribution section, heavily relies on Assumption 4.2, which states that gradient quantization at each client is equivalent to introducing Gaussian noise. This assumption is quite restrictive and not realistic in practical scenarios. In contrast, the analysis of FedAvg does not rely on assumptions about the distribution, making the comparison of convergence rates less valid and the statement potentially misleading. Methods And Evaluation Criteria: The proposed method appears to be well-founded, with the authors employing both theoretical analysis and experimental results to evaluate its effectiveness. Theoretical Claims: I have carefully reviewed the proof presented in the appendix. The analysis relies extensively on Assumption 4.2, which essentially reduces the complexity of the proof, making the analysis overly simplistic and trivial. This reliance on such a restrictive assumption undermines the robustness of the theoretical framework. Specifically, the assumption that gradient quantization is equivalent to introducing Gaussian noise at each client is unrealistic and does not reflect the practical dynamics of Federated Learning. Consequently, the current analysis fails to provide a meaningful or comprehensive theoretical understanding of the method's behavior in real-world scenarios. The overly simplistic nature of the proof, due to this assumption, limits its applicability and generalizability to more complex or realistic settings. 
=========================================== After rebuttal: Authors provided the updated analysis as requested, so I increased the score accordingly. Experimental Designs Or Analyses: The authors evaluate the proposed method using LeNet (LeCun et al., 1998), ResNet-18 (He et al., 2016), MobileNetV2 (Sandler et al., 2018), and ViT-S (Dosovitskiy et al., 2020) on the CIFAR-10/100 dataset. The agent selects the bit-widths for weights, activations, and gradients from INT4, INT6, and INT8 every five epochs, starting from the 10th epoch. For LeNet, the training process consists of 2000 epochs with 100 clients, where 10% of the clients are selected for updates in each epoch. For the larger networks, training is conducted over 200 epochs with 10 clients, all of whom participate in updates at every epoch. The local update epoch is set to 2, and the learning rate follows a decay of 1. I appreciate the detailed experimental evaluation; however, conducting experiments on larger models could further strengthen the study. Quantization has a particularly significant impact on large-scale models, where computational and communication efficiency are critical factors. Evaluating the proposed method on more complex architectures would provide deeper insights into its scalability and effectiveness in real-world scenarios. =========================================== After rebuttal: Authors provided additional results, which is valuable. Supplementary Material: I have reviewed the Supplementary Material, which includes proofs and additional experimental details. Relation To Broader Scientific Literature: The contributions presented in this paper are relevant to the broader Federated Learning literature, particularly in the context of communication compression in Federated Learning. Essential References Not Discussed: As I mentioned earlier, there are several papers that have not been included: Meinhardt, Georg, et al. 
(2024), *"Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning"*. Mishchenko, Konstantin, et al. "IntSGD: Adaptive floatless compression of stochastic gradients." arXiv preprint arXiv:2102.08374 (2021). Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Please review the previous sections. Questions For Authors: Is it possible for you to obtain convergence results without Assumption 4.2, or with a relaxed version of it? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive suggestions.

> **W1: Claims may raise concerns.**

**Response:** Claim i): We use this sentence to emphasize that our LBI-FL can reduce both the uplink and downlink communication overhead with low-bit training. In fact, the paper "Prune at the Clients, Not the Server" cannot reduce the downlink communication using only client-side pruning and has to resort to accelerated server pruning, which could completely fail in accuracy and loss as mentioned in the paper. For rigor, we will change the claim to "existing methods based on quantization" in the final version.

Claim ii): In lines 19-21 and 55-59, "low-bit training" refers to simultaneously quantizing the weights, activations, and gradients. The paper "IntSGD: Adaptive floatless compression of stochastic gradients" develops adaptive integer compression operators for distributed SGD. It only quantizes the gradients and does not quantize weights and activations. To our best knowledge, we are the first to achieve "low-bit training" in the FL scenario.

Claim iii): We have improved the theoretical result by relaxing Assumption 4.2 without affecting the final convergence rate. Instead of assuming the quantization noise to follow a Gaussian distribution with identical variance across all the clients, we now assume only that (a) the gradient quantization noise on each client has a well-defined expectation and variance ($\mu_N^i$ and $(\sigma_N^i)^2$ for the $i$-th client), and (b) the expectation $\mu_N^i$ can be viewed as zero. Based on this assumption, we provide an updated derivation of the theorem. The noise term does not influence the $A_1$ term in Equation (12). 
Neglecting the differences across epochs, we reformulate Equation (13) in the appendix as:

$$
A_2 \le \frac{\eta^2}{m^2} \mathbb{E}\_t\left[\left\|\sum\_{i=1}^m \sum\_{k=0}^{K-1} \mathbf{g}\_{t, k}^i\right\|^2\right] + \frac{\eta^2}{m^2} K \cdot \mathbb{E}\_t \left[ \left\| \sum\_{i=1}^m n_t^i \right\|^2\right]
$$

Considering that the quantization noise across clients can be supposed independent, the expectation in the second term is calculated as:

$$
\mathbb{E}\_t \left[ \left\| \sum\_{i=1}^m n_t^i \right\|^2\right] = \sum\_{i=1}^m \mathbb{E}\_t \left[ \left\| n_t^i \right\|^2\right] + 2\sum\_{i<j} \mathbb{E}\_t\left[n_t^i n_t^j\right] = \sum\_{i=1}^m ((\mu\_N^i)^2 + (\sigma^i\_N)^2) + 2\sum\_{i<j} \mu_N^i \mu_N^j
$$

Therefore, $\Phi$ in Equation (10) changes to

$$
\Phi = \frac{1}{c}\left[\frac{L \eta}{2 m} \left(\sigma_L^2 + \frac{1}{m} \left( \sum\_{i=1}^m ((\mu\_N^i)^2 + (\sigma^i\_N)^2) \right)\right) +\frac{5 K \eta^2 L^2}{2}\left(\sigma_L^2+6 K \sigma_G^2\right) \right]
$$

However, the convergence rate does not change, since the quantization noise expectation is typically zero. Note that the previous convergence result is a special case of this new result.

> **W2: Assumption for theoretical claims.**

**Response:** We perform experiments with ResNet-18 on CIFAR-10 to support assumption (b) in W1 that the expectation $\mu_N^i$ can be viewed as zero. The distribution of quantization noise is shown to concentrate around zero.

> **W3: Evaluations on more complex architectures.**

**Response:** We have evaluated larger-scale ResNet50 and ResNet101 on CIFAR-10. The table below shows that our LBI-FL yields less than 1.5% performance loss with over 45% BitOPs reduction. 
| Model | Method | Acc (%) | BitOPs | RR (%) |
| ---- | ---- | ---- | ---- | ---- |
| ResNet50 | FP32 | 85.19 | 258.4G | - |
| | UI4 | 77.90 | 4.04G | 75 |
| | UI8 | 84.93 | 16.15G | 0 |
| | LBI-FL | 83.52 | 8.45G | 47.68 |
| ResNet101 | FP32 | 85.52 | 492.0G | - |
| | UI4 | NaN | 7.69G | 75 |
| | UI8 | 85.05 | 30.75G | 0 |
| | LBI-FL | 83.55 | 16.31G | 46.96 |

We further evaluate our LBI-FL by training U-Net for image segmentation on DSB2018. The table below shows that, compared with UI8 training, our LBI-FL obtains about a 0.6% loss in Dice similarity coefficient (DSC) with over 50% reduction of BitOPs.

| Dataset | Method | Dice (%) | BitOPs | RR (%) |
| ---- | ---- | ---- | ---- | ---- |
| DSB2018 | FP32 | 89.55 | 19848G | - |
| | UI4 | 87.01 | 310.1G | 75 |
| | UI8 | 89.48 | 1240.5G | 0 |
| | LBI-FL | 88.84 | 604.9G | 51.27 |

> **W4: References Not Discussed.**

**Response:** We carefully read the two papers and will definitely cite them in the final version. However, they differ from our paper in content, as elaborated in W1.

> **Q1: Convergence results without Assumption 4.2 or with a relaxed version.**

**Response:** We have provided a relaxed version of Assumption 4.2. We only assume that the expectation and variance of the distribution of the gradient quantization noise on each client exist and that the expectation can be viewed as zero, as elaborated in W1. The relaxed assumption does not change the convergence rate in the theorem.

---

Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed responses and the additional experiments provided. I appreciate the updated theoretical analysis, so I increase my score accordingly. Best regards, Reviewer

---

Reply to Comment 1.1.1: Comment: Dear Reviewer umsU, We sincerely appreciate the time and effort that you have dedicated to reviewing our paper and providing these insightful comments, which will further help improve the quality of the final version of our manuscript. 
Best regards, The Authors
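The cross-term identity used in the relaxed derivation above, $\mathbb{E}[(\sum_i n_i)^2] = \sum_i (\mu_i^2 + \sigma_i^2) + 2\sum_{i<j} \mu_i \mu_j$ for independent client noises, is easy to sanity-check numerically. The per-client means and standard deviations below are arbitrary illustrative values, not numbers from the paper:

```python
import numpy as np

# Independent per-client noises n_i with chosen means and standard deviations.
rng = np.random.default_rng(0)
mu = np.array([0.1, -0.2, 0.05])
sd = np.array([0.3, 0.5, 0.2])

# Monte Carlo estimate of E[(sum_i n_i)^2].
samples = rng.normal(mu, sd, size=(1_000_000, 3)).sum(axis=1)
empirical = float((samples ** 2).mean())

# Closed form from the rebuttal: sum_i (mu_i^2 + sd_i^2) + 2 * sum_{i<j} mu_i * mu_j.
cross = sum(mu[i] * mu[j] for i in range(3) for j in range(i + 1, 3))
analytic = float((mu ** 2 + sd ** 2).sum() + 2 * cross)

print(round(analytic, 4))             # 0.3825
print(abs(empirical - analytic) < 0.01)
```

When every $\mu_i$ is (close to) zero, the cross terms vanish, which is why the relaxed assumption leaves the convergence rate unchanged.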
Summary: This paper proposes a low-bit integerized federated learning (LBI-FL) framework, which reduces the uplink and downlink communication overhead and mitigates the computation burden on clients, all with a tolerable level of performance loss. Specifically, a reinforcement-learning-based agent, which is trained on a small local dataset, is applied to dynamically determine the bit-widths for weights, activations, and gradients. The authors demonstrate, in theory, that federated learning with gradient quantization can achieve an equivalent convergence rate to the standard FedAvg algorithm with a sufficiently large number of communication rounds. Experimental results show the proposed framework 1) reduces the communication overhead to 1/8 of the full-precision training method; 2) reduces over 50% BitOPs per client on average with less than 2% accuracy loss, compared to INT8 training. Claims And Evidence: The authors verified the validity of the proposed scheme with comprehensive experiments. Methods And Evaluation Criteria: Yes. The authors leverage reinforcement learning to dynamically determine client bit-widths in federated learning, allowing more flexible and reasonable bit-width allocation. Theoretical Claims: Yes. However, the proof of the theorem in this paper is not sufficiently rigorous. Experimental Designs Or Analyses: The authors conducted experiments with the CIFAR-10/100 datasets and models of LeNet, ResNet-18, MobileNet-V2, and ViT-S and compared the communication and computation costs, BitOPs, and accuracy under various bit-widths. They also checked the effects of local update epochs, data distribution, learning rate decay, etc. Supplementary Material: Appendix A proves Theorem 4.3; Appendix B provides more comprehensive results under different data distributions and hyperparameters; and Appendix C shows the detailed bit-width change process during LeNet's training on the CIFAR-10 dataset. 
Relation To Broader Scientific Literature: This paper introduces reinforcement learning methodology into model compression under federated settings. This new approach reduces overhead in federated learning, and the idea could be explored more widely, for example, in personalized federated learning or channel-wise compression strategies. Essential References Not Discussed: One essential reference, which proposed the standard setting of federated learning and the FedAvg framework, should be cited: McMahan, Brendan, et al. "Communication-efficient learning of deep networks from decentralized data." Artificial Intelligence and Statistics. PMLR, 2017. Other Strengths And Weaknesses: Strengths: 1) The core idea, leveraging reinforcement learning to determine the compression strategy in federated learning, is novel, reasonable, and promising. 2) Comprehensive experiments verify the validity of the proposed methodology. Weaknesses: 1) The theoretical proof in the appendix is not sufficiently rigorous, though, as stated, it is similar to one reference. ## Update after rebuttal The authors gave a more rigorous analysis as requested, which makes the theory more general. Therefore, I increased the score. Other Comments Or Suggestions: Typos: 1) In Section 4.2, "Every Thr epochs, the agent on each client ...". 2) In Section 4.3, the part of the Balanced reward function, "Supposng the multiplication of a ..." 3) In Section 4.5, "the theoretical convergence rate is equivalent to the rrate of ..." Questions For Authors: As for the bit-width change process, is it the same case (as LeNet on CIFAR-10) for other models on other datasets? (There are no significant differences among clients, but there is a difference between activations and gradients.) As for the proof, it would be better to provide clear definitions for notations. Also, is it appropriate to regard the quantization error as Gaussian noise under the high heterogeneity of federated learning? Code Of Conduct: Affirmed. 
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions and below are our responses to the weaknesses and questions. > **W1: Proof of the theorem.** **Response:** According to your comment, we have improved the theoretical analysis by relaxing Assumption 4.2 without affecting the final convergence rate. Specifically, instead of assuming the quantization noise to follow a Gaussian distribution with identical variance across all the clients, we now assume only that (a) The gradient quantization noise on each client has a well-defined expectation and variance (denoted as $\mu_N^i$ and $(\sigma_N^i)^2$ for the $i$-th client), and (b) The quantization noise expectation $\mu_N^i$ can be viewed as zero. Based on this assumption, we provide an updated derivation of the theorem. The noise term does not influence the $A_1$ term in Equation (12). Neglecting the differences across epochs, we reformulate Equation (13) in the appendix as: $$ \begin{aligned} A_2 & \le \frac{\eta^2}{m^2} \mathbb{E}\_t\left[\left\|\sum\_{i=1}^m \sum\_{k=0}^{K-1} \mathbf{g}\_{t, k}^i\right\|^2\right] + \frac{\eta^2}{m^2} \mathbb{E}\_t \left[ \left\| \sum_{i=1}^m \sum\_{k=0}^{K-1} n_{t, k}^i \right\|^2\right] \\\\ & = \frac{\eta^2}{m^2} \mathbb{E}\_t\left[\left\|\sum\_{i=1}^m \sum_{k=0}^{K-1} \mathbf{g}\_{t, k}^i\right\|^2\right] + \frac{\eta^2}{m^2} K \cdot \mathbb{E}\_t \left[ \left\| \sum\_{i=1}^m n_t^i \right\|^2\right] \end{aligned} $$ Considering the quantization noise on each client can be supposed independent, the expectation in the second term is calculated as: $$ \begin{aligned} \mathbb{E}\_t \left[ \left\| \sum\_{i=1}^m n_t^i \right\|^2\right] = \sum\_{i=1}^m \mathbb{E}\_t \left[ \left\| n_t^i \right\|^2\right] + 2\sum_{i<j} \mathbb{E}_t\left[n_t^i n_t^j\right] = \sum\_{i=1}^m ((\mu\_N^i)^2 + (\sigma^i\_N)^2) + 2\sum\_{i<j} \mu_N^i \mu_N^j \end{aligned} $$ Therefore, $\Phi$ in Equation (10) changes as $$ \begin{aligned} \Phi =& \frac{1}{c}\left[\frac{L \eta}{2 m} \left(\sigma_L^2 + 
\frac{1}{m} \left( \sum\_{i=1}^m ((\mu\_N^i)^2 + (\sigma^i\_N)^2) \right)\right) +\frac{5 K \eta^2 L^2}{2}\left(\sigma_L^2+6 K \sigma_G^2\right) \right] \end{aligned} $$ However, the convergence rate does not change, since the quantization noise expectation is usually zero. Note that the previous convergence result is a special case of this new result. > **W2: References Not Discussed.** **Response:** Thanks for the reminder. We will include FedAvg for the standard FL setting in the final version. > **W3: Some typos.** **Response:** We will correct these typos in the final version. > **Q1: Bit-width change for other models on other datasets.** **Response:** Yes, there are no significant differences among clients, but there is a difference between activations and gradients for other models on other datasets. In the table below, we provide the bit-width change process for training ViT-S on CIFAR-10 (200 epochs) and LeNet on CIFAR-100 (2000 epochs) as further evidence. The numbers in brackets indicate the epochs of bit-width change.

| Model | Dataset | Client Idx | Activation | Gradient |
| ---- | ---- | ---- | ---- | ---- |
| ViT-S | CIFAR-10 | 2 | (85, 110) | (20, 85) |
| | | 6 | (80, 105) | (20, 80) |
| | | 8 | (80, 105) | (20, 75) |
| LeNet | CIFAR-100 | 20 | (850, 1345) | (170, 825) |
| | | 60 | (830, 1325) | (150, 830) |
| | | 80 | (825, 1355) | (140, 820) |

> **Q2: Regarding quantization error as Gaussian noise.** **Response:** First of all, we emphasize that we no longer assume Gaussian noise for the quantization error to achieve the convergence result. We assume only that the expectation and variance of the noise distribution on each client exist and are well defined, and that the expectation can be viewed as zero. We further perform experiments with ResNet-18 on CIFAR-10 to support this assumption, where the distribution of quantization noise is concentrated around zero. 
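The independence argument above (the cross terms vanish when the per-client noises are independent and zero-mean) can be checked numerically. A minimal Monte-Carlo sketch; the client count, gradient dimension, and noise scales below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, trials = 5, 8, 100_000                 # clients, gradient dim, repetitions
sigmas = rng.uniform(0.5, 2.0, size=m)       # per-client noise std sigma_N^i

# independent, zero-mean per-client quantization noise n^i
noise = rng.normal(0.0, sigmas[:, None, None], size=(m, trials, d))
total = noise.sum(axis=0)                    # sum_i n^i, one sample per trial

empirical = float((total ** 2).sum(axis=1).mean())  # Monte-Carlo E||sum_i n^i||^2
predicted = d * float((sigmas ** 2).sum())          # sum_i (sigma_N^i)^2 per coordinate
```

With zero-mean noise, `empirical` matches `predicted` up to sampling error, consistent with the cross terms $2\sum_{i<j} \mu_N^i \mu_N^j$ dropping out.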
In the previous result, we made the assumption as in [B1]-[B4] where the Gaussian distribution is assumed to find the optimal clipping value to reduce the quantization loss. We will revise the assumption and clarify claims for notations in the final version. [B1] Ron Banner et al. Scalable methods for 8-bit training of neural networks. NeurIPS 2018. [B2] Ruizhou Ding et al. Regularizing activation distribution for training binarized deep networks. CVPR 2019. [B3] Zhezhi He and Deliang Fan. Simultaneously optimizing weight and quantizer of ternary neural network using truncated gaussian approximation. CVPR 2019. [B4] Xishan Zhang et al. Fixed-point back-propagation training. CVPR 2020.
Summary: This paper introduces LBI-FL, a novel framework for low-bit integerized federated learning with a focus on temporally dynamic bit-width allocation. The authors provide both theoretical convergence analysis and empirical validation on multiple FL benchmarks showing reductions in communication and computational cost while maintaining model accuracy. Claims And Evidence: Most of the claims are well supported. The selection of datasets is limited to CIFAR-10 and CIFAR-100, which are both small, and the evaluation lacks fundamentally different tasks. Methods And Evaluation Criteria: The methods and evaluation are appropriate and make sense. The models selected are diversified, and the metrics support the claims. Theoretical Claims: The theoretical contribution of Theorem 4.3, which analyzes convergence of LBI-FL under gradient quantization with a Gaussian noise approximation, looks correct. The convergence rate of $O(1/\sqrt{T})$ makes sense. Experimental Designs Or Analyses: The experimental design is valid. The generalization of the RL policy to unseen FL tasks is not fully evaluated, which limits the claim of broader usage. Supplementary Material: I checked the additional ablation studies on data distributions and hyperparameters. Relation To Broader Scientific Literature: This work is closely related to FL basics, theory, quantization, and RL for optimization. This work extends prior work on low-bit FL by proposing a sub-INT8 training framework that dynamically adjusts bit-width over time using RL, which seems novel to me. Essential References Not Discussed: Layer-wise quantization based methods are mentioned but not used as baselines. Some recent works that focus on energy-latency tradeoffs in FL with quantization can also be discussed here. Other Strengths And Weaknesses: Strengths: The idea of this paper seems novel, and the paper is well written with a strong theoretical contribution. Weakness: The RL part is not fully discussed. 
Other Comments Or Suggestions: Evaluating on more diverse datasets is recommended for this paper. Questions For Authors: How does the RL agent's decision-making overhead compare to the computational savings from low-bit training? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments and below are our detailed responses to the raised weaknesses and questions. > **W1: Experiments on diverse tasks and datasets.** **Response:** We have performed extensive experiments on diverse tasks and datasets to demonstrate the effectiveness of our LBI-FL. We use the RL agent trained for image classification on CIFAR-10 without retraining, which proves the generality of our RL agent. i) Image segmentation with U-Net on DSB2018. We adopt the image size of 96 $\times$ 96. The table below shows that, compared with UI8 training, we achieve acceptable performance of about 0.6\% loss in Dice similarity coefficient (DSC) with over 50\% reduction of BitOPs.

| Dataset | Method | Dice (\%) | BOPS | RR (\%) |
| ---- | ---- | ---- | ---- | ---- |
| DSB2018 | FP32 | 89.55 | 19848G | - |
| | UI4 | 87.01 | 310.1G | 75 |
| | UI8 | 89.48 | 1240.5G | 0 |
| | LBI-FL | 88.84 | 604.9G | 51.27 |

ii) Image classification with ResNet-18 on Tiny-ImageNet. The image size is 64$\times$64. Training on Tiny-ImageNet is much more difficult. The table below shows that our LBI-FL achieves over 50\% reduction of BitOPs with less than 0.1\% accuracy loss.

| Dataset | Method | Acc (\%) | BOPS | RR (\%) |
| ---- | ---- | ---- | ---- | ---- |
| Tiny-ImageNet | FP32 | 35.84 | 457.14G | - |
| | UI4 | 34.64 | 7.14G | 75 |
| | UI8 | 35.21 | 28.57G | 0 |
| | LBI-FL | 35.13 | 14.05G | 50.81 |

> **W2: Baselines using layer-wise quantization.** **Response:** Most layer-wise quantization based methods (**e.g.**, HAWQ v1-v3) are designed for inference only, and cannot be employed for FL. We compare our LBI-FL with AMPA [A1] that achieves layer-wise quantization based on the sensitivity measurement for centralized low-bit training. We train ResNet-18, MobileNet-V2, and ViT-S on CIFAR-10 using the two methods. The results below show that our LBI-FL achieves higher compression ratios and better performance. [A1] Li Ding et al. 
AMPA: Adaptive mixed precision allocation for low-bit integer training. ICML 2024.

| Model | Method | Acc (\%) | BOPS | RR (\%) |
| ---- | ---- | ---- | ---- | ---- |
| ResNet-18 | AMPA | 84.10 | 3.49G | 51.11 |
| | LBI-FL | 84.16 | 3.28G | 54.06 |
| ViT-S | AMPA | 72.54 | 73.08G | 38.79 |
| | LBI-FL | 72.55 | 60.3G | 49.51 |
| MobileNet-V2 | AMPA | 87.89 | 9.78G | 46.19 |
| | LBI-FL | 89.02 | 8.83G | 51.39 |

> **W3: Discussion about works on energy-latency tradeoffs with quantization in FL.** **Response:** Thanks for the suggestion. We will include relevant works [A2][A3] in the Related Work section. Optimizing energy consumption is indeed essential for practical deployment of FL systems. For instance, [A2] derives the time and energy consumption models for FL and proposes an iterative algorithm to allocate resources. [A3] proposes an optimization framework that minimizes the total energy consumption of local computation and wireless transmission by adaptively selecting the quantization level. These methods are complementary to our LBI-FL, which simultaneously reduces computational and communication overheads, and thereby improves energy efficiency with reduced latency in federated learning. [A2] Yang, Zhaohui, et al. "Energy efficient federated learning over wireless communication networks." IEEE TWC 2020. [A3] Marnissi, Ouiame, Hajar El Hammouti, and El Houcine Bergou. "Adaptive sparsification and quantization for enhanced energy efficiency in federated learning." IEEE OJCOMS 2024. > **Q1: Overhead of RL agent's decision-making.** **Response:** The overhead of the RL agent is very small compared to the computational savings from low-bit training. i) The RL agent consists of two linear layers with only 1.92K parameters, and requires 1.97G BitOPs for making one decision (i.e., 1.92K Mac$\times$32$\times$32). ii) The RL agent makes a decision every 5 epochs rather than each epoch. We provide results on CIFAR-10 as examples. 
When training LeNet using 100 clients, each client has 500 training images and the RL agent makes a decision after training on 250 images on average (the participation rate is 0.1). Its inference cost is only 1.1\% of that for UI4 training and 0.275\% of that for UI8 training. When training ResNet-18 using 10 clients, each client has 5000 training images. The inference cost of the RL agent is 0.0044\% and 0.0011\% of those for UI4 and UI8 training, respectively.
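To illustrate why this overhead is negligible: a two-linear-layer policy head is tiny, and it is queried only once every few epochs. A sketch in numpy; the dimensions below are made up for illustration (the rebuttal only states that the real agent has about 1.92K parameters in total):

```python
import numpy as np

class TinyBitwidthPolicy:
    """Two-linear-layer policy head. Dimensions are illustrative, not the paper's."""

    def __init__(self, state_dim=12, hidden=64, n_actions=5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def n_params(self):
        # total parameter count of both linear layers
        return sum(p.size for p in (self.w1, self.b1, self.w2, self.b2))

    def act(self, state):
        h = np.maximum(0.0, state @ self.w1 + self.b1)   # ReLU
        return int(np.argmax(h @ self.w2 + self.b2))     # greedy bit-width index

policy = TinyBitwidthPolicy()
# queried once every 5 epochs, so its cost is amortized over many minibatches
```

A forward pass through such a head costs on the order of its parameter count in multiply-accumulates, which is dwarfed by even a single low-bit training minibatch.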
WILTing Trees: Interpreting the Distance Between MPNN Embeddings
Accept (poster)
Summary: This paper investigates how MPNNs learn to embed graphs in a way that captures functional relationships between them. The authors propose a novel interpretable graph distance, the WILTing distance, which effectively approximates the distances between MPNN embeddings. They demonstrate that MPNNs focus on a small subset of WL colors that are functionally important. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The paper offers a solid theoretical foundation for understanding MPNN embedding spaces through the lens of metric learning. However, it does not fully explain why some MPNNs achieve better functional alignment than others. Experimental Designs Or Analyses: The paper provides comprehensive experiments across multiple datasets and a clear demonstration that the alignment between MPNN distances and functional distances correlates with model performance. However: (1) the authors compare their method only to existing graph kernels and do not compare it against other interpretability methods in GNNs, making it difficult to assess the relative advantages of WILT; (2) the experiments do not address how the approach scales to very large graphs or datasets with many graphs, which is crucial for practical applications. Supplementary Material: Yes Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Q1: I am curious about why the Pearson correlation coefficient is used as a measure of correlation. Is there a linear relationship, or would another method, such as rank correlation, be better for quantifying this relationship? Q2: I wonder whether this metric is effective for investigating the internal embeddings of MPNNs, rather than just the final embeddings, as previous work [1] (also based on the WL kernel) suggests that consistency in the distance relationship is crucial for MPNN performance. 
[1] Liu et al. Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks. NeurIPS. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and valuable feedback, which help us clarify our arguments and consider future directions. Below are our answers to your questions and comments. --- > Why some MPNNs achieve better functional alignment than others Since we focused more on trying out different datasets, we have not conducted in-depth experiments to compare different GNNs. At this point, we share some insights on how hyperparameters affect the functional alignment. We observed that the embedding space of deeper GNNs is more aligned with the functional distance. For example, in Figure 8, deeper models tend to be plotted on the right side of each graph. In future studies, we plan to investigate in more detail what is a key factor contributing to higher functional alignment and thus higher performance. WILT has a nice property that may help in this investigation: The tree structure of WILT is determined only by the dataset and isn't influenced by the GNN. Therefore, different GNNs can be distilled into the same WILT, allowing a direct comparison of the resulting edge weights. > Comparison against other interpretability methods in GNN Most interpretation methods aim to find a subgraph that GNN considers important for a prediction for ONE given graph (instance level). On the other hand, our method tries to explain the GLOBAL behavior of GNN by analyzing $d_\mathrm{MPNN}$. The highlighted subgraphs in Figure 6 do not mean important subgraphs for the four specific graphs. They are the ones that strongly influence $d_\mathrm{MPNN}$ in general. This fundamental difference in purpose makes it difficult to compare our method with existing methods. There are also global explanation methods as mentioned in Section 2, but they use decision trees or logical formulas as a language, which are difficult to compare with the subgraphs extracted by our method. 
> How the approach scales to very large graphs or datasets with many graphs We expect WILT to scale to large graphs or datasets, although we have not conducted actual experiments. The number of WILT colors is bounded by the number of nodes in all graphs times the WILT depth. In practice, however, it is much smaller because nodes with the same neighborhood are assigned the same color. As for the computational time, building the WILT and computing the embeddings require linear time in the total number of edges in the dataset. Once the embeddings are prepared as sparse vectors, the computation of $d_\mathrm{WILT}$ is always linear in the number of nodes of the two graphs involved. Our method is designed for graph-level tasks and cannot be directly applied to node classification, which is a typical task on very large graphs such as social networks. However, we believe that the node embedding space of GNNs can also be distilled to WILT in a similar way, i.e., by tuning the weights to approximate the node embedding distance with a path distance on WILT. > Q1: Why the Pearson correlation coefficient is used as a measure of correlation We used Pearson's correlation coefficient (PCC) because it is commonly used and preferable in terms of visualization (see red lines in Figure 8). However, you are right in that there is no valid reason to expect a linear relationship, so we have reanalyzed the results using Spearman's rank correlation coefficient (SRCC). The result is similar to when PCC is used: the functional alignment is more consistently and highly correlated with performance than the structural alignment. Here is the result for the Mutagenicity dataset. We will include these results in the final pdf. - SRCC between the functional alignment and accuracy (c.f. Table 1)

|| train | test |
|-|-|-|
|k=1|0.66|0.70|
|k=5|0.65|0.69|
|k=10|0.63|0.68|
|k=20|0.61|0.67|

- SRCC between the structural alignment and accuracy (c.f. Table 4)

||train/GED|train/TMD|train/WLOA|train/WWL|test/GED|test/TMD|test/WLOA|test/WWL|
|-|-|-|-|-|-|-|-|-|
|mean|0.33|0.19|-0.09|0.46|0.36|0.36|0.05|0.66|
|sum|0.09|0.25|0.38|0.20|0.10|0.31|0.14|0.09|

> Q2: I wonder whether this metric is effective for investigating the internal embeddings of MPNNs, rather than just the final embeddings, as previous work [1] (also based on the WL kernel) suggests that consistency in the distance relationship is crucial for MPNN performance. In principle, $d_\mathrm{WILT}$ can be trained to approximate the MPNN embedding distance at internal MPNN layers. Our conjecture is that the deeper the layer is, the more the MPNN embedding distance (and thus $d_\mathrm{WILT}$) respects the functional distance. $d_\mathrm{WILT}$ may also keep the consistency proposed in [1], because $d_\mathrm{WILT}$ includes $d_\mathrm{WLOA}$ as a special case, which has been shown theoretically to preserve consistency. However, the proofs provided in [1] require the WILT weights to be constant and hence the results do not immediately extend to our case. --- Please feel free to let us know if you have any other questions or suggestions. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. My concerns have been addressed.
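The distinction drawn between PCC and SRCC in the exchange above comes down to this: Spearman's coefficient is Pearson's coefficient computed on ranks, so it is invariant under any monotone transformation. A small self-contained illustration with toy data (not from the paper):

```python
import numpy as np

def pearson(a, b):
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    # rank correlation = Pearson correlation of the ranks (assumes no ties)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(a), rank(b))

x = np.linspace(0.0, 5.0, 50)
y = np.exp(x)        # perfectly monotone in x, but far from linear
# spearman(x, y) is exactly 1, while pearson(x, y) is noticeably below 1
```

The same contrast explains why SRCC is a natural robustness check when no linear relationship between alignment and accuracy is expected.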
Summary: The authors of this paper investigate the distance function learned by message-passing neural networks (MPNNs) and introduce a new framework called the Weisfeiler Leman labeling tree (WILT) to interpret these distances. Unlike previous work that aligns MPNN embeddings with structural graph distances, the authors focus on task-specific functional distances. Their key contribution is the introduction of the WILT framework, which applies optimal transport techniques to a weighted Weisfeiler-Leman (WL) tree, enabling efficient computation and interpretation of MPNN distances. They demonstrate through experiments on graph classification datasets that MPNN embeddings capture functionally relevant subgraphs, leading to improved interpretability and performance understanding. Claims And Evidence: The claims are supported by convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suitable for the tasks. Theoretical Claims: The paper makes several theoretical claims regarding the properties of the proposed WILT distance, its relationship to existing graph distances, and its alignment with MPNN embeddings. The claims are easy to follow and appear correct. Experimental Designs Or Analyses: The experimental design seems valid. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: The paper shifts focus from the binary expressiveness of MPNNs (i.e., distinguishing non-isomorphic graphs) to the metric structure of their embeddings. It extends previous theoretical analyses by exploring functional distance alignment (rather than using only structural distance alignment). Essential References Not Discussed: The paper provides a detailed discussion of related work. Other Strengths And Weaknesses: ## Strong Points: - The paper studies the distance metric learned by MPNNs, which is crucial for the interpretability of those models. 
- The paper moves beyond binary expressivity and addresses how MPNNs learn distance metrics that impact predictive performance. - WILT can be computed in linear time, making it practical for large-scale graph datasets. ## Weak Points: - The paper provides good motivation and theoretical intuition for understanding the learned distance between MPNN embeddings. However, I feel that the experimental results are limited. One of the key limitations is the lack of a systematic ablation study examining how different GNN architectural choices affect the learned embedding distances and the performance of WILTing Trees. For example, how does the number of layers affect the results? Is there a connection with oversmoothing? - The authors only use GCN and GIN in the experiments. However, many modern GNNs exceed 1-WL expressivity (e.g., higher-order message passing, subgraph-based architectures, additional positional features, graph transformers). It is unclear whether the current findings extend to these more powerful architectures. Other Comments Or Suggestions: NA Questions For Authors: - Does WILT extend to more expressive GNNs? If so, have you conducted experiments on them? - Do you see any way to improve GNN performance using insights from WILT, besides the interpretability aspect? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank you for your thorough review and valuable feedback. We hope the points below adequately address your concerns. --- > Weakness 1: How different GNN architectural choices affect the learned embedding distances and the performance of WILTing Trees. For example, how does the number of layers affect the results? Is there a connection with over-smoothing? Since we put more focus on trying out different datasets than on changing GNN architectures, we used only two types of GNNs (GCN and GIN) and poolings (sum and mean). Thus, we admit that the effect of the choice of architecture and hyperparameters is still unclear. At this point, we would like to share some insights on how hyperparameters affect the learned embedding distances. We observed that the deeper the GNN is, the more likely its embedding space is aligned with the functional distance, similar to the observation of [1] Kothapalli et al. mentioned by reviewer n2hL. For example, in Figure 8, deeper models tend to be plotted on the right side of each graph. The connection to over-smoothing is an interesting future work that we did not address during our investigation. > Weakness 2, Question 1: Does WILT extend to more expressive GNNs? If so, have you conducted experiments on them? As briefly mentioned in the conclusions, it is possible to construct WILT for higher-order variants of the WL labeling algorithm. We expect that such a higher-order WILT can be used in a similar way to analyze GNNs whose expressiveness is only bounded by higher-order WL variants. However, we have not yet empirically explored this direction. > Question 2: Do you see any way to improve GNN performance using insights from WILT, besides the interpretability aspect? This is a very important question. At the moment we don't have an effective way to improve GNN's performance using insights from WILT. 
However, WILT has a nice property that might help us improve GNN's performance: The tree structure of WILT is determined only by the dataset and isn't influenced by the GNN. Therefore, different GNNs can be distilled into the same WILT. By comparing the resulting edge weights, we may be able to gain insight into why some GNNs perform better than others. This is really a promising future direction. Although not fully related to your question, we would like to mention the possibility of using WILT to achieve better performance in graph classification or regression. In this study, we train WILT weights by approximating the GNN embedding distance. However, if we find a way to effectively train the weights from scratch, the resulting $d_\mathrm{WILT}$ can be used as a kernel function that is more flexible than other WL-based kernels at no additional computational cost. --- If you have any other questions or suggestions, please let us know and we'll be happy to address them.
Summary: The paper investigates (i) the properties of the distances defined for the MPNN based on structural and functional pseudometrics, to find the one that explains the high performance of MPNNs, and (ii) how MPNNs learn such a structure. The main contribution is the new graph distance based on the weighted Weisfeiler Leman labeling tree (WILT). The nodes in WILT are the colors of the Weisfeiler Leman (WL) test, and the edges connect the parent color with the color that it changes to in the iterations of WL. The edge weights in WILT are learned such that the graph distance is close to the graph embedding of the message passing neural networks (MPNN). Using this, the authors claim that the edge weights identify the subgraphs that strongly influence the distance between the embeddings of MPNN and use this to interpret the embedding space. Claims And Evidence: There are two main claims made in the paper based on the analysis: (i) the MPNN distance defined on the embedding space is critical to the task performance, and (ii) the paper develops a new distance based on WILT and an algorithm to learn its edge weights to be close to the embedding of MPNN. The edge weights allow identifying the subgraph that yields high performance, thereby providing a way to interpret the embedding space of MPNN. Claim (i) is expected and intuitive. It is however established clearly using the structural and functional pseudometrics, and evaluation of the alignment to the MPNN embedding on several datasets. While claim (ii) is evaluated, the computational aspect of the algorithm is not discussed in detail. I understand $d_{WILT}$ is linear in time. But, for a dataset with $n$ graphs with at least $m$ nodes, one needs to apply the WL test algorithm to all the graphs and then build the WILT by traversing the resultant graphs from the WL test. What is the complexity of building this tree? And what about its scalability, especially with the number of graphs in the dataset and their sizes? 
Methods And Evaluation Criteria: The proposed method is novel and the evaluation is sensible. That being said, the purpose of the proposed graph distance is not clear. Is interpretability the primary objective? Then, to strengthen the evaluation, I think the extracted subgraph needs to be compared to the existing methods such as GNNExplainer [1], SubgraphX [2], etc. [1] Ying et al. GNNExplainer: Generating Explanations for Graph Neural Networks. NeurIPS 2019 [2] Yuan et al. On explainability of graph neural networks via subgraph explorations. ICML 2021. Theoretical Claims: The expressivity theorems appear correct. The proofs in the appendix are not checked thoroughly. Experimental Designs Or Analyses: The experiments are sound and detailed. The plots are especially helpful in conveying the idea and results. Supplementary Material: Sections A.1, A.2, A.4, B, and some parts of the experiments in C are checked. But I acknowledge that these sections are not thoroughly reviewed for correctness. Relation To Broader Scientific Literature: The key contribution is the WILT based graph distance. The subgraph identified from its edge weights is shown to be functionally important, which is interesting for the interpretability in MPNNs. Essential References Not Discussed: Regarding the first investigation on the properties of the MPNN distances that explain its high performance, [1] studies empirically and theoretically the alignment of the embedding to the task related functional metric (adopting the language from the current paper). This work needs to be discussed. [1] Kothapalli et al. "A neural collapse perspective on feature evolution in graph neural networks." NeurIPS 2023 Other Strengths And Weaknesses: **Strengths** The paper is clearly written, and the claims are supported by experiments. I found the discussion on the pseudometrics helpful. The expressivity of WILT is also analyzed which further strengthens the contribution. 
**Weaknesses** The clarity can be improved in some aspects. For instance: 1. The last line of the caption of Figure 1 is not clear. What does the tuple in the multi-set with $-$ mean? 2. The alignment to the structural pseudometric is evaluated in the appendix. It would improve the draft if it were mentioned and referenced in Section 4 as well. Perhaps the authors can reference Figure 9 and contrast it with Figure 2. 3. Discussion of the limitations of the method is missing. Other Comments Or Suggestions: 'Figure 5' in line 382, second column: shouldn't it be Figure 4? Similarly, should 'Figure 4' in line 411, first column, be Figure 5? Questions For Authors: Please check the other sections as well for questions. One other question: Why do you have $\alpha$ in the RMSE definition? Is it somehow influencing the lower distance for WILT compared to WWL and WLOA? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper, including the supplementary materials, and acknowledging our contributions. We address each weakness and question individually below. --- > Complexity of building the tree Suppose the dataset consists of $|D|$ graphs with at most $|E|$ edges. Then, building WILT with depth $L$ takes $O(|D| \times L \times |E|)$ time. Thus, the complexity is linear in $|D|$ and $|E|$. If the graphs are sparse, i.e. $|V| \propto |E|$, the complexity is also linear in $|V|$. For dense graphs, the complexity is quadratic in $|V|$. > The extracted subgraph needs to be compared to the existing methods such as GNNExplainer [1], SubgraphX [2], etc. Most interpretation methods, including [1] and [2], aim to find a subgraph that GNN considers important for a prediction for ONE given graph (instance level). On the other hand, our method tries to explain the GLOBAL behavior of GNN by analyzing $d_\mathrm{MPNN}$. The highlighted subgraphs in Figure 6 do not mean important subgraphs for the four specific graphs. They are the ones that strongly influence $d_\mathrm{MPNN}$ in general. Because of the fundamental difference in purpose, it is difficult to compare our method with [1] and [2]. There are also global explanation methods as mentioned in Section 2, but they use decision trees or logical formulas as a language, which are difficult to compare with the subgraphs extracted by our method. > Essential References Not Discussed: [1] studies empirically and theoretically the alignment of the embedding to the task related functional metric. Thank you for bringing this important prior study to our attention. We will add it to the Related Work section in our final version. At this point, we would like to clarify the relationship between our study and [1]. 
While both investigate the alignment between the embedding distance and the functional distance, there are fundamental differences: Our study examines the graph embedding space of GNNs trained on practical graphs, while [1] analyzes the node embedding space of GNNs trained on graphs generated from a stochastic block model. Nevertheless, both studies reach a similar conclusion: GNNs are trained such that the embeddings respect the functional alignment. It would be interesting to investigate how the degree of alignment between the graph embedding distance and the functional distance changes with layer depth, as [1] did for node embedding. > Weakness 1: Unclear caption for Figure 1 We apologize for the confusion. $-$ and $\cdots$ represent edges with different labels. We consider a variant of the WL algorithm that takes edge labels into account. This allows WILT to handle graphs with edge labels, such as the molecules in the Mutagenicity dataset. We will clarify the caption in the final pdf. > Weakness 2: The alignment to structural pseudometric is evaluated in the appendix. We will include the alignment to $d_\mathrm{struc}$ in the main text in the final pdf. > Weakness 3: Discussion on the limitations of the method is missing. One limitation of our study is that we only distilled two GNN architectures (GCN and GIN) with fixed hyperparameters to WILT. Thus, it remains to be seen how different architectures and hyperparameters affect the edge weights of WILT. Another limitation is that our method cannot be directly applied to the analysis of GNNs for node classification on large graphs such as social networks or citation networks. This is because we only deal with graph-level tasks. However, we believe that the node embedding space of GNNs can also be distilled to WILT in a similar way, i.e., by tuning the weights to approximate the node embedding distance with a path distance on WILT. We will include these limitations and possible future work in the final pdf. 
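For concreteness, the WILT construction discussed in this thread (iterated WL color refinement over the whole dataset, with each refined color attached to its coarser parent color) might be sketched as follows. This toy version handles node labels only; the paper's WILT additionally incorporates edge labels and learned edge weights:

```python
def build_wilt(graphs, depth):
    """Sketch of a WL labeling tree over a dataset of node-labeled graphs.

    graphs: list of (labels, adj) pairs, where labels maps node -> label
    and adj maps node -> list of neighbor nodes.
    Returns (parent, colorings): parent maps each color to its parent color
    (None for initial colors); colorings[i] maps nodes of graph i to colors.
    """
    color_of = {}   # (parent_color, signature) -> compact color id
    parent = {}     # color id -> parent color id in the tree

    def intern(key, par):
        if key not in color_of:
            color_of[key] = len(color_of)
            parent[color_of[key]] = par
        return color_of[key]

    colorings = []
    for labels, adj in graphs:
        cur = {v: intern(("init", labels[v]), None) for v in labels}
        for _ in range(depth):
            # refine: new color = (old color, sorted multiset of neighbor colors)
            cur = {v: intern((cur[v], tuple(sorted(cur[u] for u in adj[v]))), cur[v])
                   for v in adj}
        colorings.append(cur)
    return parent, colorings
```

Each color is created once with a fixed parent, so the colors form a tree rooted at the initial labels, and the number of colors is bounded by the total number of nodes times the depth, consistent with the linear-time claims in this thread.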
> 'Figure 5' in line 382 second column shouldn't it be Figure 4? Similarly 'Figure 4' in line 411 first column should be Figure 5? We are sorry that these are mistakes. They will be fixed in the final pdf. > Why do you have $\alpha$ in the RMSE definition? We use $\mathrm{RMSE}(d_\mathrm{MPNN}, d)$ to measure how well $d$ captures the geometric structure of $d_\mathrm{MPNN}$. We first normalize $d_\mathrm{MPNN}$ so that we can compare or compute the mean of the RMSEs of different MPNNs. The normalization of $d$ is optional as we optimize $\alpha$ anyway. $\alpha$ reflects our belief that the scale of the distance does not affect the importance of the subgraphs. In other words, if $d = \alpha \cdot d_\mathrm{MPNN}$, we consider that $d$ correctly captures $d_\mathrm{MPNN}$. Since $d_\mathrm{WILT}$ is optimized to approximate $d_\mathrm{MPNN}$, $\alpha$ is close to one. However, for $d_\mathrm{WWL}$ and $d_\mathrm{WLOA}$, $\alpha$ is not necessarily close to one. We introduce $\alpha$ for a fair comparison between $d_\mathrm{WILT}$ and $d_\mathrm{WWL}$, $d_\mathrm{WLOA}$. --- Please let us know if you have any further questions or suggestions, and we will be happy to answer them.
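The scale-optimized RMSE described above admits a closed form: minimizing $\|x - \alpha y\|^2$ over the scalar $\alpha$ is one-dimensional least squares, with solution $\alpha = \langle x, y \rangle / \langle y, y \rangle$. A sketch of one way to implement it (the authors' actual optimization of $\alpha$ may differ):

```python
import numpy as np

def scale_optimal_rmse(d_mpnn, d):
    """RMSE between d_mpnn and alpha * d using the best scalar alpha.

    Minimizing ||x - alpha*y||^2 over alpha gives alpha = <x, y> / <y, y>,
    so distances that agree up to a global scale yield RMSE zero.
    """
    x = np.asarray(d_mpnn, float).ravel()
    y = np.asarray(d, float).ravel()
    alpha = float(x @ y) / float(y @ y)
    rmse = float(np.sqrt(np.mean((x - alpha * y) ** 2)))
    return alpha, rmse
```

This makes explicit why $\alpha$ does not by itself favor one candidate distance over another: it only removes a global scale before the residual is measured.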
Private Lossless Multiple Release
Accept (poster)
Summary: This paper introduces a lossless multiple release mechanism for differential privacy (DP), allowing analysts with different trust levels to receive private data releases with distinct privacy guarantees. Unlike previous methods, this approach ensures that multiple releases do not accumulate unnecessary privacy loss while maintaining the same accuracy as a single release at the most relaxed privacy level. Traditional DP mechanisms operate under a fixed privacy budget, and any additional release of the same dataset typically increases cumulative privacy loss. However, real-world applications often involve multiple access levels—such as different levels of trust, security clearance, or evolving accuracy needs over time. The authors propose a model where an analyst initially receives a high-privacy (low-accuracy) release but can later receive a less private, more accurate release without paying an extra privacy cost beyond that of the latest release. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The paper presents a framework based on Gaussian noise mechanisms, enabling multiple DP releases while preserving accuracy and privacy guarantees. Essential References Not Discussed: Yes Other Strengths And Weaknesses: Strengths The paper extends the concept of gradual release to a more general multiple release setting, allowing for arbitrary orderings of privacy parameters. The proposed framework is well-supported by formal theorems. No additional privacy cost beyond the least private release. The paper is well-organized, with clear definitions and intuitive explanations. Supports various data types, including histograms and factorization mechanisms. Weaknesses The approach is currently applicable only to additive Gaussian noise mechanisms. Requires further exploration for mechanisms like the exponential mechanism. 
While the theoretical results are strong, the experimental section is relatively limited. The computational cost of implementing lossless multiple release is not fully analyzed. A discussion on efficiency, especially for high-dimensional data, would be useful. The paper could benefit from a comparison with alternative privacy-preserving mechanisms beyond gradual release, such as Rényi DP-based approaches. Practical deployment may require careful calibration of noise and privacy budgets. Other Comments Or Suggestions: see Weaknesses Questions For Authors: see Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their reading and comments on our paper. We address some of the perceived weaknesses below: **W1: only applicable to Gaussian additive noise.** Theorem 3.5 holds for any additive noise mechanism based on a noise distribution satisfying a convolution preorder (Definition 1.1). Besides the Gaussian distribution, the Laplace, zero-mean Skellam, and Poisson distributions all meet this criterion. In an updated version of the manuscript, we include more details on sampling for general distributions and for the Poisson distribution in particular. The exponential mechanism cannot in general be expressed as a noise additive mechanism. Whether it can be implemented to satisfy lossless multiple release is an interesting open problem. **W2: computational cost.** The computational efficiency of our methods is fairly straightforward, so we did not really discuss it, but we agree with the reviewer that a discussion should be included. Since the additive noise mechanisms we consider add independent noise in each dimension, the space and time complexity of implementing it with lossless multiple release scales linearly with each dimension. We consider one-dimensional noise henceforth. To allow for releases in arbitrary privacy order, we need to store all $k$ past releases when making the $k+1$’st release. For time complexity, note that any release requires drawing one (possibly conditioned) noise sample. The time complexity of the sampling depends on the particular distribution: In the case of Gaussian noise, it always corresponds to a single sample from a Gaussian. **W3: comparing to mechanisms beyond gradual release, e.g. Renyi-DP.** Lossless multiple release (Definition 3.1) does not rely on a fixed privacy notion but rather ensures, information-theoretically, that a set of releases contain no more information than the least private release. 
In particular, for any fixed $\alpha$, we support $(\alpha, \varepsilon)$-RDP lossless multiple release with respect to $\rho=\varepsilon$. **W4: practical deployment requiring careful calibration.** Generally, implementations of differential privacy require carefully choosing privacy parameters. Our contributions in this paper arguably make this easier in some contexts: If multiple releases of a statistic are to be made, we can sometimes (Theorem 3.5) do it dynamically at no cost in the privacy-utility trade-off. Not having to decide on privacy parameters up front saves resources spent on planning.
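For the Gaussian case, the conditioned sampling described above can be sketched by viewing all releases as points of one Brownian motion (the release at noise variance t is x + B(t)): noisier releases are post-processing of less noisy ones, and a new variance can be sampled in any order with a single conditioned Gaussian draw. The class below is an illustrative reconstruction, not the paper's algorithm verbatim; all names and the API are ours.

```python
import numpy as np

class GaussianMultipleRelease:
    """Lossless multiple release of a scalar x under Gaussian noise (sketch)."""

    def __init__(self, x, rng=None):
        self.x = x
        self.rng = rng if rng is not None else np.random.default_rng()
        self.B = {0.0: 0.0}  # Brownian-motion values sampled so far

    def release(self, t):
        """Return a release with noise variance t > 0."""
        if t in self.B:
            return self.x + self.B[t]  # repeated queries are free
        times = sorted(self.B)
        lo = max(u for u in times if u < t)
        above = [u for u in times if u > t]
        if not above:
            # beyond all known times: independent Brownian increment
            val = self.B[lo] + self.rng.normal(0.0, np.sqrt(t - lo))
        else:
            # between two known times: Brownian-bridge conditioning
            hi = min(above)
            frac = (t - lo) / (hi - lo)
            mean = self.B[lo] + frac * (self.B[hi] - self.B[lo])
            var = (t - lo) * (hi - t) / (hi - lo)
            val = mean + self.rng.normal(0.0, np.sqrt(var))
        self.B[t] = val
        return self.x + val
```

Requesting variance 4 first (more private) and variance 1 later yields the more accurate release with one conditioned draw; jointly, the releases reveal no more about x than the variance-1 release alone.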
Summary: This paper examines the problem of multiple private data releases with varying privacy parameters, which are assigned sequentially and in arbitrary order. Compared to [WWRR'22], the authors present a simpler analysis that avoids Brownian motion techniques and explicitly provide a sampling method. Their approach extends to additive noise mechanisms whose distributions satisfy a specific convolutional property, termed convolution preorder. As an application, they propose an efficient method for gradually releasing sparse histograms with a runtime independent of the number of dimensions. Claims And Evidence: The theorems appear to be correct to the best of my knowledge, and the proofs are easy to follow. Methods And Evaluation Criteria: N/A as this is mainly a theory paper. Theoretical Claims: The theorems appear to be correct to the best of my knowledge, and the proofs are easy to follow. Experimental Designs Or Analyses: N/A as this is mainly a theory paper. The numerical evaluation looks reasonable. Supplementary Material: I checked the omitted proofs in the supplementary material. Relation To Broader Scientific Literature: See the strengths and weaknesses section below. Essential References Not Discussed: The references look proper to me. Other Strengths And Weaknesses: While the paper presents a simple and explicit method for releasing multiple DP levels of a query, its novelty compared to the Brownian mechanism in [WWRR'22] seems rather limited—though it is worth noting that [WWRR'22] studies ex-post DP, a slightly different notion than the one considered here. For the multiple release problem, the Brownian mechanism provides a natural way to correlate DP noise. While the authors claim to extend [WWRR'22] by allowing arbitrary release orders, this improvement appears to be a straightforward conditioning argument. Given future realizations, the noise distribution follows from a Brownian bridge, making the explicit form and sampling strategy fairly direct. 
Consequently, the claimed advantages of simpler analysis and explicit sampling seem rather natural, and the postprocessing and factorization theorems also follow in a straightforward manner. Thus, the main technical contribution of this work lies in its extension to convolutional preorder noise mechanisms and its application to the sparse histogram problem. Other Comments Or Suggestions: Post-rebuttal: I agree that a direct and concise analysis has its merits. While I still feel the core idea is somewhat natural given the Brownian mechanism, I recognize its value in various differential privacy applications, as echoed by other reviewers. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and comments on our paper. We address the perceived weaknesses below: **W1: Relation between Algorithm 1 and the Brownian mechanism.** We agree with the reviewer that Algorithm 1 can be interpreted as the Brownian mechanism combined with Brownian bridge sampling, but we would argue that it is not a trivial translation. Given the focus on ex-post $(\varepsilon, \delta)$-DP in [WWRR’22], it is not immediate that the Brownian mechanism as stated can be sampled in arbitrary order and that it is lossless. Second, we believe that there is a value in deriving these results from first principles. Avoiding the language of continuous time processes makes for a less technically demanding exposition, increasing the chance that these results see real-world uptake. **W2: technical contribution.** We emphasize that our Theorem 3.5 makes a statement not only for Gaussian noise but (as the reviewer points out) for any additive noise distribution satisfying Definition 1.1. There are many such distributions, including the Laplace, zero-mean Skellam, and Poisson distributions. In an updated version of the manuscript, we include a version of Algorithm 1 for generic noise distributions. We also give privacy guarantees for the Poisson distribution, and use it as an illustrative example of lossless multiple release for discrete probability distributions (which is what can be implemented on discrete computers).
Summary: This paper investigates the private lossless multiple release problem and presents a solution for a broad class of mechanisms (e.g., Gaussian, Laplacian, Poisson) based on additive noise, where the noise distribution satisfies a convolution preorder property. Unlike private gradual release, multiple release does not require an increasing privacy budget. Additionally, this paper explores two applications: one for the factorization mechanism and another for sparse histograms, demonstrating its practicality. ## update after rebuttal I will keep my score unchanged. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. I didn't check the proofs, but the theorems make sense to me. Experimental Designs Or Analyses: Yes. The experiment for lossless release demonstrates the correctness of the claims. Supplementary Material: No. Relation To Broader Scientific Literature: The method can help save privacy budgets in applications that need multiple releases. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The paper provides a solution for private multiple releases that does not require an increasing order of privacy budget. The theorem generalizing to multiple distributions is well-developed, demonstrating its practicality. Additionally, the approach helps users conserve their privacy budget. The special case of the Gaussian mechanism is particularly easy to understand. Weaknesses: The solution is limited to independently distributed mechanisms, whereas, in practice, correlated noise is widely used [A]. The paper does not address this limitation. To my knowledge, the composition of correlated Gaussian mechanisms is fundamentally different [B], and lossless multiple release may not be feasible in such cases. [A] Koloskova, Anastasiia, Ryan McKenna, Zachary Charles, John Rush, and H. Brendan McMahan. "Gradient descent with linearly correlated noise: Theory and applications to differential privacy." 
Advances in Neural Information Processing Systems 36 (2023): 35761-35773. [B] Xiao, Yingtai, Guanhong Wang, Danfeng Zhang, and Daniel Kifer. "Answering private linear queries adaptively using the common mechanism." arXiv preprint arXiv:2212.00135 (2022). Other Comments Or Suggestions: None. Questions For Authors: The application to the factorization mechanism assumes that the strategy matrix R remains the same across all releases. However, in practice, different users may require different strategy matrices. For example, in the gradient descent setting, the workload matrix A may vary due to differences in learning rates and momentum weights. How would the proposed method address the multiple release problem in such cases? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and questions. Our technique applies to additive noise mechanisms as well as to invertible post-processing of additive noise mechanisms. In particular, it applies to factorization mechanisms for which the noise added to queries is correlated. This is a rather large class of mechanisms, but the reviewer is correct that it does not include all mechanisms. For example, we do not know how to do lossless private multiple release of iterative methods such as private SGD. Regarding the question about factorization mechanisms with multiple strategy matrices: this is a general question that is not specific to multiple release. The situation is usually modeled by combining all strategy matrices into one. The workload matrix can be changed freely as long as it can be expressed as a matrix product LR, where R is the strategy matrix.
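The last point can be illustrated with a small numerical sketch (matrix sizes, the noise scale, and the variable names here are hypothetical): a single noisy release of Rx answers any workload of the form A = LR by post-processing alone, so no further privacy budget is spent.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)                          # hypothetical private data vector
R = np.eye(5) + 0.1 * rng.standard_normal((5, 5))   # strategy matrix

# One noisy release of R x (Gaussian noise, scale chosen for illustration).
y = R @ x + rng.normal(0.0, 1.0, size=5)

# Any workload expressible as A = L R is answered by post-processing y.
L = rng.standard_normal((3, 5))
A = L @ R
estimate = L @ y  # estimate of A @ x; its error is the post-processed noise L @ z
```

Changing the workload later (a different L for the same R) reuses the same release y, matching the rebuttal's remark that multiple strategy matrices are modeled by combining them into one.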
Summary: This paper studies the problem of lossless multiple release under differential privacy: we want to release a noisy answers to a query satisfying various levels of privacy such that any subset of the noisy answers contain at most as much information about the statistic as the least private noisy answer. This work builds on results about gradually releasing noisy answers with increasing budget. The authors identify a condition of a noise distribution called convolution preorder that is sufficient for an additive noise mechanism satisfying lossless multiple release. This approach is illustrated through the case of Gaussian noise and applications are given for factorization mechanisms such as the matrix mechanism and sparse histograms. Claims And Evidence: The claims in this paper are well supported by theoretical results. Methods And Evaluation Criteria: The proposed methods make sense for the problem. Theoretical Claims: I checked the main text proofs - but not the appendix proofs - and did not spot any errors. Experimental Designs Or Analyses: The empirical evaluation is reasonable and a good sanity check but is unnecessary. Supplementary Material: I did not review supplemental material for this paper. Relation To Broader Scientific Literature: This paper deepens our understanding of additive noise mechanisms used in differential privacy beyond the gradual release setting. Essential References Not Discussed: I am not aware of essential references that are not discussed. Other Strengths And Weaknesses: Interesting paper; the organization made the paper clear and easy to follow. Both the sections on Gaussian noise and factorized mechanisms were helpful in looking at specific instances of the general result. Other Comments Or Suggestions: In the camera ready version, I don't think the empirical evaluation is necessary. I understand why you added it to the submission though. 
Questions For Authors: One area of improvement is with regard to motivating the lossless multiple release setting. What is a natural setting that would be analogous to the Bell-LaPadula model for privacy? In what setting would you want one analyst to have a less noisy estimate than another? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review and the question about the lossless multiple release setting. Realistic privacy settings could be the military setting itself (which Bell-LaPadula was designed for), or other settings where analysts have different trust levels (here the more trusted analysts would get more accurate releases). Another example is if a company wants to release a statistic, say, user statistics: they could make an accurate release for their own data analysts, a less accurate release for external consultants, and include an even less accurate release in a report for shareholders or other external actors. A benefit of this is that lossless multiple release allows for dynamic updates: an external consultant that later gets employed directly by the company could get the more accurate release without constituting a privacy violation or requiring an increased privacy budget to reach the same accuracy.
Making Hard Problems Easier with Custom Data Distributions and Loss Regularization: A Case Study in Modular Arithmetic
Accept (poster)
Summary: This paper proposes a new training strategy and loss function for successfully learning modular addition and other operations. The critical observation is the utility of sparse samples. Models learn modular addition better on sparse samples, and if sparse and dense samples are mixed, they learn from the sparse samples and then generalize to dense samples. Previous studies sample training data uniformly, which concentrates the sampling on non-sparse elements. The authors propose to sample numbers from a distribution in which large numbers are sampled less often (and thus 0 is sampled more often). Further, the new loss function includes an additional term that encourages the model's output to lie on the unit circle. The idea is based on the observation that in hard settings, the model's output collapses to the origin. In the experiments, both the proposed sparsity-aware sampling and the custom loss are demonstrated to improve accuracy significantly. **update after rebuttal.** I appreciate the authors' full elaborations and answers to my concerns, which greatly deepened my understanding of their work. The explanation and additional results address my concerns, so I believe this work is worth presenting at the main conference. Claims And Evidence: The claims and evidence are generally reasonable and convincing, but I am concerned that the explanation and the method sometimes assume modular addition, and the authors should include more general discussions. For example, [l.113] says > [l.113, left] Because 0 and q-1 are "close" in a modular field, ... but this needs several remarks. First, a modular field is not equipped with a distance, as the natural candidate violates the triangle inequality. Thus, being "close" is not precise even in quotation marks. The two numbers 0 and q-1 might appear "close" because of the unit 1 under addition, but the field is also equipped with multiplication. 
Similarly, including many 0s makes the problem sparser/easier for the addition task, but not necessarily for others. For the multiplication task, 1 would be the element that makes the task "sparser." Methods And Evaluation Criteria: The proposed method and evaluation criteria are reasonable. One concern is that the explanation of the proposed method is biased toward modular addition and not general enough. The experiments also place a great focus on modular addition, and the extended task (Table 8) also takes an additive format. It would be better for the authors to make the explanation more general and to verify the generalized claims in the experiments. Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: The experiments carefully examine the claims with variations in the number of terms and the modulus. The recent repetition technique is also taken into account, and beyond the modular addition task, LWE cryptanalysis and several standard benchmark tasks are included in the experiments. However, as mentioned in #Methods And Evaluation Criteria, the experiments are strongly based on modular addition. The experiments also include other tasks in the end, but Parity is also an additive task, and the other tasks are not very arithmetic. Including more arithmetic tasks and generalizing the proposed method would make this study even more impactful. Supplementary Material: I skimmed all the sections to get an overview of the proofs and searched for answers to my concerns and questions. Relation To Broader Scientific Literature: This study makes a strong contribution to modular arithmetic learning. It would be interesting to see whether the proposed method works for symbolic computation that involves modular arithmetic, for example, symbolic regression over a modular field, or polynomial sum, multiplication, and reduction with finite-field coefficients. 
Essential References Not Discussed: The following paper can be added to the literature on ML applications to hard math problems in [l.37, right]. 1. Learning to compute Gröbner bases, Hiroshi Kera, Yuki Ishihara, Yuta Kambe, Tristan Vaccon, Kazuhiro Yokoyama, NeurIPS'24 This also shows that polynomial computation over finite fields is unsuccessful due to errors in coefficient predictions. There are also several studies handling easier polynomial tasks (not necessarily over finite fields), and as mentioned above, it would be interesting future work to see whether the proposed method also works successfully on finite-field versions of these tasks. 2. Do Transformers Understand Polynomial Simplification? Vishesh Agarwal, Somak Aditya, Navin Goyal, ICLR'21 Other Strengths And Weaknesses: Important strengths and weaknesses have been mentioned in other cells. This work is solid and has sufficient contributions to this topic. The proposed methods are simple and easy to adopt. Other Comments Or Suggestions: The paper is written well in general. As I commented above, the explanation should be carefully revised to make clear whether it refers to addition or to more general cases. Questions For Authors: Please refer to the other cells. In particular, I would like a clear explanation of the proposed method beyond addition. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thoughtful feedback. **Re: claims:** We will include revisions in the final paper to clarify the claims more generally to not assume addition. Thanks for pointing out the clarification on $0$ and $q-1$ in the modular field, we will update this in the final version. **Re: sparsity:** Regarding choosing to include $0$ as the element for sparsity, we actually found that including a random but fixed number instead of $0$ yields similar results. It is not the number itself but instead the shift in distribution that leads to performance improvements. The key is changing the KL divergence between the train and test sets, which is irrespective of the element chosen for sparsity. To show this, we ran an additional experiment where we substitute the $0$ with an arbitrary integer $K$ and shift the distribution by sampling more $K$s compared to all remaining elements in $Z_p^N$. We used a random $K$ for each different $q$ to ensure $K$ is not a factor. We can also run with more $K$ to show that this trend persists. We found similar results in the multiplication case. | $N$ | $q$ | $K$ | tau=1% acc | | --- | --- | --- | --- | | 32 | 257 | 160 | 100.0% | | 32 | 3329 | 3176 | 100.0% | | 32 | 42899 | 24606 | 100.0% | | 32 | 974269 | 79062 | 100.0% | | 64 | 257 | 160 | 99.2% | | 64 | 3329 | 3176 | 99.3% | | 64 | 42899 | 24606 | 99.4% | | 64 | 974269 | 79062 | 99.3% | | 128 | 257 | 160 | 97.9% | | 128 | 3329 | 3176 | 98.3% | | 128 | 42899 | 24606 | 97.9% | | 128 | 974269 | 79062 | 97.9% | **Re: other tasks beyond addition:** We also conducted additional experiments on many other tasks based on your feedback. We present results on the synthetic tasks in our response to reviewer ZgiU and the rest below. *Modular multiplication and scalar product:* The angular embedding is designed for addition, so we use standard token embedding in multiplication experiments and compare $f_{inv-sqrt}$ to $f_{default}$. 
We test both modular multiplication and the scalar product of two vectors mod $q$. For both tasks, the model with $f_{inv-sqrt}$ performs well for smaller $q$, but declines for larger $q$. The scalar product is more difficult due to it requiring both multiplications and additions. Still, $f_{default}$ performs around 0% (on acc tau=1%) acc for all settings in both tasks, so $f_{inv\\_sqrt}$ is still an improvement. **Modular Multiplication with $f_{inv\\_sqrt}$** | $N$ | $q$ | tau=1% acc | |----|--------|------------| | 4 | 97 | 100% | | 4 | 257 | 100% | | 4 | 3329 | 51% | | 8 | 97 | 100% | | 8 | 257 | 100% | | 8 | 3329 | 32% | | 16 | 97 | 100% | | 16 | 257 | 98% | | 16 | 3329 | 25% | | 32 | 97 | 100% | | 32 | 257 | 75% | | 32 | 3329 | 13% | | 64 | 97 | 100% | | 64 | 257 | 65% | | 64 | 3329 | 3% | **Scalar Product with $f_{inv\\_sqrt}$** | $N$ | $q$ | tau=1% acc | |----|--------|------------| | 2 | 97 | 100% | | 2 | 257 | 100% | | 2 | 3329 | 98% | | 4 | 97 | 100% | | 4 | 257 | 92% | | 4 | 3329 | 83% | | 8 | 97 | 78% | | 8 | 257 | 38% | | 8 | 3329 | 15% | *Polynomial sum:* We also run on two symbolic polynomial tasks mod $q$: sum $N$ polynomials with degree $\\leq K$ and sum 2 polynomials with degree $\\leq K$. For both tasks, we encode the polynomial as $a_m, a_{m-1}, …, a_0$, separate polynomials with <SEP> token, and ask the model to predict the coefficients of the polynomial sum. We add a RoPE positional embedding to help the model learn how to count. We fix $q=3329$. We report the results comparing $f_{inv\\_sqrt}$ vs $f_{default}$. For all experiments, the model is quickly able to predict the degree of the polynomial sum (the number of tokens to output before the <EOS> token), so the real difference between the two strategies is predicting the right coefficients. We say that the model found the solution if the maximum error across all coefficients is less than $0.01q$. Task a. 
is closely related to our original task (taking $K=0$) | $N$ | $K$ | Correct % for $f_{inv-sqrt}$ | Correct % for $f_{default}$ | |----|----|---------------------------|-------------------------------| | 16 | 1 | 99.5% | 0% | | 16 | 4 | 99.2% | 0% | | 16 | 16 | 99.3% | 0% | | 64 | 1 | 98.8% | 0%| | 64 | 4 | 98.7% | 0% | | 64 | 16 | 98.5% | 0%| Task b. | $K$ | Correct % for $f_{inv\\_sqrt}$ | Correct % for $f_{default}$ | |----|---------------------------|-------------------------------| | 64 | 99.0% | 94.3% | | 128| 99.1% | 92.1% | This task is a bit easier because it can be decomposed into finding the right index to attend to (which RoPE handles well) plus summing two numbers mod $q$ (which can be completely memorized with O($q^2$) examples even without the angular embedding). **Re: references:** Thanks for sharing the references; we will update the related work section to include them. We agree that investigating finite-field polynomial tasks is an interesting avenue for future work. --- Rebuttal Comment 1.1: Comment: Thank you for your answers and for providing additional results. These address my concerns well and I'll retain my positive score on this work. --- Reply to Comment 1.1.1: Comment: Thank you for your helpful feedback and response, we appreciate it!
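The sparsity-shifted sampling described in this rebuttal (inflating the probability of a fixed element K, with 0 as the default choice) can be sketched as follows; the mixture weight rho = 0.5 and the function names are illustrative choices of ours, not the paper's exact distribution:

```python
import random

def sample_terms(N, q, K=0, rho=0.5, rng=random):
    """Draw N summands from Z_q, inflating the probability of the
    'sparse' element K (rho = 0.5 is an illustrative choice)."""
    return [K if rng.random() < rho else rng.randrange(q) for _ in range(N)]

def make_example(N, q, K=0, rho=0.5, rng=random):
    """A modular-addition training pair: the terms and their sum mod q."""
    xs = sample_terms(N, q, K, rho, rng)
    return xs, sum(xs) % q
```

The test set would instead draw terms uniformly from Z_q, so only the KL divergence between the train and test distributions changes, irrespective of which element K is made frequent.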
Summary: The paper improves machine learning attack baselines on LWE by training models to do modular arithmetic better. It uses custom training data and a special loss function, allowing the model to sum up to 128 elements modulo q ≤ 974269. It also shows improvements on other tasks like copy, associative recall, and parity. Claims And Evidence: The paper claims that its methods enable ML models to perform modular addition for up to 128 elements modulo q ≤ 974269—far exceeding prior limits of N ≤ 6 and q ≤ 1000. While the evidence presented supports these claims, it lacks comparisons with other released methods such as the lattice-estimator baseline (https://github.com/malb/lattice-estimator). Without such baselines, it is hard to fully assess the real gains and stability improvements. Methods And Evaluation Criteria: The methods generally make sense, using a data distribution aligned with difficulty and better domain knowledge. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is generally sound, showcasing improved performance on LWE and other tasks. However, the training process consumes a large amount of data and computation, and a relative comparison with existing approaches is missing. It would strengthen the paper if the authors compared the amortized cost and practical efficiency against established methods. Moreover, while some settings are shown to be easier, it remains unclear whether the method has been evaluated under truly hard conditions. More experiments on challenging settings are needed to validate the robustness of the approach. Supplementary Material: No. Relation To Broader Scientific Literature: The technique could contribute a new way of attacking LWE. Essential References Not Discussed: Unsure Other Strengths And Weaknesses: The work is overall promising and sheds light on how modular arithmetic tasks can probe the stability of ML models. 
Addressing the missing baselines and providing deeper comparisons will further solidify the paper’s contributions. Other Comments Or Suggestions: N/A Questions For Authors: I am confused on how the encoded embedding can map to multiple coefficient? I think the space of embedding would be constrained to decode a large number of terms due to expressiveness? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your thoughtful feedback. **Re: comparisons:** In the paper, we compare our methods to the standard training approach with regular loss and default data distribution for the arithmetic, synthetic and the cryptography tasks (see Tables 3, 6, 7, and 9). We also provide a comparison to curriculum learning for modular addition (Table 4), where we show that our method is more consistent at learning the task compared to the curriculum learning approach. The lattice estimator provides theoretical cost estimates (in bit-operations) for attacking certain instantiations of LWE-based cryptosystems. Since the proof-of-concept we present in this work does not attack actual LWE-based cryptosystems, comparing against the lattice estimator would not be a fair comparison. Future work expanding on this should implement a true LWE setup and can then compare against other LWE attack methods. We welcome any additional suggestions of comparisons we could make in this work. **Re: cost/efficiency comparison:** We will include an analysis of the cost/efficiency of our method compared to the baselines in the revised version. The computational overhead of our approach is minimal compared to the baseline, as we still generate the same number of data samples and simply modify the distribution used for generation. Similarly, the loss regularization term does not introduce any additional cost (besides the negligible cost of calculating the term itself) as we have a standard training loop. We timed the difference between the custom loss and standard loss experiments and saw that there was a 0.04% difference. **Re: additional experiments on challenging settings:** As mentioned in our response to reviewers ZgiU and Eby5, we conducted additional experiments on more challenging settings and other tasks, which we will include in the revised version. 
We conducted additional experiments on more values of $N$ and $q$, including non-powers of 2 and non-primes, for modular addition. We find that our method is robust to increased $q$s and non-power-of-2 $N$s.

**Results for Different Values of $N$ and $q$**

| $N$ | $q$ | $\tau$=1% acc |
| --- | --- | --- |
| 16 | 1728 | 100.0% |
| 16 | 100000 | 100.0% |
| 16 | 1048576 | 100.0% |
| 16 | 10000001 | 100.0% |
| 32 | 1728 | 100.0% |
| 32 | 100000 | 100.0% |
| 32 | 1048576 | 100.0% |
| 32 | 10000001 | 100.0% |
| 64 | 1728 | 99.5% |
| 64 | 100000 | 99.3% |
| 64 | 1048576 | 99.4% |
| 64 | 10000001 | 99.8% |
| 128 | 1728 | 98.0% |
| 128 | 100000 | 98.2% |
| 128 | 1048576 | 98.1% |
| 128 | 10000001 | 98.8% |

**Results for Different Values of $N$ (non-powers of 2) and $q$**

| $N$ | $q$ | $\tau$=1% acc |
| --- | --- | --- |
| 20 | 257 | 100.0% |
| 20 | 3329 | 100.0% |
| 20 | 42899 | 100.0% |
| 20 | 974269 | 100.0% |
| 49 | 257 | 99.7% |
| 49 | 3329 | 99.6% |
| 49 | 42899 | 99.7% |
| 49 | 974269 | 99.6% |
| 101 | 257 | 98.6% |
| 101 | 3329 | 98.8% |
| 101 | 42899 | 98.9% |
| 101 | 974269 | 98.5% |

We also see that our method is not fully robust to high $N$; perhaps a longer training time or a larger model is needed for higher $N$.

| $N$ | $q$ | MSE loss | $\tau$=1% acc |
| --- | --- | --- | --- |
| 256 | 257 | 0.15 | 90.4% |
| 256 | 3329 | 0.14 | 92.7% |
| 256 | 42899 | 0.18 | 91.2% |
| 256 | 974269 | 0.17 | 90.6% |

**1. What happens if you train 4x longer?**

| $N$ | $q$ | MSE loss | $\tau$=1% acc |
| --- | --- | --- | --- |
| 256 | 257 | 0.08 | 94.8% |
| 256 | 3329 | 0.08 | 95.1% |
| 256 | 42899 | 0.09 | 95.0% |
| 256 | 974269 | 0.10 | 94.5% |

**2. What happens if your model is 4x larger (embed dim goes from 256 to 512)?**

| $N$ | $q$ | MSE loss | $\tau$=1% acc |
| --- | --- | --- | --- |
| 256 | 257 | 0.07 | 96.2% |
| 256 | 3329 | 0.07 | 96.7% |
| 256 | 42899 | 0.08 | 96.1% |
| 256 | 974269 | 0.09 | 95.8% |

We also conducted experiments on additional arithmetic tasks (product of $n$ numbers and scalar product of two vectors) and synthetic tasks (multi-hop question answering and selective copy). These results are presented in our response to reviewer Eby5 (“other tasks beyond addition”). We find that our method is robust to these additional tasks. Specifically, we vary the distribution of the input length with our custom distributions, and we find that this still leads to performance improvements on these additional tasks. We are also happy to experiment with any other tasks the reviewers suggest. **Re: embedding question:** Thanks for the question. We use the angular embedding from Stevens et al. (2024) in our work. Could you please clarify your question on the embedding? We want to address your question appropriately but didn’t quite understand it well enough to answer.
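Since the angular embedding comes up in the exchange above, a minimal sketch may help make the discussion self-contained. This shows one common form of angular embedding, mapping residues mod $q$ onto the unit circle so that a squared-error loss respects the modular wrap-around; the exact construction used in the paper follows Stevens et al. (2024) and may differ in detail.

```python
import math

def angular_embed(x: int, q: int) -> tuple:
    """Map a residue x mod q to a point on the unit circle.

    Nearby residues (including the wrap-around q-1 -> 0) land at
    nearby angles, so an MSE loss on the embedding respects the
    modular structure, unlike a loss on raw integer values.
    """
    theta = 2 * math.pi * (x % q) / q
    return (math.cos(theta), math.sin(theta))

def angular_mse(x: int, y: int, q: int) -> float:
    """Squared chord distance between the embeddings of two residues."""
    ax, ay = angular_embed(x, q)
    bx, by = angular_embed(y, q)
    return (ax - bx) ** 2 + (ay - by) ** 2
```

Under this embedding, residues $0$ and $q-1$ are close to each other, which a loss on raw integer values would treat as maximally far apart.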
Summary: The paper addresses the challenge machine learning models face in learning modular arithmetic, specifically in the context of the Learning with Errors (LWE) problem. It proposes two techniques: (i) using a designed data distribution that mixes sparse and dense modular arithmetic instances, and (ii) introducing a custom loss function with angular embedding and regularization to discourage convergence to trivial local minima. Experimental evaluation demonstrates an increase in the performance of ML models on modular arithmetic tasks. Claims And Evidence: Overall, the paper presents convincing empirical evidence supporting the claimed improvements. Experiments clearly demonstrate that custom data distributions and the novel loss function substantially enhance accuracy across various problem complexities. However, the claim regarding generalization to other arithmetic and synthetic tasks, while promising, could benefit from further detailed experiments and additional baseline comparisons (for example, checking other $N$ values). Methods And Evaluation Criteria: The proposed methods (custom data distribution sampling and the custom regularization of the loss function) are well-motivated by observations from prior literature and empirical insights. The evaluation criteria, including mean squared error (MSE) and accuracy metrics, are standard for the problem context. Theoretical Claims: The paper does not propose new theoretical claims requiring formal proofs and builds on theoretical insights from previous works. Experimental Designs Or Analyses: The experimental evaluation is sound, and the experiments are comprehensive (different $q$, number of terms $N$). However, there is limited hyperparameter sensitivity analysis. Specifically, a more detailed and explicit sensitivity analysis is needed for the improvements over CL. Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: I think the paper has sufficient novelty: the authors make two contributions, 1) augmenting training data with sparse vectors and 2) loss regularization to avoid model collapse, both of which are new contributions relative to prior work. Essential References Not Discussed: Mostly discussed; however, this is not my primary area, so I may have missed some references. Other Strengths And Weaknesses: Strengths: * The paper is well written and structured. * The experimental evaluation is sound. * Demonstrated significant improvements over SOTA works. Weaknesses: * Lack of detailed theoretical explanation for the observed improvements limits understanding of the underlying mechanisms. * Experiments do not fully address the practical cryptographic setting (i.e., noisy LWE instances). * Generalization experiments, while promising, are limited and do not fully substantiate claims of broader applicability. Other Comments Or Suggestions: * It would be beneficial to discuss the computational overhead introduced by these modifications compared to the baseline approach. Questions For Authors: I do not have any specific questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thoughtful feedback. **Re: limited generalization experiments:** Per your suggestion, we conducted additional experiments with more $N$ and $q$ values. These results are presented in the response to reviewer Qesb (“additional experiments on challenging settings”). We also conducted experiments on additional arithmetic tasks (product of $n$ numbers, scalar product of two vectors, polynomial sum) and synthetic tasks (multi-hop question answering and selective copy). We present the results for the synthetic tasks below and the other results in our response to reviewer Eby5 (“other tasks beyond addition”).

*Multi-hop Question Answering:* We synthetically implement [associative recall](https://arxiv.org/pdf/2212.14052) with multiple hops. Given $N$ pairs $(a_i, b_i)$ that represent a random permutation $\sigma$ from $\{1, 2, \dots, N\}$ to $\{1, 2, \dots, N\}$, we want to find the second successor, i.e., $\sigma(\sigma(x))$.

| max_length | layers | $f_{default}$ | $f_{inv\_sqrt}$ | $f_{uni}$ |
| --- | --- | --- | --- | --- |
| 16 | 4 | 7% | 100% | 100% |
| 32 | 8 | 3% | 96% | 99% |
| 64 | 12 | 2% | 93% | 94% |
| 128 | 12 | 1% | 91% | 90% |

*Selective copy:* Given a vector of size $N$ where each element is sampled from vocabulary $V$, output a selective copy of the vector (all tokens different from the <JUNK> token). This task was introduced in the [Mamba paper](https://arxiv.org/pdf/2312.00752).

| max_length | $f_{default}$ | $f_{inv\_sqrt}$ | $f_{uni}$ |
| --- | --- | --- | --- |
| 32 | 100% | 100% | 100% |
| 64 | 100% | 100% | 100% |
| 128 | 83% | 100% | 100% |
| 256 | 57% | 100% | 99% |

We find that our method is robust to these additional tasks. Specifically, we vary the distribution of the input length with our custom distributions, and we find that this still leads to performance improvements on these additional tasks. We are also happy to experiment with any other tasks the reviewers suggest.
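For concreteness, a minimal generator for the selective-copy task described above might look as follows; the token ids, the encoding of the <JUNK> token as id 0, and the function signature are illustrative assumptions rather than the authors' exact setup.

```python
import random

JUNK = 0  # hypothetical id for the <JUNK> token; real vocab layout may differ

def make_selective_copy_example(n, vocab_size, num_content, rng=random):
    """Generate one selective-copy instance.

    The input is a length-n sequence in which `num_content` positions
    hold content tokens (ids >= 1) and the rest hold <JUNK>; the target
    is the content tokens in their original order.
    """
    positions = sorted(rng.sample(range(n), num_content))
    seq = [JUNK] * n
    target = []
    for p in positions:
        tok = rng.randrange(1, vocab_size)
        seq[p] = tok
        target.append(tok)
    return seq, target
```

Varying how `num_content` is drawn (e.g., from the default vs. custom distributions over input length/sparsity) is what distinguishes the $f_{default}$, $f_{inv\_sqrt}$, and $f_{uni}$ columns in the tables above.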
We will include these additional results and analysis in the revised version. **Re: hyperparameter sensitivity:** We will include more details on the hyperparameter sensitivity analysis in the revised version. While modifying certain parameters in the curriculum can slightly improve performance, the CL approach is much more involved and requires more tuning to the specific task. Our approach is simpler and provides consistent improvement over sampling from the default distribution. We provide the specific parameters here for your reference. $X_1$ = at least half of the elements are zeros; $X_2$ = at most half of the elements are zeros.

When we ran the CL baselines, we modified three things:

1. Data mix: i) using $X_1$ up to $T_1$, then $X_2$ until the end; ii) **using $X_1$ up to $T_1$, then $X_1 \cup X_2$ until the end**.
2. Thresholds: i) $T_1$ is either 1%, 3%, or 10% of the training; ii) $T_1$ is when train_loss($X_1$) < eps, where we chose eps = {**1e-2**, 1e-3}.
3. Learning rate and weight decay: we experimented with 3 choices of lr (1e-5, **3e-5**, 1e-4) and 3 choices of weight decay (0.03, **0.1**, 0.3).

In Table 4 of the paper we reported the best choice, which is in bold above.

**Re: weaknesses**

* We are happy to include more explanation of the observed improvements. We investigated why our method succeeds and found that our sampling technique allows for linear sample complexity, while $f_{default}$ needs exponential sample complexity to tackle the problem. This helps explain why our proposed sampling strategy is so effective. Below, we measure the number of samples needed to reach <0.005 loss and 90% test accuracy.
| $N$ | $f_{default}$ | $f_{inv\_sqrt}$ (with best $f_{default}$ setting) | $f_{inv\_sqrt}$ (with our best setting) |
| --- | --- | --- | --- |
| 6 | 4.5M | 4.1M | 0.6M |
| 9 | 7.1M | 1.9M | 0.45M |
| 12 | 12.85M | 2.6M | 0.95M |
| 15 | 51.1M | 8.15M | 1.3M |
| 18 | Never | 9.35M | 1.75M |

* Experimenting on the full practical cryptographic setting is an important area for future work, and this work provides the foundation for improving performance in practical settings.
* See above for our response to the generalization experiment limitations.

**Re: computational overhead:** The computational overhead of our approach is minimal compared to the baseline, as we still generate the same number of data samples and simply modify the distribution used for generation. Similarly, the loss regularization term does not introduce any additional cost (besides the negligible cost of calculating the term itself), as we have a standard training loop. We timed the difference between the custom-loss and standard-loss experiments and saw a 0.04% difference. We will include this explanation in the revised version of the paper.

---

Rebuttal Comment 1.1: Comment: Thank you for your response; it answers the points raised in the review, and I am increasing my score.

---

Reply to Comment 1.1.1: Comment: Thank you for your helpful feedback and response, we appreciate it!
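To make the data-distribution comparison in this thread concrete, here is a hedged sketch of how the sampling schemes could be implemented. The exact definitions of $f_{default}$, $f_{uni}$, and $f_{inv\_sqrt}$ are given in the paper; the weights below (i.i.d. coordinates for the default, uniform over sparsity levels, and $1/\sqrt{k}$ over sparsity levels $k$) are illustrative assumptions only.

```python
import math
import random

def sample_num_nonzero(n, scheme, rng=random):
    """Sample how many of the n coordinates are nonzero.

    Illustrative stand-ins for the paper's distributions (the exact
    definitions may differ):
      - 'default':  i.i.d. coordinates, so high sparsity is rare
      - 'uniform':  every sparsity level 1..n equally likely
      - 'inv_sqrt': weight level k proportionally to 1/sqrt(k),
                    over-sampling sparse instances
    """
    if scheme == "default":
        return sum(rng.random() < 0.5 for _ in range(n)) or 1
    levels = list(range(1, n + 1))
    if scheme == "uniform":
        weights = [1.0] * n
    elif scheme == "inv_sqrt":
        weights = [1.0 / math.sqrt(k) for k in levels]
    else:
        raise ValueError(scheme)
    return rng.choices(levels, weights=weights, k=1)[0]

def sample_instance(n, q, scheme, rng=random):
    """One modular-addition instance: a length-n vector mod q with a
    controlled number of nonzero entries, and its sum mod q."""
    k = sample_num_nonzero(n, scheme, rng)
    x = [0] * n
    for p in rng.sample(range(n), k):
        x[p] = rng.randrange(1, q)
    return x, sum(x) % q
```

The point of the mix is that a model trained on such over-sampled sparse instances sees easy subproblems often enough to escape the regime where the default distribution needs exponentially many samples.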
Disentangling and Integrating Relational and Sensory Information in Transformer Architectures
Accept (poster)
Summary: The authors describe a neural architecture (DAT) in which relational information is a first-class object, and via a series of experiments show that this architecture offers genuine empirical benefits. ## update after rebuttal: I have kept my "accept" score for this strong paper. There is no change since my rebuttal responses below, but I am told I need to add this note here as well. Claims And Evidence: The claims seem solid. The question with this type of paper is often less about the solidity of the claims, however, but more about their significance. To be concrete, why do we need yet another transformer-like architecture? In this case, I think the new architecture meets the bar of significance for publication. First, it can be seen as helping make explicit the type of computation that "classic" transformers are learning implicitly. Second, it appears that for certain tasks DAT actually outperforms transformers. Third, it opens up interesting possibilities for interpretability work, potentially making neural systems more transparent. Methods And Evaluation Criteria: The paper has a good balance of methods, analyzing the architecture theoretically as well as performing experiments. Theoretical Claims: The theorem in Appendix A seems correct, although the Debreu representation theorem was new to me. Once I knew what that was, the result seemed correct. I'd recommend that the authors provide a paragraph outlining the result at a high level, which I read as: selection can be modeled with a preference ordering (with some mild conditions); the Debreu representation theorem allows us to think of computing such a preference ordering as computing a certain continuous function; we then just use standard results saying neural nets can approximate continuous functions. However, I'd note to the authors: if this outline is NOT what the proof is actually saying, then please clarify! 
Experimental Designs Or Analyses: The designs seem solid, and I appreciate that there are both language and vision tasks, which strengthens the overall claim. Supplementary Material: n/a Relation To Broader Scientific Literature: The question of whether and how transformers learn relationships is definitely central, so this seems squarely in the mainstream. See next section for more detail. Essential References Not Discussed: I think the related work cited is fine. If the authors want to expand a bit, there are two areas where I don't see citations; however, they may not really be essential. One is recent work on how transformers do seem to represent relations, e.g. "How do Language Models Bind Entities in Context?" (Feng & Steinhardt) and papers that cite it. Another is theoretical frameworks that have been proposed for representing relations (vector symbolic architectures, etc.). Other Strengths And Weaknesses: Figure 8 in the appendix is intriguing, and interpretability work would probably be an entire new paper. I do want to flag one thing about the claim that "Relational attention in DAT language models encodes human-interpretable semantic relations" from the caption: in many cases, classic transformers, too, encode human-interpretable semantic relations. (And here the relation isn't particularly striking: it just seems like it's picking out generally related words?) The fact that one can find a single example of an attention head that is interpretable isn't particularly interesting or useful in itself. I wonder if there's an example of an attention head which finds some kind of relation that is not normally seen in a classic transformer? Other Comments Or Suggestions: I'm not fond of the term "sensory" here, because it seems actively misleading. There's already a common metaphor for deep networks where people talk about the early layers as doing sensory processing, and the final layers as being analogous to motor neurons.
I'd recommend instead calling this something like "first-order" vs. "relational" information. Questions For Authors: none, except for the note about confirming my read of the proof, mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful review. We appreciate your positive feedback about the significance, methodological soundness, and strength of the empirical evaluation. Below, we hope to address the main concerns you raised in turn. **D1: Interpretability of Learned Relations** > Figure 8 in the appendix is intriguing, and interpretability work would probably be an entire new paper. I do want to flag one thing about the claim that "Relational attention in DAT language models encodes human-interpretable semantic relations" from the caption: in many cases, classic transformers, too, encode human-interpretable semantic relations. ... I wonder if there's an example of an attention head which finds some kind of relation that is not normally seen in a classic transformer? Thank you for your thoughtful comments regarding the interpretability of learned relations. This is indeed an interesting and important question that we’ve begun to explore further. We are glad you found the preliminary exploration in Figure 8 intriguing. As you rightly point out, the relations observed in Figure 8 appear to encode relatively simple semantic similarity between words/tokens, which has also been observed in the attention scores of standard Transformer attention. However, it is important to note that these relations serve a different functional role here: while the attention scores $\alpha_{ij}$ in standard Transformers encode a *selection criterion* controlling *where* information is routed, in *DAT*, the relation vectors $\boldsymbol{r}_{ij}$ serve as the *values* being transmitted between tokens, controlling *what* information is routed (with an independent set of attention scores controlling the selection criterion). 
We agree with you that finding a single interpretable relational attention head may not be very revealing on its own, and we would be excited to investigate whether relational heads in *DAT* might encode novel relationships that are not typically seen in classic Transformers. In particular, it would be interesting to understand how these relations form *computational circuits* that are unique to the DAT architecture---that is, characterize specific computational circuits that use relational heads in unique ways to carry out a specific computation, understanding their functional role on a deeper level beyond just the fact that the relations themselves appear human-interpretable. For now, this interpretability work is outside the scope of this paper, but we intend to explore it further in future work. One initial step we’ve taken is developing an interactive tool (accessible online) that allows users to load pre-trained DAT models and visualize relational representations at various layers on their own inputs. We plan to include a link to this tool in the final, de-anonymized version of the paper. --- **D2: Question on proof of Representational Capacity Theorem.** > The theorem in Appendix A seems correct, although the Debreu representation theorem was new to me ... I'd recommend that the authors provide a paragraph outlining the result at a high level Thank you for the question and the suggestion to provide further discussion on the Debreu representation theorem. We will add a high-level overview of the Debreu representation theorem and the related literature to improve the clarity and accessibility of the representation result presented in Appendix A. Yes, your interpretation of the result and its proof is correct.
The Debreu representation theorem, due to the economics literature, identifies preference relations on a topological (e.g., metric) space with a continuous "utility" function, assuming certain continuity properties of the preference relations with respect to the underlying topology. For us, the key is to extend this idea to a *family* of query-dependent preference relations in order to specify the attention mechanism. That is, each query is associated with an ordered space, and we require continuity of the family of preference relations with respect to both queries and keys, which we formulate as query-continuity and key-continuity, respectively. From there, the result follows by the approximation properties of inner products of MLPs. **D3: Further discussion of related work** > If the authors want to expand a bit, there are two areas where I don't see citations; however, they may not really be essential. One is recent work on how transfomers do seem to represent relations, e.g. "How do Language Models Bind Entities in Context?" (Feng & Steinhardt) and papers that cite it. Another is theoretical ways that have been proposed for representing relations (vector symbolic architecture, etc.) Thank you for these suggestions. We will incorporate them into our discussion of related work in the final version of the paper. --- Thank you again for your engagement with our work! --- Rebuttal Comment 1.1: Comment: These sound like useful improvements, and I appreciate the extensive explanations here.
Summary: This paper presents the Dual Attention Transformer (DAT), an extension of the Transformer architecture that introduces a relational attention mechanism alongside the standard self-attention mechanism. The key idea is to explicitly represent and process relational information by replacing the standard value aggregation in self-attention with a weighted combination of a relation vector computed for each pair of objects and a symbol vector. Claims And Evidence: The paper argues that standard Transformers primarily process sensory information and struggle with relational reasoning due to the entanglement of sensory and relational information. The proposed DAT disentangles these components, leading to improved performance. The empirical results largely support these claims. Methods And Evaluation Criteria: The authors employ standard benchmarks to assess the effectiveness of DAT, comparing it against standard Transformer models across various tasks. Theoretical Claims: This paper primarily focuses on the empirical aspects. Experimental Designs Or Analyses: It might be more convincing if more baseline comparisons with graph-based models and message-passing networks were included, given the conceptual similarities. Supplementary Material: I reviewed the implementation and experiment details in the supplementary material. Relation To Broader Scientific Literature: The paper presents a contribution to the development of relational reasoning in Transformer architectures, with potential implications across multiple domains, including language processing and vision. By introducing an explicit mechanism for processing relational information, the proposed approach highlights the importance of integrating structured reasoning capabilities into deep learning models. Essential References Not Discussed: It might be helpful if more key works in graph neural networks that have incorporated relational attention mechanisms were discussed and compared.
Other Strengths And Weaknesses: The strengths and weaknesses have been addressed in earlier sections, and no additional ones require emphasis here. Other Comments Or Suggestions: The comments and suggestions have been addressed in earlier sections, and no additional ones require emphasis here. Questions For Authors: How does DAT perform on more complex benchmarks beyond the current experimental setup? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and positive feedback on the overall contribution of our work, especially for highlighting the empirical support for our claims, the contribution to relational reasoning in Transformer architectures, and the importance of structured reasoning in deep learning models. Below, we hope to address the key comments and concerns you raised. --- **C1: Comparison with Baselines** We would like to highlight Appendix C (specifically Section C.1) and Appendix D, where we compare our proposed method to several prior works on relational learning. In particular, we compare our model to PrediNet (Shanahan et al. 2020), CoRelNet (Kerg et al. 2022), and the Abstractor (Altabaa et al. 2024), positioning our work within this ongoing line of research on relational reasoning and extending these prior efforts. One way to describe the architectures in these prior works is that they incorporate *"subtractive"* inductive biases that constrain the types of representations the model can compute (see [Ref 24] on the "Relational Bottleneck" for more on this). These strict biases enable strong performance on the benchmarks they were designed for (e.g., the Relational Games benchmark introduced by Shanahan et al. (2020)), but also constrain the models to a narrow domain of applicability. By contrast, our approach in developing the *Dual Attention Transformer* architecture is *"additive"*, in the sense that it incorporates new explicit relational processing capabilities without constraining existing components of the Transformer architecture, allowing the model to learn to select between the different computational mechanisms available to it based on the task or context, as well as compose them to create flexible and expressive computational circuits. 
Despite these differences, we find it valuable to compare *DAT* against those architectures on controlled synthetic benchmarks to explore the trade-offs of strong inductive biases and evaluate *DAT* against alternative approaches to relational learning. This was carried out and discussed in Appendix C.1. Initially, due to space constraints, we deferred this discussion to the appendix. However, with the additional page allowance, we plan to integrate this more detailed comparison and discussion of related works into the main body of the paper. Please also see our response numbered **A1** to reviewer oPuj, where we discuss a similar question. --- **C2: Conceptual similarities to message-passing networks** We view the aforementioned line of work on relational learning [Ref. 19, 20, 21, 22, 23] to be the most closely related literature to our work. However, we agree that there are some conceptual similarities between our *DAT* architecture and message-passing networks, which is a characteristic shared by Transformer models in general. In particular, like standard Transformers, the *DAT* architecture can be described in the language of the message-passing framework. In message-passing terminology, the messages exchanged in standard Transformer attention encode first-order sensory features of the sender, while in *relational attention*, the messages encode relational features between the sender and the receiver. We will additionally incorporate a discussion on the conceptual connections between the *DAT* architecture and message-passing networks into the paper. --- Thank you again for your review. We hope we were able to address your main concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I've increased my rating.
Summary: The authors propose a parameter-efficient variant of the self-attention mechanism in transformers called the Dual-Attention Transformer (DAT), which explicitly routes both sensory information (about individual tokens) and relational information (about relationships between pairs of tokens). The key differences between relational attention and standard attention are that 1) instead of routing a value projection of the attended (key) token, a vector of relational similarities is constructed (by concatenating the dot products of multiple learnable relation projections between query and key), and 2) a symbol vector (from a learned codebook), retrieved for each key, is added to the corresponding relation vector, and the result is then routed with the usual attention weight. The authors then incorporate both standard and relational attention heads in a multi-headed attention framework to construct DAT models and demonstrate their effectiveness across various domains - from explicitly relational tasks in the RelationsGame to language modeling, mathematical problem-solving, and image recognition tasks. Claims And Evidence: The paper makes two primary claims that are well-supported by the evidence: 1. DAT outperforms standard transformers with multi-headed attention - this is demonstrated convincingly across multiple domains and tasks 2. The relational attention mechanism better routes relational information than standard self-attention - this is most clearly shown on explicitly relational tasks from the RelationsGame Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the demonstrations. The authors evaluate on a diverse set of tasks spanning different domains (visual reasoning, symbolic math problems, language modeling, image classification) and compare against comparable transformer baselines (or relational baselines) while controlling for parameter count.
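The mechanism described in the summary above can be sketched as a single relational-attention head in NumPy; the shapes, variable names, and normalization are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def relational_attention(X, Wq, Wk, Wqr, Wkr, S):
    """Sketch of one relational-attention head.

    X:        (n, d)       token representations
    Wq, Wk:   (d, d_k)     projections for the attention (selection) scores
    Wqr, Wkr: (m, d, d_r)  m learnable relation projections
    S:        (n, m)       symbol vectors, one per (key) position
    """
    # standard softmax attention scores decide WHERE to route
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(Wq.shape[1])
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    # relation vector r_ij: one inner product per relation projection,
    # concatenated across the m projections -> shape (n, n, m)
    R = np.stack(
        [(X @ Wqr[h]) @ (X @ Wkr[h]).T for h in range(Wqr.shape[0])],
        axis=-1,
    )
    # the value routed from sender j to receiver i is the relation
    # vector r_ij plus the symbol vector identifying sender j
    V = R + S[None, :, :]
    return np.einsum("ij,ijm->im", A, V)
```

The contrast with standard attention is that the routed values depend on the (query, key) pair through $r_{ij}$ rather than on the key token alone, while a separate set of attention scores still controls the selection.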
Theoretical Claims: A theoretical claim is made about the class of functions that can be computed by relational attention. A proof is provided in the appendices and was not checked. The claim is not particularly strong and the informal proof in the main text seems sufficient. Experimental Designs Or Analyses: The experiments and ablations are thorough, well-reported, and compelling. The learning curves across most tasks show clear data efficiency advantages of DAT over standard transformers. Furthermore, the additional results in the supplementary material are comprehensive. Supplementary Material: The supplementary materials are extensive. I primarily reviewed the relational baselines in Figure 6; for most claims the results in the main body were sufficient. Relation To Broader Scientific Literature: The paper's findings are moderately significant and relevant to current research on improving transformer architectures. The authors position their work appropriately within the literature on relational inductive biases and transformer models. One potential limitation not fully addressed is whether the performance gains from RelationalAttention would still be significant at very large scales, or if they might be incompatible with optimization tricks for self-attention. However, the consistent improvements across the scales explored are promising and warrant further exploration. Essential References Not Discussed: None of which I am aware Other Strengths And Weaknesses: None of note Other Comments Or Suggestions: * The ICML Style guide requires citations within the text to include authors' last names and year, which this paper doesn't follow * Line 161 contains an unnecessary sentence fragment "this adds structures..." Questions For Authors: 1. A potential downside of DAT even within the experiments carried out is the possibility that the additional computations required for RelationalAttention outweigh the gains from parameter efficiency.
Though differences are likely marginal, was this something you investigated? 2. For very large transformers trained on extensive data, standard self-attention may eventually capture relational information in complex tasks while retaining greater efficacy for other common computations - potentially narrowing the performance gap at increasing scale. While large-scale demonstrations aren't necessary for this paper, do you have intuitions about how the advantages of DAT might scale? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response Thank you for your detailed and thoughtful review. We appreciate your positive assessment of our work and are encouraged by your recognition of its methodological soundness, strong empirical results, and relevance to the literature on relational inductive biases and Transformer-based architectures. Below, we will aim to respond to the key comments and concerns you raised. --- **B1: Computational efficiency** > A potential downside of DAT even within the experiments carried out is the possibility that the additional computations required for RelationalAttention outweigh the gains from parameter efficiency. Though differences are likely marginal, was this something you investigated? Thank you for raising this important point. Our experiments were designed to carefully control for model size (i.e., parameter count) when comparing the *DAT* architecture to baselines. In particular, the parameter count is slightly *smaller* for the *DAT* model compared to the baselines. While we did not explicitly measure computational cost in terms of FLOPS, we expect the differences to be marginal. From a practical standpoint, we believe that the more important factor in computational efficiency is the availability of optimized GPU kernels, such as FlashAttention. This gap could perhaps be bridged given interest from the MLSys community to develop optimized kernels for relational attention, but for now, this would be an obstacle to scaling *DAT*-style architectures in a cost-effective way. --- **B2: Scaling to Larger Models** > One potential limitation not fully addressed is whether the performance gains from RelationalAttention would still be significant at very large scales, or if they might be incompatible with optimization tricks for self-attention. However, the consistent improvements across the scales explored are promising and warrant further exploration. 
> While large-scale demonstrations aren't necessary for this paper, do you have intuitions about how the advantages of DAT might scale? Our intuition, guided by our scaling results up to ~1B-parameter scales, is that the explicit relational processing capabilities of *DAT* will continue to provide significant benefits at even larger scales. Our hypothesis is that relational processing is a key computational capability that is useful in many domains and at several levels of abstraction---explicit support for this in the model architecture can enable more efficient learning and greater generalization capabilities. Of course, confirming this hypothesis requires empirical validation, which we currently do not have the resources to do at our academic institution. As you rightly mentioned, and as discussed above, scaling further introduces new challenges, and the availability of various optimization tricks becomes an important consideration. We hope future work will explore this further. --- Thank you again for your thoughtful review and your engagement with our work.
Summary: The authors introduce a modification/extension of the classical attention mechanism they term ‘dual attention’, which not only routes sensory information (as in classic SA) but adds a dedicated pathway to exchange relational information between tokens using its unique attention matrix – allowing for information flow which differs from the sensory one. Claims And Evidence: All major claims are in my opinion well justified. The authors do a good job in articulating and introducing their angle on the problem step by step, starting from the well-known Transformer self-attention mechanism. The relational information is explicitly computed (i.e. inductive bias) and added to a symbolic identity, hence justifying the claim that relational information is exchanged. (Although, actual insights into what exactly is exchanged would enhance the paper – currently placed in limited form in the appendix.) Methods And Evaluation Criteria: The authors evaluate their method on four different tasks with corresponding evaluation criteria (acc for vision, perplexity for language, etc.); The selection of the vision task could in my opinion be significantly improved – Image classification on CIFAR seems rather ill-suited to show the power of relational processing: - CIFAR is very object-centric and a single-object dataset, hence computation of relations between tokens might be rather straight-forward (around center of image) -- BUT more importantly: - Image classification: It can be enough to look at one or very few tokens of a CIFAR image and directly tell what the class would be; Hence, this eval seems ill-suited to me. → There is a variety of other vision tasks where the benefit of relations between parts might be much more intuitive, e.g. multi-object detection, tracking, semantic segmentation, just to name a few. 
Theoretical Claims: I have checked the formulas and algorithms, and briefly read through the supporting evidence for Theorem 1 (appendix) – but couldn’t spot any obvious issues; Experimental Designs Or Analyses: As previously mentioned, I think the visual experiments could be significantly improved by choosing a task that requires relational modelling in a more obvious way that would be much more intuitive to the reader (e.g. multi-object detection, semantic segmentation, etc.); It is unclear to me if an instance-based task like classification would benefit from this, as one token might already be enough to determine the correct class label. Also: Experimental analyses are often performed well but not necessarily contrasted to related methods – this has been deferred to the appendix, but might be better placed in the main paper for visibility and to provide the reader with appropriate context. Supplementary Material: The supplementary material in the form of the appendix nicely complements the paper and shows plenty of additional insights; especially Section C in terms of additional insights regarding experiments; Very important in terms of comparison to highly-relevant work is Section D! Relation To Broader Scientific Literature: Relation to relevant literature can and should be improved – A very important relationship to the work of Altabaa et al. [22] is discussed in the appendix in detail, but the main body of the paper severely lacks in terms of discussing and attributing the similarity; While indeed different, both methods are (in terms of underlying idea, choice of Transformer architecture and modified attention mechanism) very closely related – which should be appropriately indicated already in Sections 2.2 and 2.3. Essential References Not Discussed: None that come to mind – but not extremely up-to-date in this particular area. 
Other Strengths And Weaknesses: **Strengths:** *Originality & Significance:* - Clear motivation and step-by-step intro of the method based on a known shortcoming of a missing dedicated modelling mechanism of token-relationships within a Transformer architecture, addressing a known but important gap - Proposed method applicable to range of modalities due to the choice of a Transformer architecture and the preservation of its generality (in terms of attention) - Authors demonstrate the applicability via a range of experiments across different tasks/modalities *Clarity:* - Explanations well-supported through a good mix of figures, algorithms and formulas - The paper is well written and easy to read and follow; several details moved to the appendix, but the paper provides a good level of depth to easily follow --- **Weaknesses:** - Discussion of & comparison to related works is lacking in parts of the main text: The similarity to [22] is discussed in detail in the appendix, but should be indicated much earlier and in a clearer manner in Sections 2.2 and 2.3; Similarly, almost all experiments exclusively compare default Transformers with DAT – although for some, there are related works available that might even outperform (see appendix Figure 6); Could be discussed in the respective section to be ‘up-front’ with the reader (I don’t expect the method to outperform task-specialist-methods, but a comparison to these works (even if treated in a class of their own) would in my opinion help the reader to better place the proposed method’s strengths - Experiments on the visual task, i.e. classification on CIFAR, seem rather ill-chosen to support a claim of modelling relations; see previous comments & questions-section - Details on the ‘subspace’ comparison for relationship computation, i.e. l \in Rd could be extended -- see questions. Other Comments Or Suggestions: Typos: - L 177 right: to both computational mechanismS (plural) - L 354 left: it is useful TO consider.. 
(to missing) Questions For Authors: 1. I’d like the authors to provide some more details and insights into the explicit relation computation between the feature maps: The authors state this operation is performed with “$l \in [d_r]$”, and “for each $l \in [d_r]$, the feature maps ..” (l151 f) – producing a relation vector “across different features subspaces”. → Are these subspaces particularly chosen? And if yes, how? → How many comparisons are performed between two tokens? As this is one of the main components of this approach (and differences to [22]), this aspect could be made a lot clearer to the reader. 2. Comments on the visual task: I’d like to know why the authors think that Image classification on CIFAR benefits from the relational modelling. It clearly does, as we can see in the results, but as I’ve mentioned previously: One token might already be enough to solve the entire task of classifying an image – so this choice doesn’t particularly feel well suited. → Have the authors visualised what relations are modelled? If not, is this possible and could be included? Do they represent any ‘expected’/intuitive relations (e.g. parts of the object)? → The visualisation provided for the language task in Figure 8 (appendix) is quite interesting; so something similar for the image task would be a good addition; 3. Follow-up: → Have the authors thought about evaluating their model on an alternative vision task that more intuitively requires modelling of relations, e.g. multi-object detection or semantic segmentation? Why/why not? *TLDR;* I think the paper presents an interesting approach, although some aspects could be improved (as detailed before); Depending on the response, I’m happy to consider further increasing my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review. We appreciate your positive feedback on the originality, significance, and clarity of our work. We are especially grateful for the time and effort you took to thoroughly engage with our work, reading the appendix and making note of typos. We'd also like to thank you for your specific and constructive feedback, which we believe has helped us improve the paper further. Below, we outline the key concerns you raised and attempt to address them. --- **A1: Discussion & Experimental Comparison to Related Work on Relational Architectures (in main body of paper)** > Experimental analyses are often performed well but not necessarily contrasted to related methods – this has been deferred to the appendix, but might be better placed in the main paper for visibility > The similarity to [22] is discussed in detail in the appendix, but should be indicated much earlier We agree that a more detailed discussion of related work in the main text would enhance clarity. Accordingly, we will use the additional page allowance to: - Integrate the experimental comparison to previous relational architectures [20,21,22], currently in Appendix C, into the main text. - Expand the discussion of Altabaa et al. [22], currently in Appendix D, to Sections 2.2 and 2.3. --- **A2: Suitability of CIFAR for Evaluating Relational Processing in Vision** > CIFAR is very object-centric and a single-object dataset > I’d like to know why the authors think that Image classification on CIFAR benefits from the relational modelling. It clearly does, as we can see in the results Although CIFAR datasets contain only one object per image, we believe relational modeling is valuable at multiple levels of abstraction, from local patches to object parts and entire objects. The explicit relational mechanisms in our architecture enable the model to process and reason about visual relationships between object parts and image patches. 
For instance, this allows the model to detect symmetries or represent visual similarities between object parts across different regions of the image. > It can be enough to look at one or very few tokens of a CIFAR image and directly tell what the class would be. In our models, images are divided into small 4x4-pixel patches (64 tokens per image), making it unlikely that a single token would suffice for classification. Since individual tokens represent very small regions at early layers, the models must consider information from several tokens/patches, including the relationships between tokens. At those early layers, the relations visually compare different patches, which can perhaps be thought of as analogous to applying one patch as a "kernel" or "filter" to another patch in the image. At later layers, tokens may come to represent more global higher-level features, and the relations can represent higher-level relations between object parts. Indeed, the improved results we observe for the *DAT* architecture demonstrate the utility of the enhanced relational processing capabilities of our architecture, even for the simple CIFAR benchmark. That said, we agree that more complex tasks (e.g., multi-object detection, tracking, semantic segmentation) would better showcase our architecture’s capabilities. We will add a discussion on this limitation and potential future directions. > Have the authors visualised what relations are modelled? If not, is this possible and could be included? Do they represent any ‘expected’/intuitive relations (e.g. parts of the object)? Following your suggestion, we visualized the relations learned by the *ViDAT* model. We find that some relations do appear to represent intuitive "visual similarity" relations between object parts. To illustrate this, we provide an example of a visualization of the learned relations on an image of a truck at [Layer 0](https://postimg.cc/PNC0fdLX) and [Layer 4](https://postimg.cc/gXQSL6wY). 
The patch labeled "source" represents the reference token, and the value annotations indicate sigmoid-normalized relation activations $r_{ij}[\ell]$. The relation activations appear to be high for object parts that are visually similar, especially at earlier layers. We will add a discussion of these findings in the revised paper. --- **A3: Questions** > Are these subspaces particularly chosen? And if yes, how? The 'feature subspaces' are not predefined; they are learned via $W_{q,\ell}^{rel}, W_{k, \ell}^{rel}$ during training. These are separate from the $W_{q,h}^{attn}, W_{k,h}^{attn}$ weights, which specify the selection criterion of the attention operation. > How many comparisons are performed between two tokens? This is a hyperparameter of the model, denoted $d_r$. For example, in the 1.3B-parameter language model, $d_r = 128$. We will revise the main text to make these points clearer. --- Thank you again for your thoughtful review and your helpful feedback. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their thorough responses to my and the other reviewers' queries. While I do think that the paper provides an interesting approach, I am still not convinced by using *image classification on CIFAR* as a valid way to show the results/validity on vision tasks -- and the visualised relationships on the image provided unfortunately don't really strengthen this aspect. (Compare e.g. various distant sky/background-related patches with higher source-relation against close and distant truck regions) I will therefore keep my rating as is to reflect this aspect.
Directed Graph Grammars for Sequence-based Learning
Accept (poster)
Summary: In this paper, the authors propose a novel framework for mapping Directed Acyclic Graphs (DAGs) to sequences via directed graph context-free grammars. Importantly, the authors formulate graph grammar induction as a Minimum Description Length (MDL)-based compression, achieved through an intricate sequence of graph optimization problems. The obtained grammar enables subsequent sequence-based learning tasks using transformers (possibly opening a new bridge between graph-based problems and the emerging LLM technologies) and variational autoencoders (VAEs). Empirical results are demonstrated on small- to medium-scale datasets involving neural architectures, Bayesian networks, and circuit designs. ## Update after rebuttal I believe the clarifications and additional experiments promised by the authors constitute a good improvement of the present manuscript. While I still find the presentation very dense and technical, I will raise my score to a 3. Claims And Evidence: The authors claim that their grammar-based approach provides a compact, principled, and lossless sequential representation of DAGs, which could improve subsequent generative modeling, property prediction, and Bayesian optimization tasks. The mapping to an abstract context-free grammar seems very interesting, although some of the derivation details are hard to grasp for a non-expert, especially given the extremely dense and technical presentation. The authors provide convincing evidence that, once the induction is completed, DIGGED outperforms several baseline models. Methods And Evaluation Criteria: DIGGED relies on an MDL-based grammar induction algorithm, encompassing different graph optimization problems, including frequent subgraph mining, compatibility maximization, and rule disambiguation. 
The evaluation focuses on the downstream performance once the grammar is derived, and the metrics include graph validity, uniqueness, novelty, predictive performance, and effectiveness in Bayesian optimization. However, there is no clear study on the scalability of the grammar induction process to large datasets, and the study lacks controlled experiments showing the downstream performance loss when pruning heuristics are introduced over the brute-force MDL algorithm on larger datasets. Theoretical Claims: The authors claim theoretical guarantees of one-to-one and onto mappings between DAGs and the graph-grammar produced sequences. However, critical theoretical limitations, particularly the NP-hardness of grammar induction, are relegated to the appendix, reducing transparency on the true bottleneck of the proposed approach. Experimental Designs Or Analyses: The presented experiments span multiple real-world DAG datasets, but primarily focus on relatively small or moderate sizes. The paper shows strong empirical results in these controlled environments (where the authors claim the grammar induction takes "few minutes"), but lacks rigorous scalability analyses of the grammar derivation for larger, real-world datasets. For example, it is entirely unclear how the brute-force approach for the grammar induction could be applied to a large molecule-/protein-structure dataset. Supplementary Material: The supplementary material contains essential details for understanding the extremely dense paper, including algorithmic complexity analyses and the critical admission that grammar induction is NP-hard, which significantly impacts practical applicability. Such fundamental information should be explicitly discussed in the main body. Moreover, the dense formatting of the proofs is not friendly to a non-specialized reader. Relation To Broader Scientific Literature: The work is well placed in the scientific context of graph generative modeling. 
Essential References Not Discussed: None. Other Strengths And Weaknesses: *Strengths:* * The authors propose an interesting and theoretically principled grammar-based framework for graph generative modeling. * The proposed (lossless) mapping enables sequence-based learning and might allow new LLM-based approaches. * The downstream performance is very competitive across varied (medium-sized) benchmarks *Weaknesses:* * The presentation is dense and overly technical. Given this conference is a general ML conference, I would think it is best to write the paper to be readable by a researcher working on related fields, while in this case, a non-expert reader will struggle in deciphering the technical terminology and the dense visual representations. * There are some clear scalability concerns due to NP-hard grammar induction, and the authors propose a brute-force approach (at least in the experiments in the main). There is a list of possible adjustments and heuristics in the appendix, but they are never tested (if I understand correctly). Moreover, this crucial discussion is largely ignored in the main text. Other Comments Or Suggestions: * I believe a major restructuring of exposition is needed to enhance readability and accessibility for non-expert readers. * Essential NP-hardness and complexity discussions should be moved to the main text, to provide transparency about the computational bottleneck of this novel approach. * Scalability and computational analyses for large-scale data must be explicitly discussed, and potentially it would be nice to see some tests where the downstream effects of an imperfect grammar induction are studied. Questions For Authors: * Given the NP-hardness of grammar induction, how practical is your method for datasets significantly larger or more complex than those demonstrated? * How does grammar induction complexity grow empirically with dataset size and graph complexity? 
* Have you explored heuristic or approximate methods to improve scalability, like the ones proposed in the appendix? * Can you clarify the advantages of grammar-based encoding over simpler encoding schemes for practical tasks? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for acknowledging the novelty and strengths of our interesting, theoretically principled framework! *There are some clear scalability concerns due to NP-hard grammar induction, and the authors propose a brute-force approach…. There is a list of possible adjustments and heuristics in the appendix, but they are never tested (if I understand correctly). Moreover, this crucial discussion is largely ignored in the main text…. theoretical limitations, particularly the NP-hardness of grammar induction, are relegated to the appendix…* * The adjustments and heuristics are in fact used in our implementation. In many places, we resort to approximate approaches wherever needed. For example, Subdue is a well-known and fast approx. subgraph mining library. The pseudocodes `fast_subgraph_isomorphism`, `approx_max_clique`, `quick_hitting_set` mean approximation algorithms. We will explicitly say these are approximate/heuristic algorithms in the paper, along with known complexity guarantees. * Here are the NP-hard submodules used within grammar induction, and the exact/approximate/heuristic options, along with any parameters which trade off accuracy vs speed: * Frequent subgraph mining (FSM): * Approximate: We use the Subdue library. It has various options for pruning the search. Parameter: beam_width (used for subgraph expansion). * Max clique: * Exact (O(exp(n))): networkx’s cliques library * Approximate (O(poly(n))): We use networkx’s O(|V|/(log|V|)^2) approximation algorithm. * Heuristic (O(n)): (Repeat K times) Initialize a random node, iterate over all remaining nodes in random order, adding any that satisfies clique condition. Parameters: K * Hitting set problem during disambiguation: * Exact: our own implementation * Approximate: Beam search. Parameters: beam width * Our datasets have variable sizes from 47877 (CKT), 152160 (ENAS), to 2,000,000 nodes (BN), which span the range of real-world use cases. 
We use the size of the input to toggle between different options, trading off accuracy and efficiency. Roughly speaking: CKT mostly invokes exact/approximate solutions, ENAS approximate/heuristic solutions and BN heuristic solutions. * We apologize for relegating the discussion around complexity to the Appendix and will bring the major conclusions to the main text. *...there is no clear study on the scalability of the grammar induction process to large datasets… lacks controlled experiments showing the downstream performance loss if pruning heuristics over the brute-force MDL algorithm are introduced in larger datasets…. potentially it would be nice to see some tests where the downstream effects of an imperfect grammar induction are studied.* * Thanks for proposing this additional control study. We agree it is crucial to quantify how approximations/heuristics chosen to speed up the NP-hard submodules will affect the downstream performance. * Due to CKT being the smallest dataset, the main paper results already reflect the exact and approximate settings. Thus, we can measure the performance gap and efficiency gains by using heuristic settings for one submodule at a time. Specifically, we tried the following ablations: 1. For Subdue (FSM), use beam width=3 instead of 4 2. Always use heuristic (max clique) instead of approx., with K=10 3. Always do beam search instead of exact, with beam_width=10. 4. Skip Algo. 7 (disambiguation), losing property 1. * Ablation 3 did not affect any of the samples. Due to small input (derivation) sizes, it is not a bottleneck and does not introduce meaningful changes. * Here are the results for Abl’s 1 and 2. 
||Unique|Novel|Gain|BW|PM|FoM|1st|2nd|3rd|%Faster|Compress Ratio| |-|:-:|:-:|-|-|-|-|-|-|-|-|-| |Abl.1|65.6|69.1|0.623,0.777|0.628,0.783|1.003,0.258|0.624,0.786|267.55|253.61|246.78|562\%|2.04| |Abl.2|91.3|85.1|0.629,0.773|0.629,0.788|1.005,0.251|0.617,0.797|278.93|278.93|267.61|1844\%|2.13| |Abl.3|97.3|100|0.635,0.793|0.630,0.785|0.993,0.316|0.625,0.785|306.32|290.42|260.97|~300\%|2.32| |DIGGED|98.7|99.9|0.630,0.791|0.635,0.784|0.990,0.314|0.627,0.787|306.32|296.82|265.53|0%|2.18| * We see these modifications provide significant speedups over the original runs. The latent space quality and Bayesian optimization results slightly benefit from more accurate solutions to the FSM and max clique submodules, but they are still reasonably close. We do note that the max clique submodule has better marginal returns when trading off accuracy for speed, so we recommend starting with that. We will include these findings in the main paper. *Can you clarify the advantages of grammar-based encoding over simpler encoding schemes for practical tasks?* * Please see our response to ymms, where we add a new ablation study, comparing against simpler encoding schemes, while fixing the same model architecture. Our findings show the naive encoding experience issues with decoding and lowers downstream performance. We hope we've addressed your concerns!
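For reference, the O(n) randomized max-clique heuristic described in the submodule list above (repeat K times: pick a random start node, iterate over the remaining nodes in random order, greedily adding any node that preserves the clique condition) can be sketched as follows; the function name and adjacency-set representation are illustrative, not the actual implementation.

```python
import random

def greedy_max_clique(adj, K=10, seed=0):
    """O(n)-per-restart heuristic: K random restarts, each growing a clique
    greedily. adj: dict mapping node -> set of neighbor nodes."""
    rng = random.Random(seed)
    best = []
    nodes = list(adj)
    for _ in range(K):
        order = nodes[:]
        rng.shuffle(order)
        clique = [order[0]]                       # random starting node
        for v in order[1:]:                       # random iteration order
            if all(v in adj[u] for u in clique):  # clique condition
                clique.append(v)
        if len(clique) > len(best):
            best = clique
    return best

# Triangle {0, 1, 2} plus a pendant node 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
best = greedy_max_clique(adj, K=20)
```

Larger K trades speed for accuracy, which is the parameter tradeoff referred to above; on small graphs a modest K typically recovers the true maximum clique.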
Summary: This paper discusses how to convert a directed acyclic graph (DAG) into a sequence, allowing for sequence decoding based on autoregressive models. The paper proposes a method to transform a graph into a sequence in the form of a context-free grammar. The core idea is to induce the grammar from existing data using statistical methods, and then use the statistically derived grammar to convert the DAG into a sequence. Experimental results show significant improvements on certain tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: See Strengths And Weaknesses Theoretical Claims: NA Experimental Designs Or Analyses: This method achieves notable improvements on downstream tasks. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. How to convert a graph into a sequence and integrate it with an auto-regressive model is a question worth exploring. Moreover, uncovering the intrinsic relationships between nodes through grammar induction and linearizing the graph is a reasonable approach. 2. The experimental results are quite positive. By converting graphs into sequences, the existing autoregressive models can be fully utilized, achieving relatively notable results in downstream tasks. Weaknesses: 1. My main concern is that the entire grammar induction process is based on symbolic statistics rather than end-to-end representation learning. The limitation of this approach is that the entire learning process is pipelined, thus creating a certain gap with the subsequent neural module. 2. Simply analyzing the effectiveness of grammar induction through case studies might not be comprehensive enough. Evaluation on some real-world benchmarks could be introduced to make the results more intuitive. For instance, text can be treated as a graph, and the F1 score can be calculated by comparing the statistically derived structures with manually annotated syntax. 
Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Thank you for recognizing the reasonableness and positive experimental results of our work!* * In addition to recognizing our method as “converting a graph into a sequence”, we want to add there is a deep motivation from the objectives of **compositional generalization**! Contrary to what a “sequential” representation may imply, DIGGED is intrinsically compositional (Fig. 2 top). In fact, DIGGED is trained to embed DAGs with similar hierarchical compositions to similar points in latent space. For instance, if DAG 1 is (uniquely) represented as W->X->Y and DAG 2 as W->X->Z, the autoregressive decoder has to predict the same first two tokens, which moves the two DAGs to similar points in latent space. Combined with the relational inductive bias of our DAGNN encoder, the autoencoder objective can be viewed as combining both the relational and hierarchical inductive bias to learn expressive and generalizable representations. *“My main concern is that the entire grammar induction process is based on symbolic statistics rather than end-to-end representation learning. The limitation of this approach is that the entire learning process is pipelined, thus creating a certain gap with the subsequent neural module.”* Thanks for raising this point! Here are some of our thoughts: 1. The symbolic-neural divide is to some extent unavoidable when working with discrete, irregular data like DAGs. Our main reference point is existing methods that still generate a graph sequentially without any symbolic statistics, using naive sequential representations. The core contribution of our work is showing there is a principled, sequential representation that outperforms such methods and is agnostic to the domain. 2. The same limitation exists in language models, where an algorithm like byte-pair encoding (BPE) is the standard way to build a vocabulary building up from the character level. 
Practice shows larger tokens improve downstream performance [1] and efficiency [2]. 3. Although seemingly a limitation, pipelining does have a few advantages. It allows the vocabulary derived from symbolic statistics to be transferred across models. It doesn’t require re-training end-to-end representations from scratch. Lastly, the mined vocabulary can be an inductive bias that can capture domain-specific insights, as shown in our case study (App. E.1). 4. An ongoing effort of ours is optimizing grammar induction beyond unsupervised objectives like minimizing description length. End-to-end learning of the vocabulary and propagating feedback will be essential for further improvements. *Simply analyzing the effectiveness of grammar induction through case studies might not be comprehensive enough. Evaluation on some real-world benchmarks could be introduced to make the results more intuitive. For instance, text can be treated as a graph, and the F1 score can be calculated by comparing the statistically derived structures with manually annotated syntax.* * Great idea! The main challenge in this study is that DAG domains like circuits or neural architectures lack high-level annotations. Thus, for circuits, we resorted to case studies with experts. A systematic evaluation of the grammar quality is an active direction for us, especially when manual annotations are available. Natural language does have such annotations, and we are looking into it! In particular, we’re looking to induce grammars of phrase structures from Penn TreeBank, then generate novel, diverse phrase structure trees that can be evaluated by its plausibility. For example, by substituting words for POS tags, we can also evaluate how natural and grammatical the sentences are. We hope to get around to finishing this soon! For extra motivation, we hope you can endorse this study, which opens multiple avenues of future research! [1] Large Vocabulary Size Improves Large Language Models. 
arXiv:2406.16508 [2] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies. NeurIPS 2024.
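To make the BPE analogy in point 2 above concrete, here is a toy sketch of the merge loop: repeatedly fuse the most frequent adjacent symbol pair into a larger vocabulary token. This is for illustration of the analogy only; DIGGED's grammar induction operates on graphs, not character sequences.

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    """Toy byte-pair encoding: build a vocabulary up from the character
    level by merging the most frequent adjacent pair num_merges times."""
    seqs = [list(w) for w in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for s in seqs:
            for a, b in zip(s, s[1:]):
                pairs[(a, b)] += 1        # count adjacent symbol pairs
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)              # new, larger vocabulary token
        new_seqs = []
        for s in seqs:                    # re-tokenize with the merged pair
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and s[i] == a and s[i + 1] == b:
                    out.append(a + b)
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges, seqs

merges, seqs = bpe_merges(["abab", "abc"], 2)
```

As in point 2, the merged tokens ("ab" here) shorten downstream sequences, mirroring how a mined grammar vocabulary compresses derivations.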
Summary: This paper describes a graph grammar approach for mapping graphs to strings in a principled way. Given a set of graphs, the underlying grammar induction is deterministic and determines grammar production rules able to reconstruct an in principle unbounded graph ensemble, containing the "training" graph set. Claims And Evidence: The claims are partly theoretical, and partly about usefulness in practice. The theoretical claims about the parsings (subjacent to the proposed grammar induction) that are produced by the algorithms are 1) uniqueness of the parsing of each graph in the dataset 2) surjectivity of the production from the obtained grammar (i.e. that we can recover the dataset as a subset of the grammar-producible graphs) 3) that the DAG property is valid for the produced graphs. These are proved/verified in the appendix. The practical applications of the proposed parsing are for Neural Architecture Search, Bayesian Networks and Analog Circuit Design. Compared to other methods, this algorithm gives 100% validity of the DAGs and very high validity of the constructed cycles. The novelty of the graphs produced is lower than the benchmarks. - The 100% validity is to be expected given the theoretical guarantees, but it's important to have a practical confirmation of that. - On the other hand, the low novelty compared to other methods seems to be a negative aspect for the method. Several case studies are presented, in which a positive point is slightly higher interpretability due to deterministic decoding. However, the presented examples are extremely simple, and I don't see how the claim for better interpretability would scale to applications in which the grammar / parsing encoding shows all its compositional generalization capability. Methods And Evaluation Criteria: I think that the evaluation criteria are in line with the literature, and they have the same flaws as the ones of competing approaches, which is that the examples are often oversimplified. 
Theoretical Claims: I think that the proofs are correct.
Experimental Designs Or Analyses: I did not check them in full detail, but at a cursory look they seem OK.
Supplementary Material: I reviewed appendices A-F but not G and H.
Relation To Broader Scientific Literature: I think that this principled and deterministic approach to parsing graphs is an important new addition to the literature, as also highlighted by the comparisons in Tables 1 and 2.
Essential References Not Discussed: I am not aware of works that have not been cited.
Other Strengths And Weaknesses: I think that the paper is well written and clear. There are some points that can be easily improved, mentioned below in the comments part. I feel that a weakness is that it restricts to toy models, with only a small mention of scaling (as opposed to spending a paragraph/appendix on that). Also, there is a cursory mention of using the underlying encoding in combination with transformers; perhaps this combination will have issues and hurdles, and thus another weakness of the paper is not spending more time on this direction, which to me seems like one of the main future prospects for this line of research.
Other Comments Or Suggestions: See the strengths/weaknesses part. Here are some more comments / typos.
Line 030: among the positive properties of the approach, producing "valid" outputs is mentioned, which is puzzling at this point; I think it would be good to expand and make clear what kind of validity that refers to.
Figure 1: there are some things I am not following in the upper 3 pictures of DAGs + grey motifs. Can you explain so I can be sure I understand?
Picture 1: there is a puzzling grey arrow from "(b)" to the grey oval -- shouldn't the arrow go the opposite way?
Picture 2: there is a puzzling grey arrow from "(d)" to the grey oval -- shouldn't the arrow go the opposite way?
About the explanation of Figure 1, lines 155-158, I don't follow this part: "However, adding such an instruction would create a conflict with the motif's occurrences in DAGs two and three".
- The DAGs in the figure are not numbered, so what do "two" and "three" mean?
- Why would that rule create a conflict? Can you expand some more on the explanation? I tend to think that the reference to the subpictures of Figure 1 is wrong, so can you verify/explain?
Line 140 "linearizes": can you replace it with another word or remove it? This has nothing to do with linear algebra, so it may be confusing.
Lines 170-183: from "At a high level" onwards, I don't follow some of the wording of the explanation (the formulas are OK and would be fine if you removed the words mentioned below, but the wording is what I care about here):
- What does "the clique solution" mean?
- "The or-reduction" is followed by a formula which I don't follow why it is some "or-reduction"; can you explain a bit?
At the beginning of Section 3.3, point 3 has an "i)" which should probably be removed.
Line 207 "but we present a linear programming": what does that even mean?
Line 211 "memoization" lacks an "r", I guess.
Questions For Authors:
1) See also the "strengths and weaknesses" part; what do you think about those comments?
2) Can you relate your proposed method to the objectives of compositional generalization?
3) I return to the use of your framework in combination with transformer architectures. What do you think are the hurdles this would face?
4) Can you expand on scaling issues for extending your envisaged applications to much larger graphs? And when the graph size grows to infinity in some regime on the valence and graph structure, can you predict the complexity guarantees for your algorithm?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing our paper as well-written and for the attention to detail in your review!

*...combination with transformers... and thus a weakness of the paper is not spending some more time on this... what do you think can be hurdles that this would face?*
* To clarify: we **did** use Transformers as our architecture (see, e.g., Sec. 5.1). Our findings show Transformer decoders are powerful graph generative models when coupled with DIGGED's principled, sequential derivations as a representation. In our response to ymms, we add an ablation study showing that, controlling for model architecture, other graph-as-a-sequence representations show less potential when combined with Transformers.
* In Sec. 3.4, we explain that the most natural setup is to use a Transformer as the encoder as well, but we found in practice this results in lower performance ((Token) vs (GNN)). In App. G, we discuss why we think that is the case and give additional outlook on the future prospects.

*I feel that a weakness is that it restricts to toy models, with only small mention of scaling (as opposed to spending a paragraph/appendix on that).*
* Thank you for the comment. We do have a paragraph on scaling in App. D.3. We will elaborate further, given the results from the new ablation study in our response to 7hpe. Our datasets range in size from 47,877 (CKT) and 152,160 (ENAS) to 2,000,000 nodes (BN), spanning diverse real-world use cases. For each module of our algorithm, we have exact, approximate, and heuristic solvers, with a number of parameters exposed to trade off accuracy vs. efficiency. The new ablation quantifies this trade-off, revealing good performance-speed elasticity.

Now, we address your detailed comments one by one.

*Line 030: ...make it clear what kind of validity that refers to*
* Validity means the output has to be a connected DAG and additionally satisfy domain-specific criteria.
As defined in prior works, CKT DAGs need a stabilizing transconductance unit, BNs need one node for each random variable, and ENAS DAGs need a consecutively numbered path from input to output. We will add these definitions to the paper.

*Figure 1: ...in the upper 3 pictures of DAGs + grey motifs... shouldn't the arrow go the opposite way? picture 2)...*
* Thank you for the attention to detail. The grey edge directions are actually variables we **solve** for (in Step 2, "Compute possible redirections") to maximize rule-definition **compatibility** among occurrences of the subgraph. Currently, the occurrence in DAG 1 induces the instruction: "for each green in-neighbor, add out-edge from node 2". DAG 2 induces the instruction: "for each green **out**-neighbor, add out-edges from **both** nodes 1 and 2". If in DAG 1 we reverse the gray arrow, the two cases are no longer compatible: should we add out-edges from both 1 & 2 to each green out-neighbor, or just from node 2? Either way, there is a conflict. Semantically, these two cases are different (hence labeled a vs. b), requiring different but consistent instructions.

*...lines 155-158, ...why would that rule create a conflict, can you expand some more on the explanation?*
* There is a similar explanation for picture 2. Reversing the edge would create incompatibility between the occurrences in DAG 2 vs. DAGs 1 & 3.

*The dags in the figure are not numbered, so what does "two" and "three" mean?*
* We apologize for not numbering the top three DAGs. We will add it.

*Line 140 "linearizes"...*
* Good suggestion! We will replace it with "This simplifies the parse tree to a rooted path".

*Lines 170-183: from 'At a high level' onwards...*
* Thank you. We will try to simplify the wording, keep the formulas, and refer interested readers to the more elaborate explanation in App. B.1.

*What does "the clique solution" mean?*
* Sec. 3.2.2 describes how we solve for the optimal set of grey edge redirections, formulated as a graph.
Each node is one way to set the edge directions for a subgraph occurrence. Each edge means the occurrences are compatible. The clique solution is the maximal set of nodes on this graph that are all compatible.

*"The or-reduction" is followed by a formula which I don't follow... can you explain a bit?*
* Right. In the graph, each node is an assignment of the grey edge directions for an occurrence, and from it we deduce an inset: the set of instructions that must be in the rule definition for that node. After we obtain a maximal set of compatible nodes, we OR all the insets to obtain the lower bound on the final instruction set.

*Section 3.3, point 3 has an "i)"*
* Removed.

*Line 207...*
* We propose a CYK-like **dynamic** programming algorithm for DAGs to find all derivations, where we memoize intermediate results. See App. D for details.

*Line 211 "memoization"*
* Fixed!

*Relate to objectives of compositional generalization*
*Expand on scaling issues for much larger graphs... complexity of algorithm*
Due to the character limit, we will answer your two remaining questions in our responses to DetZ & 7hpe!

---

Rebuttal Comment 1.1: Comment: Thank you for clarifying the questions. I'll keep my score. Regarding how to make the figure clearer, my main suggestion is to expand the caption so that one can follow everything without having to go to the text, if possible.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer,

Thank you again for your thorough and constructive feedback. Your suggestions have substantially helped clarify and strengthen our paper. While we understand you chose to keep your current score, we have carefully revised several parts of our manuscript based on your detailed comments:

**Combination with Transformers:** We've significantly expanded our discussion (App. G) to clearly articulate potential hurdles and practical considerations when integrating our grammar-based representation with Transformer architectures.
Our first added ablation study ([openreview.net](https://openreview.net/forum?id=laUd1q5iWW&noteId=meODWp5Jhf)) further demonstrates that our representation shows enhanced potential compared to other graph-as-sequence representations when paired with Transformers.

**Scaling and Real-World Applications:** We've further elaborated on scaling considerations (App. D.3 and the second added ablation study ([openreview.net](https://openreview.net/forum?id=laUd1q5iWW&noteId=Zgr8ErgmEv))), demonstrating elasticity between efficiency and downstream performance for datasets containing up to millions of nodes. This illustrates DIGGED's practical scalability and applicability beyond toy models, addressing your concern regarding limited scale.

**Improved Clarity and Interpretability (Figure 1):** Following your specific recommendations, we fixed the specific points you identified, making the text self-contained and substantially clearer. For Figure 1, we labeled the DAGs 1-3 and expanded the Figure 1 caption as follows to explain each step better:
"""
Step 1 (Sec 3.2.1). Our approximate frequent subgraph mining library finds candidate subgraphs. As an example, the induced subgraph from nodes 1 & 2 in all 3 DAGs is considered.
Step 2 (Sec 3.2.2). Next, for each possible realization of gray edge directions, bounds on the necessary set of instructions are computed. For example, the occurrence in DAG 1 induces the instruction: "for each green in-neighbor, add out-edge from node 2". DAG 2 induces the instruction: "for each green out-neighbor, add out-edges from both nodes 1 and 2". If in DAG 1 we had reversed the gray arrow, the two cases would no longer be compatible across all 3 DAGs, since it would be unclear whether we should add out-edges from both 1 & 2 to each green out-neighbor, or just from node 2. Intuitively, such cases are labeled with separate letters (e.g., a vs. b), indicating they require different but non-conflicting instructions.
Step 3 (Sec 3.2.2).
Given bounds on the instruction set for each motif occurrence, the final set of instructions is deduced from the (approximate) solution of a max-clique problem. Each node is a (motif occurrence, edge redirections) realization. Each edge indicates compatibility.
Step 4 (Sec 3.2.3). The candidate motif and the associated solution to Step 3 which minimizes the description length of the current state of $H$ is chosen to define a grammar rule. Then, Steps 1-4 are repeated until convergence.
"""
We believe these revisions address your primary concerns, particularly around scalability and practical interpretability, and strengthen the overall contribution. Explaining a method with deep technical details like ours can be tricky, but we are trying our best so that readers can follow. We would greatly appreciate it if you reconsidered whether these clarifications and improvements merit an increase in your evaluation score.

Thank you once again for your thoughtful comments.

Sincerely,
Authors
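The clique-and-OR-reduction step described in this thread can be sketched with a minimal, hypothetical toy in Python. The realization names, instruction strings, and hard-coded compatibility relation below are all invented for illustration (loosely mirroring the Figure 1 discussion), and the greedy clique merely stands in for the approximate max-clique solver the rebuttal mentions.

```python
# Hypothetical toy data: each key is one (motif occurrence, gray-edge
# redirection) realization; its "inset" is the set of instructions that
# realization forces into the grammar rule definition.
insets = {
    "dag1_a":   {"for each green in-neighbor, add out-edge from node 2"},
    "dag2_b":   {"for each green out-neighbor, add out-edges from nodes 1 and 2"},
    "dag3_b":   {"for each green out-neighbor, add out-edges from nodes 1 and 2"},
    "dag1_rev": {"for each green in-neighbor, add out-edge from node 1"},
}

# Hypothetical compatibility relation (hard-coded for the toy): two
# realizations are compatible when their instructions do not conflict.
compatible = {
    frozenset(p) for p in
    [("dag1_a", "dag2_b"), ("dag1_a", "dag3_b"), ("dag2_b", "dag3_b")]
}

def greedy_max_clique(nodes, compatible):
    """Greedily grow a set of pairwise-compatible realizations (a clique)."""
    clique = []
    for v in nodes:
        if all(frozenset((v, u)) in compatible for u in clique):
            clique.append(v)
    return clique

clique = greedy_max_clique(sorted(insets), compatible)  # deterministic order

# "OR-reduction": the union of the chosen insets gives a lower bound on the
# instruction set of the final grammar rule.
rule_instructions = set().union(*(insets[v] for v in clique))
```

Here the reversed-arrow realization `dag1_rev` is excluded from the clique, and the two remaining instruction variants are OR-ed into the rule, matching the intuition in the rebuttal.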
Summary: This paper proposes representing directed acyclic graphs as sequences of production rules. These sequence-based representations enable generative modeling using language-like models, such as transformers. The authors train and evaluate these models in various scenarios, including neural architecture search, Bayesian networks, and analog circuit design.
## Update after rebuttal (copied from the corresponding comment below)
Generally, I think the authors' approach is interesting, and the authors could show the relevance of DIGGED in the manuscript and the rebuttal. The authors proposed to improve their manuscript; still, I am not fully convinced the clarity issues can be fully solved in an updated/extended manuscript version. However, I evaluate the proposed method as interesting and relevant for the community. Therefore, I tend towards accepting the paper, which is why I increased my score.
Claims And Evidence: This paper is built around the hypothesis that the proposed unique way of representing graphs is necessary, particularly to address issues related to ambiguous graph representations.
* The authors demonstrate good results with their method, which can be interpreted as evidence for the hypothesis above.
* Still, I miss an extended, thorough motivation for this claim. The proposed representations appear quite complex, which could be a drawback. Additionally, a more detailed rationale for why simpler alternatives (e.g., naively mapping graphs to sequence representations using positional encoding) are insufficient would have been valuable. Why are unambiguous representations necessary? Couldn't the model learn invariance through data augmentation strategies?
Section 3.3 outlines a set of guaranteed properties. As far as I can tell, these claims are well supported by the corresponding sections in the appendix.
Methods And Evaluation Criteria: The included experiments make sense because a) they show that the proposed way of representing graphs can be applied to completely different domains, and b) the proposed method achieves good performance values, indicating that DIGGED might be relevant. The authors also compare their method to baselines with other strategies to represent graphs as sequences, including positional encoding, topological ordering, and canonical ordering. To properly assess the impact of different graph-as-a-sequence representation strategies, the authors should compare these strategies using the same model architectures (with individual hyperparameter tuning). Could the authors please clarify whether this was done? Notably, the metrics validity, uniqueness, and novelty generally have inherent limitations in evaluating a model's overall quality, since any method can achieve perfect scores by adding a simple symbolic rule as a filter that memorizes seen sequences and permits only new, valid ones.
Theoretical Claims: The formulas related to directed graph grammar in Section 3 have been reviewed and appear to align with existing literature. I have briefly read through the proofs for the grammar properties in the appendix but have not checked them in detail.
Experimental Designs Or Analyses: The proposed experiments seem relevant. However, experiments directly comparing the proposed method with alternative graph-as-a-sequence representation strategies may be missing (see question above), even though they would have been important to demonstrate the relevance of DIGGED.
Supplementary Material: I read through Sections A and C (without thoroughly checking). I also reviewed Section H on hyperparameter search.
Relation To Broader Scientific Literature: Using graph grammar for graph generation tasks has been done before, e.g., in [1] (for molecules). However, [1] uses domain-specific knowledge, while the proposed method is context-free.
Since [1] also discusses other domain-independent techniques to mine grammar from real data (see page 3, section: Generative Models using Graph Grammars.), the authors might clarify their unique, novel contribution to the field. [1] Guo, Minghao, et al. "Data-efficient graph grammar learning for molecular generation." arXiv preprint arXiv:2203.08031 (2022). Essential References Not Discussed: -- Other Strengths And Weaknesses: Strengths: The experiment section presents strong, significant results for DIGGED, suggesting that the proposed method might be relevant and valuable to the research community. Weaknesses - clarity: Several crucial points remain unclear, making it difficult to fully assess the novelty (see: *Relation To Broader Scientific Literature*) and relevance of the proposed method (see *Claims And Evidence* and *Methods And Evaluation Criteria*). Other Comments Or Suggestions: * LHS and RHS are used without being defined. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
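The review's point about the validity/uniqueness/novelty metrics can be made concrete with a small sketch. `vun_metrics` is a hypothetical helper following one common convention (uniqueness computed over valid samples, novelty over the unique valid ones); the degenerate "generator" below memorizes the training set and emits only unseen, valid, distinct sequences, trivially maxing out all three scores.

```python
def vun_metrics(samples, training_set, is_valid):
    """Validity / uniqueness / novelty under one common convention:
    uniqueness over valid samples, novelty over unique valid samples."""
    valid = [s for s in samples if is_valid(s)]
    if not valid:
        return 0.0, 0.0, 0.0
    unique = set(valid)
    novel = {s for s in unique if s not in training_set}
    return (len(valid) / len(samples),
            len(unique) / len(valid),
            len(novel) / len(unique))

# The degenerate generator the review alludes to: only emit valid,
# distinct sequences that were not in the training set.
training = {"ab"}
degenerate_samples = ["cd", "ce", "cf"]
v, u, n = vun_metrics(degenerate_samples, training, lambda s: True)
# Perfect scores by construction: v == u == n == 1.0
```

This is why such scores can only serve as a sanity check rather than a measure of generative quality.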
Rebuttal 1: Rebuttal: Thank you for recognizing our strong experimental results and asking insightful questions!

*The proposed representations appear quite complex... a more detailed rationale for why simpler alternatives (e.g., positional encoding) are insufficient would have been valuable... the authors should compare these strategies using the same model architectures... would have been important to demonstrate the relevance of DIGGED.*
* Good question! Simpler alternatives can be classified by whether they 1) sequentially decode a graph node-by-node or 2) output a **sequential encoding** of the graph (e.g., ordered adjacency info).
* Methods under 1) add one node at a time and predict edges; this requires keeping the state of the intermediate graph to check whether edges can be added, making implementation cumbersome.
* Methods under 2) are naive encodings, and we argue they are insufficient.
* We decided to run the ablation study you suggest. First, a few notes:
  * We fix DAGNN as the encoder, as it is tailored for DAGs.
  * We fix the same decoder Transformer architecture and try different node-order encodings as the output targets.
  * (Note: it is not really possible to fix the same model architecture to compare with category 1) methods, since keeping the state of the graph requires a fundamentally different architecture.)
* We tried several common ways to define an ordering over nodes:
  * Default order (most baselines): whatever order the data comes in; topological
  * BFS (e.g.
GraphRNN): a BFS traversal from a random initial node
  * Random order: a random order over the nodes

| | |Valid|Unique|Novel|RMSE|Pearson's r|1st|2nd|3rd|
|-|-|:-:|:-:|:-:|:-:|:-:|-|-|-|
|Graph2NS-Default|ENAS|96.1|99.17|100|0.746|0.656|0.746|0.744|0.743|
||BN|95.8|96.4|94.8|0.498|0.869|-11590|-11685|-11991|
|Graph2NS-BFS|ENAS|40.8|100|100|0.806|0.595|0.746|0.746|0.745|
||BN|2.2|100|100|0.591|0.819|-11601|-11892|-11950|
|Graph2NS-Random|ENAS|0%|-|-|0.859|0.508|-|-|-|
||BN|8.4|100|100|0.535|0.857|-11523|-11624|-11909|
|DIGGED|ENAS|100|98.7|99.9|0.912|0.386|0.749|0.748|0.748|
||BN|100|97.6|100|0.953|0.712|-11110|-11250|-11293|

* The default order is unique in most cases, but its unguaranteed validity results in lower Bayesian optimization results. We added **additional** logic to re-attempt sampling for each latent point until a valid DAG is obtained (or fall back to a training example).
* Meanwhile, ordering nodes via BFS or randomly completely destroys the decoder's ability to generate valid examples. BFS order is doable for the mostly linear path graphs of ENAS but is entirely infeasible for BNs, due to the dense dependencies making the order unpredictable.
* Simple node-order positional encoding cannot simultaneously satisfy all the principles outlined in Sec. 3.2. This incomplete representation can lead to issues with decoding and with the efficacy of downstream optimization. We think the fundamental issue is imposing position onto data that is by definition invariant to it. Even for DAGs, there can be an exponential number of topological orderings.
* Meanwhile, DIGGED is a position-**less** sequential representation. DIGGED finds the optimal "change of basis" which casts the graph as a unique, sequential procedure. Each token codes for a set of instructions to recreate the graph, going beyond positional encoding. DIGGED is also a design language that incorporates hierarchical inductive biases (see our response to DetZ) and can uncover domain-specific insights (case studies in App. E and F)!
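The node-ordering baselines in the ablation can be sketched as follows; `dag`, `topological_order`, and `bfs_order` are illustrative names on a toy four-node DAG, not code from the paper.

```python
from collections import deque

# Toy DAG as adjacency lists (edges point from earlier to later nodes).
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def topological_order(dag):
    """Kahn's algorithm: one topological order (the 'default' serialization)."""
    indeg = {v: 0 for v in dag}
    for v in dag:
        for u in dag[v]:
            indeg[u] += 1
    queue = deque(sorted(v for v in dag if indeg[v] == 0))
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in dag[v]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return order

def bfs_order(dag, start):
    """BFS traversal over the undirected skeleton, as in GraphRNN-style orders."""
    undirected = {v: set(dag[v]) for v in dag}
    for v in dag:
        for u in dag[v]:
            undirected[u].add(v)
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in sorted(undirected[v] - seen):
            seen.add(u)
            queue.append(u)
    return order

# A random order would simply be random.sample(list(dag), len(dag)).
```

All three produce a flat token sequence, but only the serialization scheme, not the graph itself, determines whether a decoder trained on it can emit valid DAGs, which is the point of the ablation.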
*Using graph grammar for graph generation tasks has been done before... clarify unique, novel contribution to the field.*
* We are familiar with [1]. They learn primarily on small datasets, containing just a few dozen examples. Our experience using [1] is that it can take days to learn on moderate datasets (100s of samples). It is slow because it has to learn the optimal way to downsample the data, requiring reward supervision from downstream generation metrics to reinforce the grammar. Our induction procedure, meanwhile, has a much simpler, unsupervised objective, based on minimizing description length. In our response to 7hpe, we benchmark options to trade off accuracy for efficiency, enabling scaling to larger datasets.
* More crucially, it is unclear whether grammar-based generation generalizes or scales with pretraining. Our work bridges the expressiveness and scalability of Transformers with the unique representation challenges of graph structures. We hope our proposed representation, theoretical insights, and validation of its necessity lay the first steps towards embracing modern sequence-learning architectures in the graph generation community!

*the metrics validity, uniqueness, and novelty...*
* You're absolutely right. The unconditional generation results aren't meant to evaluate the model's overall quality; they are a basic sanity check. We will write a disclaimer. The much more essential evaluation is the latent space quality for prediction and Bayesian optimization (Tables 3 & 4, Fig. 3).

---

Rebuttal Comment 1.1: Comment: Dear authors, thank you for your answers and comments.
* **Ablation study.** For the important metrics RMSE and Pearson's r, DIGGED appears to perform worse than the other compared methods. Am I misreading the table?
* **Contribution to the field.** Thank you for elaborating on this point. Your reasoning is convincing, and I now better understand the relevance of your contribution.
* **Alternative encodings.** I appreciate the clarifications.
Your argument regarding point 1), particularly the implementation complexity, is interesting and convincing. Regarding point 2), the explanation below the table is insightful. However, based on the numbers presented, I am not yet convinced that the evidence clearly supports the claim that naïve encodings are insufficient (see my question above).
* **Clarity issue**: My main concern has been, and to some extent still is, clarity. Interestingly, reviewer 7hpe seems to share the clarity concern, whereas reviewer 5waZ found the manuscript well written and clear. This leads me to think that my concerns might a) stem from personal preferences, and b) result from not being an expert in the specific subfield, which is why I am willing to assign less weight to this point.

Assuming my confusion regarding the ablation study table is due to a misreading on my part, I would be inclined to raise my score if the authors can address this point.

---

Reply to Comment 1.1.1: Comment: Thanks for your reply! We're thrilled our reasoning resonates with you, and we're more than happy to address the remaining point about the ablation study. First, we would like to update the table with the ablation results for CKT, which just finished running (apologies for the delay):

| | |Valid|Unique|Novel|RMSE (FoM)|Pearson's r (FoM)|1st|2nd|3rd|
|-|-|:-:|:-:|:-:|:-:|:-:|-|-|-|
|Graph2NS-Default|CKT|80.2|71.0|96.8|0.695|0.738|220.96|177.29|148.92|
|Graph2NS-BFS|CKT|0.1%|100|100|0.676|0.751|-|-|-|
|Graph2NS-Random|CKT|0%|-|-|0.680|0.760|-|-|-|
|DIGGED|CKT|100|100|78.8|0.627|0.787|306.32|296.82|265.53|

In this case, we see DIGGED outperforms Graph2NS on predictive metrics. These findings are consistent with the previously discussed results on unconditional decoding and downstream optimization.

**Why does DIGGED show better predictive accuracy on CKT but not ENAS and BN?** You're correct to note that DIGGED has lower predictive performance compared to Graph2NS on the ENAS and BN datasets.
These two datasets impose special constraints: all DAGs have the same number of nodes; ENAS DAGs must follow a consecutive node numbering, and BN DAGs must contain exactly one node of each type (8 types). Such simplifying conditions allow naïve positional encodings to overcome the shortcomings we discussed earlier, making predictive tasks relatively easier. We initially chose these datasets due to the limited availability of standardized benchmarks for DAGs. By contrast, the CKT dataset involves significant diversity in both graph topology and node types, making it a better testbed for evaluating the true strengths of DIGGED's compositional, position-free encoding approach.

**How important is predictive accuracy?** At the same time, we note that predictive accuracy (RMSE, Pearson's r) does not reflect decoder effectiveness. For example, BN-Random, CKT-BFS, and CKT-Random achieve reasonable scores on predictive metrics (RMSE and Pearson's r), yet fail fundamental decoder sanity checks, rendering them ineffective for subsequent optimization tasks. DIGGED prioritizes end-to-end optimization results, which require the ability to navigate and decode from the latent space.

**The high-level view.** One way to see DIGGED's efficacy for end-to-end optimization is through the lens of hierarchical, compositional generalization. DIGGED is intentionally designed for compositionality of its outputs. Unlike naïve sequential encodings, DIGGED places DAGs with shared hierarchical structures (intermediate derivations) close together in the latent space. For example, consider DAG 1 represented (uniquely) as W→X→Y and DAG 2 as W→X→Z. DIGGED's decoder must predict shared initial tokens for both graphs, naturally clustering these related graphs in latent space.
Learning both the token vocabulary embeddings and latent space compositional structure jointly indeed poses a more challenging training task -- reflected partly in predictive metrics -- but strongly supports compositional generalization and decoder reliability. This trade-off underscores DIGGED's core strength: effectively navigating a compositional design space to reliably generate diverse and valid DAG structures optimized for practical performance. We will explicitly incorporate this extended motivation, along with the updated ablation results, into the revised manuscript. We trust these additions, alongside edits addressing others' suggestions, will enhance the relevance, clarity and transparency of the paper!
Benefits of Early Stopping in Gradient Descent for Overparameterized Logistic Regression
Accept (poster)
Summary: This paper investigates the importance of early stopping in well-specified high-dimensional logistic regression. The authors demonstrate that early stopping ensures generalization in terms of excess logistic risk, while the interpolator diverges. These results emphasize the need for early stopping in overparameterized models.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proofs seem to be solid, but I have not checked all the details.
Experimental Designs Or Analyses: NA.
Supplementary Material: NA.
Relation To Broader Scientific Literature: This paper considers early-stopped gradient descent in logistic regression, while most previous works focus on linear regression. The results extend previous results to the setting of logistic regression, which is novel. They contribute to the understanding of early stopping in over-parameterized statistical problems.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses:
### Strengths
The paper is well-written and easy to follow. The technical results are comprehensive: both upper bounds for early-stopped GD and lower bounds for interpolators are given, which is persuasive. Moreover, the connection between early stopping and explicit $\ell_2$ regularization is discussed.
### Weakness
The early-stopping time is oracle-based and lacks explicit bounds, which limits the practical utility of the results.
Other Comments Or Suggestions: NA.
Questions For Authors:
1. What are the insights behind selecting $k$ in the main theorems? How are the "variance" and the "bias" connected to $k$?
2. Can the results in this paper be extended to generalized linear models or M-estimators?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for supporting our paper. We answer your questions as follows.

---

**Q1.** "The early-stopping time is oracle-based and lacks explicit bounds, which will limit the practical utility of the results."

**A1.** Our aim is to understand the benefits of early stopping. We believe designing a practical criterion for early stopping, despite being an important question, is beyond the scope of this paper.

---

**Q2.** "What are the insights behind selecting $k$ in the main theorems? How are the "variance" and the "bias" connected to $k$?"

**A2.** Intuitively, $k$ determines the number of dimensions in which early-stopped GD is able to learn the true parameter $w^*$. Moreover, early-stopped GD ignores the remaining dimensions and pays an "approximation" error. We choose $k$ to minimize the upper bounds. In Theorem 3.1, the stopping time is relatively small, at which point GD first hits an empirical risk of $\hat L(w^*_{0:k})$. Thus, the total parameter movement is relatively small, and the bias error tends to be large. In Theorem 3.2, the stopping time is relatively large, at which point GD hits an empirical risk of $\hat L(w^*)$. Thus, the total parameter movement is large, and the variance error tends to be large. We will clarify these points in the revision.

---

**Q3.** "Can the results in this paper be extended to generalized linear models or M-estimators?"

**A3.** We believe that part of our results can be extended beyond logistic regression. For example, the proof of Theorem 3.1 can be adapted to other loss functions that are convex, smooth, and Lipschitz. We will comment on this as a future direction.
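The contrast between an early-stopped iterate and the diverging interpolating limit can be illustrated with a minimal sketch on a toy 1-D logistic regression with linearly separable data. This is a deliberately simplified setting, not the paper's well-specified overparameterized model; all names below are illustrative.

```python
import math

# Toy 1-D logistic regression on linearly separable data (labels in {-1, +1}):
#   L(w) = (1/n) * sum_i log(1 + exp(-y_i * w * x_i))
data = [(1.0, 1), (2.0, 1), (-1.0, -1), (-2.0, -1)]

def grad(w):
    """Gradient of the empirical logistic risk at w."""
    g = sum(-y * x / (1.0 + math.exp(y * w * x)) for x, y in data)
    return g / len(data)

def gd(steps, lr=0.5):
    """Plain gradient descent from w = 0, stopped after `steps` iterations."""
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_early, w_late = gd(10), gd(10_000)
# On separable data |w| keeps growing (roughly like log t): there is no
# finite minimizer, so the "converged" iterate has a much larger norm than
# the early-stopped one, which acts like an implicitly regularized solution.
```

The monotone growth of the iterate's norm is the 1-D analogue of why the interpolator's excess risk behaves badly while early stopping retains control.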
Summary: The paper studies the impact of early stopping in gradient descent (GD) for overparameterized logistic regression. It demonstrates that, in this setting, early-stopped GD is well-calibrated and achieves lower risk and zero-one risk compared to GD at convergence. Additionally, the paper establishes a connection between the implicit regularization of GD and explicit regularization.
# Update after rebuttal
The authors have addressed my concerns. I keep my positive score.
Claims And Evidence: Yes, the authors prove all of their results.
Methods And Evaluation Criteria: N.A.
Theoretical Claims: No. I focused on reading the main text.
Experimental Designs Or Analyses: N.A.
Supplementary Material: N.A.
Relation To Broader Scientific Literature: Most of the paper's main contributions are not directly comparable to previous work. The most relevant studies primarily focus on asymptotic GD, the setting of separable data, or scenarios where the loss function is the squared loss.
Essential References Not Discussed: The paper discusses all of the essential references.
Other Strengths And Weaknesses:
Strengths:
1. The paper is well-written and clearly structured.
2. It thoroughly discusses related work and provides detailed comparisons with its own results.
3. The findings on calibration and improved generalization with respect to 0-1 risk, compared to interpolating GD, are both interesting and somewhat surprising.
4. The paper extensively discusses the limitations of its results.
Weaknesses:
1. The technique used to derive the logistic risk upper bounds relies heavily on the prior work of Telgarsky (2022).
2. The results are based on an optimal stopping time. In particular, the authors acknowledge that "early-stopped GD in Theorem 3.1 is not a practical algorithm". It would be beneficial to present results that explicitly depend on the stopping time or provide an example of a stopping time for which the bounds hold.
3.
In cases where the results are comparable to prior work, the bounds obtained in this paper are sometimes weaker. Other Comments Or Suggestions: N.A Questions For Authors: Are there scenarios where the optimal stopping time is impractically large, such as exponential in $ d $ or $ n $? If so, what happens to the risk of GD when it is stopped after $\text{poly}(d,n)$ iterations? Does it still benefit from early stopping in such cases? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address your questions below. --- **Q1.** “The technique used to derive the logistic risk upper bounds relies heavily on the prior work of Telgarsky (2022).” **A1.** While our Lemma 3.3 appears in [Telgarsky 2022] (and other prior works as mentioned), it seems unfair to call it a “weakness”, as new research builds upon existing results. We would also like to point out that our logistic risk upper bounds require careful statistical control around the population optimum since we work in an overparameterized regime, which is beyond [Telgarsky 2022]. Besides, we show how early stopped GD separates from asymptotic GD and how it explicitly connects to $\ell_2$-regularization. None of these results appears in [Telgarsky 2022]. --- **Q2.** “...It would be beneficial to present results that explicitly depend on the stopping time or provide an example of a stopping time for which the bounds hold.” **A2.** One can compute an upper bound on the stopping time using standard optimization and concentration tools. Specifically, $\hat L$ is smooth and convex, so GD converges at an $O(1/t)$ rate. Moreover, we can compute $\hat L(w^*_{0:k})$ using concentration bounds. In this way we can compute an upper bound on the stopping time. Once we obtain the upper bound on the stopping time, we can state the theorem as “there exists a $t$ no larger than XX, such that…”. This gives examples of stopping times for which our bounds hold. We will comment on this in the revision. However, we prefer to maintain our original theorem statement as it is cleaner. --- **Q3.** “In cases where the results are comparable to prior work, the bounds obtained in this paper are sometimes weaker.” **A3.** As discussed in Lines 397-425, no prior result strictly dominates our results; the works that obtained sharper rates also need stronger assumptions than ours.
While there is still room to improve our upper bounds (as discussed at the end of Section 3.2), we believe our contribution is already significant in establishing the benefits of early stopping in logistic regression. --- **Q4.** “Are there scenarios where the optimal stopping time is impractically large, such as exponential in $d$ or $n$? If so, what happens to the risk of GD when it is stopped after $\mathsf{poly}(d,n)$ iterations? Does it still benefit from early stopping in such cases?” **A4.** Note that our results allow $d=\infty$, but the optimal stopping time is always finite, as hinted by our negative results for asymptotic GD. Therefore, the stopping time is generally small compared to $d=\infty$, and it is meaningless to discuss “exponential or polynomial in $d$” when $d$ is infinite. Moreover, as discussed in **A2**, the stopping time is at most polynomial in $n$ by optimization and concentration arguments. Note that the hidden constant factors might be instance-dependent. Again, there is no exponential dependence.
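The convergence argument in **A2** can be illustrated with a minimal simulation (a hypothetical sketch, not an experiment from the paper): with unit-norm features the empirical logistic risk is $1/4$-smooth, so full-batch GD with a small constant step size decreases the risk monotonically and eventually drops below any level strictly above its infimum, such as $\hat L(w_{0:k}^*)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 200                                  # overparameterized: d >> n
w_star = np.zeros(d)
w_star[:5] = 2.0                                # a few informative dimensions
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm rows => smoothness L <= 1/4
p = 1.0 / (1.0 + np.exp(-X @ w_star))
y = np.where(rng.random(n) < p, 1.0, -1.0)      # noisy labels in {-1, +1}

def risk(w):
    # empirical logistic risk, computed stably
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

w = np.zeros(d)
eta = 0.5                                       # step size < 2/L
losses = [risk(w)]
for _ in range(500):
    m = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(np.clip(m, -50, 50)))
    grad = -(X.T @ (y * s)) / n                 # gradient of the empirical risk
    w -= eta * grad
    losses.append(risk(w))

# Monotone decrease: GD crosses any level strictly above min \hat L.
assert all(a >= b - 1e-10 for a, b in zip(losses, losses[1:]))
```

The monotone decrease follows from the descent lemma for $\eta < 2/L$; the small tolerance only guards against floating-point noise.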
Summary: This paper studies the problem of early stopping in logistic regression. It considers two metrics: the zero-one classification accuracy and the logistic loss. The paper shows that the excess zero-one loss is bounded by the excess logistic loss. Building on this, the paper shows that there exists an early stopped model that has zero excess logistic loss (asymptotically). Hence the model is well calibrated and consistent as well. On the other hand, the paper also shows the existence of a setup where the converged solution has infinite excess logistic loss and where the excess zero-one loss is at least a constant. Claims And Evidence: The paper is a theory paper and presents a variety of theoretical statements that support their claims. However, I have some questions about some of the statements and their proofs. Quoting from the paper, it says that "The empirical risk $\hat{\mathcal{L}}(w_t)$ decreases monotonically for $t \ge 0$. Thus there exists a stopping time $t$ such that $\hat{\mathcal{L}}(w_t) \le \hat{\mathcal{L}}(w^*_{0:k}) \le \hat{\mathcal{L}}(w_{t-1})$." However, being monotonically decreasing does not mean that $\hat{\mathcal{L}}(w_t)$ is eventually smaller than $\hat{\mathcal{L}}(w^*_{0:k})$. I tried checking the proof, but I could not find the part of the proof where the assertion that there exists a $t$ such that $\hat{\mathcal{L}}(w_t) \le \hat{\mathcal{L}}(w^*_{0:k})$ is proven. Theorem 3.2 then assumes the existence of such a time $t$. But the description in the paper seems to assume that such a time $t$ must exist. -------- My current score is primarily due to the existence of $t$ such that $\hat{\mathcal{L}}(w_t) \le \hat{\mathcal{L}}(w^*_{0:k})$. I believe that the existence of such a $t$ needs to be proven to complete the story. Otherwise the current story is "if there is a good early stopping point, it is good to stop because the converged solution can be bad" versus "early stopping is good".
Methods And Evaluation Criteria: There are no experiments. Theoretical Claims: There are a variety of theoretical claims. I did not check any of the proofs except for reading the lemma statements in Section C.1 and then the proof of Theorem 3.1 given the lemmas. However, I think there is an issue with the proof. Please see my comments in the claims and evidence section. Experimental Designs Or Analyses: No experiments. Supplementary Material: I looked at Section C.1 Relation To Broader Scientific Literature: The relationship to the broader scientific literature is well established, and understanding the implicit bias of gradient descent and early stopping in different scenarios is an important problem. Early stopping and GD have been mostly studied for linear regression. The case for logistic regression is less well understood. Hence the duality of Theorems 3.1 and 4.1 is quite interesting in the logistic regression setting. Essential References Not Discussed: I think the paper [A] is an important recent paper that is quite relevant to many of the ideas discussed in the paper (early stopping, implicit regularization, connection to explicit regularization) but is missing: [A] Sonthalia, Rishi, Jackie Lok, and Elizaveta Rebrova. "On regularization via early stopping for least squares regression." arXiv preprint arXiv:2406.04425 (2024). Other Strengths And Weaknesses: **Strengths** The prose in between theorem statements is very well written. In fact, it is amongst the best I have ever seen. The duality of Theorems 3.1 and 4.1 is quite surprising, in particular in light of Theorem 4.2. One would imagine that things break down in the limit due to the norm going to infinity. However, Theorem 4.2 says that is not the case. The connection to the ridge-regularized version is also interesting. **Weakness** I think the theorem statements could be sharper. Words like "thus" should not appear in a theorem statement.
Other Comments Or Suggestions: N/A Questions For Authors: All of the current analyses are for the case when we initialize at zero. What if we initialized somewhere else, or used a random initialization? Just to confirm: the probabilities in Theorems 3.1 and 3.2 are with respect to the sampling of the data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments. We address your concerns as follows. --- **Q1.** “My current score is primarily due to the existence of $t$ such that $\hat L(w_t) \le \hat L(w_{0:k}^*)$. I believe that the existence of such a $t$ needs to be proven to complete the story.” **A1.** The existence of a $t$ such that $\hat L(w_t) \le \hat L(w_{0:k}^*)$ is guaranteed because * GD converges to a minimizer of the empirical risk $\hat L(\cdot)$; * and $\min \hat L(\cdot) < \hat L(w_{0:k}^*)$. So there exists a $t$ such that $\hat L(w_t) \le \hat L(w_{0:k}^*)$. The second bullet is clear. We explain the first bullet as follows. Note that $\hat L(\cdot)$ is smooth and convex. Classical optimization theory guarantees the global convergence of GD in this case. Specific to logistic regression, this is proved in, for example, Theorem 1.1 in [Ji & Telgarsky 2018], with a precise convergence rate. We can also see this by applying Lemma 3.3 with suitable $u$ and large enough $t$. This explanation should address your concerns, but let us know if you have further questions regarding this. We will add these discussions in the revision. --- **Q2.** “I think the paper [A] is an important recent paper that is quite relevant to many of the ideas discussed in the paper (early stopping, implicit regularisation, connection to explicit regularization) is missing” **A2.** Thanks for pointing out the missing reference. We will cite and discuss it in the revision. --- **Q3.** “I think the theorem statements could be sharper. Words like - thus should not appear in a theorem statement.” **A3.** We will polish our theorem statements according to your suggestions. Thanks. --- **Q4.** “All of the current analyses are for the case when we initialize at zero. What if we initialized somewhere else, or did it have a random initialization?” **A4.** When the initialization $w_0$ is nonzero, in the bounds, $w^*$ should be replaced by $w^* - w_0$.
This can be seen from the proof of the theorems. We will clarify this in the revision. --- **Q5.** “Just to confirm, the probabilities in Theorems 3.1 and 3.2 are with respect to the sampling of the data?” **A5.** Yes. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I have increased my score.
Summary: This paper theoretically examines the additional regularization bias introduced by early stopping in logistic regression. The authors first demonstrate that for any well-specified logistic regression problem, gradient descent (GD) with oracle-based early stopping is well-calibrated and statistically consistent. They then establish lower bounds on test risk, calibration error, and zero-one error for asymptotic GD, showing that it is inconsistent and has worse sample complexity compared to early-stopped GD. Additionally, they derive bounds on the angular differences between the regularization path and the GD path for any convex and smooth objective. Finally, they analyze asymptotic bounds for logistic regression with linearly separable data, showing that under a sufficient condition on the data, both paths converge. **Update after rebuttal** I thank the authors for engaging during the rebuttal period. They addressed most of my key questions and agreed to make appropriate additions to the paper. Overall, I believe that it is a good paper, and improves our understanding of early-stopped gradient descent, and I maintain my positive rating. Claims And Evidence: The claim on calibration via early stopping is well supported by Theorems 3.1 and 3.2. The negative results on poor calibration and the inconsistency of asymptotic GD are also established in Theorems 4.1 and 4.2, along with the need for exponential sample complexity for achieving zero-one error. However, I have some concerns regarding the comparison between Theorems 3.1 (3.2) and Theorems 4.1 and 4.2—see the questions section. Finally, Theorems 5.1 and 5.2 establish the global and asymptotic connection between GD and the regularization path. Methods And Evaluation Criteria: It’s mainly a theory paper. I don’t think this question is very valid, but the toy experiment in Figure 1 matches the theoretical claims in the paper. Theoretical Claims: No, I did not read the proofs of the theorems. 
Experimental Designs Or Analyses: N/A; see the response to Methods and Evaluation Criteria. Supplementary Material: No, I did not check the supplementary material. Relation To Broader Scientific Literature: The paper builds on the well-established theory of implicit regularization in GD for logistic regression by theoretically demonstrating the additional benefits of early stopping. While the generalization benefits of early-stopped GD have been studied in much greater detail for linear regression (for example, vanishing excess risk with early-stopped GD despite no benign overfitting with the $\ell_2$-norm interpolator; see references in column 1, page 2), they are less well understood for logistic regression. This work addresses an important gap in the literature by showing that early-stopped GD can achieve good generalization (vanishing excess risk) despite a statistically inconsistent interpolator, similar to results in linear regression. Essential References Not Discussed: Looks good and thorough to me. Other Strengths And Weaknesses: The paper studies a very important (and understudied) problem from a theoretical point of view. It is well-written in general, and easy to read and follow. Other Comments Or Suggestions: N/A Questions For Authors: 1. Thm 3.1 and 3.2 provide early stopping risk bounds for any (including overparameterized) well-specified logistic regression problem. Lines 266-273 discuss the application of these bounds to a trivial, under-parameterized case. How do these bounds perform in the overparameterized case? $\Sigma$ needs to be carefully chosen to control the tail of EVs and the trace. For a meaningful comparison with the negative results in Section 4 on asymptotic GD, it would be useful to determine when Thm 3.1 and Thm 3.2 provide non-vacuous bounds and how common these settings are. 2. For Thm 4.2, what role does the $k$-sparsity condition play? Is this the key condition needed to prove inconsistency?
Please provide some intuition based on the proof. Additionally, linking it to the previous question, can you provide an example (or a class of examples based on data covariance) where Thm 3.1 and Thm 3.2 yield non-vacuous bounds while the negative results in Thm 4.2 still hold? 3. Regarding conjectures at the end of page 6 and page 8 (left column end)—what justifies these claims? For the first, do you have any empirical results supporting it? For the second, Thm 5.2 and Thm 5.3 merely represent convergence under a sufficient condition and a counter example, the conjecture seems strong—do you have additional reasoning to support it? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed comments. We address your questions below. --- **Q1.** “...Lines 266-273, discuss application of these bounds to a trivial, under-parameterized case. How do these bounds perform in the overparameterized case?... For a meaningful comparison with the negative results in Section 4 on asymptotic GD, it would be useful to determine when Thm 3.1 and Thm 3.2 provide non-vacuous bounds and how common these settings are.” **A1.** In Lines 205-214 of the right column and Lines 272-274, we discuss our bounds in an overparameterized setting satisfying the standard source and capacity conditions, where our bounds are non-vacuous. Our comparison holds in general and common settings. In what follows, we explain this using Theorems 3.1 and 4.1 as examples. For every well-specified logistic regression problem (see Assumption 1), Theorem 3.1 yields a vanishing excess risk for early stopped GD as discussed in Lines 190-203 right. For the same set of problems, Theorem 4.1 implies GD without early stopping is inconsistent for logistic loss and poorly calibrated, as discussed in Lines 279-283 of the right column. Note that this comparison is agnostic to dimension and holds in the overparameterized regime as well. --- **Q2.** “For Thm 4.2, what role does the $k$-sparsity condition play? Is this the key condition needed to prove inconsistency? Please provide some intuition based on the proof. … can you provide an example (or a class of examples based on data covariance) where Thm 3.1 and Thm 3.2 yield non-vacuous bounds while the negative results in Thm 4.2 still hold?” **A2.** The $k$-sparsity is a sufficient condition to establish the sample complexity lower bound in Theorem 4.2. Note that Theorem 4.2 provides a lower bound on the excess zero-one error, but this does not imply inconsistency. For the inconsistency results in Theorem 4.1, we do not need the $k$-sparsity condition. 
The intuition behind Theorem 4.2 is that there are $k$ informative dimensions and a lot more uninformative dimensions. Since $n\gg k$, the training set cannot be separated purely using the $k$ informative dimensions. Thus, interpolators must use the uninformative dimensions to separate the data, leading to the risk lower bound. This explains the role of the $k$-sparsity in Theorem 4.2. We discuss in Lines 313-319 situations where Theorems 3.1 and 3.2 yield non-vacuous bounds while the negative results in Theorem 4.2 still hold. Moreover, the example discussed in Lines 205-214 of the right column and Lines 272-274 also suffices. We will make this clear in the revision. --- **Q3.** “Regarding conjectures at the end of page 6 and page 8 (left column end)—what justifies these claims? For the first, do you have any empirical results supporting it? For the second, Thm 5.2 and Thm 5.3 merely represent convergence under a sufficient condition and a counter example, the conjecture seems strong—do you have additional reasoning to support it?” **A3.** We discuss evidence/intuitions for the two conjectures as follows. For the first one, note that in Figure 1(b), the test zero-one error keeps increasing even after reaching interpolation (the training zero-one error becomes zero). This suggests that the zero-one error of the maximum $\ell_2$-margin interpolator (this is when $t\to\infty$) should be higher than that of an oblivious interpolator. Our conjecture is partly motivated by this observation. The reasoning behind the second conjecture is as follows. Note that Assumption 2 implies that the dataset projected perpendicular to the max-margin directions (called the “projected dataset”) is strictly nonseparable (see Lemma 3.1 in Wu et al., 2023). This is the only property used in Theorem 5.2. Moreover, in Theorem 5.3, the “projected dataset” is nonseparable but with margin zero – we conjecture this property is sufficient for Theorem 5.3 to hold.
Now for a generic separable dataset, we check the “projected dataset”: - if it is strictly non-separable, Theorem 5.2 holds; - if it is non-separable but with margin zero, we conjecture Theorem 5.3 holds; - otherwise it is separable (with positive margin), we decompose the dataset recursively. This is the reasoning behind our conjecture. We will add these discussions in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my questions. Most of my questions have been answered, please add clarifications in the paper wherever discussed above. I have a few more questions: on parsing again, it feels like the connection of Section 5 to the rest of the paper is weak. Why do I care about the distance between regularization and GD paths (sure a complete characterization is good, but why is it important to the message of the rest of the sections)? For error, the distance doesn't matter, only correlation does, which is always 0, asymptotically. However, for calibration, distance does matter. Can we see an example where the two paths diverge (like in Thm 5.3), and one of them achieves better calibration? --- Reply to Comment 1.1.1: Comment: Thanks for confirming most of your questions were resolved—we will incorporate clarifications into the paper. Below we respond to your follow-up questions. **Role of Section 5**. We believe our results in Section 5 are closely tied to the rest of the paper. Sections 3 and 4 demonstrate that early stopping carries a certain regularization effect that benefits its statistical performance. This regularization is, however, implicit. In Section 5, we attempt to provide some intuitions of the implicit regularization of early stopping by establishing its connections to an explicit $\ell_2$-regularization. We will revise the text to better motivate its relevance and clarify the connection. **Importance of studying paths distance**. 
If GD and regularization paths were uniformly and absolutely close, one could argue that early stopping fully mimics $\ell_2$-regularization. However, our results show that while the two paths are relatively close in general, absolute closeness only holds in special cases. This suggests that the implicit regularization induced by early stopping might not be entirely equivalent to $\ell_2$-regularization (despite being highly similar and comparable). Understanding where and why the two paths diverge could reveal important nuances in the behavior of early stopping, and we see this as a promising direction for future work. **GD vs. regularization for calibration**. *Is there a logistic regression example such that early stopped GD has a better calibration/logistic risk rate than $\ell_2$-regularization or vice-versa?* This is a great question, as it directly probes the extent to which early stopping replicates the effects of explicit regularization. We currently lack the tools to definitively answer this, but we believe that resolving this would significantly deepen our understanding of early stopping's regularization effect. By Theorem 3.1 and discussions in Lines 296–300, both GD and $\ell_2$-regularization require careful tuning—via early stopping or non-vanishing regularization, respectively—to attain good calibration. Although Theorem 5.3 shows that the two paths can diverge asymptotically, our bounds are not sharp enough to yield a clear separation in performance between early stopping and $\ell_2$-regularization in terms of calibration or logistic risk. We will mention this as a concrete open problem in the revision.
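For readers of this thread, the two paths under discussion can be written explicitly (standard definitions sketched from the summaries above; the paper's exact normalization and its pairing between $\lambda$ and $t$ may differ):

```latex
% Explicit regularization path: ridge-penalized logistic regression, indexed by \lambda > 0
w^{\mathrm{reg}}(\lambda) \;=\; \operatorname*{arg\,min}_{w}\;
  \Big\{\, \hat L(w) + \tfrac{\lambda}{2}\,\|w\|_2^2 \,\Big\}.
% Implicit regularization (GD) path: iterates from zero initialization, indexed by t
w_{t+1} \;=\; w_t - \eta\,\nabla \hat L(w_t), \qquad w_0 = 0.
% Section 5 compares the two paths via their angular difference,
\angle\big(w_t,\, w^{\mathrm{reg}}(\lambda)\big)
  \;=\; \arccos \frac{\langle w_t,\, w^{\mathrm{reg}}(\lambda)\rangle}
                     {\|w_t\|\,\|w^{\mathrm{reg}}(\lambda)\|},
% with small \lambda playing the role of large t
% (in the least-squares analogy, roughly \lambda \approx 1/(\eta t)).
```

Absolute closeness of the two paths would mean $\|w_t - w^{\mathrm{reg}}(\lambda)\|$ is small; the discussion above concerns the weaker angular notion.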
Summary: The authors examined high-dimensional logistic regression scenarios where $p$ could be finite or infinite. They analyzed the gradient flow dynamics of logistic regression, discussed the generalization capabilities of early stopping estimators and interpolators, and provided comparisons between the gradient descent (GD) path and the $\ell_2$ penalty path. This topic is particularly interesting for its theoretical insights into optimization and generalization in high-dimensional settings. Claims And Evidence: I have not checked the proofs carefully, but most of the claims sound reasonable to me. Methods And Evaluation Criteria: This is a theoretical paper and no method was proposed. Theoretical Claims: I have not checked the proofs carefully. Most of the claims sound reasonable to me. Experimental Designs Or Analyses: No experiments have been reported. Supplementary Material: No supplementary materials (all the proofs are included in a single pdf file). Relation To Broader Scientific Literature: Most of the current work focuses on regression under the squared loss. In contrast, regarding logistic regression, most research focuses on the empirical loss minimizer rather than analyzing gradient descent with early stopping. This work would be of interest to other theoretical groups. Essential References Not Discussed: Though the settings are different, I am not sure if the authors ignored the line of work on logistic regression by E. Candès et al., for example, "A modern maximum-likelihood theory for high-dimensional logistic regression." Other Strengths And Weaknesses: The results sound reasonable to me. However, it would be better if the authors could provide a more concrete comparison between their work and results from linear regression using the squared loss. This would help readers better understand the key differences between the squared loss and the logistic loss. Other Comments Or Suggestions: N/A Questions For Authors: Please see "Other Strengths And Weaknesses".
Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your support! We address your concerns as follows. --- **Q1.** “Though the settings are different, I am not sure if the authors ignored the line of work on logistic regression by E. Candes et al.” **A1.** We will cite and discuss the works you pointed out in the revision. They focused on the existence of the MLE and its behavior when it exists, which is quite different from our focus, where the MLE never exists (see the paragraph “Noise and overparameterization.” in Section 2 and Proposition 2.2). --- **Q2.** “...it would be better if the author could provide a more concrete comparison between their work and those from linear regression using square loss. This would help readers better understand the key differences between square loss and logistic loss.” **A2.** Risk bounds for logistic regression and linear regression are not directly comparable, as the two problems are different in nature: they have different data assumptions and different risk measurements. In Lines 244-257 of the right column, we provide a high-level comparison of the techniques for analyzing GD in logistic regression and linear regression. One immediate issue is that we cannot directly reuse tools from linear regression, which rely on chains of equalities that need the Hessian to be constant. We will make these discussions more detailed in the revision to better clarify the key differences between the squared loss and the logistic loss. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep the current score.
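The constant-Hessian point in **A2** can be made concrete with standard formulas (textbook facts, not taken from the paper): for the squared loss, GD from $w_0 = 0$ admits a closed form for the iterates, which is exactly the chain of equalities that is unavailable for the logistic loss, whose Hessian depends on $w$.

```latex
% Squared loss: the Hessian is constant in w, so GD from w_0 = 0 has a closed form.
\hat L_{\mathrm{sq}}(w) = \tfrac{1}{2n}\|Xw - y\|_2^2, \qquad
\nabla^2 \hat L_{\mathrm{sq}}(w) = \tfrac{1}{n}X^\top X, \qquad
w_t = \Big(I - \big(I - \tfrac{\eta}{n}X^\top X\big)^{t}\Big)\hat w,
% where \hat w is any minimizer of \hat L_{\mathrm{sq}}.
% Logistic loss: the Hessian varies with w through \sigma'(z) = \sigma(z)(1-\sigma(z)),
% so no such closed form for the iterates exists.
\hat L_{\mathrm{log}}(w) = \tfrac{1}{n}\sum_{i=1}^{n}\log\!\big(1 + e^{-y_i x_i^\top w}\big),
\qquad
\nabla^2 \hat L_{\mathrm{log}}(w) = \tfrac{1}{n}\sum_{i=1}^{n}
  \sigma'\!\big(y_i x_i^\top w\big)\, x_i x_i^\top.
```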
SING: Spatial Context in Large Language Model for Next-Gen Wearables
Accept (poster)
Summary: This paper proposes a method for aligning spatial audio sensing to LLM embeddings. The approach includes a CNN-based DoA estimation module and an automatic speech recognition module from OpenAI. The paper introduces an OmniTalk dataset that is synthetically generated to train the module. The experiments evaluated the method on DoA and ASR. ## update after rebuttal I appreciate the authors' engagement during the rebuttal process. However, my concerns still remain, and I decide to keep my score. Claims And Evidence: - Introduction-A: The paper claims the use of “micro-structure assisted miniature” as a challenge & approach for the proposed problem. However, the solution just uses the Owlet (Anonymous, 2021) framework, and thus cannot be claimed as the authors' own contribution. - Section 2.3: The paper argues that spatial positioning is necessary to distinguish multiple speakers. Why do we need spatial positioning instead of speaker diarization techniques [1, 2]? [1] Chang, Xuankai, et al. "End-to-end monaural multi-speaker ASR system without pretraining." ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. [2] Fujita, Yusuke, et al. "End-to-end neural speaker diarization with self-attention." 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2019. Methods And Evaluation Criteria: The experiments should include the CNN-based model from Owlet as a baseline for DoA estimation. It is unclear why the authors generated an additional dataset (OmniTalk) instead of utilizing Owlet for data collection. How accurately does a simulated dataset reflect Owlet? It is unclear if the experiments are properly designed to evaluate the “alignment to LLMs,” considering the main evaluation metrics are DoA and ASR. Table 2 does not report any multi-source comparison baselines. Would it be possible to compare with speaker diarization techniques? Theoretical Claims: There are no significant theoretical claims in the paper.
Experimental Designs Or Analyses: The experiments show that DoA with speech ASR results in twice lower performance than the baseline (SALMONN). This calls the need for spatial awareness into question. Supplementary Material: The supplementary material includes an analysis of the performance of estimating the number of speakers. However, the experiment details are not specified, and only the result of a single 5-speaker experiment is reported. Relation To Broader Scientific Literature: The paper is related to spatial speech understanding, speaker diarization, and aligning spatial speech with language models. The paper improves upon Owlet to enable monaural spatial speech sensing. Essential References Not Discussed: The paper does not discuss any speaker diarization techniques [1, 2]. [1] Chang, Xuankai, et al. "End-to-end monaural multi-speaker ASR system without pretraining." ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. [2] Fujita, Yusuke, et al. "End-to-end neural speaker diarization with self-attention." 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2019. Other Strengths And Weaknesses: Weaknesses: - Vague positioning: the paper argues that spatial positioning is necessary to distinguish multiple speakers. Why do we need spatial positioning instead of speaker diarization? - Technical novelty: the paper just assembles methodologies from various related works to build SING. For example, it is not properly motivated why we need a mel-spectrogram and a 3×3 CNN structure. - Limited scalability: The multi-DoA encoder is hard-coded with 5 speakers and lacks discussion of its scalability. Also, the performance of DoA + speech ASR is lower than the baseline. - The experiments should be improved: The current experiments do not properly reflect the superiority of the method for “aligning with LLMs,” evaluating only DoA/ASR with insufficient baselines.
Other Comments Or Suggestions: Please update the running title (the current format is also not the double-blind format). Why is Owlet (Anonymous, 2021) anonymized? It is already published in the conference proceedings. Typo: LLava → LLaVA (Page 2). Figure 2 is slightly confusing - is the figure representing a supervised fine-tuning phase, considering DoA is already pre-trained? Questions For Authors: The paper argues that spatial positioning is necessary to distinguish multiple speakers. Why do we need spatial positioning instead of speaker diarization? Also, Table 2 does not report any multi-source comparison baselines. Would it be possible to compare with speaker diarization techniques? Why did the authors generate an additional dataset (OmniTalk) instead of utilizing Owlet for data collection? How accurately does a simulated dataset reflect Owlet? Why did the experiments not include the CNN-based model from Owlet as a baseline for DoA estimation? Why are the main evaluation metrics DoA and ASR if the main objective is alignment with LLMs? The experiments show that DoA with speech ASR results in twice lower performance than the baseline (SALMONN). Then, what if we combine SALMONN with DoA estimation? How does the method scale with the number of speakers? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: **1: Why we need a spatial positioning instead of speaker diarization.** Thank you for the question. While speaker diarization answers "who spoke when," it lacks spatial awareness—crucial for many applications. Our focus on Direction of Arrival (DoA) estimation adds spatial context to speech, enabling new capabilities in AR, robotics, and embodied AI. For instance, AR glasses can answer location-based queries like “What did the person to my left say?” Robots can localize and respond to specific speakers, enhancing natural interaction. In meetings, spatial cues help disambiguate overlapping speech and attribute content by location. Though diarization is useful, it doesn’t ground speech in space. We see strong potential in combining it with DoA estimation to support both spatial and identity-aware speech understanding, which we aim to explore in future work. **2: The solution is just using Owlet framework, could not be claimed as own contributions.** While Owlet provides a powerful foundation, our work takes it one step further by integrating it with an LLM-based reasoning module, enabling new capabilities such as open-ended spatial understanding and dialogue-based interaction, which were not possible in the original framework. **3: The experiment should include the CNN-based model from Owlet as a baseline for DoA estimation.** Thank you for the suggestion. While the CNN-based model from Owlet is effective for single-source DoA estimation, our work advances beyond that by estimating multiple simultaneous DoAs and, more importantly, enabling an LLM to understand and reason about these spatial cues in context, paving the way for open-ended spatial dialogue rather than just angle prediction. We agree it’s a valuable baseline and will include the Owlet CNN model in our experiments for comparison. 
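As background for the DoA discussion above, here is a classical non-learned baseline, sketched as a hypothetical illustration (this is not SING's or Owlet's method): GCC-PHAT estimates the time difference of arrival between two microphones, from which a direction of arrival can be derived given the array geometry.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12      # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

fs = 16000
rng = np.random.default_rng(1)
src = rng.normal(size=4096)             # broadband (noise-like) source signal
delay = 7                               # ground-truth inter-mic delay, in samples
mic_ref = src
mic_sig = np.concatenate((np.zeros(delay), src[:-delay]))  # delayed copy
tau = gcc_phat(mic_sig, mic_ref, fs)    # recovers approximately delay / fs
```

Under a far-field assumption, a microphone pair with spacing $d_{\mathrm{mic}}$ gives $\theta = \arcsin(c\,\tau/d_{\mathrm{mic}})$ with $c \approx 343$ m/s. SING instead learns DoA with a CNN, which, per the rebuttal above, additionally handles multiple overlapping speakers.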
**4: Unclear why the authors generated an additional dataset (OmniTalk) instead of utilizing Owlet for data collection.** Owlet's original dataset was designed for generic audio with only a single sound source, whereas our goal in OmniTalk is to model **speech-specific**, **multi-speaker scenarios** with up to 5 overlapping sources. While OmniTalk is simulated, it is built using impulse responses generated by the Owlet hardware, ensuring consistency with real-world spatial characteristics. We will highlight this in the paper. **5: DoA with speech ASR results in twice lower performance than the baseline (SALMONN). This questions the need for spatial awareness.** The drop in performance is due to Whisper's sensitivity to spatial structure distortion in multi-source scenarios. To address this, we fine-tuned Whisper on spatialized speech mixtures, which led to a notable improvement in ASR accuracy—narrowing the gap with the SALMONN baseline and demonstrating that incorporating spatial awareness can indeed enhance robustness in challenging, multi-speaker environments. We will include the updated ASR results in the revised manuscript. **6: The supplementary material includes the analysis of the performance of estimating the number of speakers. However, the experiment details are not specified, and only the result of a single 5-speaker experiment is reported.** We will include detailed experiment settings for speaker count estimation in the camera-ready version. The results for 1 to 5 speakers are already reported in Table 2 and Figure 10. **7: The paper just lists the methodologies from various related works to build SING. For example, it is not properly motivated why we need a mel-spectrogram and 3x3 CNN structure.** We use mel-spectrograms and 3×3 CNNs due to their proven effectiveness in speech tasks. Mel-spectrograms offer a perceptually meaningful time-frequency representation, widely used in ASR, localization, and classification.
The 3×3 CNN, inspired by VGG, efficiently captures local spectrogram patterns. While SING builds on these established components, our key contribution is combining them with spatial supervision and LLM reasoning to jointly estimate and interpret spatial properties in natural language. **8: The current experiment does not properly reflect the superiority of the method for “aligning with LLMs” by evaluating only DoA/ASR with insufficient baselines.** Our evaluation focuses on aligning spatial audio with LLM reasoning, not just DoA or ASR. Tasks like spatial QA, location-aware summarization, and speaker counting test whether the LLM can interpret spatial cues in context. While traditional benchmarks ground the system, our goal is to assess how well spatial understanding supports reasoning and interaction. We agree that adding more baselines would strengthen the evaluation and plan to expand them in future work. **9: Figure 2 is slightly confusing - is the figure representing a supervised fine-tuning phase, considering DoA is already pre-trained?** Your understanding is correct. We will update the figure and writing to make it clearer. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for the rebuttal and the discussion. Now I understand the importance of spatial positioning (from Q1), but I still have reservations. - For the questions on the experiment results (Q3, Q5), the authors said they will update in the revised manuscript but did not provide any detailed numbers; so it's difficult for me to be convinced about the evaluation. - For Q7, although the authors provided the reason to use certain components, I still need a better ablation study showcasing why their design choice is an optimal solution. - For Q8, I am still not convinced why do we need to align LLMs directly with spatial positioning; why don't we just use specialized models for DoA/ASR and pass it as an input text? 
The paper only evaluates on DoA/ASR rather than advanced reasoning or LLM-related capabilities. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We appreciate your sincere interest in understanding the motivation of our work and your suggestions to enhance its quality. Let us first try to highlight the position of the paper and its lineage to existing works. We hope this will help us convey our excitement about this project better. To begin, several recent research efforts have opened up the possibility of directly aligning audio signals with LLMs to expand understanding beyond the semantic interpretation of the speech therein. We are motivated by this novel direction and focused on introducing a new capability of spatial understanding from the audio signal directly into an LLM. Moreover, we have shown the feasibility of miniaturizing such a physical-computational system by building on a recent and celebrated innovation in spatial acoustic sensing. The evaluation presented in the paper is primarily geared towards this scope of the work. As elaborated below, we have started the new experiments pointed out in your review. As re-training the model with changed parameters takes several days, we relied on existing works when deciding parameters not critical to the scope of this work. We also include some new results achieved at this point without requiring retraining. Following is our response to your questions. We hope it will help reduce the gaps in our viewpoints. **1: Detailed numbers of evaluations** As shown in the CDF plots, our model, SING, has performance comparable to the corresponding DoA estimation experiment reported in the Owlet paper. Note that an exact comparison is infeasible, as Owlet processes the audio signal differently and does not attempt to retain speech information. SING not only supports DoA estimation, but also retains speech features in the audio.
**CDF plot for SING**: https://limewire.com/d/Mg8cw#QEKSu0q2iY **CDF plot for Owlet (taken from the paper with permission)**: https://limewire.com/d/YLkU7#jg8WSk9UYD **2. Ablation study for DoA encoder design** For the ablation study of our DoA encoder design, we compared three models: SING (our DoA encoder with a 3-layer CNN architecture, trained together with LLM fine-tuning), a pure 3-layer CNN architecture without LLM fine-tuning, and a transformer-based DNN encoder. The transformer-based encoder was trained using the same architecture as the Audio Spectrogram Transformer (AST), a popular transformer architecture for audio. The mean DoA errors of these three models are:

| Model | Mean DoA Error |
| -------- | ------- |
| SING | 25.72 |
| 3-layer CNN | 11.00 |
| Audio Spectrogram Transformer | 17.08 |

AST: Audio Spectrogram Transformer, arXiv preprint arXiv:2104.01778 (2021). **3. Align LLMs directly with spatial positioning; why not specialized models for DoA/ASR and pass it as an input text? Only evaluates on DoA/ASR rather than advanced reasoning or LLM-related capabilities** Direct audio integration with LLMs is rapidly becoming a new paradigm in multimodal AI research, with recent studies demonstrating significant advantages over pipeline approaches. Models like SALMONN and GAMA have shown that feeding raw audio directly into an LLM enables richer contextual understanding and emergent capabilities. This approach provides benefits that specialized modules with text descriptions alone cannot replicate. We see similar patterns in vision-language models such as LLaVA, where directly embedding visual features via linear projection significantly outperforms text-based captioning for complex reasoning tasks. Our research follows this established direction, enabling LLMs to jointly reason over spatial and semantic information in ways that are difficult to capture precisely with text descriptions.
Our current evaluation focuses on establishing the technical feasibility of accurate spatial perception with a miniaturized setup, which we see as a necessary foundation for wearable intelligent systems with higher-level reasoning abilities. While specialized models could extract DoA/ASR outputs as text, this approach would discard the rich latent spatial relationships that direct embedding preserves. Recent work like video-SALMONN demonstrates how integrated approaches yield more nuanced contextual understanding across modalities. We fully acknowledge that advanced spatial reasoning must be demonstrated in future work. Our present effort ensures the LLM can reliably *perceive* spatial information first, which is an essential step toward context-aware wearable applications. SALMONN: Towards generic hearing abilities for large language models, ICLR, 2024. GAMA: A large audio-language model with advanced audio understanding and complex reasoning abilities, Association for Computational Linguistics. Visual instruction tuning, Advances in Neural Information Processing Systems 36, 2023. video-SALMONN: Speech-enhanced audio-visual large language models, ICML, 2024.
Summary: The paper leverages the Owlet monaural microphone, with superior direction-of-arrival sensing (DoA), to endow LLMs with spatial audio awareness towards more intelligent wearables and other usability scenarios. To achieve this, the authors prepare synthetic variants of the LibriSpeech dataset with ground-truth source direction, train lightweight speech and DoA encoders, align the obtained speech embeddings into the LLMs input space, and apply supervised instruction fine-tuning with LoRA. The approach is compared to related works including BAT, SALMONN, SELDNet, and AudioMAE, demonstrating significant improvements for the tasks of spatially-aware ASR and soundscaping. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. Given the lack of suitable datasets, synthetic data was generated following principled techniques from signal processing ($3.2). Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. Read through the experiments in S6 and S7. Supplementary Material: Yes, glanced at all Appendices whenever the main text pointed me there. Relation To Broader Scientific Literature: Good demonstration of the potential of integrating innovative hardware designs into advanced AI systems. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is very well-written and easy to follow, making it a good example for subsequent works. Good job! Other Comments Or Suggestions: - In both Table 2 and Figure 11, I wondered if the 0-speaker case should be considered? Please clarify. - It would help to add a citation to a friendly text or tutorial for readers without the signal processing background. This seemed needed before Eq. 1 and upon mentioning spectrograms, and even "ASR" (S3.2). - Also, STFT wasn't defined before Eq. 4, only later in S6.1 and again in S7.1. - It would help to clarify how the resampling operation in Eq. 2 is implemented. 
- It would help to clarify in which frame of reference the 360 degrees should be understood. Only later, in future work, was there a discussion of elevation angles. I assume there's a canonical frame defined w.r.t. the sensor hardware, so were datasets generated with the assumption the sensor is "upright"? More nitpicking - (S4.2) Not sure I understand "hidden states". Guessing it's simply meant as "latent representations"? - (S6.2) It mentions 20 minutes / epoch, so I was wondering how many epochs? I see it's listed in Table 3 under Appendix B. - (S6.2) we first lock the LLM --> freeze? - (S7.1) final recording is trim and pad to 8 seconds -> trimmed and padded? Questions For Authors: No further questions at this time. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Comment 1: 0-speaker case should be considered.** Thank you for pointing this out. In the current version of Table 2 and Figure 11, we focus on scenarios with one or more speakers. However, we agree that it would be valuable to explicitly consider the 0-speaker case. To address the 0-speaker case, we plan to incorporate a dedicated speech activity detection module, similar to those used in voice activity detection (VAD) systems such as the referenced papers below, which can detect whether any speakers are active before estimating their count or direction. Ramírez, Javier, José C. Segura, Carmen Benítez, Ángel De La Torre, and Antonio Rubio. "Efficient voice activity detection algorithms using long-term speech information." Speech Communication 42, no. 3-4 (2004): 271-287. Ball, Joshua. "Voice Activity Detection (VAD) in Noisy Environments." arXiv preprint arXiv:2312.05815 (2023). **Comment 2: Add a tutorial for readers without signal processing background.** We'll add a citation to a beginner-friendly tutorial and textbooks, such as Jurafsky & Martin or an online spectrogram/ASR tutorial, to help readers without a signal processing background, especially before Eq. 1 and in §3.2. We will also cover STFT and resampling in this tutorial. **Comment 3: Frame of reference in which the 360 degrees should be understood.** The 360° azimuth angles are defined in a canonical frame relative to the sensor hardware, assuming the device is in an upright orientation. In our current setup, we align the sensor's forward-facing direction with 0°, and angles increase clockwise in the horizontal (azimuthal) plane. Traditionally, wearable systems rely on the embedded IMU to determine the sensor's orientation and, when needed, apply a standard coordinate transformation to convert angles from the sensor frame to the global frame. We will include a discussion on this topic in the revised paper.
https://qsense-motion.com/quaternion-orientation-imu-sensor-fusion/ **Comment 4: Other detailed revisions.** Thank you for these detailed suggestions! - You’re right, “hidden states” in §4.2 refers to the intermediate latent representations output by the model layers. We’ll clarify the terminology to avoid confusion. - Thanks for the comment regarding §6.2. We’ll revise “20 minutes/epoch” to explicitly mention the total number of epochs in the main text for clarity, even though it's listed in Table 3 (Appendix B). - Yes, we will replace “lock the LLM” with “freeze” for correctness and consistency with standard terminology. - We will correct “trim and pad” to “trimmed and padded.” Thanks again for pointing this out.
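The sensor-to-global angle conversion mentioned in the Comment 3 response can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; it assumes the clockwise-increasing azimuth convention described above and a device yaw angle (in the same convention) read from the IMU:

```python
def sensor_to_global_azimuth(sensor_az_deg: float, device_yaw_deg: float) -> float:
    """Rotate a sensor-frame DoA by the device yaw to obtain a global-frame
    azimuth, wrapped into [0, 360). Both angles use the same convention:
    0 deg = forward/reference direction, increasing clockwise."""
    return (sensor_az_deg + device_yaw_deg) % 360.0

# A source at 30 deg relative to the glasses, with the wearer facing 90 deg
# in the global frame, lies at 120 deg globally.
print(sensor_to_global_azimuth(30.0, 90.0))  # 120.0
```

In practice the IMU yaw would come from a full orientation estimate (e.g., a quaternion, as in the link above); the planar rotation here covers only the upright-device case discussed in the rebuttal.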
Summary: In this paper, the authors introduce SING, a system that integrates spatial speech understanding into LLMs to enhance context-aware applications for wearable devices. SING uses microstructure-based sensing with a monaural microphone to extract Direction of Arrival information and combines it with linguistic embeddings from the Whisper model. The fused embeddings are aligned with LLaMA-3.2 3B and fine-tuned using LoRA for efficient on-device processing. SING achieves improved ASR performance with a 25.72° mean DoA error and 5.3% WER. It also supports multi-speaker detection with a median DoA error of 16°, demonstrating good performance under power, privacy, and hardware constraints for applications like AR and accessibility. Overall, the paper includes both algorithm and system design. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: not applicable Experimental Designs Or Analyses: designs are ok Supplementary Material: no Relation To Broader Scientific Literature: This method has potential applications in wearable computing, which is a hot topic. Essential References Not Discussed: Some references can be added in the Vision section, in the introduction. I have mentioned the details below. Other Strengths And Weaknesses: Overall, I enjoyed reading this paper because the research topic is interesting and has broader impact potential. However, I also have some concerns: [1] In the last few sentences of the Vision paragraph, Introduction Section, some references could be added to support the potential applications. [2] I suggest the authors introduce or define what “spatial” or “spatial feature” is early in the Introduction section. For readers with other backgrounds, it might be confusing, e.g., spatial could mean geographical knowledge, or human movement in the 3-d environment. And I suppose those kinds of spatial context have been explored for a while by multiple studies.
[3] When introducing "IoT sensor" for the first time, it should be written as "Internet of Things (IoT)" in full. [4] In the Introduction, the authors mention “the mean error rate (MAE) is 88.52°, too high for real-world applications”. Appropriate context could be given here to help the readers. Otherwise, regular readers have no idea what difference it would make to reduce the error from 88.52° to 8.52°. [5] I acknowledge that comparing with baselines might not be easy given the specialized nature of this topic. The authors could provide more justification about this; otherwise, it seems the baseline models are too limited. [6] About the output of the model, I also have some concerns; correct me if I am wrong. It seems to me that we could use an easier way to achieve the same output, e.g., "who is talking about what at which direction". For example, we design a simple method to detect the direction of the talking person and get x, which is a numerical value such as 120 degrees. Then, combine x with the detected text language and give it to the LLM; with some kind of prompt design, the LLM will generate a similar output, "who is talking about what at which direction," as shown in Fig. 2. Other Comments Or Suggestions: please see above Questions For Authors: please see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Comment 1: I suggest the authors introduce or define what “spatial” or “spatial feature” is early in the Introduction section.** We appreciate the reviewer’s insightful comment. We agree that the term “spatial” can be interpreted in multiple ways depending on the context. To clarify our intended meaning, we will revise the Introduction section to explicitly define “spatial features” in the context of our work as acoustic cues that convey directional or positional information of sound sources. We will also add a brief comparison to distinguish our usage from other interpretations of “spatial” such as geographical or kinematic contexts. **Comment 2: It seems that we could use an easier way to achieve the same output, e.g., "who is talking about what at which direction". For example, we design a simple method to detect the direction of the talking person and get x, which is a numerical value such as 120 degrees. Then, combine x with the detected text language and give it to the LLM; with some kind of prompt design, the LLM will generate a similar output, "who is talking about what at which direction," as shown in Fig. 2.** While the proposed approach of using explicit numerical values (e.g., "120°") combined with transcribed speech and a prompt to generate structured outputs is viable for single-speaker scenarios, it faces significant scalability and robustness limitations in multi-speaker environments. In particular, multi-DoA scenarios involve multiple overlapping speakers and corresponding DoAs. Concatenating raw numbers or crafting ad-hoc prompts for each speaker-direction pair becomes increasingly brittle and complex as the number of speakers increases. Embedding-based representations are more scalable and generalizable, allowing us to encode both directional and contextual information in a unified vector space that the LLM can attend to and reason about.
This enables the system to handle variable-length inputs, concurrent speakers, and noisy observations more robustly. We have added a discussion of this design in the camera-ready version. **Comment 3: In the Introduction, the authors mention “the mean error rate (MAE) is 88.52°, too high for real-world applications”. Appropriate context could be given here to help the readers. Otherwise, regular readers have no idea what difference it would make to reduce the error from 88.52° to 8.52°.** We thank the reviewer for the thoughtful suggestion. We will revise the Introduction section to include specific real-world application contexts where accurate DoA estimation is critical, and where an MAE of 88.52° would be insufficient. For example: - In a meeting scenario, an error of 88.52° would result in attributing spoken content to the wrong person sitting across the table, severely degrading the reliability of voice-based summarization features in AR glasses or virtual assistants. - In accessibility tools for the visually impaired, such as acoustic navigation or sound-based obstacle detection, such a large directional error could cause the user to orient toward the wrong direction, compromising safety and usability. - In smart home systems, identifying which room or device a user is speaking from (e.g., turning toward a speaker to answer a question) would fail if the estimated direction is off by nearly 90°, leading to incorrect command execution. - In immersive AR/VR applications, spatial misalignment of that magnitude would result in audio-visual desynchronization, breaking the immersion and confusing the user. **Comment 4: I acknowledge that comparing with baselines might not be easy given the specialized nature of this topic. The authors could provide more justification about this; otherwise, it seems the baseline models are too limited.** We appreciate the reviewer’s comment and agree that rigorous comparison with baseline methods is essential.
To address this, we have carefully selected and implemented a widely used and fundamentally distinct traditional DoA estimation technique, the MUSIC algorithm. To ensure fairness, we applied both algorithms to the same two-microphone array under matched conditions and identical SNR levels. SING significantly outperforms the MUSIC algorithm in terms of the cumulative distribution of angular errors, with a 3.6° median error for SING versus a 15.2° median error for the MUSIC algorithm. This comparison provides a comprehensive and rigorous baseline evaluation. We will also add this result to the paper. **Comment 5: When introducing "IoT sensor" for the first time, it should be written as "Internet of Things (IoT)" in full.** Thanks for pointing this out. We will write "Internet of Things (IoT)" in full the first time we mention IoT. **Comment 6: In the last few sentences of the Vision paragraph, Introduction Section, some references could be added to support the potential applications.** Thanks for the valuable suggestion. We will cite relevant references to support the potential applications. --- Rebuttal Comment 1.1: Comment: Thank you for providing detailed responses, which address most of my concerns. Therefore, I will maintain a positive rating. Hope to see the revised version.
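For readers unfamiliar with the MUSIC baseline referenced in the response to Comment 4, a minimal self-contained narrowband sketch for a two-microphone array follows. This is illustrative only: the array spacing, wavelength, noise level, and search grid are hypothetical choices, not the authors' experimental setup.

```python
import numpy as np

def music_doa(X, d, wavelength, n_sources=1, grid=np.arange(-90, 90.5, 0.5)):
    """Narrowband MUSIC for a uniform linear array.
    X: (n_mics, n_snapshots) complex snapshots; returns azimuth in degrees."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvecs[:, : n_mics - n_sources]      # noise subspace
    mics = np.arange(n_mics)
    spectrum = []
    for theta in grid:
        # Steering vector for a source at azimuth theta
        a = np.exp(-2j * np.pi * d * mics * np.sin(np.deg2rad(theta)) / wavelength)
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / max(denom, 1e-12))  # peaks where a ⟂ noise subspace
    return grid[int(np.argmax(spectrum))]

# Synthetic check: one source at +30 degrees, half-wavelength mic spacing.
rng = np.random.default_rng(0)
d, wavelength, true_theta = 0.05, 0.10, 30.0
s = rng.standard_normal(2000) + 1j * rng.standard_normal(2000)
a = np.exp(-2j * np.pi * d * np.arange(2) * np.sin(np.deg2rad(true_theta)) / wavelength)
X = np.outer(a, s) + 0.05 * (rng.standard_normal((2, 2000)) + 1j * rng.standard_normal((2, 2000)))
print(music_doa(X, d, wavelength))  # close to 30 degrees
```

MUSIC exploits the orthogonality between the steering vector of a true source and the noise subspace of the covariance matrix; with only two microphones the noise subspace is one-dimensional, which is part of why array-based methods struggle in the ultra-compact form factors the rebuttal targets.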
Summary: This paper introduces SING, a system that integrates spatial speech understanding into LLMs for wearable applications. It leverages microstructure-based spatial sensing to extract Direction of Arrival (DoA) information using a single monaural microphone. Spatial cues are fused with Whisper embeddings and aligned to LLaMA-3.2 3B using a linear projection. The system achieves a DoA mean error of 25.72° (vs. 88.52° in prior work) and a WER of 5.3%. The work has potential applications in spatially-aware ASR, accessibility, and augmented reality. Claims And Evidence: Claim: SING significantly improves DoA estimation using a monaural microphone. - Evidence: Achieves a mean error of 25.72° vs. 88.52° in BAT. - Limitation: Lacks real-world dataset validation; results are based on synthetic data. Claim: The system enables spatially-aware ASR. - Evidence: Demonstrates 5.3% WER with spatial embeddings. - Limitation: Linear projection may limit generalization across environments. Claim: Optimized for wearable devices. - Evidence: Uses LoRA for efficient fine-tuning. - Limitation: No benchmarks on latency, power, or memory for real-time on-device use. Methods And Evaluation Criteria: Uses OmniTalk (synthetic dataset), derived from LibriSpeech, to simulate spatial speech. Evaluates DoA estimation and ASR performance but lacks real-world testing. Comparison with BAT and SALMONN is valid but excludes state-of-the-art beamforming methods. Theoretical Claims: NA Experimental Designs Or Analyses: DoA experiments are well-structured but rely on synthetic impulse responses. Evaluations on background noise and reverberation are limited to GTU-RIR dataset, lacking real-world generalization tests. No quantitative analysis of model robustness under speaker movement or occlusions. Supplementary Material: Reviewed dataset details, hyperparameters, and additional results. Spectrograms and CDF plots support model accuracy but lack failure case analyses. 
Relation To Broader Scientific Literature: Extends prior work in spatial ASR, multimodal LLMs, and monaural DoA estimation. Cites BAT (spatial audio LLMs) and Whisper, but omits comparisons with beamforming and HRTF-based methods. Essential References Not Discussed: Traditional microphone array DoA methods (e.g., MUSIC, GCC-PHAT) should be compared. Deep-learning-based spatial ASR (e.g., SELDNet, SoundSpaces) is missing. Other Strengths And Weaknesses: Strengths 1. Novel approach: First LLM-based spatial ASR using a monaural microphone. 2. Efficient design: Uses LoRA for low-power adaptation. 3. Strong performance: Outperforms BAT in DoA estimation. Weaknesses 1. Linear projection limits generalization to unseen conditions. 2. No real-world dataset evaluation; performance may degrade in natural environments. 3. Wearable feasibility unverified; lacks latency and power benchmarks. 4. Does not evaluate closely spaced speakers (<10° separation). Other Comments Or Suggestions: NA Questions For Authors: 1. How does the model perform on real-world datasets, not synthetic data? 2. What are the latency and memory requirements for on-device inference? 3. How does SING handle occlusions or moving speakers? 4. Why was a simple linear projection chosen over attention-based fusion? 5. Can SING be adapted to estimate elevation angles (3D spatial ASR)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1: Lacks real-world dataset validation. Results are based on synthetic data.** We used real-world impulse responses measured from calibration of the acoustic frontend and followed standard practices from acoustic physics principles. This methodology enables creation of a diverse, controlled dataset while preserving real acoustic properties. Similar approaches are common in spatial audio research, such as the references below. We've verified key components of the data in real-world settings, showing strong correlation between synthetic and real-world performance. We will update the paper to better highlight our validation methodology and include additional real-world test cases from the AMI dataset to demonstrate generalizability. Rishabh et al. "Wave field synthesis: The future of spatial audio." IEEE Potentials 32, no. 2 (2013): 17-23. Ville et al. "Spatial sound scene synthesis and manipulation for virtual reality and audio effects." Parametric Time‐Frequency Domain Spatial Audio (2017): 347-361. **2: Linear projection may limit generalization across environments.** Our current evaluation contains results for the model's resilience to new environments (Appendix E). We selected a linear projection over attention-based fusion primarily for efficiency on wearable devices. Our tests across varied acoustic conditions demonstrate that this approach generalizes effectively despite its simplicity. We will better highlight these generalization results in the revised paper to address this concern. **3: No benchmarks on latency, power, or memory for real-time on-device use.** We have conducted benchmark experiments and will add our results to the paper. Our edge-cloud hybrid approach processes DoA and speech embeddings on-device, then transmits these small embeddings to the cloud for LLM inference. For comparison, Whisper-tiny model uses **273MB** memory, **75 MiB** disk usage, and **<500ms** latency. This model has been deployed on mobile devices. 
Please find the git repo below. Our DoA encoder is considerably lighter: it achieves an average latency of **62.93 ms** per speech file. Memory usage per speech file is **50 MB**, and drops to **16 MB** when using the quantized model. During full inference, the overall memory footprint is **741.42 MB**. These results demonstrate that our method is well-suited for on-device processing in wearable or embedded systems, where both memory and power are constrained. We will provide a summary table of latency and memory footprints in comparison to other works with similar scope as ours. https://github.com/ggerganov/whisper.cpp **4: No quantitative analysis of model robustness under speaker movement or occlusions.** Approaches for spatial analysis of sound in human-centric applications typically assume low dynamicity of the sound source. SING is designed for reasonably slow-moving environments where acoustic conditions don't change significantly within short timeframes. On the other hand, our current framework shows promising performance with static occlusions, with results shown in Table 6. For future work, we plan to handle fast-moving speaker scenarios, dynamic tracking, and Doppler frequency shifts. **5: Omits comparisons with beamforming and HRTF-based methods.** The primary focus of the current work has been enabling spatial awareness in ultra-compact form factors where traditional beamforming methods cannot operate due to physical constraints (requiring multiple spatially-separated microphones). While HRTF methods offer complementary benefits, they typically require binaural setups not suited for our target monaural applications. We will cite relevant papers on these techniques and clarify these distinctions in the revised paper.
**6: Deep-learning-based spatial ASR (e.g., SELDNet, SoundSpaces) is missing.** We compared our method with SELDNet in Table 2, showing SING achieves a 25.72° MAE versus SELDNet's 90.03° MAE, demonstrating superior performance despite using fewer microphones. SoundSpaces is primarily an acoustic simulation platform rather than a competitive DoA system. We will revise the comparison section to better highlight these results. **7: Does not evaluate closely spaced speakers (<10° separation).** Our hardware demonstrates a 3° median error in controlled settings, indicating capability for fine-grained angular resolution. The 10° separation in our experiments was chosen for common human-centric scenarios. While the system can resolve speakers at closer angular separations, we focused on realistic use cases for initial evaluations. **8: Can SING be adapted to estimate elevation angles (3D spatial ASR)?** Yes, SING can be extended to estimate elevation angles with minimal changes. While the current hardware is optimized for azimuth, the architecture supports 3D estimation if impulse responses are calibrated for elevation and the hardware is updated. We plan to explore this in future work. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their comments, and based on the answers would like the keep the same score.
Cooperation of Experts: Fusing Heterogeneous Information with Large Margin
Accept (poster)
Summary: The authors propose the Cooperation of Experts (CoE) framework to fuse heterogeneous information in multiplex networks. Authors design a two-level expert system where low-level experts focus on individual network layers while high-level experts capture cross-network relationships. The framework employs a large margin mechanism to optimize the collaboration among experts, ensuring that each expert's specialized knowledge is effectively utilized. The experiments demonstrate the superiority of their approach over state-of-the-art methods in both multi-relational and multi-modal tasks. Claims And Evidence: Clear and convincing evidence are given to support the claims. Methods And Evaluation Criteria: The proposed method is well-suited for the heterogeneous information fusion problem. The datasets are representative of heterogeneous data, demonstrating the broad applicability of the method. Theoretical Claims: I reviewed the proofs and they appear to be correct. Experimental Designs Or Analyses: I carefully reviewed the experimental setup and analyses to assess their rigor. Supplementary Material: I carefully examined the supplementary material. Relation To Broader Scientific Literature: 1. The paper introduces a novel framework that emphasizes expert cooperation rather than competition, which is a significant departure from traditional Mixture of Experts (MoE) approaches. This is a fresh perspective in the field of graph neural networks (GNNs) and multiplex network learning. 2. The large margin mechanism is an innovative optimization strategy that ensures robust expert collaboration, leading to improved predictive performance. Essential References Not Discussed: None. Other Strengths And Weaknesses: Pros: 1. The motivation is clearly presented by Figure 1. The observed phenomenon is interesting and important. 2. The expert perspective for multiplex network learning is interesting and novel. 3. 
CoE is highly resilient to structural perturbations, which is crucial for real-world applications where data may be noisy or incomplete. Cons: The writing quality of the paper could be improved for better readability and clarity. Several grammatical errors, awkward phrasings, and unclear sentences make it difficult to follow the arguments. Other Comments Or Suggestions: 1. "The cross-entropy loss Lcls(Z,Y ) is the upper bound of −I(G;Y ), where Z denotes the node representations of all nodes in network G." Correction: "The cross-entropy loss Lcls(Z,Y) serves as an upper bound for −I(G;Y), where Z represents the node embeddings of all nodes in network G." 2. "We select two supervised structure-fixed GNNs—GCN (Kipf & Welling, 2016) and HAN (Wang et al., 2019)—as well as six supervised GSL methods." Correction: "We select two supervised, structure-fixed GNNs—GCN (Kipf & Welling, 2016) and HAN (Wang et al., 2019)—along with six supervised GSL methods." 3. "L(Θgi) is a convex function with respect to (Θgi)." Correction: "L(Θgi) is a convex function with respect to Θgi." Questions For Authors: 1. The number of experts is given in Line 187. How do you get it? It should be explained in more detail. 2. It is suggested that the authors add a loss convergence plot to the supplementary material. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and constructive feedback. We greatly appreciate your recognition of our motivation, expert coordination design, and robustness to structural perturbations. Below we provide detailed responses to your suggestions. Figures and Tables are summarized in this link: **https://anonymous.4open.science/r/ICML_rebuttal-7D0E**. **Weakness & Suggestions: Writing and clarity** Thank you for pointing out the writing issues. We agree that improving readability is important for accessibility and impact. In the final version, we will: 1. Thoroughly proofread the entire manuscript for grammatical correctness and clarity. 2. Refine awkward phrasings, shorten overly long sentences, and ensure each technical statement is precisely and clearly expressed. 3. Review the supplementary materials and figure captions to maintain consistency and readability. We apologize for the writing quality issues in our manuscript, which may have caused some difficulty in following the arguments. Once again, we sincerely appreciate your careful reading, and we have made efforts to clarify the text in the revised version. We are confident that these improvements will significantly enhance the presentation quality of the paper. **Question 1: Number of experts** Thank you for this question; we appreciate the opportunity to clarify. In Section 4.2, we emphasize that the number of experts is fixed to $\frac{V(V+1)}{2}+1$, where $V$ is the number of views. This is because CoE trains $V$ low-level experts (one per network view) and $C^{2}_V= \frac{V(V-1)}{2}$ high-level experts. Additionally, an extra high-level expert is trained on $G_{tot}$. Thus, we have $V+ \frac{V(V-1)}{2}+1=\frac{V(V+1)}{2}+1$ experts in total. Notably, the number of experts is limited, which keeps the overall expert training cost highly manageable. We analyze the model's efficiency **in the link above**. 
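The counting argument above can be sanity-checked with a trivial sketch (the function name is ours, not from the paper):

```python
def num_experts(num_views: int) -> int:
    """Total CoE experts for V views: V low-level experts (one per view),
    C(V, 2) pairwise high-level experts, and one extra expert on G_tot."""
    v = num_views
    return v + v * (v - 1) // 2 + 1  # equals V(V+1)/2 + 1

# e.g. 3 views give 3 + 3 + 1 = 7 experts in total
assert num_experts(3) == 7
assert num_experts(5) == 5 * 6 // 2 + 1
```

Although the count grows quadratically in $V$, the number of views is small in practice, consistent with the rebuttal's claim that the training cost stays manageable.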
**Question 2: Loss convergence plot in the supplementary** Following your advice, we provide the convergence plots **in the link above**, on common-scale and large-scale datasets respectively. Across all datasets, the training loss decreases smoothly and steadily, without oscillation or divergence, reflecting stable optimization dynamics. A significant loss drop is observed within the first 100–200 epochs, indicating that the confidence tensor and expert fusion strategy are effective at capturing informative gradients early in training. The convergence behavior is consistent across datasets of different scales and domains, which shows that our training process is not sensitive to specific data distributions or graph modalities. These properties collectively confirm that CoE is not only theoretically convergent (as shown in Theorem 5.5), but also empirically stable and efficient in practice. Once again, we sincerely appreciate your thoughtful comments and are encouraged by your support. Your feedback has helped us further improve the clarity and completeness of the paper. We hope the final version will fully meet your expectations, and we are more than happy to add clarifications to address any additional recommendations and reviews from you! --- Rebuttal Comment 1.1: Comment: Thank you for addressing the key questions I raised. These supplementary clarifications have given me a more comprehensive understanding of the paper's value and significance. I will increase my rating for this paper.
Summary: This paper proposes the Cooperation of Experts (CoE) framework, which addresses multimodal heterogeneous information fusion by constructing heterogeneous multiplex networks. The work focuses on the challenge of pattern heterogeneity across semantic spaces, designing specialized encoders as domain experts and combining large-margin collaboration mechanisms and optimization strategies to achieve robust modeling and complementary knowledge extraction from complex data structures. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. All parts. Relation To Broader Scientific Literature: * Inspired by the MoE (Mixture of Experts) framework, but innovatively introducing a learnable confidence tensor, it overcomes the limitation of expert competition rather than cooperation in traditional MoE (such as the unactivated-expert utilization problem noted by Shi et al., 2024a). * By maximizing the margin of the predicted results, the theory of ensemble learning (such as Boosting's weight adjustment) is combined with graph neural networks, distinguishing it from static expert-weight allocation methods (such as the fixed gating mechanism of Liu et al., 2022). Essential References Not Discussed: > Common ensemble methods include bagging (Zhou & Tan, 2024) and boosting (He et al., 2024). In recent years, deep forests [Zhou and Feng, NSR 2019] have emerged in the field of ensemble learning, combining bagging and boosting techniques. In particular, the theoretical analysis of their large margin property [Lyu et al., NeurIPS 2019] is highly relevant to this paper. 1. Zhou, Z. H., & Feng, J. (2019). Deep forest. National Science Review, 6(1), 74-86. 2. Lyu, S. H., Yang, L., & Zhou, Z. H. (2019). A refined margin distribution analysis for forest representation learning. Advances in Neural Information Processing Systems, 32. 
Other Strengths And Weaknesses: Strengths: * For the first time, a framework emphasizing expert cooperation rather than competition is proposed, breaking through the paradigm limitations of expert competition in traditional MoE models. Introducing the large-margin optimization mechanism into expert collaboration scenarios provides a new perspective for model optimization (the theoretical innovation of this mechanism is validated by comparing it with traditional RF/WRF methods). * The experimental design covers 0-90% network-structure perturbation intensity and systematically verifies the robustness of the method in extreme attack scenarios, making up for the shortcomings of existing research on high-intensity perturbation testing. Weaknesses: * Limited theoretical depth: no mathematical convergence proof or generalization error bound analysis is provided for the large-margin mechanism. * Doubtful generalization across scenarios: the current experiments are only based on the ACM dataset and have not validated effectiveness on larger-scale graph data or cross-modal scenarios. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments. **Weakness 1** We fully agree that deeper theoretical analysis strengthens the credibility of a new learning framework. To clarify, we would like to emphasize that **Theorem 5.5 in the main paper provides a mathematical convergence analysis** of the optimization procedure. This theorem rigorously proves that the gradient norm of the training objective is guaranteed to converge to zero at a sublinear rate, which establishes the optimization stability and convergence of our training process. Following your advice, we further prove a **generalization error bound**: a probabilistic upper bound on the 0-1 loss of the CoE classifier. Let $\mathcal{X}\times\mathcal{Y}$ be the input-label space with $|\mathcal{Y}|=C$ classes. We have an i.i.d. training sample $S=\{(x_i,y_i)\}_{i=1}^n$. A CoE classifier $f$ produces a probability vector over $C$ classes, denoted $f(x)=(f_1(x),\dots,f_C(x))^\top$. We define the margin of $f$ at $(x,y)$ as $$ \gamma_f(x,y):=f_y(x)- \max_{y'\neq y} f_{y'}(x), $$ where a large positive $\gamma_f(x,y)$ implies a strong preference for the correct class $y$. The usual 0-1 loss is $\ell_{\mathrm{0\text{-}1}}(f;x,y):=\mathbb{I} [\arg\max_{c}f_c(x)\neq y].$ We also define $\ell_{\gamma}^{\mathrm{0\text{-}1}}(f;x,y):=\mathbb{I} [\gamma_f(x,y)\leq0]$ and the ramp loss: $$ \ell_{\gamma}(f;x,y) := \begin{cases} 0, & \gamma_f(x,y)\ge \gamma,\\\\ 1-\dfrac{\gamma_f(x,y)}{\gamma}, & 0<\gamma_f(x,y)<\gamma,\\\\ 1, &\gamma_f(x,y)\le0. \end{cases} $$ One has $\ell_{\mathrm{0\text{-}1}}(f;x,y)\le \ell_{\gamma}^{\mathrm{0\text{-}1}}(f;x,y) \le \ell_{\gamma}(f;x,y).$ Let $\ell_\gamma\circ\mathcal{F}$ be the set of ramp-loss functions induced by a hypothesis class $\mathcal{F}$, where each $f\in\mathcal{F}$ is a CoE classifier. 
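The margin and loss definitions above can be illustrated with a minimal numeric sketch (the probability vectors are made up; only the stated definitions are used):

```python
import numpy as np

def margin(f, y):
    """gamma_f(x, y) = f_y - max over y' != y of f_{y'}."""
    others = np.delete(f, y)
    return float(f[y] - others.max())

def ramp_loss(f, y, gamma):
    """0 once the margin reaches gamma, linear in (0, gamma), 1 at or below 0."""
    return float(np.clip(1.0 - margin(f, y) / gamma, 0.0, 1.0))

def zero_one_loss(f, y):
    return float(np.argmax(f) != y)

# The pointwise chain l_01 <= l_gamma^01 <= l_gamma:
f = np.array([0.6, 0.3, 0.1])  # correct class y = 0, margin 0.3
assert zero_one_loss(f, 0) == 0.0
assert abs(ramp_loss(f, 0, gamma=0.5) - 0.4) < 1e-9
```

A correctly classified point with margin below $\gamma$ still incurs a positive ramp loss (0.4 here), which is exactly why bounding the empirical ramp loss controls the 0-1 risk.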
Then for any $\delta>0$, with probability at least $1-\delta$ over an i.i.d. sample $S=\{(x_i,y_i)\}_{i=1}^n$, the following holds for all $f\in\mathcal{F}$: $$ \mathbb{E}[\ell _{0-1}(f)]\le \mathbb{E} _{(x,y)\sim\mathcal{D}}[\ell _\gamma^{0-1}(f;x,y)]\le \frac{1}{n}\sum _{i=1}^n\ell _\gamma(f;x _i,y _i)+\frac{2}{\gamma}\mathfrak{R} _n(\mathcal{F})+3\sqrt{\frac{\log(\frac{2}{\delta})}{2n}}, $$ where $\mathfrak{R}_n(\mathcal{F})$ is the Rademacher complexity of the CoE margin-function class. Due to space limitations, the detailed proof will be included in the final version and is omitted here. In the CoE framework, we have $k$ experts $E_1,\dots,E_k$, each of which outputs a probability vector $E_j(x)\in \mathbb{R}^C$. Besides, we have a confidence tensor $\Theta$ and we form $g(x)=[E_1(x)^\top,\dots,E_k(x)^\top]^\top$, then $f(x)=\mathrm{softmax}\bigl(\Theta g(x)\bigr).$ The margin is $\gamma _f(x,y)=[\Theta g(x)] _y-\max _{y'\neq y}[\Theta g(x)] _{y'}$. We assume $||\Theta|| _F\le B _\Theta$ and $||E _j(x)|| _2\le G _e ,\forall j,x$. Hence $||g(x)|| _2\le \sqrt{k}G _e$. Let $\mathcal{F} _\Theta$ be the set of CoE margin functions $\gamma_f$. Then $\mathfrak{R} _n(\mathcal{F} _\Theta)\le C _\mathrm{MC} \frac{B _\Theta G _e \sqrt{k}}{\sqrt{n}}$, where $C _\mathrm{MC}$ is a constant on the order of $\sqrt{\ln(C)}$, reflecting the multi-class max operation. With probability at least $1-\delta$, all $f \in \mathcal{F}_ \Theta$ satisfy $$ \mathbb{E}\bigl[\ell_{\mathrm{0\text{-}1}}(f)\bigr]\le\frac1n \sum_{i=1}^n \ell_{\gamma}\bigl(f;x_i,y_i\bigr)+\frac{2B_\Theta G_e\sqrt{k}}{\gamma \sqrt{n}}+3\sqrt{\frac{\log(\tfrac2\delta)}{2n}}. $$ This shows that ensuring a large margin $\gamma$ and controlling the norms $B_\Theta$ (confidence-tensor magnitude) and $G_e$ (expert-output scale) leads to a generalization guarantee. Increasing the number of experts $k$ has a $\sqrt{k}$ impact, illustrating the trade-off between model capacity and margin-based guarantees. 
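To make the scaling of the final bound concrete, the two data-independent slack terms can be evaluated for illustrative values of $B_\Theta$, $G_e$, $k$, $\gamma$, and $n$ (all numbers below are hypothetical):

```python
import math

def coe_bound_slack(b_theta, g_e, k, gamma, n, delta=0.05):
    """Data-independent slack of the bound:
    2*B_Theta*G_e*sqrt(k) / (gamma*sqrt(n)) + 3*sqrt(log(2/delta) / (2n))."""
    complexity = 2.0 * b_theta * g_e * math.sqrt(k) / (gamma * math.sqrt(n))
    concentration = 3.0 * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return complexity + concentration

# Slack shrinks as O(1/sqrt(n)) and grows as O(sqrt(k)):
assert coe_bound_slack(1.0, 1.0, 7, 0.5, 100_000) < coe_bound_slack(1.0, 1.0, 7, 0.5, 1_000)
assert coe_bound_slack(1.0, 1.0, 16, 0.5, 1_000) > coe_bound_slack(1.0, 1.0, 7, 0.5, 1_000)
```

The two assertions mirror the rebuttal's qualitative claims: more samples tighten the bound, while more experts loosen it by a $\sqrt{k}$ factor.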
Since the CoE mechanism limits $k$ and $B_\Theta$, while $G_e$ is fixed, our model enjoys a strong generalization guarantee. **Weakness 2** We would like to clarify that our main experiments already include evaluations on **large-scale network datasets** such as Amazon and MAG in Table 1, all of which involve diverse views and heterogeneous information types. Besides, we conduct experiments on four **multi-modal datasets** in Table 2. To further strengthen our claims, we conduct robustness experiments on large-scale datasets and a main experiment on an additional large-scale dataset, DGraph; the results are summarized in **https://anonymous.4open.science/r/ICML_rebuttal-7D0E**. Once again, we appreciate your valuable suggestions to help us improve our work. We will include the missing citations in the revised version and ensure all relevant work is properly cited. With the newly added theoretical analysis and experiments, we hope your concerns have been addressed. We sincerely hope this response and the revised improvements can earn your stronger recommendation.
Summary: This paper proposes the Cooperation of Experts (CoE) framework, which aims to address the challenge of fusing heterogeneous information in modern data analysis. The CoE framework encodes multi-typed information into unified heterogeneous multiplex networks and allows dedicated encoders, or "experts," to collaborate rather than compete. The authors claim that this approach captures the intricate structures of real-world complex data and outperforms existing methods. Claims And Evidence: Theoretical analysis and extensive experimental on multi-relational graphs and multi-modal data verify the claims. Methods And Evaluation Criteria: The proposed method makes sense to me. The experiments are conducted on two types of data, verifying the generalizability of the proposed method. Theoretical Claims: Theoretical justification is provided to show the convergence property of the proposed method, which makes sense to me. Experimental Designs Or Analyses: The experiments are conducted on two types of data and compared with different categories of methods, which is sufficient. Supplementary Material: I checked the supplementary material thoroughly. Relation To Broader Scientific Literature: The proposed CoE framework encodes multi-typed information into unified heterogeneous multiplex networks, which transcends modality and connection differences. It provides a unified approach to handle real-world complex data. To my best knowledge, this is the first application of expert learning idea to multiplex networks. Essential References Not Discussed: The references are complete. Other Strengths And Weaknesses: Strengths: 1. The CoE framework introduces a unique approach to handling heterogeneous information by transcending modality and connection differences, which is a significant advancement in the field. 2. The paper provides rigorous theoretical analyses to support the feasibility and stability of the CoE framework, adding credibility to the proposed method. 3. 
The extensive experiments across diverse benchmarks demonstrate the superior performance of the CoE framework, indicating its broad applicability and effectiveness. Weaknesses: 1. While the paper verifies the effectiveness of the CoE framework, it lacks a deeper discussion on the scalability of the approach, especially for very large datasets. Training multiple experts and computing mutual information across networks introduces computational complexity. 2. The paper could benefit from some experimental analysis in terms of computational efficiency, as this is an important aspect of practical applications. Other Comments Or Suggestions: 1. The limitation of the proposed method should be discussed. 2. "While boosting relies on sequential training where each model builds upon the previous one"->"While boosting relies on sequential training, where each model builds upon the previous one" 3. Some notations are confusing. For example, L represents different meanings in the supplementary. Questions For Authors: 1. How does the CoE framework perform in terms of scalability when applied to very large datasets? Are there any specific challenges or limitations? 2. Can the authors provide more details on the computational efficiency of the CoE framework in the experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments. Figures and Tables are summarized in this link: **https://anonymous.4open.science/r/ICML_rebuttal-7D0E**. **Weakness 1 & Question 1: Model scalability** We appreciate your important concern about scalability. CoE is designed to be modular and extensible, and we agree that scalability becomes critical for real-world deployment. In fact, the following characteristics of CoE are particularly favorable for large-scale data:
- **Parallelizable Expert Modules**: Unlike boosting or sequential learning schemes, experts are trained in parallel, which makes the framework highly suitable for distributed training and parallel inference.
- **Expert Fusion via a Lightweight Confidence Tensor**: Rather than using complex hierarchical gating or stacking mechanisms, CoE fuses the predictions of all experts through a simple yet effective linear tensor-based confidence fusion mechanism. This module has negligible memory and computational overhead, and scales linearly with the number of experts.
- **Reformulated Mutual Information (MI)**: Network-level MI is replaced by node-level representations to reduce complexity.
- **Scalability-Aware Experiments**: On the largest datasets in our experiments (e.g., MAG (113,919 nodes) and Amazon (11,944 nodes)), CoE demonstrates not only strong performance but also training time comparable to state-of-the-art baselines (see the next part).
- **New Large-Scale Benchmark**: We additionally include a new large-scale experiment on the DGraph dataset, with 111,310 nodes and 430k+ edges. CoE outperforms scalable baselines such as InfoMGF, as well as other baselines that achieve relatively high scores on the previous datasets. These results provide strong empirical evidence that CoE can scale to realistic large-scale graph settings. 
To directly show the performance, we summarize the results on MAG, Amazon and DGraph **in the link above**. **Weakness 2 & Question 2: Empirical computational complexity** We appreciate the reviewer’s concern regarding the computational efficiency of CoE. Although CoE adopts a two-stage structure where experts are trained individually before being fused, we emphasize that: - **The number of experts is limited** and does not grow with data size. In practice, we only train a limited number of experts, which makes the overall expert training cost highly manageable. - **Fast convergence is achieved**. This is due to the design of our confidence tensor and the optimization strategy. Specifically, the confidence tensor is a lightweight linear transformation trained jointly with the final fusion stage, and it converges rapidly in practice. - **Linear scalability is observed** on large-scale datasets. Despite the two-stage structure, CoE exhibits empirically near-linear runtime scaling with dataset size. For example, on large-scale datasets like MAG and DGraph, CoE trains even faster than several single-encoder GSL baselines, as detailed **in the link above**. This demonstrates that the proposed architecture does not incur significant overhead compared to standard GNN-based models. **Suggestion 1: Limitation of proposed method** CoE is currently designed for heterogeneous multiplex networks, and the formulation assumes static input structures. Extending the framework to accommodate more complex graph settings like dynamic or hierarchical networks remains an interesting direction for future research. Additionally, although we adopt GCN as the base encoder in this work for fairness in comparison, incorporating other architectures such as GAT or graph transformers remains future work. 
**Suggestion 2: "While boosting relies on sequential training where each model builds upon the previous one"->"While boosting relies on sequential training, where each model builds upon the previous one"** We sincerely appreciate the suggestion. We will revise the corresponding phrase in the refined version. **Suggestion 3: Notation confusion** We apologize for the confusion caused by inconsistent notation. As you correctly noted, some symbols such as $\mathcal{L}$ are overloaded in the supplementary, mainly in Section D. We will unify and revise all instances of $\mathcal{L}$ in the final version of the paper. Additionally, we will conduct a thorough pass of the paper to ensure precise and consistent mathematical expression. Your concerns about scalability and efficiency are well-justified and have helped us greatly improve the clarity and practicality of our method. We hope the new large-scale experiments, added analysis, and revisions in the final version will adequately address your concerns. We sincerely hope this response convinces you that CoE is scalable, practical, and broadly applicable, and that our efforts in addressing your suggestions merit a higher overall recommendation. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response to the concerns. I have no other concerns and think this work brings a clear contribution to the related community. So I keep the positive rating.
Summary: This paper presents the CoE framework, a groundbreaking method for extracting knowledge from diverse and multi-layered networks. Its core novelty lies in a hierarchical expert coordination mechanism, where specialized low-level experts focus on capturing unique relational patterns, while high-level experts integrate insights from across these networks. The framework is further refined through the incorporation of a large margin mechanism, which optimizes expert collaboration, enhancing both robustness and generalization capabilities. Theoretical assessments validate the feasibility of the proposed method, and comprehensive experiments conducted on benchmark datasets demonstrate its superiority compared to existing techniques. Claims And Evidence: Yes, they possess a convincing persuasive force. Methods And Evaluation Criteria: Yes, the methodologies and assessment standards put forth should be pertinent and suitable for the problem or application in question. Theoretical Claims: Yes, I have checked the validity of the proofs underpinning the theoretical claims discussed. Experimental Designs Or Analyses: Yes, I have checked the validity of the experimental designs and analyses. Supplementary Material: Yes, I reviewed the supplementary material. Relation To Broader Scientific Literature: In this work, unlike prior Mixture of Experts (MoE) models that rely on gating mechanisms (activating a subset of experts), CoE promotes cooperation instead of competition. The application of MoE to heterogeneous information fusing is new. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1) Novel Expert Coordination Strategy: (a) Unlike prior Mixture of Experts (MoE) models that rely on gating mechanisms (activating a subset of experts), CoE promotes cooperation instead of competition. (b) The introduction of high-level experts allows for cross-network knowledge fusion, enhancing model flexibility. 
2) Strong Theoretical Foundation: (a) The paper provides convexity and Lipschitz continuity proofs, ensuring convergence and stability. (b) The mutual information maximization strategy enhances the fused representation’s effectiveness. (c) The confidence tensor mechanism allows all experts to contribute dynamically, preventing over-reliance on a small subset. 3) State-of-the-Art Performance: CoE outperforms all baselines on multiplex network and multimodal classification tasks. Weaknesses: 1) Interpretability of Expert Decisions: While CoE optimizes expert collaboration, it does not provide an explicit mechanism to interpret the contributions of individual experts. A visualization or an explainability analysis (e.g., SHAP values) would be useful. Other Comments Or Suggestions: 1) A visualization or an explainability analysis (e.g., SHAP values) would be useful. 2) Remove the "..., etc." as "etc." already implies continuation in Line 48. 3) “(possible with different attributes) but a different type of links” should be changed to “(possibly with different attributes) but different types of links”. 4) "The symbol “+" denotes directly add up the networks" can be changed to "The symbol “+” denotes directly adding the networks". Questions For Authors: 1) The paper primarily compares CoE with graph structure learning (GSL) and MoE models. Are there any attention-based approaches (e.g., Transformer-based graph fusion)? 2) How does CoE handle conflicting expert opinions? Does the large-margin mechanism still work? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback. We are especially grateful for the positive recognition of our novel expert coordination strategy, solid theoretical foundation, and strong empirical performance. Below, we provide responses to the specific suggestions and questions. Figures and Tables are summarized in this link: **https://anonymous.4open.science/r/ICML_rebuttal-7D0E** **Weakness & Suggestion 1: Interpretability of Expert Decisions** We appreciate your suggestion regarding the interpretability of expert contributions. In fact, we have emphasized the relative contribution of each expert using confidence scores in Figure 1(a) and 1(b) of the main paper, which demonstrate how different experts participate dynamically across semantic contexts. In addition, to enhance interpretability more explicitly, we further compute SHAP values for each expert across different datasets. For better visualization, we normalize the SHAP values within each dataset and display them in tables for small-scale expert settings. For datasets with a larger number of experts, we provide heatmaps to show expert influence across different classes, enabling a more fine-grained interpretation of their roles. These analyses confirm that our confidence tensor indeed reflects meaningful and diverse expert specializations. We include these SHAP-based visualizations and interpretation discussions **in the link above**. **Suggestions 2, 3 & 4: Writing corrections** We thank the reviewer for pointing out the writing issues and ambiguous expressions, and we apologize for these mistakes. We will carefully revise the identified sentences to improve clarity and precision. Specifically, we will rephrase the description of the “+” and “etc.” symbols for better readability, and correct the noted grammatical issues. These improvements will be incorporated into the revised version to enhance overall presentation quality. 
**Q1: Attention-based approaches** We apologize for not clearly highlighting the attention-based baselines in the original submission. Thank you for raising this point. In fact, among our baselines, NodeFormer (Wu et al., 2022) and HAN (Wang et al., 2019) are attention-based methods. To strengthen our comparison and address this oversight, we additionally include SGFormer [1], a recent Transformer-based graph structure learning method that applies self-attention over both features and learned graph topology. We compare its performance with CoE on all datasets, and present the results **in the link above**. This updated comparison confirms that CoE consistently outperforms attention-based methods across all datasets. [1] Wu et al. "SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations." NeurIPS (2023). **Q2: How does CoE handle conflicting expert opinions?** The confidence tensor $\Theta$ we learn represents the authority of each expert. To explain with a simple example: if expert $i$ believes the sample belongs to class $p$, the distribution vector of their judgment is $\Theta(:,p,i)$. If another expert $j$ believes it belongs to class $q$, the distribution vector is $\Theta(:,q,j)$. The combined opinion of the two experts is $ \Theta(:,p,i) + \Theta(:,q,j)$. The final classification result is then obtained through Eq. (5): $\hat{y}_i=\underset{j = 1...c}{argmax}\ \left(\mathcal{S}\left(\Theta g_i\right)\right)_j$. Each expert provides an opinion, which we consider as a probability distribution vector. By summing the probability distribution vectors, normalizing the result, and performing an argmax operation, we can handle both similar and opposing opinions among the experts. Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!
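The opinion-combination rule described in the answer to Q2 can be written out in a few lines (the tensor values below are made up purely for illustration):

```python
import numpy as np

def combine_hard_votes(theta, votes):
    """theta has shape (C, C, k): column theta[:, p, i] is expert i's
    class-distribution vector when it votes for class p.  Summing the
    columns and taking argmax resolves agreement and disagreement alike
    (softmax is monotone, so this matches the argmax in Eq. (5))."""
    combined = sum(theta[:, p, i] for i, p in enumerate(votes))
    return int(np.argmax(combined))

# Two experts disagree (classes 1 vs 2); the more authoritative
# opinion for class 1 wins in this toy tensor.
theta = np.zeros((3, 3, 2))
theta[:, 1, 0] = [0.1, 0.8, 0.1]  # expert 0 votes class 1, confidently
theta[:, 2, 1] = [0.1, 0.3, 0.6]  # expert 1 votes class 2, less so
assert combine_hard_votes(theta, [1, 2]) == 1
```

With the authorities reversed (a more confident column for class 2), the same rule would side with expert 1 instead, showing how the learned tensor arbitrates conflicts.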
Tensor Decomposition Based Memory-Efficient Incremental Learning
Accept (poster)
Summary: This paper addresses the challenge of memory efficiency in Class-Incremental Learning (CIL), whose goal is continuously learning new classes over time. Replay-based methods, a prominent approach in CIL, suffer from high memory consumption due to the need to store past exemplars. To mitigate this, the authors propose a novel method that leverages Tensor Decomposition (TD) to compress images efficiently, thereby reducing memory footprint while preserving sufficient discriminative information. Furthermore, they introduce a two-stage exemplar selection strategy to enhance the representativeness and diversity of the stored samples. In the first stage, herding is used to sample exemplars most representative of the class characteristics, focusing on central, high-quality samples. The second stage then samples from the remaining, unselected data, prioritizing samples with low reconstruction error after tensor decomposition. This aims to select diverse and noise-robust exemplars. Experimental results on two image datasets demonstrate that the proposed sampling strategy improves the performance of several existing CIL techniques. ## update after rebuttal Thank you for your response. Your rebuttal has addressed the two questions to a reasonable extent, so I will revise my rating. However, having also reviewed the comments from the other reviewers, I believe their concerns are valid as well. Therefore, I do not intend to strongly oppose their opinions. I mistakenly posted my comment in the Official Comment section. My apologies. Claims And Evidence: The selection of central exemplars using herding is reasonable and well-justified. However, while sampling from the remaining data (i.e., those not selected by herding) can be expected to provide some degree of diversity, the extent to which this random sampling strategy ensures diversity is not demonstrated. 
Furthermore, while a low reconstruction error after Tensor Decomposition indicates that the original image features are well-preserved, it does not necessarily guarantee robustness to noise. Methods And Evaluation Criteria: It's unclear whether random sampling adequately ensures diversity, and the evaluation doesn't assess how much diversity contributes to performance gains. Similarly, it's uncertain whether tensor decomposition ensures robustness, and the evaluation doesn't assess this aspect either. Theoretical Claims: The paper primarily discusses the memory efficiency of the proposed method, centered around Equation (3), which makes sense. However, there is a lack of theoretical analysis regarding the diversity ensured by random sampling and the robustness ensured by tensor decomposition. Experimental Designs Or Analyses: The paper demonstrates the effectiveness of the proposed method empirically through evaluation experiments using two different datasets and several key existing CIL techniques, which is good. However, as mentioned in the Methods and Evaluation Criteria section, the evaluation does not assess the extent to which the performance improvements are attributable to diversity, nor does it evaluate the robustness ensured by tensor decomposition. Supplementary Material: The supplementary material has been reviewed. It contains more detailed descriptions of the tensor decomposition process and additional evaluation experiments. Relation To Broader Scientific Literature: By demonstrating the potential of tensor decomposition to enhance CIL performance through robustness, this work offers a novel perspective on the application of tensor decomposition's noise resilience. Essential References Not Discussed: The contribution of random sampling to improved predictive performance in machine learning appears to be a central idea in the proposed method. Discussing and citing relevant prior work on this topic could enhance the persuasiveness of the paper. 
For example, mentioning Random Forests [Breiman '01] would be one possibility. Breiman, L. Random Forests. Machine Learning 45, 5-32 (2001). Other Strengths And Weaknesses: As mentioned above, this paper offers a novel perspective on the application of tensor decomposition's noise resilience. However, the persuasiveness of the proposed method could be enhanced by providing a more detailed discussion or evaluation experiments regarding the extent to which the performance improvements are attributable to diversity and the degree to which robustness is ensured by tensor decomposition. Other Comments Or Suggestions: There are several instances where the writing is potentially confusing and could lead to misinterpretations. For example, in the Exemplar Selection Strategy section, the description of the second stage (line 263) uses the same notation for the samples (e.g., x_1^t) as in the first stage, and it only specifies that the number of selected samples, j, is different from i. This could mistakenly suggest that the samples are not necessarily different from those selected in the first stage. The authors should refine the expression to convey that the samples are distinct from those selected in the first stage. Questions For Authors: 1. Can you demonstrate how the diversity ensured by the second-stage random sampling contributes to performance improvements? Experimentally showing the diversity of the sample distribution within a class might be one approach. 2. Can you theoretically or experimentally demonstrate that actively selecting samples with low reconstruction error after tensor decomposition contributes to performance improvements? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful feedback and constructive suggestions. Below, we address the key concerns raised. ### **1. Why diversity can be ensured by the second-stage sampling and its contribution to performance** The paper proposes a novel exemplar selection strategy that enhances representativeness and diversity. Specifically, in the first stage, we use herding to select a small, equal number of representative original samples per class, prioritizing high-quality exemplars. In the second stage, we similarly select an equal number of sample factors per class, focusing on sample quantity. The diversity of the selected exemplars is primarily ensured by the larger number of stored samples and the uniform sampling strategy. Regarding diversity's contribution to performance, as shown in Fig. 3, when $\epsilon$ remains within a small range, meaning a sufficient number of samples are retained in the second stage, this strategy consistently yields substantial performance improvements. However, as $\epsilon$ increases further (meaning fewer samples are stored and diversity is lower), the model's final performance gradually approaches the original. For MEMO, when $\epsilon$ exceeds 0.6, the performance gain on CIFAR remains below 1%. ### **2. Can you theoretically or experimentally demonstrate that actively selecting samples with low reconstruction error after tensor decomposition contributes to performance improvements?** Yes, we can. Here, we provide results (Average Accuracy) for the sample selection strategy prioritized by the minimum reconstruction error ($M = 2k$, 10-task); note that we only use reconstructed samples. It can be seen that AA gradually improves as the reconstruction error decreases. 
| Method | R = 10 ( rse 0.061) | R = 12 (rse 0.047) | R = 16 (rse 0.034) | R = 20 (rse 0.024) | | ----------- | :-----------------: | :----------------: | :----------------: | :----------------: | | DER w/ours | 71.94 | 72.43 | 72.91 | 73.21 | | MEMO w/ours | 70.80 | 71.88 | 71.91 | 72.20 | ### **3.Clarification of notation in the exemplar selection strategy** We will revise the description of the Exemplar Selection Strategy to distinguish samples selected in each stage.
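For clarity, the two-stage strategy described in this rebuttal can be sketched as follows (a hypothetical numpy illustration, not the paper's implementation; the exact herding variant, function names, and per-class feature input are assumptions):

```python
import numpy as np

def herding_select(features, k):
    """Stage 1 (herding): greedily pick k samples whose running mean
    best approximates the class mean, yielding central exemplars."""
    class_mean = features.mean(axis=0)
    chosen = np.zeros(len(features), dtype=bool)
    running_sum = np.zeros_like(class_mean)
    selected = []
    for step in range(1, k + 1):
        # mean we would obtain if each candidate were added next
        candidate_means = (running_sum + features) / step
        dists = np.linalg.norm(candidate_means - class_mean, axis=1)
        dists[chosen] = np.inf  # never re-select a sample
        idx = int(np.argmin(dists))
        chosen[idx] = True
        selected.append(idx)
        running_sum += features[idx]
    return selected

def second_stage_select(recon_errors, stage_one, m):
    """Stage 2: from samples NOT chosen in stage one, keep the m factor
    sets with the lowest tensor-decomposition reconstruction error."""
    errs = np.asarray(recon_errors, dtype=float).copy()
    errs[stage_one] = np.inf  # exclude stage-one exemplars
    return list(np.argsort(errs)[:m])
```

The two stages return disjoint index sets by construction, which is exactly the distinction the revised notation is meant to convey.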
Summary: This paper presents a new memory-efficient method for CIL. Different from previous papers, tensor decomposition is used to compress original images. Besides, a new exemplar selection strategy is proposed to mitigate the influence of poorly compressed samples. Extensive experiments on different datasets demonstrate this method's robustness. Claims And Evidence: Yes. Methods And Evaluation Criteria: No. Inadequate Experiments: The article lacks ablation experiments for the proposed method. As I understand, there are two main innovations in the article: tensor decomposition and a new selection strategy. Providing ablation experiments similar to these would help prove the effectiveness of both components. Theoretical Claims: Yes. Question: The paper uses herding to select a subset of samples, but my understanding is that the criterion for herding is to minimize the difference between the selected samples and the overall dataset. In incremental learning, however, there are data of different class categories. Is it reasonable to use the herding selection strategy for samples of different classes? Experimental Designs Or Analyses: Yes. Accuracy of the experimental results: 1. What are the specific experimental setups in Tab. 1 and Tab. 3? For methods such as iCaRL and DER, were they kept to their original results with 2k samples, or were there any modifications made to the methods? The reproduced results of iCaRL and DER by PyCIL should not be this low, and there is a significant discrepancy compared to the results reported in the original paper.[1][2][3][4] 2. For memory-efficient methods, there are many other approaches. 
The article should compare and highlight the advantages of the proposed method over other existing methods (this can be done if time permits and is not intended to be a part of the final scoring criteria).[5] [1] DER: Dynamically Expandable Representation for Class Incremental Learning [2] Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning [3] Dynamic Residual Classifier for Class incremental Learning [4] FOSTER: Feature Boosting and Compression for Class-Incremental Learning [5] A MODEL OR 603 EXEMPLARS: TOWARDS MEMORY EFFICIENT CLASS-INCREMENTAL LEARNING Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: No Essential References Not Discussed: No. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: The paper uses herding to select a subset of samples, but my understanding is that the criterion for herding is to minimize the difference between the selected samples and the overall dataset. In incremental learning, however, there are data of different class categories. Is it reasonable to use the herding selection strategy for samples of different classes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful feedback and constructive suggestions. Below, we address the concerns raised. ### **1. Lacks ablation experiments for the proposed method** In our manuscript, we have conducted experiments (Section 4.2, Fig. 3, and Tab. 6) to evaluate the impact of key components, e.g., the decomposition rank $R$ and proportion $\epsilon$. As shown in Fig. 3, when $\epsilon = 0$, we only use reconstructed samples without the two-stage selection strategy. Tab. 6 demonstrates the effect of different compression methods on performance. ### **2. Specific experimental setups in Tab. 1 and Tab. 3** We acknowledge this oversight. For the experimental setups in Tab. 1 and Tab. 3, we set the memory budget to 2k and explained other settings in the section "Protocols." ### **3. Performance discrepancy on iCaRL and DER** For the performance discrepancy in DER: since they chose ResNet-18 as the backbone network for CIFAR (for ResNet-32, their results are close to ours), we used the more lightweight ResNet-32, as most works do; thus, there is a performance gap. For ImageNet, we use the same backbone network and obtain similar performance. For iCaRL, we have double-checked it without any changes. ### **4. Lacks comparison results with memory-efficient methods** In our manuscript, we have provided comparison results with some recent memory-efficient replay methods in Tab. 5 and Tab. 9. All the results demonstrate our method's superiority. ### **5. Explanation of Herding Strategy** For exemplar selection, we clarify that herding is employed independently within each class, ensuring that selected samples best represent the class distribution. While herding traditionally minimizes the difference between selected samples and the overall dataset, its application per class in our method aligns with class-incremental learning settings. We will further clarify this in the manuscript. We deeply appreciate your rigorous review and constructive feedback. 
All suggested revisions will be incorporated to strengthen the manuscript’s clarity, technical depth, and experimental validation. Thank you again for your time and consideration.
Summary: The paper addresses the challenge of catastrophic forgetting in Class-Incremental Learning (CIL), where models struggle to retain previous knowledge when incrementally learning new classes. While replay-based methods mitigate this by storing old exemplars, their high memory consumption limits practicality. Existing memory-efficient approaches using pixel-level compression face trade-offs between compression efficiency and retaining discriminative information. To overcome this, the authors propose a novel method leveraging low-rank tensor decomposition (TD) to exploit natural images' low intrinsic dimensionality and spatial correlations, achieving high compression while preserving critical features. Experiments on classic CIL datasets validate the method's effectiveness. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Yes Supplementary Material: NA Relation To Broader Scientific Literature: Yes Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. The proposed method mainly focuses on efficient memory representation, enabling direct integration into existing CIL methods. 2. Experiments on CIL datasets (CIFAR-100 and ImageNet-100) validate that the proposed approach can improve the performance of previous baselines such as MEMO and DER. Weakness 1. This paper focuses on memory-efficient CIL, but it only conducts experiments under different memory budgets (the method presented in this paper uses both stored real data and tensor components), failing to explicitly report the actual memory costs of compared methods or quantify the additional memory overhead introduced by the proposed approach. 2. There is a lack of comparative experiments and analysis of different methods under fixed memory of varying sizes, which are critical for evaluating the memory efficiency of different methods. 3. 
The method includes an exemplar selection strategy to select high-quality reconstructions during training, while the computational latency of TD for image compression/decompression during exemplar storage and rehearsal is not discussed. 4. The experimental results on ImageNet-1k are missing, which are important to demonstrate effectiveness in the large-scale case. Other Comments Or Suggestions: No Questions For Authors: Questions: 1. Could you provide qualitative examples of TD-reconstructed images under different hyperparameter settings? 2. What does 'CP' mean? It appears in line 135 for the first time without explanation. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback and valuable suggestions. Below, we address the concerns raised. ### **1. Failing to explicitly report the actual memory costs** In our manuscript, we have provided parameter configurations of different datasets in Tab. 7, e.g., for CIFAR, $R = 12$ and $\epsilon = 0.2$; when $M = 2k$, according to Eq. 3 the compression rate is $\eta \approx 0.26$, which means we save 400 original samples and about 6153 sets of sample factors. We will elaborate on this in the revised version. ### **2. Lacks comparative experiments under fixed memory of varying sizes** We have indeed provided comparative experiments under fixed memory of varying sizes (see Tab. 2 and Tab. 4 for two different fixed memories) in our manuscript. We can add more explanations for clarity. ### **3. Computational latency of TD for image compression/decompression** On a single GPU (NVIDIA 3090), decomposing a CIFAR-100 image takes approximately 17 ms, while reconstruction takes approximately 2 ms. Since the decomposition can be done in parallel with the training process, this part of the delay is negligible. The main computational delay is caused by the inclusion of reconstructed samples in the training, which requires about 40% extra computation for DER and MEMO. Furthermore, as we pointed out in our response to reviewer qf47, the extra calculations for incorporating reconstructed samples during training are not necessary for iCaRL and FOSTER. ### **4. Lacks experiments on ImageNet-1k** While ImageNet-1k is a valuable benchmark, our current experiments on high-resolution datasets (e.g., ImageNet-100) and complex scenarios (e.g., 200-class Tiny-ImageNet in Table 8) already demonstrate the scalability and robustness of our method. Specifically: 1. 
**ImageNet-100**: As a standard high-resolution benchmark in CIL literature, our method achieves consistent improvements (e.g., **+6.42%** AA for DER in Table 3), validating its effectiveness under realistic settings. 2. **Tiny-ImageNet**: With 200 classes and 64×64 resolution, this dataset mimics the complexity of large-scale tasks. Our method boosts DER's accuracy by **9.53%** (Table 8), illustrating strong generalization. 3. **Community Practices**: For ImageNet-1k, prior works (e.g., iCaRL, DER) typically adopt a memory budget of 20k and 10-/20-task splits, which aligns with our experimental protocols. While computational constraints limited direct validation on ImageNet-1k, the consistent gains across varying resolutions and class numbers (ImageNet-100 to Tiny-ImageNet) suggest scalability to larger datasets. We acknowledge the value of ImageNet-1k experiments and will provide results as time permits. For now, the results on ImageNet-R (provided in response to qf47) further corroborate our method's adaptability to domain-shifted and large-scale scenarios. ### **5. Other questions for authors** We acknowledge our oversight and clarify it here: CP is the abbreviation of "CANDECOMP/PARAFAC" [1]. We will ensure terms are properly introduced when first mentioned and provide visualizations of reconstructed samples in the revised manuscript. [1] Tensor Decompositions and Applications.
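The memory accounting in point 1 of this rebuttal can be checked with a short calculation (assuming the standard CP storage cost of $R(H+W+C)$ values per image; the paper's Eq. 3 may include extra terms, which would explain the small gap to the quoted ~6153 factor sets):

```python
# CIFAR image dimensions and the CP rank reported in Tab. 7
H, W, C, R = 32, 32, 3, 12
pixels_per_image = H * W * C          # 3072 values for a raw image
factors_per_image = R * (H + W + C)   # 804 values for one set of CP factors
eta = factors_per_image / pixels_per_image
print(f"compression rate eta = {eta:.2f}")  # close to the quoted ~0.26

# Memory budget M = 2k image-equivalents; epsilon = 0.2 kept as raw images
M, eps = 2000, 0.2
raw_images = int(eps * M)             # 400 original samples
factor_sets = (M - raw_images) * pixels_per_image // factors_per_image
print(raw_images, factor_sets)        # 400 raw images, roughly 6.1k factor sets
```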
Summary: This paper applied tensor decomposition on the replay-based continual learning methods. To minimize the influence of the reconstruction error on the training, the reconstructed images with low reconstruction error are selected for storage. The method is validated combined with other replay-based methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No issues are found. Experimental Designs Or Analyses: No issues are found. Supplementary Material: Yes. The supplementary materials are provided with implementation details and more experimental results. Relation To Broader Scientific Literature: Previous methods have increased storage capacity by utilizing low-quality JPEG compression, reconstructing images from partial raw patches, or encoding image information within trained parameters. In contrast, this paper proposes leveraging tensor decomposition to store some samples in tensor form, offering a more efficient approach. Essential References Not Discussed: No issues are found. Other Strengths And Weaknesses: Strengths: This study finds that applying tensor decomposition (TD) to a subset of stored samples and preserving them in tensor form not only enhances storage efficiency but also improves model performance. To mitigate the potential adverse effects of reconstructed samples on the model, this paper incorporates them into the training process, enabling the model to be more robust to reconstruction errors. Furthermore, this paper validates the proposed approach by integrating it with various replay-based methods across different backbone architectures, demonstrating its effectiveness and broad applicability. Weaknesses: In practical applications, setting an appropriate reconstruction error threshold and a decomposed tensor rank can be challenging when dealing with images of varying resolutions. Other Comments Or Suggestions: 1. 
Additional experiments are needed to validate the importance of incorporating reconstructed samples into training. It would be insightful to examine how performance degrades if these samples are excluded from training. 2. The paper lacks an ablation study on the reconstruction error threshold. If the domain gap between tasks is too large, the method may introduce noisier samples, potentially affecting the model's overall performance. Moreover, for datasets with significant variations in image resolution, such as ImageNet-R, setting an appropriate threshold and ranks may be more challenging. 3. It is necessary to validate the method on replay-based methods from the past two years. 4. Providing visualizations of reconstructed samples and comparing them with those from other methods would strengthen the empirical evidence and enhance the credibility of the paper. Questions For Authors: If the effectiveness of the proposed method can be validated on datasets with greater resolution variations and recent replay-based methods, I would reconsider my evaluation. Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful evaluation and constructive feedback. Below, we address each concern raised and outline revisions to strengthen the manuscript. ### **1. Additional experiments are needed to validate the importance of incorporating reconstructed samples into training** In experiments, we observed that pure replay-based methods, such as iCaRL and FOSTER, achieved strong performance even without incorporating reconstructed samples during training. This is primarily because they operate on a fixed network structure and employ strategies like knowledge distillation to mitigate forgetting. However, methods like DER and MEMO adopt a dynamic model structure. Their primary mechanism for combating forgetting is freezing previously trained sub-networks, although they do retain some old samples for replay. Nevertheless, their reliance on sub-network freezing is more significant. To illustrate, when training on the first task, the corresponding sub-network lacks the ability to recognize reconstructed samples unless they are included in the training. When the second task arrives, although some reconstructed samples from previous tasks are preserved and added to training, relying solely on this limited number of old samples does not ensure sufficient generalization, and there may be some contradiction between the outputs of these two networks. Therefore, for methods like this, not incorporating reconstructed samples during training leads to a drastic decrease in performance. Here, we report model performance under both scenarios in the table below (10-task, $M = 2k$, $R = 12$, $\epsilon = 0.4$). For consistency, we choose to include reconstructed samples in training across all cases. | Scenario | iCaRL | FOSTER | MEMO | DER | | - | - | - | -| -| | w Reconstructed Sample | 67.32 | 70.34 | 71.88 | 72.43 | | w/o Reconstructed Sample | 67.67 | 70.09 | 63.19 | 64.37 | ### **2. 
Setting an appropriate reconstruction error threshold and a decomposed tensor rank can be challenging** We have provided the ablation experiments on rank $R$ in Fig. 3 in the manuscript, and the results show that our method always produces positive results when $R \in [\frac{H}{3}, \frac{H}{2}]$. As for the reconstruction error threshold $\tau$, it is primarily introduced to filter out samples with failed decompositions (which rarely occur and typically exhibit reconstruction errors exceeding 0.1). Indeed, for RGB images, our experimental results show that the reconstruction error is minor when $R \in [\frac{H}{3}, \frac{H}{2}]$, so essentially no noisier samples are introduced. For instance, on CIFAR-100, when using a CP rank $R = 12$, the average relative squared error (RSE) is already below 0.05. Increasing $R$ to 16 further reduces the average RSE to 0.034. Under a memory budget of $M = 2k$ and $\epsilon = 0.1$, each class retains 2 raw images and approximately 50 sets of decomposition factors (only 10% of the total data). For a threshold $\tau$ of around 0.07, its effect on the final result is negligible. ### **3. Experimental results on ImageNet-R** Here we provide experimental results on ImageNet-R under 5, 10, and 20-task settings; we set $M = 2k$, $\epsilon = 0.4$, $R = 80$. It can be seen that our approach is also effective in this scenario. | Method | Base 0 Inc10 | Base 0 Inc20 | Base 0 Inc40 | | -| - | -| - | | MEMO | 46.93 | 50.73 | 51.37 | | MEMO w/ours | 53.21 | 54.56 | 55.82 | | DER | 47.77 | 51.95 | 52.61 | | DER w/ours | 55.84 | 56.02 | 57.31 | ### **4. Combining with recent replay-based methods** Here, we provide experimental results (10-task) on CIFAR-100 with our method integrated into **MRFA** [1]; we set $M = 2k$, $\epsilon = 0.1$, $R = 16$, and report **Average Accuracy (AA)** and **Last Accuracy (Last)**. It can be seen that our method also provides positive gains. 
| Method | AA | Last | | ----------- | ----- | ----- | | MRFA | 76.23 | 63.80 | | MRFA w/ours | 79.83 | 69.88 | ### **5. Lacks visualization of reconstructed images** We will provide figures showcasing the original image and its corresponding reconstructed image at different ranks in the revised version. [1] Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning, ICML 2024
Summary: This paper introduces a novel approach to Class-Incremental Learning (CIL) that addresses memory efficiency challenges in replay-based methods. By employing tensor decomposition techniques instead of traditional pixel-level compression, the method exploits the low intrinsic dimensionality and pixel correlations in images to achieve better compression while preserving critical discriminative information. Combined with a hybrid exemplar selection strategy that enhances representativeness and diversity, the approach significantly improves upon baseline methods across multiple datasets of varying resolutions, demonstrating robust generalization capabilities. Claims And Evidence: The claims made in the paper are basically convincing. Methods And Evaluation Criteria: A minor issue: I find the terminology for evaluation metrics in this paper somewhat confusing, as it deviates from standard naming conventions in continual learning literature. Specifically, what this paper refers to as "average incremental accuracy (AIA)" is typically called "average accuracy" in most publications, while what the authors term "average accuracy" generally corresponds to what the field commonly refers to as "final accuracy" or "last accuracy" (the performance after learning all incremental tasks). This inconsistency in metric naming might create confusion for readers familiar with the established terminology in continual learning research. Theoretical Claims: No theoretical claims included in this paper. Experimental Designs Or Analyses: 1. The experimental design is fundamentally sound. However, the paper would benefit from more comprehensive comparisons between the proposed approach and other efficient replay methods in the field. I would particularly encourage the continual learning community to focus their attention on the results presented in Table 5, which offer more meaningful insights than those in Tables 1 and 3. 
This comparative analysis would provide a clearer understanding of the method's relative advantages within the broader context of efficient replay techniques. 2. It is suggested that the authors include some online continual learning settings in the experiments, since memory efficiency may be even more important in that area. Supplementary Material: No. Relation To Broader Scientific Literature: Some literature on online continual learning could be cited, but it is not a necessity. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. This paper proposes a remarkably straightforward yet highly intuitive method for memory-efficient continual learning. The manuscript is well-structured with clear organization, making it easy to comprehend. Additionally, the proposed methodology is presented in a manner that facilitates straightforward implementation or replication. 2. As the authors note, this represents the first application of tensor decomposition techniques in this specific sub-area of continual learning (which aligns with my understanding of the current literature). **Weaknesses** 1. See the previous parts. 2. The primary limitation of this paper is the insufficient integration of tensor decomposition techniques with the unique challenges of continual learning. Furthermore, as mentioned earlier, the comparative analysis against other memory-efficient replay methods is somewhat lacking. 3. The paper would benefit from additional visualizations of reconstructed images, particularly demonstrating the effects of varying rank and $\epsilon$ parameters on reconstruction quality. Other Comments Or Suggestions: No other specific suggestions. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and constructive suggestions. Below, we provide a point-by-point response to the comments raised. ### **1. Inconsistent terminology for evaluation metrics** In the revised manuscript, we will align our metric naming with the community standards: **"Average Incremental Accuracy (AIA)"** will be replaced with **"Average Accuracy"**, and **"Average Accuracy"** will be renamed to **"Last Accuracy"**. ### **2. More comprehensive comparisons and analysis for efficient replay methods** We thank the reviewer for highlighting the importance of comparative analysis. The initial manuscript already provides comparison results in Tab. 5 and Tab. 9, and all the results demonstrate our method's superiority. Here, we offer more comparison results (10-task) and analysis. | Method | M=2k | M=1k | | :- | -| -| | MRDC | 76.02 | 72.78 | | CIM | 77.94 | 73.91 | | Ours | 79.75 | 74.95 | As our introduction and related work noted, pixel-level compression methods (e.g., MRDC [1], CIM [2]) directly compress images in the high-dimensional pixel space, often neglecting the low intrinsic dimensionality and local correlations inherent to natural images. This oversight leads to a significant loss of discriminative information. In contrast, our Tensor Decomposition (TD)-based method explicitly leverages these properties by factorizing images into low-rank components. This not only achieves lower storage complexity (e.g., a compression ratio of **0.34** vs. CIM's **0.56** in Tab. 6) but also preserves more discriminative information through high-fidelity reconstruction. As shown in Tab. 6, training on TD-compressed data achieves **69.9% accuracy** on CIFAR-100, significantly closer to the upper bound (**72.3%**) than pixel-level methods like downsampling (**44.1%**) or CIM (**66.9%**). This demonstrates TD's ability to retain essential information while drastically reducing memory costs. 
The superiority of our method stems from two key factors: 1. **Efficient compression**: TD captures multi-dimensional correlations (spatial, channel-wise) in images, preserving more discriminative information while maintaining a strong compression ratio, avoiding the "brittle" compression of pixel-level methods. 2. **Adaptability**: Unlike methods reliant on fixed heuristics (e.g., CIM's center cropping), TD flexibly adapts to varying resolutions and dataset complexities, as evidenced by consistent gains across CIFAR-100, Tiny-ImageNet, and ImageNet-100 (Tab. 1, 3, 8). These revisions will solidify our method's advantages over existing efficient replay techniques, and we hope this can address the reviewer's concern for deeper comparative insights. ### **3. Combining with online continual learning** This is an excellent suggestion. While our current focus is on Class-Incremental Learning, we recognize the importance of online settings. In the revised manuscript, we will add a discussion about the applicability of our method to online continual learning and outline plans for future work in this direction. ### **4. Effect of varying ranks and $\epsilon$ on reconstruction quality** Regarding the reconstruction quality, which depends on the CP rank $R$, we provide some results on CIFAR-100 (evaluated by mean **r**elative **s**quared **e**rror). In the table, "comp" means compression ratio, and "accuracy" represents the classification accuracy after finishing offline training on compressed data. Regarding the effect of $\epsilon$, Fig. 3 in the initial manuscript shows its influence on final performance. | rank | 10 | 12 | 14 | 16 | 18 | 20 | | -------- | ------ | ------ | ------ | ------ | ------ | ------ | | rse | 0.0613 | 0.0472 | 0.0402 | 0.0343 | 0.0276 | 0.0243 | | comp | 0.22 | 0.26 | 0.31 | 0.35 | 0.39 | 0.44 | | accuracy | 66.11 | 67.84 | 69.26 | 69.99 | 70.37 | 70.81 | ### **5. 
Lacks visualization of reconstructed images** We will provide figures showcasing the original images and their corresponding reconstructed images at different ranks in the next version. [1] Memory replay with data compression for continual learning. [2] Class-incremental exemplar compression for class-incremental learning.
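The rank-vs-RSE trend in the table of point 4 can be reproduced qualitatively with a small stand-in (truncated SVD on a random matrix instead of CP on an image tensor, and an assumed RSE definition of $\|X-\hat{X}\|_F / \|X\|_F$; the paper may use a squared variant, but the trend is the same):

```python
import numpy as np

def rse(x, x_hat):
    # assumed relative-squared-error definition; only the trend matters here
    return float(np.linalg.norm(x - x_hat) / np.linalg.norm(x))

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
# truncated low-rank reconstructions at increasing rank
errors = {r: rse(X, (U[:, :r] * s[:r]) @ Vt[:r]) for r in (4, 8, 16)}
# the reconstruction error shrinks as the rank grows, as in the table above
```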
Learning to Match Unpaired Data with Minimum Entropy Coupling
Accept (poster)
Summary: This paper proposes a novel method to solve the continuous Minimum Entropy Coupling (MEC) problem. Specifically, it incorporates generative diffusion models to learn the joint distribution with the minimum joint entropy, while enforcing a relaxed version of the marginal constraints. Claims And Evidence: In general, the claims made in the paper are supported by clear and convincing evidence. However, I believe the experiments are not extensive enough, and require additional datasets/applications. Methods And Evaluation Criteria: yes Theoretical Claims: N/A Experimental Designs Or Analyses: I've checked the soundness of the experimental designs, and I don't see any issues. Supplementary Material: Yes, I reviewed all the supplementary material (both sections). Relation To Broader Scientific Literature: The key contribution of the paper is related to the broader scientific literature through interesting ideas (e.g., combining diffusion models with MEC). Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The paper considers an important topic in multimodal learning – learning from naturally unpaired data. 2. The paper is generally clear and easy to follow. Weaknesses: See questions. Other Comments Or Suggestions: I believe there's a typo in line 190 (should be Y|X) Questions For Authors: 1. The evaluation of each of the two applications is done using only a single dataset. I find it not convincing enough. Could you add one dataset to each application? 2. The applications shown in the paper already exist, and the proposed method only marginally improves existing results (e.g., InfoOT, SDDM). As DDMEC is general, could you present another application of it? Please support this with empirical evaluation. 3. I'm concerned about the data scalability of DDMEC. Could you elaborate on it? How does it compare to other translation methods such as CycleGAN and SDDM? And to InfoOT? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful feedback and suggestions. Below, we provide responses and additional experiments to address these concerns.

> 1. Could you add one dataset to each application?

- **Additional image translation experiment**: We consider the CelebA-HQ dataset (Karras et al., 2017), which features high-quality human face images divided into two domains: Male and Female. The male and female domains include approximately 10,000 and 17,000 training samples, respectively. We compare the performance of DDMEC against several baseline methods on the Male-to-Female translation task, following the same evaluation protocol as in (Zhao et al., 2022).

| **Model** | **FID ↓** | **SSIM ↑** |
|---|---|---|
| **CelebA-HQ (Male → Female)** | | |
| SDEdit* | 49.43 ± 0.47 | 0.572 ± 0.000 |
| ILVR* | 46.12 ± 0.33 | 0.510 ± 0.001 |
| EGSDE* | 41.93 ± 0.11 | 0.574 ± 0.000 |
| SDDM* | 44.37 ± 0.23 | 0.526 ± 0.001 |
| **50 sampling steps** | | |
| DDMEC (guidance=2.5) | **40.73 ± 0.61** | **0.593 ± 0.003** |
| **100 sampling steps** | | |
| DDMEC (guidance=2.5) | **38.93 ± 0.37** | **0.588 ± 0.002** |

In our experiment, we follow the same training procedure as for the AFHQ dataset in the main paper. Our new results demonstrate that DDMEC outperforms competitors on both FID and SSIM even with only 50 sampling steps. Specifically, at 50 steps the FID improves by approximately 1 point and the SSIM by 0.02 points, with an even greater improvement (a 3-point FID reduction) when using 100 sampling steps; it is well known in the generative modeling literature that improvements of this magnitude are significant. With the CelebA-HQ dataset, DDMEC benefits from a larger training set (in the AFHQ animal dataset, each modality has approximately 3,000 images) and achieves state-of-the-art performance on image translation.
Qualitative results supporting the quantitative evaluations can be viewed via (https://anonymous.4open.science/api/repo/icml2025_ddmec-7798/file/celeba/qualitative.png?v=7d6cc987).

- **Additional single-cell data experiment:** We performed new experiments on a more complex single-cell data alignment task. Since the SNARE dataset used in the submitted paper is relatively small and simple, our goal is to further substantiate the superiority of DDMEC using the peripheral blood mononuclear cells (PBMC) dataset. This dataset consists of healthy human PBMCs with simultaneous profiling of gene expression (RNA) and chromatin accessibility (ATAC). PBMC contains 11,910 cells, spanning 7 major cell classes that are further divided into 20 cell subclasses. We use the data processing and evaluation pipeline from (Singh, 2023), which results in a 50-dimensional representation for each modality. The results, available here (https://anonymous.4open.science/r/icml2025_ddmec-7798/PBMC/table.png), demonstrate that DDMEC also performs extremely well in a high-dimensional scenario, whereas OT-based methods fail completely. Although OT-based methods are relatively lightweight, they do not scale, motivating the need for more advanced methodologies such as DDMEC. Our method outperforms the best competitor, scTopoGAN, in terms of both cell-type and subcell matching rates.

> 2. The applications shown in the paper are existent, and the proposed method only marginally improves existing results (e.g., InfoOT, SDDM). As DDMEC is general, could you present another application of it?

The experiments (presented in Q1) include results on the extra CelebA-HQ dataset and an additional PBMC single-cell dataset, both of which are more elaborate and sophisticated. The results demonstrate that DDMEC outperforms and improves upon the state of the art. The reviewer’s suggestion about considering additional applications is both valid and insightful.
As suggested by Reviewer Acbm, it is possible to consider image-text pairs or multilingual text alignment. However, due to rebuttal time constraints, we leave this to the camera-ready version.

> 3. Data scalability of DDMEC?

We discuss the scalability of our method against the baselines in our answer to question 3 of Reviewer zrWS.

> Typo in line 190

We apologize; indeed it should be $Y|X$.

- Singh et al., scTopoGAN: unsupervised manifold alignment of single-cell data. Bioinformatics Advances, 2023.

--- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and the additional experimental results. I appreciate the authors' effort in addressing my concerns. Based on the revisions, I am increasing my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you very much for your valuable suggestions. We’re glad that the additional experiments addressed your concerns and that you updated your evaluation score accordingly. Thanks again, The Authors
Summary: This paper proposes minimum entropy coupling (MEC) to align unpaired multimodal data. MEC seeks a joint distribution with the desired marginals that is optimal in the sense of minimum entropy, in comparison to optimal transport approaches which minimize an integrated cost. This sidesteps the difficulty of specifying a cross-modal cost function which in previous work was either achieved using auxiliary labels or with Gromov-Wasserstein OT. To solve MEC over continuous spaces, the joint is factorized into known marginals and unknown conditionals, for which minimum entropy becomes maximum conditional likelihood. This is done for both possible factorizations and the conditional models are trained as denoising diffusion. Experiments are performed on single-cell and image translation datasets. Claims And Evidence: Yes, I think the claims made in the paper are supported by evidence. Methods And Evaluation Criteria: The MEC method makes a lot of sense as an alternative to OT for the problem of multimodal alignment, which solves the problem of the marginals being in different spaces trivially. I can't really comment on whether the diffusion approach makes sense as I am unfamiliar with MEC, but it seems like a reasonable approach. The evaluation leaves much to be desired, unfortunately. The single-cell dataset from the SCOT paper appears to be a poor choice for demonstrating the strengths of the MEC method, as SCOT already performs so well in this case. Another single-cell dataset with ground-truth pairings available is the CITE-seq dataset from the NeurIPS benchmark from 2022 that many other works in this area also evaluate on. The image translation dataset makes little sense to include as it is not really multimodal, and the authors do little to support why MEC would be preferable to existing methods such as conditional diffusion or CycleGAN. 
Note the authors write (l373) > By design, our method does not require comparable domains and does not rely on a specific image similarity measure. Which I agree is a strength but is not demonstrated through the experiment itself. As a suggestion, it would be much more interesting to look at image--caption pairs (which are abundant), artificially split them, and then evaluate on this instead. Theoretical Claims: The paper makes few theoretical claims. There is a claim on l190 under Equation 6 about the approximate equivalence of gradients that has no justification, and it is not clear to me that it would be true (these are conditional distributions over completely different spaces). Also, is there a typo here? $-\log p_{Y \mid Y}$ should maybe be $-\log P_{Y \mid X}$, and is the gradient on the RHS supposed to be w.r.t. $\theta$ or $\phi$?) Experimental Designs Or Analyses: The actual design and analyses seem fine but I find them insufficient, see "Methods And Evaluation Criteria". Supplementary Material: I skimmed the short supplementary about architecture and hyperparameters. Relation To Broader Scientific Literature: This is an important problem that has been especially significant for single-cell biology, where measurement processes can usually only be taken once per cell, but different measurements can nonetheless capture additional information. The authors do a good job of reviewing the literature here. Essential References Not Discussed: The authors do a good job at reviewing the relevant literature. There are a few methods that bring both modalities into a shared latent space and then do OT alignment on that space, e.g., MatchCLOT (Gossi et al., 2023) or propensity score alignment (Xi et al., 2024), but this usually requires some sort of supervision (ground truth pairs, or shared labels) to first learn the latent space so it may not be as relevant here. 
One simple baseline (though I don't expect this to do well unsupervised) might be to just do separate dimension reduction on both modalities and align them with OT. Gossi et al., Matching single cells across modalities with contrastive learning and optimal transport, Briefings in Bioinformatics, 2023. Xi et al., Propensity score alignment of unpaired multimodal data, NeurIPS 2024. Other Strengths And Weaknesses: ### Strengths - The paper does a great job reviewing the literature, ranging from the problem, applications, and solutions, as well as effectively introducing the MEC problem. The writing was clear and approachable. ### Weaknesses - I actually think the paper undersells the relevance of MEC for this problem, since it completely avoids the problem of different modalities. In Section 4.1 the authors briefly mention >All alternative methods we consider require geometric distances or similarity measures, which is a pain point that our method DDMEC lifts completely. In my opinion this should absolutely be a central selling point of the method. - The authors do not support the usage of MEC for alignment theoretically or intuitively. Why should we expect minimum entropy to be a good objective that aligns unpaired data? What are your assumptions about how these data became unpaired in the first place? - There is no theoretical support for DDMEC itself: the maximum likelihood part seems fine to me but I am not sure about the alternating optimization. Other Comments Or Suggestions: ### Typos/unclear points - See also "Theoretical Claims". - Eqn 3 has $E_{y \sim p_Y}[\mathbb{H}(p_{X\mid Y})]$. Is this a typo? I think it should probably be $\mathbb{H}(P_{x,y}) = E_{x,y}[-\log p_{X \mid Y}^\theta - \log p_y]$, the latter term is a constant for Eqn 4. - I am trying to wrap my head around Eqn 6. How does this find a MEC?
Doesn't $E_{x, y \sim P_{X,Y}^\theta}( -\log p_{Y\mid X}^\phi(y \mid x))$ control the __relative__ entropy (i.e., KL divergence) instead of the actual entropy? I can kind of understand how this enforces $P_{X,Y}^\theta = P_{X,Y}^\phi$, since $$ \mathbb{KL}(P_{X,Y}^\theta \mid P_{X,Y}^\phi) = E_{x, y \sim P_{X,Y}^\theta}( -\log p_{Y\mid X}^\phi(y \mid x) - \log p_X^\phi(x)) = E_{x, y \sim P_{X,Y}^\theta}( -\log p_{Y\mid X}^\phi(y \mid x)) + \mathbb{KL}(P_{X}^\theta \mid P_{X}^\phi) $$ but I don't understand how it gives a MEC. - In Eqn. 11 you compute the marginal $p_X^\theta$ by $p_{X \mid Y}(x \mid y = \emptyset)$. However, there is a true marginal corresponding to your model, $p_X^\theta(x) = \int p^\theta_{X \mid Y}(x \mid y)\, p_Y(y)\, dy$. Is the approach in Eqn. 11 a reasonable approximation to the marginal? Questions For Authors: I really like the idea of using MEC to solve this problem (see weakness 1 above). However, the paper falls short on two counts. I would be happy to argue for acceptance (at least, weak accept) if satisfactory improvements/answers to both of these were made. ### (1) Experiments See "Methods And Evaluation Criteria" ### (2) Technical Details See "Other Comments Or Suggestions". Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing the relevance of the MEC framework in coupling unpaired data and for the insightful feedback, which we address next.

> Another single-cell dataset.

We performed new experiments using the peripheral blood mononuclear cells (PBMC) dataset, as the CITE-seq dataset is used in a semi-supervised setting: while we can ignore label information with DDMEC, we lack the time to run competitors. The PBMC dataset is large and high-dimensional, consisting of simultaneous profiling of gene expression (RNA) and chromatin accessibility (ATAC) of healthy patients (refer to question 1 of Reviewer 2cNi). Our new results indicate that OT fails, likely due to issues related to data dimensionality, while DDMEC achieves the best performance in all cases.

> Image translation is not really multimodal. Use image-caption pairs.

Both modalities are images, though the unpaired setting is a significant challenge. CycleGAN lags behind diffusion models. DDMEC outperforms diffusion-based alternatives without the need to define a similarity function as in EGSDE and SDDM (see the additional CelebA experiments in Reviewer 2cNi, question 1). While interesting, due to time limits, we defer the suggested text/image case to the camera-ready version.

> The claim on l190 has no justification.

We apologize; $- \log p_{Y|Y}$ is a typo and should be $- \log p_{Y|X}$. The l.h.s. term of l190 reads $\nabla_\theta \int p_Y(y) p^\theta_{X|Y}(x|y) \log p^\theta_{X|Y}(x|y)\,dx\,dy$.
Moving the $\nabla_\theta$ inside, we obtain
$$
\int p_Y(y)\, \nabla_\theta\left(p^\theta_{X|Y}(x|y)\right) \log p^\theta_{X|Y}(x|y)\, dx\, dy + \int p_Y(y)\, p^\theta_{X|Y}(x|y)\, \nabla_\theta\left(\log p^\theta_{X|Y}(x|y)\right) dx\, dy
$$
The second term simplifies to zero:
$$
\int p_Y(y)\, p^\theta_{X|Y}(x|y)\, \nabla_\theta\left(\log p^\theta_{X|Y}(x|y)\right) dx\, dy = \int p_Y(y)\, \nabla_\theta\left(p^\theta_{X|Y}(x|y)\right) dx\, dy = \nabla_\theta\int p_Y(y)\, p^\theta_{X|Y}(x|y)\, dx\, dy = \nabla_\theta 1 = 0
$$
Assuming $p^\theta_{X,Y} = p^\phi_{X,Y}$ and $p^\theta_X = p_X$, $p^\phi_Y = p_Y$, the first term rewrites as
$$
\int p_Y(y)\, \nabla_\theta\left(p^\theta_{X|Y}(x|y)\right) \log \frac{p^\phi_{Y|X}(y|x)\, p_X(x)}{p_Y(y)}\, dx\, dy = \int p_Y(y)\, \nabla_\theta\left(p^\theta_{X|Y}(x|y)\right) \left(\log p^\phi_{Y|X}(y|x) + \log p_X(x) - \log p_Y(y)\right) dx\, dy =
$$
$$
= \nabla_\theta\int p_Y(y)\, p^\theta_{X|Y}(x|y)\, \log p^\phi_{Y|X}(y|x)\, dx\, dy,
$$
which is the r.h.s. of l190. Indeed, $\int p_Y(y)\, \nabla_\theta\left(p^\theta_{X|Y}(x|y)\right) \log p_X(x)\, dx\, dy = \int \nabla_\theta\left(p^\theta_{X}(x)\right) \log p_X(x)\, dx = \int \nabla_\theta\left(p_{X}(x)\right) \log p_X(x)\, dx = 0$, and similarly, $-\int p_Y(y)\, \nabla_\theta\left(p^\theta_{X|Y}(x|y)\right) \log p_Y(y)\, dx\, dy = 0$.

> Clarify Eqn 3 and Eqn 4.

To clarify, we recognize that
$$
H(p^\theta_{X,Y}) = -\int p^\theta_{X,Y}(x,y) \log p^\theta_{X,Y}(x,y)\, dx\, dy = -\int p_Y(y)\, p^\theta_{X|Y}(x|y) \left( \log p^\theta_{X|Y}(x|y) + \log p_Y(y) \right) dx\, dy =
$$
$$
-\int p_Y(y)\, p^\theta_{X|Y}(x|y)\, \log p^\theta_{X|Y}(x|y)\, dx\, dy + H(p_Y).
$$
The term $H(p_Y)$ does not influence the optimization, being independent of $\theta$, while the first term can be rewritten either as $E_{y\sim p_Y}[H(p_{X\mid Y=y})]$ (as in Eqn 3) or as $- E_{x,y\sim p_{X,Y}}[\log p_{X|Y}(x|y)]$, which is the expression in Eqn 4.

> How does Eqn 6 find a MEC?
To see that Eqn 6 is a proxy for a MEC loss, we start from Equations 4 and 5, which correspond exactly to two (uncoupled) MEC problems. We combine them in a system where we use the assumption from line 190, which allows swapping the log-likelihood terms: from the perspective of a gradient-based optimizer, two losses which induce the same gradients are equivalent. Note that we need to enforce the joint constraint throughout training, as discussed in l241.

> Is the approach in Eqn. 11 a reasonable approximation to the marginal?

We apologize: the term $\epsilon^\theta(x_t, y = \emptyset, t)$ is a typo and should be $\epsilon^\theta(x_t, y, t)$. To enforce the marginal constraint, we keep the conditional model close to the frozen unconditional model by following Fan et al. (2023), which upper-bounds the KL divergence (see their Equation 6). In their work, fine-tuning preserves proximity to the original model, whereas we learn a conditional model while maintaining closeness to the frozen unconditional model. Our Equation 11 reformulates Equation 6 from (Fan et al., 2023) in terms of denoisers.

> Why is MEC a good objective for aligning unpaired data?

The MEC problem is well established and grounded in information theory. By minimizing entropy while satisfying marginal constraints, MEC maximizes mutual information, capturing shared structure between distributions. Unlike optimal transport, which relies on a predefined ground metric and may overlook structural relationships, MEC aligns unpaired data without requiring an explicit similarity measure.

--- Rebuttal Comment 1.1: Comment: I see, the derivation depends on the exact constraints being satisfied, and you do disclose the approximate nature of the expression in the paper. Thank you. This derivation should be included in (an appendix of) the paper. On the image translation experiments, I now realize the authors don't exclusively frame their method as only applicable to the multimodal setting.
OK, this was probably my own bias as someone more interested in the multimodal problem. Image translation makes sense as an evaluation then. The PBMC experiment is nice, but again I think it fails to showcase the most exciting aspect of the method, which is that it handles multimodality so long as separate generative models can be trained on each modality. Since the PBMC dataset is dimension-reduced first, it does not showcase this, but I recognize the time constraints here. After reading the response and further reading of the paper, **I think this work makes a lot of interesting contributions, and so I will raise my score to 4**. However, I feel the presentation has a high risk of being under-appreciated, and I hope the authors can take some of my suggestions into account:

- Currently, the paper is written heavily emphasizing multimodality, which leads to disappointment that the more impressive experiment is on a uni-modal image translation task. My suggestion is to frame MEC as a novel approach to learning unpaired **continuous** data that avoids the distance calculation in both high-dimensional **and** multimodal settings, as long as good generative models can be learned.
- The ability to do conditional sampling should be emphasized more. In multimodal settings, computationally feasible OT approaches can only sample over the empirical distribution of the data, but your method can sample genuinely novel instances (e.g., Figure 1). This seems commonplace in image translation, but I think it is potentially a substantial contribution in biological settings. For example, the fact that your method gets good FOS matching scores despite generating novel data and needing to select a nearest sample is impressive.

--- Reply to Comment 1.1.1: Comment: Dear Reviewer Acbm, thank you very much for your additional feedback and advice, which is very useful and which we will follow in our revised paper. Thanks, the Authors
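The discrete MEC problem that this thread discusses can be made concrete with a small sketch. The snippet below is not the authors' DDMEC method; it is a standard greedy heuristic for coupling two discrete marginals (repeatedly matching the largest remaining masses), shown only to illustrate why a low-entropy coupling captures shared structure that the independent coupling misses. All names are illustrative.

```python
import numpy as np

def greedy_coupling(p, q, tol=1e-12):
    """Greedy heuristic for discrete minimum entropy coupling:
    repeatedly assign min(p_i, q_j) mass to the pair of largest
    remaining marginal entries. Not guaranteed optimal."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    M = np.zeros((len(p), len(q)))
    while p.sum() > tol:
        i, j = p.argmax(), q.argmax()
        m = min(p[i], q[j])
        M[i, j] += m
        p[i] -= m
        q[j] -= m
    return M

def entropy(dist):
    """Shannon entropy in bits of a (possibly 2-D) mass array."""
    d = dist[dist > 1e-12]
    return float(-(d * np.log2(d)).sum())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.6, 0.4])
M = greedy_coupling(p, q)

# The coupling respects both marginals exactly...
assert np.allclose(M.sum(axis=1), p) and np.allclose(M.sum(axis=0), q)
# ...while its joint entropy is well below that of the independent coupling.
assert entropy(M) < entropy(np.outer(p, q))
```

Since $H(X) + H(Y) - H(X, Y) = I(X; Y)$ and the marginals are fixed, lowering the joint entropy directly raises the mutual information, which is the sense in which MEC extracts shared structure from unpaired data.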
Summary: The manuscript presents a novel method for matching unpaired data through Minimum Entropy Coupling (MEC). By extending MEC to continuous distributions and leveraging denoising diffusion probabilistic models (DDPMs), the authors propose a cooperative framework that alternates between two conditional generative models. The method is evaluated on single-cell multi-omics alignment and unpaired image translation. Experimental results show competitive performance compared to existing state-of-the-art approaches. Claims And Evidence: The claim that a cooperative diffusion-based approach can be used to solve the continuous MEC problem is supported by both theoretical derivations and experiments. The authors claim that DDMEC is applicable across diverse domains. Although the experiments on two tasks are promising, these evaluations may not be sufficient to claim full generality. More extensive testing on a broader range of datasets or modalities could better support this claim. Claims that the proposed method outperforms state-of-the-art methods are supported by quantitative results. However, the margins of improvement are relatively small, and no repeated experiments are conducted to measure the stability of the method. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem at hand. The authors' proposed method is well aligned with the challenge of matching unpaired data in multimodal settings. The choice of benchmark datasets and metrics also appears appropriate. Theoretical Claims: The proofs and the theoretical claims appear correct. Experimental Designs Or Analyses: The experimental designs and analyses are fundamentally sound. The authors did not provide sensitivity analyses or a discussion of computational cost. No repeated experiments are conducted to measure the stability of the method. Supplementary Material: No.
Relation To Broader Scientific Literature: This paper extends the Minimum Entropy Coupling (MEC) framework to continuous distributions. By reinterpreting the joint distribution as two conditional generative models, the work is related to generative diffusion models. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper makes contributions by extending the MEC framework from discrete to continuous settings. The authors provide derivations of the optimization objectives, including soft marginal constraints and the joint cooperative scheme. Experimental results on two distinct tasks demonstrate the method’s performance. The manuscript would benefit from a more detailed discussion and analysis of how sensitive the method is to hyperparameter choices, which could strengthen the empirical section. A discussion of computational cost and runtime compared to baseline methods would help assess the practicality of the approach. Repeated experiments could be helpful in measuring the stability of the method. Other Comments Or Suggestions: No. Questions For Authors: 1. Could you provide a more detailed discussion and analysis of the sensitivity of your method to various hyperparameter choices? 2. Could you elaborate on the computational cost and runtime of your approach compared to baseline methods? 3. Have you conducted repeated experiments to measure the stability of your method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the contribution of our method for matching unpaired data and for the insightful feedback, which we address below. As suggested by the other reviewers as well, we performed additional experimental campaigns to challenge our DDMEC method.

> 1 - Could you provide a more detailed discussion and analysis on the sensitivity of your method to various hyperparameter choices?

The hyperparameters on which our method depends are:

* **Classifier guidance weight**: This parameter acts as a temperature scale that determines the influence of conditioning in the diffusion model. Since our reinforcement learning framework for diffusion models requires sampling from the model during training, we use classifier guidance and set it to cfg = 7, following (Fan, 2023). In the plot available here [https://anonymous.4open.science/r/icml2025_ddmec-7798/celeba/ablations.png], we show the results of a new ablation study on the guidance scale in the unpaired image translation case, which is performed at test time. The plot displays the trade-off between FID and SSIM scores as a function of the guidance scale for two different numbers of sampling steps (50 and 100). Intuitively, we observe that the guidance scale controls the trade-off between information transfer between modalities and fidelity to the marginal constraint. A higher guidance scale improves SSIM but worsens FID, while a lower guidance scale favors FID but negatively impacts SSIM. Under the MEC framework, guidance allows us to balance the maximization of mutual information (by lowering the joint entropy) and fidelity to the marginals.
* **Lambda weight**: We find that a low KL divergence weight is essential for ensuring reward convergence, and thus set $\hat{\lambda}_X = \hat{\lambda}_Y = 0.001$ in equation (9) for all experiments.
Indeed, as noted in (Fan, 2023), the $\hat{\lambda}_{(\cdot)}$ parameter helps the diffusion model remain close to the unconditional model and prevents reward overfitting, thus avoiding the degenerate solution where the marginal constraints are ignored. Recall that our method initializes the model with the unconditional model, so we start from a state where the marginal constraints are fully respected and the KL divergence is near zero. Thus, the role of lambda is to weight the KL term so as to keep the conditional model close to the unconditional one while optimizing the joint entropy coupling.

Fan, Y., Watkins, O., Du, Y., Liu, H., Ryu, M., Boutilier, C., ... & Lee, K. DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. NeurIPS 2023.

> 2 - Could you elaborate on the computational cost and runtime of your approach compared to baseline methods?

We discuss the computational costs of our method and the baselines for each application.

- **Image translation**: Given that diffusion models (DMs) outperform GANs for this task, we compare DDMEC against other diffusion-based methods. The inference process of our method behaves similarly to an unconditional model, with additional conditional layers to incorporate guidance; generation requires an iterative process with a guidance mechanism. We find that 50 steps are sufficient to achieve superior performance. This is significantly more efficient than SDEdit and EGSDE, which require 1000 steps, and SDDM and ILVR, which require 100 steps.
- **Single-cell RNA (scRNA) analysis**: For the SNARE dataset, optimal transport (OT) methods are computationally lightweight and achieve results comparable to DDMEC. However, this dataset is limited, with dimensionalities of 19 and 10 for the two modalities and only 1,047 samples. To further assess scalability, we conducted additional experiments on larger and high-dimensional datasets such as PBMC (see Question 1 of Reviewer 2cNi).
The results indicate that OT methods fail entirely on this dataset, whereas alternative methods like DDMEC and scTopoGAN perform significantly better. In general, approaches that solve the OT problem in high dimensions face challenges related to cost-matrix computation and the fundamental difficulty of distance estimation in high-dimensional spaces.

Comparison with scTopoGAN: In terms of computational time, DDMEC is comparable to scTopoGAN. However, scTopoGAN involves an initial topological autoencoding step and a GAN model trained in multiple distinct phases, with multiple runs to select the best model over multiple seeds, making a direct computational comparison challenging.

> 3 - Have you conducted repeated experiments to measure the stability of your method?

Inference-time results are run over several seeds to improve statistical significance. Our new results report confidence intervals in terms of standard deviation.
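The guidance-scale trade-off discussed in this thread can be sketched in a few lines. This is not the authors' implementation; it only illustrates the standard classifier-free-guidance combination of conditional and unconditional denoiser outputs, with random vectors standing in for the two denoisers.

```python
import numpy as np

def guided_eps(eps_uncond, eps_cond, w):
    """Blend denoiser outputs with guidance weight w: w = 0 recovers the
    unconditional model (pure marginal fidelity), larger w pushes samples
    toward the condition (more cross-modal information transfer)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_u = rng.normal(size=4)  # stand-in for the unconditional denoiser output
eps_c = rng.normal(size=4)  # stand-in for the conditional denoiser output

assert np.allclose(guided_eps(eps_u, eps_c, 0.0), eps_u)  # unconditional limit
assert np.allclose(guided_eps(eps_u, eps_c, 1.0), eps_c)  # conditional limit
# w > 1 extrapolates beyond the conditional prediction (stronger conditioning)
```

This makes the FID/SSIM trade-off described above concrete: the weight interpolates (and extrapolates) between a model that respects the marginal and one that follows the condition.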
Rhomboid Tiling for Geometric Graph Deep Learning
Accept (poster)
Summary: This paper proposes Rhomboid Tiling (RT) clustering, a hierarchical clustering method designed for geometric graph deep learning. The method leverages higher-order Voronoi structures to improve graph pooling and demonstrates competitive performance across multiple benchmark datasets. Claims And Evidence: The claims in the submission are generally supported by mathematical proofs, theoretical analysis, and empirical results, but the lack of a computational complexity discussion weakens the justification of its scalability and key design choices. Methods And Evaluation Criteria: The proposed RT clustering method and RTPool model are well aligned with the problem of geometric graph learning, and the evaluation criteria, including benchmark datasets from molecular and bioinformatics domains, are appropriate; however, broader dataset diversity could further validate generalizability. Theoretical Claims: The theoretical proofs, particularly Theorems 3.1, 3.2, and 3.3, appear mathematically sound, but the lack of computational complexity analysis raises concerns about scalability. Experimental Designs Or Analyses: The experimental design, including comparisons across 7 benchmark datasets and ablation studies, is generally sound. Supplementary Material: The supplementary material was not explicitly provided in the reviewed content, but the appendix includes proofs of theoretical claims and experimental details, which were examined for mathematical soundness and experimental validity. Relation To Broader Scientific Literature: The paper builds on prior work in graph pooling (e.g., DiffPool, MinCutPool), computational geometry (Voronoi tessellation, Delaunay complexes), and topological data analysis.
Essential References Not Discussed: The paper thoroughly discusses related works in graph pooling (DiffPool, MinCutPool), computational geometry (Voronoi tessellation, Delaunay complexes), and topological methods (Wit-TopoPool, SIN), but it lacks citations to recent advances in topological data analysis (e.g., persistent homology for graph learning) and efficient geometric clustering methods, which could provide additional context for its contributions. Other Strengths And Weaknesses:

Strengths:
1. The paper provides solid theoretical support for the proposed method.
2. The authors have released the code to ensure reproducibility.
3. Extensive experiments validate the effectiveness of the proposed approach.

Weaknesses:
1. The paper lacks a detailed description of the datasets, making it difficult to determine the data dimensions and raising concerns about the scalability of the proposed method.
2. The comparative analysis does not include recent studies from 2024, making it challenging to demonstrate the effectiveness of the proposed approach against more recent advancements.
3. The paper lacks complexity analysis, both theoretical and experimental.
4. It would be beneficial to include additional visualization experiments to illustrate how RT clustering performs hierarchical clustering.

Other Comments Or Suggestions:
1. Clarity of Notation and Mathematical Definitions – Some mathematical formulations, particularly in Section 3 (RT Clustering Definition), are complex and may benefit from more intuitive explanations or examples. A clearer breakdown of key notations and geometric interpretations would improve readability, especially for non-experts in computational geometry.
2. Baseline Selection and Fair Comparisons – While the paper compares against 19 methods, it is unclear whether hyperparameters for all baselines were tuned fairly or if default settings were used.
A discussion on how baseline models were optimized would ensure a fair evaluation of the proposed method’s advantages. 3. Hyperparameter Sensitivity Analysis – The paper does not explore how sensitive the method is to key hyperparameters, such as the number of pooling layers. A hyperparameter sensitivity study would help assess the robustness of the approach across different datasets. Questions For Authors: Please see weaknesses and comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. Below we address each concern raised.

**1. Dataset Description**

To address the reviewer’s concern, we provide a summary of the datasets used in our classification experiments:

**Table: Description of classification datasets**

| Dataset | BZR | COX2 | MUTAG | PTC_MR | PTC_MM | PTC_FM | PTC_FR |
|---|---|---|---|---|---|---|---|
| No. graphs | 405 | 467 | 188 | 344 | 336 | 349 | 351 |
| Avg. nodes | 35.75 | 41.22 | 17.93 | 25.56 | 24.25 | 25.00 | 24.96 |

We also include additional experiments on regression tasks and social network datasets. Their statistics are summarized below. Please refer to our response to **Reviewer FBWn** for performance results.

**Table: Description of regression and social datasets**

| Dataset | Esol | FreeSolv | Lipo | IMDB-BINARY | IMDB-MULTI |
|---|---|---|---|---|---|
| No. graphs | 1128 | 642 | 4200 | 1000 | 1500 |
| Avg. nodes | 26.00 | 18.00 | 49.00 | 19.77 | 13.00 |

**2. Comparison with Recent Studies**

We thank the reviewer for this suggestion. To address it, we included two pooling methods published in 2024:

- **Hop-Pool** (Zhang et al. (2024), *Multi-hop graph pooling adversarial network for cross-domain remaining useful life prediction*, Reliability Engineering & System Safety, 244, 109950.)
- **Mv-Pool** (Ma et al. (2024), *GraphADT: empowering interpretable predictions of acute dermal toxicity with multi-view graph pooling and structure remapping*, Bioinformatics, 40(7), btae438.)

We followed the default settings from their original papers and conducted a grid search around those values to fairly tune the hyperparameters. The results of these methods compared to ours are summarized below:

**Table: Performance of different models on benchmark datasets (mean ± std).
Best results are in bold.** | Model| BZR|COX2|MUTAG|PTC_MR|PTC_MM|PTC_FM|PTC_FR| |---|---|---|---|---|---|--|---| |Hop-Pool|85.37±4.36|85.11±3.74| **94.74±4.76**|65.71±2.85|73.59±5.27|64.15±4.62|65.71±3.71| |Mv-Pool|78.05±3.38|82.98±5.24|89.64±2.43|68.58±2.61|70.65±4.83|62.86±3.37|65.72±2.14| |**RTPool**|**88.29±0.98**|**89.36±2.33**|**94.74±3.33**|**76.57±1.14**|**82.94±2.20**|**77.72±1.14**|**82.29±2.80**| These results show that RTPool consistently outperforms or matches state-of-the-art methods from 2024 on all benchmark datasets. **3. Computational Complexity** Due to space constraints, please refer to our detailed theoretical and empirical complexity analysis provided in our response to **Reviewer kfVb**. In short, our RT pooling model has an overall complexity of $O(n^2)$ in $\mathbb{R}^3$, and our runtime comparisons confirm the model’s efficiency. **4. Additional Visualization** We appreciate the suggestion. We have added a visualization example using the molecular graph of **Formaldehyde** to illustrate how RTPool performs hierarchical clustering pooling. The visualization is available at the anonymous link: [RTPool_Visual](https://anonymous.4open.science/r/RTpool_rebuttal_images-B017/RTPool_Visualization.png) **5. Clarity of Notations** We have revised our notation and mathematical expressions to improve consistency and clarity, e.g., clarifying the usage of matrices $C_l$ and $C_k$. Additionally, we added an example in the appendix using the molecular graph of Formaldehyde to illustrate the construction of its rhomboid tiling in a step-by-step manner to improve readability. **6. Baseline Comparisons** The baseline results are taken from the ***Wit-TopoPool*** paper, which also serves as a strong recent benchmark. According to the authors, all baselines in that work, including Wit-TopoPool itself, were tuned using grid search over a fixed set of hyperparameter choices, and evaluated under the same cross-validation protocol. 
Our model uses the same setting, and this ensures a fair and consistent comparison. **7. Hyperparameter Sensitivity** We thank the reviewer for pointing this out. We have already included a sensitivity analysis of the key structural hyperparameters $k_2 - k_1$ and the number of pooling layers in our response to **Reviewer z8We**. In addition, we performed sensitivity experiments on the learning rate and the final dropout rate. The results are summarized below. **Table: Sensitivity to learning rate (LR)** |Dataset|LR| Accuracy| |--|--|--| |COX2|0.0002|87.23±1.50| || 0.0005 | 87.66±0.95| ||0.001|**89.36±2.33**| ||0.002|88.93±1.78| |MUTAG|0.0002|89.47±0.00| ||0.0005|91.57±2.88| ||0.001|**94.74±3.33**| ||0.002|93.68±2.10| |PTC_MR|0.0001|73.72 ± 2.39| ||0.0002|**76.57 ± 1.57**| || 0.0005 |72.57 ± 1.40| ||0.001|70.29 ± 1.40| **Table: Sensitivity to final dropout rate** |Dataset|Dropout|Accuracy| |--|--|--| |COX2|0.4|88.51 ± 1.91| ||0.5|**89.36 ± 2.33**| ||0.6|88.93 ± 1.78| |MUTAG| 0.4| 92.63 ± 2.88| || 0.5 | **94.74 ± 3.33** | || 0.6 | 93.69 ± 2.36| | PTC_MR|0.2| 74.86 ± 1.27| ||0.3|**76.57 ± 1.14** | ||0.4|75.43 ± 1.56| These results show that RTPool is relatively robust to variations in learning rate and dropout.
Summary: This paper proposes a geometry-aware graph clustering algorithm for enhancing geometric graph classification performance across diverse datasets. The method captures high-order structural information of geometric graphs through high-order Voronoi tessellation and Delaunay complexes. Building on this foundation, this paper introduces a hierarchical graph pooling operation that constructs a normalized clustering matrix via Rhomboid incidence matrices. This operation aggregates lower-order cluster features into higher-order representations and updates node features using highly expressive GNNs. The underlying graph structure can be flexibly configured as either Delaunay graphs or generated graphs expanded from chemical bonds to incorporate structural priors. Extensive experiments on multiple benchmark datasets demonstrate comparable or competitive results against state-of-the-art methods. Claims And Evidence: The motivation of this paper is clearly articulated. Traditional connectivity-based graph clustering methods lack effective geometric information and fail to adequately capture the inherent spatial geometric features in geometric graphs. Furthermore, topology-driven clustering approaches struggle to identify geometric substructures in dense regions and cannot effectively characterize spatial proximity relationships between atoms or molecules, thereby limiting the model's ability to learn complex geometric patterns. Through rigorous analysis and theoretical proofs, this work confirms that the proposed Rhomboid Tiling-based clustering method effectively addresses these limitations. Methods And Evaluation Criteria: This paper proposes a geometric-aware hierarchical clustering method based on the rhomboid tiling structure and a corresponding graph pooling model. 
The method constructs high-order geometric clusters through spherical partitioning of the spatial domain, where atomic nodes in geometric graphs are hierarchically aggregated based on spatial proximity. Lower-order clusters are formed by small-scale atomic groups, while higher-order clusters emerge by merging geometrically adjacent lower-order clusters. A rhomboid cell depth-based weighting mechanism is introduced to quantify the importance of subclusters. The model incorporates molecular chemical bonds to construct hierarchical graph structures, leverages GNNs to hierarchically extract geometric features across layers, and ultimately obtains global representations through multi-layer geometric pooling. This approach preserves 3D spatial relationships while enabling efficient hierarchical information abstraction. The method's design is well-founded and should enable efficient clustering of data with complex geometric structures. Theoretical Claims: I have reviewed the proofs and reasoning in the main text, which are clear and comprehensible, and the relevant formulas are consistent with the descriptions in the methodology section of the paper. Experimental Designs Or Analyses: The proposed method is validated on datasets including molecular property prediction and protein-protein interactions, with experimental results demonstrating that the proposed approach achieves SOTA performance. Ablation studies further confirm the critical impact of the proposed modules on performance enhancement. The experiments in this work are comprehensive, and the method exhibits promising predictive capabilities even on more challenging data scenarios. Supplementary Material: I carefully reviewed the proof process in the appendix of the paper and conducted a systematic derivation. Relation To Broader Scientific Literature: The efficient clustering algorithm proposed in this paper holds significant implications for adaptive research in point cloud testing. 
The experiments conducted on complex data such as proteins have validated its feasibility, suggesting that it may also be beneficial for the clustering and construction of neighborhood relationships in point clouds, as discussed in [1]. [1] Yang S, Wang Y, Van de Weijer J, et al. Trust your good friends: Source-free domain adaptation by reciprocal neighborhood clustering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(12): 15883-15895. Essential References Not Discussed: I believe the methodologies referenced and the literature discussed in this paper demonstrate relatively comprehensive coverage. Other Strengths And Weaknesses: This paper proposes a highly intriguing graph clustering algorithm. By introducing a novel Rhomboid Tiling clustering method, it effectively handles diverse types of data with complex geometric information, achieving highly accurate classification. I believe this work will provide significant inspiration for research in point cloud feature extraction and unsupervised adaptation. Other Comments Or Suggestions: None. Questions For Authors: How does the incremental construction mechanism of rhomboid tiling mentioned in the paper handle geometric conflicts between newly added clusters and existing clusters? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive recognition of our rhomboid tiling clustering framework and its potential for broader applications. **1. Question about geometric conflicts** Thank you for the insightful question. Our method adopts a **layer-wise hierarchical clustering** strategy. Given a set of clusters corresponding to nodes in the layer of order-$k_1$ Delaunay complex, we apply RT clustering to map these clusters onto a higher-order structure—specifically, the new layer of order-$k_2$ Delaunay complex—where $k_2 > k_1$. During this process, we do not incrementally modify or extend the existing lower-order clusters. Instead, we directly construct an entirely new layer of clusters at order-$k_2$. As you pointed out, a possible concern is whether clusters in the new layer (order-$k_2$) may geometrically conflict—e.g., a point being assigned to multiple clusters, or clusters overlapping in space. This can indeed occur at the point level; a point may belong to multiple higher-order clusters. However, this is not problematic. In particular, if $X_1$ and $X_2$ are point sets forming two distinct clusters from the new layer at order-$k_2$, then even if they share some points (i.e., $X_1 \cap X_2 \neq \emptyset$), their corresponding convex hulls $\mathrm{conv}(X_1)$ and $\mathrm{conv}(X_2)$ only intersect at their **boundaries**. Their interiors remain disjoint. This structural property ensures that the rhomboid tiling construction preserves **geometric consistency**, and does not introduce significant geometric conflicts during hierarchical clustering.
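As a toy one-dimensional illustration of this boundary-only intersection property (a hypothetical example constructed for this response, not taken from the paper): two clusters that share a point have convex hulls, here intervals on the real line, whose closed intersection is a single boundary point, so their interiors are disjoint.

```python
import numpy as np

# Two order-k clusters (point sets on the real line) that share one point.
X1 = np.array([0.0, 1.0, 2.0])
X2 = np.array([2.0, 3.0, 4.0])

# In 1D the convex hull of a point set is simply the interval [min, max].
hull1 = (X1.min(), X1.max())   # [0, 2]
hull2 = (X2.min(), X2.max())   # [2, 4]

# The closed hulls intersect, but only at the shared boundary point 2.0;
# the open interiors (0, 2) and (2, 4) are disjoint.
overlap_lo = max(hull1[0], hull2[0])
overlap_hi = min(hull1[1], hull2[1])
closed_intersection_is_point = (overlap_lo == overlap_hi)
print(closed_intersection_is_point)  # True
```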
Summary: This paper introduces Rhomboid Tiling (RT) clustering, a novel hierarchical clustering method for geometric graph deep learning. The RT clustering approach is based on the rhomboid tiling structure, which extends Voronoi tessellation and Delaunay complex theory to efficiently capture high-order geometric relationships within graph-structured data. Based on RT clustering, the authors propose RTPool, a new graph clustering pooling model tailored for graph classification tasks. Claims And Evidence: While the paper presents some experimental results, some claims require additional clarification and justification: 1. The geometric advantage of RT clustering over existing pooling methods is not extensively analyzed. 2. The computational complexity of RTPool is not explicitly compared against existing graph pooling models, making it unclear whether the method is scalable. Methods And Evaluation Criteria: Yes, I think the choice of graph classification datasets is reasonable, as they involve molecular graphs where geometric properties are crucial. Theoretical Claims: The paper presents several theoretical formulations related to high-order Voronoi tessellation, Delaunay complexes, and Rhomboid Tiling clustering. The main theorems (Theorems 3.1, 3.2, 3.3) are well-structured, but there is no empirical validation that confirms these theoretical claims in real-world datasets. Also, the weighting mechanism for RT clustering (Theorem 3.3) is introduced without much intuition on its impact on pooling quality. Experimental Designs Or Analyses: The experimental setup is well-defined, following the same train-test split strategies as previous works. Supplementary Material: Yes, I have checked the proofs but (to be honest) not fully understand the details. Relation To Broader Scientific Literature: I am not clear about this work's relationship to existing hierarchical clustering techniques, spectral pooling, and adaptive pooling. 
Authors may consider providing a more comprehensive review of related work and explicitly highlight how this method differs from classical pooling techniques. Essential References Not Discussed: N/A Other Strengths And Weaknesses: S1: The paper presents a novel geometric clustering approach based on rhomboid tiling, which has not been extensively explored in the context of graph pooling. S2: The proposed RTPool method consistently outperforms existing state-of-the-art graph pooling approaches across seven benchmark datasets, including molecular and biochemical graphs. W1: For readability, I feel that some mathematical notations and derivations are quite dense, making it challenging for readers unfamiliar with computational geometry. W2: The paper does not clearly position its key contributions within the broader graph learning and pooling literature. While rhomboid tiling is novel, its relationship to existing hierarchical clustering techniques, spectral pooling, and adaptive pooling is not sufficiently discussed. Other Comments Or Suggestions: 1. The choice of underlying graph representations (Delaunay graphs vs. generated graphs) is not fully justified. 2. A deeper analysis of how different choices of k-values impact clustering performance is needed. >updated after reading the rebuttal and responses >All my concerns have been clarified. Therefore, I am happy to increase my score to '4: Accept'. Questions For Authors: see the above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable and constructive feedback. Below we respond to the key concerns raised. **1. Readability** We have revised our notation and mathematical expressions to improve consistency and clarity—e.g., clarifying the usage of matrices $C_l$ and $C_k$. Additionally, we added an example in the appendix using the molecular graph of Formaldehyde to illustrate the construction of its rhomboid tiling in a step-by-step manner. **2. Positioning of RTpool** RTpool is a hierarchical clustering pooling method. Unlike traditional methods such as DiffPool, which learn how to group nodes via a trainable assignment matrix, RTpool performs clustering purely based on the geometric structure of the input graph. This geometry-driven approach offers several advantages: - It naturally incorporates higher-order geometric information from the graph, which contributes to the superior performance of RTpool. - It avoids additional learnable parameters or training overhead, making the training process efficient. - The clustering results depend only on the input graph's geometry and are less affected by noise or quality variations across the training dataset. A known limitation is that RTpool does not support adaptive pooling to a target size. However, we can mitigate this by leveraging the "birthtime" of each cluster, which is naturally generated during the RT clustering process and reflects its importance. By removing clusters with low importance after pooling, we can control the final graph size if needed. **3. Choice of Underlying Graph** Thank you for pointing this out. We clarify our reasoning as follows: - RT clustering already relies on the geometric structure of the input graph. Since Delaunay-connected nodes are often clustered together, reusing Delaunay graphs after pooling adds limited new information. - RT clustering depends solely on geometric positions and does not fully capture edge-level connectivity of the initial graph. 
However, in domains like molecular graphs, edge connections (e.g., chemical bonds) carry crucial information. To compensate, we use the original molecular graph to construct **generated graphs** after pooling, allowing the model to retain important high-order connectivity information of the initial graph. Therefore, we adopt **generated graphs** as the underlying graph after each pooling layer to enhance representation learning. **4. Choice of $k$** In RTpool, the choice of order $k$ is determined by two parameters: the difference $k_2 - k_1$ and the number of pooling layers. - $k_2 - k_1$ can be viewed as the "step size": at each pooling layer, features are aggregated from order $k$ to order $(k + (k_2 - k_1))$. - The number of pooling layers determines how many such "steps" we perform in total. We conducted a sensitivity analysis for both parameters. The results are shown below: **Table: Sensitivity to $k_2 - k_1$** | Dataset | $k_2 - k_1$ | Accuracy | |----------|-------------|------------------| | COX2 | 1 | 89.36 ± 2.33 | | | 2 | **92.76 ± 1.90** | | | 3 | 86.38 ± 1.90 | | MUTAG | 1 | **94.74 ± 3.33** | | | 2 | 89.64 ± 2.36 | | | 3 | 88.42 ± 2.10 | | PTC\_MR | 1 | 76.57 ± 1.14 | | | 2 | **78.86 ± 1.57** | | | 3 | 69.71 ± 1.56 | **Table: Sensitivity to the number of pooling layers** | Dataset | #Pooling Layers | Accuracy | |----------|------------------|------------------| | COX2 | 1 | **89.36 ± 2.33** | | | 2 | 88.50 ± 1.17 | | MUTAG | 1 | **94.74 ± 3.33** | | | 2 | 90.52 ± 2.35 | | PTC\_MR | 1 | **76.57 ± 1.14** | | | 2 | 72.57 ± 2.56 | We observe that RTPool performs robustly when $k_2 - k_1 = 1$ or $2$, and shows degraded performance when $k_2 - k_1 = 3$. This aligns with our theoretical analysis, which suggests that setting $k_2 - k_1 \geq 3$ may result in a loss of node feature information during the pooling process, thus hurting performance. Therefore, we recommend choosing $k_2 - k_1 = 1$ or $2$ in practice. 
While our original model used $k_2 - k_1 = 1$ by default, these experiments suggest that using $k_2 - k_1 = 2$ can yield better results on some datasets. The number of pooling layers functions similarly to the number of layers in an MLP—deeper models can increase capacity, but may also introduce overfitting or unnecessary compression on small datasets. For most of the small-to-medium-sized benchmarks in our experiments, we find that a single pooling layer is sufficient and often preferable in practice. --- Rebuttal Comment 1.1: Comment: The authors have clarified my previous concerns. Thanks for the efforts. I have no further comments and will increase my score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's positive feedback and support.
Summary: The authors propose a new pooling method inspired by the rhomboid tiling structure. They start from "High-Order Voronoi Tessellation," a method to partition space based on points. A Voronoi cell Q indicates that all points belonging to Q are clustered together and separated from other points. All the partitions together form the order-K Voronoi tessellation. The high-order Delaunay complex, defined as the nerve of the Voronoi tessellation, encodes the relationships between these clusters, and the rhomboid tiling can be used to build the relationship between Delaunay complexes of different orders. With these theories as a foundation, the authors propose RT clustering, which essentially uses spheres to partition points. The partitions can be used to define geometric relationships or to drive further clustering. The authors also address two important subproblems: how to choose the step size between orders k, and how to assign weights to different points. Furthermore, they propose RTPool, which applies these methods as a pooling operation. The authors also provide extensive experiments to demonstrate the effectiveness of the proposed methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: 1: The paper is clear to read. The authors clearly state the theory and algorithm of their methods. 2: The idea is novel. Pooling based on clustering is interesting. 3: The experimental results are good. Theoretical Claims: No. Experimental Designs Or Analyses: No issues. Supplementary Material: No. Relation To Broader Scientific Literature: The proposed approach could potentially benefit other scientific domains. Essential References Not Discussed: No. Other Strengths And Weaknesses: Questions and concerns: 1: The authors do not discuss the efficiency of their method. These operations seem to introduce extra overhead to the pooling operation. 
It would also be helpful to provide experimental results showing efficiency compared with other pooling methods. 2: In Section 3.3, the authors introduce $C_l$, the clustering matrix for layer $l$, but previously only $C_k$ was discussed. How are $C_l$ and $C_k$ connected? And how should the value of $k$ be chosen, and what is the maximum value of $k$ for each pooling layer? Please provide some explanation. Other Comments Or Suggestions: Please see the weakness. Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback. Below, we provide detailed responses to each of the points raised. **1. Model Efficiency** To address the reviewer’s concern, we provide both the **theoretical time complexity** of RTPool and **empirical evidence** comparing it with other pooling methods. ***Theoretical Time Complexity*** When the input geometric graph is embedded in $\mathbb{R}^3$, both the construction of the rhomboid tiling and the pooling operation based on this structure have theoretical time complexity $O(n^2)$, where $n$ is the number of input nodes. ***Conclusion 1:*** The total time complexity of computing the rhomboid tiling up to order $K$ in $\mathbb{R}^d$ is: $$O\left(n^{\left\lfloor \frac{d+1}{2} \right\rfloor} K^{\left\lceil \frac{d+3}{2} \right\rceil} \right),$$ as implied by the results in Corbet et al. (2021) and Corbet et al. (2023). In our setting, with dimension $d=3$ and maximum order $K=2$ or $3$, the complexity reduces to $O(n^2)$. ***Conclusion 2:*** The number of nodes in the $k$-th RT layer (order-$k$ Delaunay complex) is $O(k^2(n - k))$ (Lee, 1982), and with small $k$ and sparse input graphs, this ensures the total time complexity of RTPool remains $O(n^2)$. Each pooling layer consists of: - **Step 1:** Matrix multiplication between clustering matrix ($O(n) \times O(n)$) and node features ($O(n) \times O(1)$): $O(n^2)$ - **Step 2:** GIN layer on sparse graph with $O(n)$ nodes: $O(n)$ Thus, total per-layer complexity is $O(n^2)$. ***References*** - Corbet et al. (2023), *Computing the Multicover Bifiltration*, Discrete & Computational Geometry. - Corbet et al. (2021), *Computing the Multicover Bifiltration*, arXiv:2103.07823. - Lee (1982), *On k-nearest neighbor Voronoi diagrams in the plane*, IEEE Trans. Computers. ***Empirical Efficiency*** We also benchmarked RTPool against other pooling methods such as DiffPool, MinCutPool, HaarPool, etc. RTPool demonstrates comparable efficiency. 
**Table 1: Accuracy over 5 runs × 100 epochs** | Model|BZR| COX2 | MUTAG | PTC_MR | PTC_MM | PTC_FM | PTC_FR | |--------|-----|---------|----------------|----------------|----------------|----------------|----------------| | MinCutPool | 76.47±2.32 | 79.86±2.47 | 69.47±2.11 | 66.86±2.91 | 72.94±1.18 | 56.74±5.63 | 62.86±1.81 | | DiffPool | 78.54±0.98 | 77.93±3.18 | 73.68±3.33 | 69.14±2.80 | 67.06±2.20 | 68.57±3.61 | 68.57±1.81 | | HaarPool | 78.05±0.00 | 80.64±4.58 | 68.42±0.38 | 62.29±4.20 | 64.71±3.72 | 61.14±6.66 | 65.67±2.25 | | Wit-TopoPool | 80.98±2.39 | 80.43±1.59 | 85.32±2.58 | 71.84±1.14 | 72.82±4.71 | 68.56±4.84 | **68.57±4.04** | | Hop-Pool | 78.05±1.39 | 80.00±1.70 | **87.56±4.41** | 61.71±2.91 | 70.59±0.78 | 58.29±1.40 | 63.43±1.14 | | Mv-Pool | 75.60±1.98 | 79.68±1.27 | 73.68±10.53 | 64.57±4.64 | 68.24±1.18 | 57.14±0.00 | 65.71±1.77 | | **RTPool** | **84.39±1.19** | **85.96±1.04** | 83.16±5.16 | **72.86±3.65** | **72.94±3.43** | **68.86±8.59** | 67.71±7.75 | **Table 2: Total Runtime (seconds) over 5 runs × 100 epochs** | Model | BZR | COX2 | MUTAG | PTC_MR | PTC_MM | PTC_FM | PTC_FR | |----------------------|---------|---------|---------|--------|--------|--------|--------| | MinCutPool | 2297.74 | 2554.96 | 1999.89 | 2016.22| 2482.84| 2470.79| 2355.13| | DiffPool | 2507.53 | 2813.80 | 2144.75 | 2235.34| 2637.06| 2621.80| 2529.79| | HaarPool | 3238.15 | 2666.93 | 1787.79 | 1412.01| 1407.39| 2075.04| 2022.91| | Wit-TopoPool | 4512.64 | 4904.08 | 7475.70 | 7390.13| 7332.05| 7370.51| 7357.56| | Hop-Pool | 1483.80 | 1420.63 | 1190.00 | 1171.09| 1450.05| 1421.54| 1355.92| | Mv-Pool |11244.74 | 9137.48 | 5828.78 | 8935.74| 8327.48| 8326.58| 8907.54| | **RTPool (constructor)** | 1616.06 | 2356.41 | 189.59 | 341.39 | 321.39 | 331.57 | 360.00 | | **RTPool (model)** | 1117.62 | 1053.07 | 1041.59 | 673.77 | 1268.01| 1708.35| 306.10 | Despite running for only 100 epochs, RTPool achieves strong results with limited training time, even when including the overhead 
of rhomboid tiling construction. **2. Relationship between $C_l$ and $C_k$** $C_k$ denotes the clustering matrix from the order-$k$ Delaunay complex to the order-$(k+1)$ complex. When the pooling layer index $l = k$, we use $C_k$ as the clustering matrix $C_l$. So $C_l$ and $C_k$ refer to the same object with different notations. We thank the reviewer for pointing this out and will revise our notation to avoid confusion. The maximum value of $k$ is set to 2 or 3 in our experiments, which is equal to **#pooling layers+1**. (The choice of #pooling layers is provided in the appendix.) Due to space limits, please refer to our response to **Reviewer z8We** for details on the choice of $k$.
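To make Steps 1 and 2 above concrete, the per-layer operation can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the authors' implementation: the row-normalization of the clustering matrix, the toy cluster assignment, and the simple sum-of-neighbors update used in place of a full GIN layer are all assumptions made for the example.

```python
import numpy as np

def rt_pooling_layer(X, C, A_coarse):
    """One hypothetical RT pooling layer.

    X        : (n, f) node features at order k
    C        : (n, m) clustering matrix from order-k nodes to order-(k+1) clusters
    A_coarse : (m, m) adjacency of the generated graph after pooling
    """
    # Step 1: aggregate node features into cluster features (O(n^2) for dense C).
    # Rows are normalized so a node shared by several clusters splits its mass.
    C_norm = C / np.clip(C.sum(axis=1, keepdims=True), 1e-12, None)
    X_coarse = C_norm.T @ X                  # (m, f)

    # Step 2: one message-passing update on the coarsened sparse graph,
    # a sum-of-neighbors stand-in for the GIN layer (O(m) on sparse graphs).
    X_coarse = X_coarse + A_coarse @ X_coarse
    return X_coarse

# Toy example: 4 nodes pooled into 2 clusters; node 3 belongs to both clusters.
X = np.arange(8, dtype=float).reshape(4, 2)
C = np.array([[1., 0.], [1., 0.], [0., 1.], [1., 1.]])
A_coarse = np.array([[0., 1.], [1., 0.]])
out = rt_pooling_layer(X, C, A_coarse)
print(out.shape)  # (2, 2)
```

In this toy setup the node shared by both clusters contributes half of its feature vector to each, matching the overlap behavior discussed in the geometric-conflicts response above.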
Summary: The paper introduces a novel hierarchical clustering method—Rhomboid Tiling (RT) clustering—for geometric graph deep learning. Unlike traditional clustering-based pooling methods that mainly rely on graph connectivity, RT clustering leverages high-order geometric structures derived from concepts such as alpha shapes, higher-order Voronoi tessellations, and Delaunay complexes. Using these ideas, the authors design RTPool, a graph pooling model that constructs hierarchical representations by clustering vertices using the rhomboid tiling structure. The model is validated on seven benchmark datasets (including chemical and molecular graphs), where it outperforms 19 state-of-the-art competitors. The paper also provides theoretical analysis, including necessary and sufficient conditions for cluster membership and a weighting mechanism based on the frequency of shared high-dimensional rhomboids. Claims And Evidence: The main claims—that RT clustering can capture higher-order geometric information and that RTPool yields superior performance—are backed by both rigorous theoretical derivations and extensive empirical evaluations. The experiments compare RTPool against a broad spectrum of baselines using standard benchmarks and include ablation studies to isolate the contribution of individual components. The theoretical claims are supported by proofs (presented in the supplementary material) that derive conditions under which clusters form and justify the weight definition, though these proofs might benefit from further independent validation. Methods And Evaluation Criteria: The proposed method is well-motivated: it addresses the limitations of connectivity-only approaches in graph pooling by incorporating geometric information. The derivation of RT clustering from higher-order Voronoi and Delaunay structures is conceptually sound and aligns with existing computational geometry ideas. 
In terms of evaluation, the use of established benchmark datasets (e.g., MUTAG, PTC variants) and comparisons with a wide range of baselines (from graph kernel methods to modern GNN pooling techniques) are appropriate for the problem setting. Theoretical Claims: The paper includes proofs for its main theoretical results, such as Theorem 3.1 (characterizing when a vertex belongs to a cluster) and Theorem 3.2 (guiding the choice of clustering parameters k₁ and k₂). These proofs appear detailed and correct on inspection, though their full validity would benefit from additional scrutiny by experts in computational geometry. The weighting mechanism based on counting shared depth-(d+1) rhomboids is also justified through a theorem that connects it to geometric proximity. Experimental Designs Or Analyses: The experimental design is solid. The authors test their model on multiple datasets with different characteristics (chemical vs. molecular graphs) and conduct an ablation study to assess the impact of replacing RTPool with trivial pooling, varying the choice of GNN architectures for feature updates, and selecting different underlying graph constructions. The consistent performance gains reported and the detailed breakdown of results suggest that the experimental analysis is sound and that the improvements are not merely due to incidental factors. Supplementary Material: The supplementary material has been reviewed and includes detailed proofs of the theoretical claims as well as additional experimental settings and ablation study results. These materials provide further clarity on the derivation of the clustering conditions and the implementation details that support the empirical results. Relation To Broader Scientific Literature: The paper builds on several important threads in the literature. 
It extends ideas from traditional graph clustering pooling methods (e.g., DiffPool, MinCutPool) by incorporating geometric insights from alpha shapes, Voronoi tessellations, and Delaunay complexes. This connection to classical computational geometry distinguishes it from methods that rely solely on graph connectivity. The work also relates to recent advances in topological and geometric deep learning, positioning its contributions within both the graph neural network and computational geometry communities. Essential References Not Discussed: The paper cites a wide range of relevant works; the essential references in topological pooling and geometric deep learning have been considered. Other Strengths And Weaknesses: Strengths: • Innovative integration of Rhomboid Tiling clustering into GNN pooling. • Strong theoretical grounding that clarifies the clustering mechanism. • Comprehensive experimental evaluation demonstrating significant performance gains. Weaknesses: • The complexity of the geometric constructions and associated proofs may pose challenges for reproducibility and practical implementation. • Sensitivity to hyperparameter choices (e.g., the difference between k₂ and k₁) might require careful tuning in practice. • The approach is tailored to geometric graphs; it is less clear how well it generalizes to non-geometric settings. Other Comments Or Suggestions: It would be useful for the authors to provide further insights into the computational overhead of the RT clustering process and to discuss potential strategies for scaling to larger graphs. Additionally, clarifying the limits of parameter sensitivity could help practitioners better understand when and how to deploy the method. Questions For Authors: Could you elaborate on the sensitivity of RTPool to the choice of clustering parameters (k₁ and k₂)? In particular, how robust is the performance when these parameters are varied within the recommended range? 
Have you considered applying RT clustering to graph tasks beyond classification, such as regression or link prediction? If so, what challenges might arise in those contexts? Can you provide more details on the computational complexity of the RT clustering procedure, especially in high-dimensional spaces, and how it scales with graph size? In cases where the underlying graph is not strictly geometric (e.g., social networks), do you foresee adaptations of RT clustering that could still capture meaningful structure? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive comments. Below, we provide detailed responses to each of the points raised. **1. Hyperparameter sensitivity** To address this concern, we have conducted a detailed sensitivity analysis by varying the value of $k_2-k_1$. The results are summarized in **Table 1** below: **Table: Sensitivity to $k_2-k_1$.** |Dataset|$k_2-k_1$| Accuracy | |------|---|------| | COX2 | 1 | 89.36 ± 2.33 | || 2 | **92.76 ± 1.90** | || 3 | 86.38 ± 1.90 | | MUTAG | 1 | **94.74 ± 3.33** | | | 2 | 89.64 ± 2.36 | | | 3 | 88.42 ± 2.10 | | PTC\_MR | 1 | 76.57 ± 1.14 | | | 2 | **78.86 ± 1.57** | | | 3 | 69.71 ± 1.56 | From the results above, we observe that RTPool performs robustly when $k_2-k_1=1$ or $2$, and shows degraded performance when $k_2-k_1=3$. This aligns with our theoretical analysis, which suggests that setting $k_2-k_1\geq 3$ leads to a loss of node feature information during the pooling process, thus hurting performance. Therefore, we recommend choosing $k_2-k_1=1$ or $2$ in practice. While our original model used $k_2-k_1=1$ by default, these experiments suggest that using $k_2-k_1=2$ can yield better results on some datasets. **2. RTPool for Graph Regression Tasks** We believe that our model is not only suitable for graph classification tasks, but also for **graph regression**. This is because the RTPool process incorporates **geometric information** into the pooling operation, ensuring that the final graph-level representation captures essential geometric structures. As a result, this learned representation can benefit a variety of downstream tasks—regardless of whether the objective is classification or regression. To support this claim, we have conducted additional experiments on regression datasets. 
The results, in terms of **RMSE**, are reported in the following table:

**Table: RMSE comparison on graph regression datasets**

| Model | Esol | FreeSolv | Lipo |
|------|---------|---------|--------|
| MinCutPool | 2.1913 ± 0.0374 | 4.0111 ± 0.0170 | 1.3481 ± 0.0224 |
| StructPool | 2.1749 ± 0.0411 | 4.0077 ± 0.0150 | 1.3422 ± 0.1264 |
| DiffPool | 3.7699 ± 0.2035 | 5.2877 ± 0.2049 | 2.7431 ± 0.0753 |
| HaarPool | 2.1035 ± 0.0340 | 3.8892 ± 0.0098 | 1.3361 ± 0.0627 |
| Wit-TopoPool | **1.8783 ± 0.1628** | 4.2159 ± 0.0816 | 1.0916 ± 0.5007 |
| Hop-Pool | 2.4831 ± 0.0760 | 4.0030 ± 0.0940 | 1.3725 ± 0.0738 |
| Mv-Pool | 2.5691 ± 0.0484 | 4.0627 ± 0.1048 | 1.3746 ± 0.0682 |
| **RTPool** | 2.0195 ± 0.5318 | **3.6666 ± 0.2112** | **1.0789 ± 0.0496** |

As shown in the table, **RTPool achieves the best or near-best performance across all three datasets**, demonstrating its effectiveness in graph regression tasks compared to other pooling baselines.

**3. Computational Complexity of the RT Clustering**

Theoretically, the construction of the Rhomboid Tiling structure has a complexity bound of $O\left(n^{\left\lfloor \frac{d+1}{2} \right\rfloor}\right)$, and the associated pooling operation in RTPool has a complexity bound of $O(n^2)$. Here $n$ is the number of nodes, and $d$ is the dimension of the space in which the graph is embedded. We have also conducted experiments to empirically assess the efficiency of RTPool compared to other pooling baselines. Due to space constraints, we kindly refer the reviewer to our detailed response to **Reviewer kfVb**, where we provide both the **theoretical proof** and **empirical runtime results**.

**4. RTPool for Social Network Graphs**

To adapt RTPool to non-geometric graphs such as social networks, one key step is to find an appropriate way to embed the graph nodes into $\mathbb{R}^3$.
A feasible approach is to compute the **graph Laplacian** and use the combination of eigenvectors corresponding to the **three smallest non-zero eigenvalues** as coordinates for each node. Based on this idea, we applied RTPool to social network datasets which contain only graph connectivity. The results are shown below:

**Table: Accuracy (\%) on social network datasets (mean ± std).**

| Model | IMDB-BINARY | IMDB-MULTI |
|--------|------|------|
| MinCutPool | 70.77 ± 4.89 | 49.00 ± 2.83 |
| DiffPool | 68.60 ± 3.10 | 45.70 ± 3.40 |
| SAGPool | 74.87 ± 4.09 | 49.33 ± 4.90 |
| HaarPool | 73.29 ± 3.40 | 49.98 ± 5.70 |
| Wit-TopoPool | **78.40 ± 1.50** | **53.33 ± 2.47** |
| Hop-Pool | 68.04 ± 2.04 | 49.33 ± 5.09 |
| Mv-Pool | 69.75 ± 3.61 | 51.67 ± 0.74 |
| **RTPool** | 73.06 ± 3.84 | **53.33 ± 1.26** |

As shown above, RTPool achieves competitive performance despite the lack of explicit geometric information.
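The spectral embedding described above can be sketched in a few lines. This is an illustrative implementation under our own assumptions (combinatorial Laplacian $L = D - A$, dense eigendecomposition), not the authors' code:

```python
# Sketch: embed a graph's nodes into R^3 using the eigenvectors of the
# graph Laplacian associated with the three smallest non-zero eigenvalues.
import numpy as np

def spectral_embed_3d(adj):
    """adj: (n, n) symmetric 0/1 adjacency matrix. Returns (n, 3) coordinates."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                          # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    # Skip (near-)zero eigenvalues (one per connected component), then take 3.
    nonzero = np.where(eigvals > 1e-8)[0]
    return eigvecs[:, nonzero[:3]]

# Example: a 6-cycle graph
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
coords = spectral_embed_3d(adj)
print(coords.shape)  # one 3-D coordinate per node
```

Large or sparse graphs would call for `scipy.sparse.linalg.eigsh` instead of the dense solver, but the idea is the same.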
WOMD-Reasoning: A Large-Scale Dataset for Interaction Reasoning in Driving
Accept (poster)
Summary: The authors provide WOMD-Reasoning, a new Q&A reasoning dataset built on top of the well-known Waymo Open Motion Dataset, generated with the help of ChatGPT-4 and visualized with the MetaDrive simulator, and benchmark the baseline performance of Motion-LLaVA on it.

## update after rebuttal

The authors added a small downstream-task evaluation and increased the number of human-evaluated Q&A pairs, so I increased the overall score from WR to WA.

Claims And Evidence: Claims: 1. Provision of WOMD-Reasoning, the largest multi-modal dataset with 3 million Q&A pairs focused on interaction reasoning in driving. 2. Fine-tuned on WOMD-Reasoning, Motion-LLaVA provides detailed and insightful interaction predictions on various driving scenarios. 3. An ablation study on the Motion-LLaVA design. Evidence: 1. To be shared, but there is no doubt that it is done; also refer to Tables 1 and 2. 2. and 3. Using LLaVA + Multipath++ encoders (Figure 4), the authors fine-tuned the solution to obtain the results shown in Tables 4 and 5, underlining the correct approach / architecture used; some ablations are shown in Tables 13 and 14 (Appendix). Methods And Evaluation Criteria: * One of the main drawbacks is the validity of the data automatically generated by ChatGPT, especially about intentions. The authors tried to assess the quality of the auto labels by human evaluation in Section 4.3 ("4 people to judge 1610 Q&A"). This is quite small in comparison to the size of the dataset (3 million Q&A pairs), less than 1%. * Moreover, the result of the human assessment, e.g. 87.5% for Intentions, is quite low, especially for Planning, where every percentage point of error could translate into a disastrous accident on the road. * There is no information provided on whether incorporating WOMD-Reasoning into any autonomous driving solution actually helps with any real autonomous driving metrics (safety / comfort / progress), or even improves just the prediction power of common methods.
Theoretical Claims: No theory involved. Experimental Designs Or Analyses: * The quantitative analysis concentrates mostly on language-related metrics, and only very sparsely on Acc / Median Error, which limits its usefulness. * In Section 5.4 "Validation of Motion-LLaVA" there are no details on what types of "facts" were measured (map elements, other agents' states, ego state). Moreover, was Motion-LLaVA fine-tuned with a motion prediction task or not? Supplementary Material: Yes, all. Relation To Broader Scientific Literature: Two main directions: Autonomous Driving and Multi-modal Large Language Models. The paper combines these areas on the task of reasoning for AD. Essential References Not Discussed: NA Other Strengths And Weaknesses: The main items were emphasized in the "Methods And Evaluation Criteria" and "Experimental Designs Or Analyses" sections. Other Comments Or Suggestions: Typo: * Line 149: "some studies x" Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful and constructive feedback. We address your comments and concerns below.

> Q1. Human Assessment Scale.

We are grateful for the reviewer's suggestion on providing stronger evidence of validation. To explore the validity of our assessment, we further enlarge our assessment to ~2.2k Q&As. With that being done, the total correct rate changes from 91.99% in the paper to 91.80%, which does not fluctuate much. This suggests that our sampling for assessment is valid and effective. Besides, we hope to confirm that our dataset improvement and human verifications are continuously ongoing. We will update the dataset and its validation when newer data become available.

> Q2. Real-world risks of imperfect (intention) languages.

We thank the reviewer for raising this question. Since language cannot thoroughly describe exact positions, velocities, etc., language models are more commonly used as an auxiliary driving model to provide explainability and introduce high-level knowledge (like traffic rules in our case). Therefore, their failing in infrequent cases would generally not cause accidents, due to this auxiliary nature. Furthermore, our intention part serves as the summary of interactions between the ego agent and all other agents, which is a more comprehensive task than the other classes; therefore we believe that 87.5% is satisfactory for an auto-labeled dataset. To further improve our dataset's quality, we will also update our dataset with human verifications to raise its reliability to 99+%.

> Q3. Downstream impacts on real-world Autonomous Driving tasks.

We highly appreciate this question. To justify the WOMD-Reasoning dataset's utility in downstream tasks, we perform a trajectory prediction task using outputs of our Motion-LLaVA model fine-tuned upon WOMD-Reasoning. Multipath++ is used as the trajectory prediction baseline model.
As the interaction part is the most significant part of our dataset, we introduce the interaction part of the language output into Multipath++ by cross-attending its T5 embeddings with the corresponding agents' features. Each experiment is run with 3 seeds to ensure reliability. The results are shown in the following table:

| Model | minFDE_6 | MR_6 |
|---|---|---|
| Multipath++ | 1.27 | 12.59% |
| Multipath++ with Interaction Language | **1.18** | **11.69%** |
| relative Δ | -7.35% | -7.10% |

We observe a significant performance enhancement when using language outputs from Motion-LLaVA fine-tuned upon WOMD-Reasoning. This strongly supports WOMD-Reasoning's ability to fine-tune LMs to help downstream tasks such as prediction or explanation of driving behaviors. We will add the final results of these experiments to our manuscript in the next version.

> Q4. More quantitative analysis.

We thank the reviewer for suggesting more useful downstream evaluations to demonstrate WOMD-Reasoning's abilities. In Q3 we provide evidence of the effectiveness of WOMD-Reasoning in trajectory prediction tasks. Since WOMD is a highly interactive real-world driving dataset, prediction evaluations on WOMD offer evidence of the usefulness of WOMD-Reasoning in real-world applications.

> Q5. Validation of Motion-LLaVA - "facts" measured.

We thank the reviewer for requesting details on the definition of facts in the Motion-LLaVA evaluations. Our Facts questions can be categorized into 3 types: 1) **Environment-related questions** focus on the scenario's surroundings, covering details such as the presence and type of intersections, the number of lanes on the ego vehicle's side, and the existence and location of crosswalks or stop signs. 2) **Ego agent-related questions** assess the ego vehicle's characteristics, including its speed, motion status, lane position, and directional attributes.
3) **Surrounding agents-related questions** focus on other agents in the scenario, covering their type, speed, motion status, and position relative to the ego vehicle and intersection. We hope these clarifications help, and we will include this in our next version.

> Q6. Was Motion-LLaVA fine-tuned with motion prediction task or not?

We are grateful for the question. While Motion-LLaVA is not fine-tuned with a motion prediction task, we additionally use Motion-LLaVA outputs to illustrate the effectiveness of WOMD-Reasoning data in motion prediction tasks; the table is shown in Q3. We will add the final results of these experiments to our manuscript in the next version.

> Q7. Typo in Line 149

We thank the reviewer for pointing this typo out. We will fix this in our next version.

--- Rebuttal Comment 1.1: Comment: Thank the authors for your careful investigation of the items I listed, esp. the downstream task impact. I'd like to kindly ask for more details about: * How this cross-attention module for Multipath++ was deployed (details on the architecture and training) * Some examples on "intentions part serve as the summary of interactions between ego agent and all other agents" to see the real impact/importance of it

--- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading our response. Please see our answers to your questions below.

> Q1. How this cross-attention module for Multipath++ was deployed

We appreciate the question on the details of introducing languages into Multipath++. This mainly contains 2 steps: 1) **Language Acquisition and Encoding**. The language we use comes from Motion-LLaVA's output. For each scenario, Motion-LLaVA provides a comprehensive set of Q&As, from which we pick the interaction Q&As for this experiment. Each of these Q&As includes the interaction info between a specific agent and the ego agent (the one whose trajectory is to be predicted). We pick the answer parts of these Q&As to use.
These answers are then fed into a T5 encoder followed by a few MLP layers to be encoded. In the end, for each scenario, we have a set of language embeddings, each containing info on the interaction between the ego agent and one specific agent. 2) **Introducing Languages into Multipath++**. Our next step is to introduce these encoded languages into each agent's features. For Multipath++, we use MPA [1], an open-source Multipath++ implementation, as our codebase, whose structure can be found in their Figure 1 (https://arxiv.org/pdf/2206.10041). Our language introduction module is inserted right after their "Other Agents History Encoder" (and before their "MCG (multi-context gating) encoder"). Our module is a cross-attention block, which takes each agent's history encoding as queries (shape = [agent_num, 1, feature_size]), letting them cross-attend to their corresponding language embeddings (shape = [agent_num, language_length, feature_size]) containing info about the interaction between the ego agent and that specific agent. In this way, the interaction information from Motion-LLaVA is introduced into the features of each agent. For the experiment setup, due to limited computational resources, we use a subset of the WOMD training set and the WOMD validation-interactive set to perform the training and the evals of Multipath++, respectively. We choose the subsets using the same standard by which we choose scenarios to build WOMD-Reasoning: we pick scenes considered interactive by WOMD, as flagged with the interactive agent pairs label in the `objects_of_interest` key, where the ego agent must be one side of the interactive pair. Eval metrics are minFDE_6 and MR_6 (Miss Rate). [1] MPA, arXiv 2022

> Q2. Examples on "intentions part serve as the summary of interactions between ego agent and all other agents"

We thank you for letting us further explain the building of the intention part.
In our paper's Figure 1 (a), we have shown some examples, and we would like to show a complete example here. In one case, the interaction Q&As read: `[Q] What interactions are anticipated between the ego agent and surrounding agent #0..?` `[A] Surrounding agent #0 will yield to the ego agent because ..` `[Q] What interaction is likely to occur between the ego agent and surrounding agent #5?` `[A] Surrounding agent #5 will yield to the ego agent because ..` And the intention Q&A summarizes the interactions as follows: `[Q] What will the ego agent aim to do in the upcoming moments?` `[A] The ego agent intends to continue exiting the intersection. It will proceed as surrounding agents #0 and #5 are not moving and will yield to the ego agent. The other surrounding agents are also not moving and are not in the path of the ego agent, so no response is needed from the ego agent towards them..` This hierarchical structure is rooted in our prompts for building WOMD-Reasoning. In our paper's Table 8, the intention part of our dataset-building prompt asks the GPT to: `think about following questions .. (1) What is the ego agent’s intention if no other agents exist? (2) .. what actions does the ego agent take to respond to traffic lights .. ? .. (3) In each interaction involving the ego agent, what actions do the ego and surrounding agent take to respond to each other? ..` Therefore the intention part summarizes the interactions and provides a comprehensive response. We hope this clarifies the structure of the intention part. As for its impact, our goal for building WOMD-Reasoning is to provide an *auxiliary* driving model to offer explainability and introduce high-level knowledge. Our experiments in utilizing it for boosting trajectory predictions have demonstrated its effectiveness in this sense. Also, we believe that this strategy is popularly adopted in LM-assisted driving models [2-4].
We will continue to update our dataset, including with human verifications, to make it even more reliable. Thanks again for helping make our paper better! We are happy to provide these additional details, and we kindly hope that our response resolves your concerns! [2] Trajectory-LLM, ICLR 2025. [3] Large Language Models Powered Context-aware Motion Prediction in Autonomous Driving, IROS 2024. [4] Asynchronous Large Language Model Enhanced Planner for Autonomous Driving, ECCV 2024
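The cross-attention step described in Q1 above can be sketched briefly. This is a hypothetical numpy mock-up, not the authors' implementation (which sits inside MPA and uses learned projections); only the stated shapes — queries `[agent_num, 1, feature_size]`, language embeddings `[agent_num, language_length, feature_size]` — come from the rebuttal:

```python
# Sketch: each agent's history encoding (query) attends to the T5 language
# embeddings describing that agent's interaction with the ego agent.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(agent_feats, lang_embeds):
    """agent_feats: (A, 1, d); lang_embeds: (A, L, d) -> (A, 1, d)."""
    d = agent_feats.shape[-1]
    # scaled dot-product attention, batched per agent
    scores = agent_feats @ lang_embeds.transpose(0, 2, 1) / np.sqrt(d)  # (A, 1, L)
    weights = softmax(scores, axis=-1)
    return weights @ lang_embeds                                        # (A, 1, d)

rng = np.random.default_rng(0)
A, L, d = 4, 7, 16
out = cross_attend(rng.normal(size=(A, 1, d)), rng.normal(size=(A, L, d)))
print(out.shape)
```

The output keeps each agent's feature shape, so it can be fed onward to the MCG encoder unchanged.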
Summary: The paper introduces WOMD-Reasoning, a large-scale, multi-modal dataset for reasoning about interactions in autonomous driving, focusing on traffic rule-induced and human intention-induced interactions—areas underrepresented in existing datasets. It also presents Motion-LLaVA, a multi-modal model fine-tuned on this dataset, demonstrating improved interaction prediction and traffic rule-compliant planning. Evaluations show strong performance gains over prior baselines. Claims And Evidence: The claims are well-supported. Methods And Evaluation Criteria: The methods are appropriate: 1. Automated dataset generation with ChatGPT-4 and rule-based systems is clearly described. 2. Evaluations use standard metrics (BLEU, ROUGE, CIDEr, etc.) and human validation. Theoretical Claims: No formal theoretical claims or proofs. Experimental Designs Or Analyses: The experimental design is sound. It would be improved by providing more details on the BEV input used for LLaVA and considering evaluations on real-world scenarios. Supplementary Material: I have reviewed the pdf and the code. Relation To Broader Scientific Literature: The paper builds on prior datasets like BDD-X, DriveLM, and DRAMA, addressing their limitations in interaction reasoning. It advances the field by covering traffic rule-based and intention-driven interactions at scale. Essential References Not Discussed: The paper covers the most relevant work. Other Strengths And Weaknesses: Strengths: 1. WOMD-Reasoning is the largest language-based dataset for autonomous driving interaction reasoning to date. It covers underexplored areas such as traffic rule-induced and human intention-induced interactions. Besides, despite being largely automatically generated, the dataset achieves around 90% accuracy in human evaluations, showing strong reliability for research purposes. The authors also provide the related code in the supplementary materials. 2. 
The provision of simulated BEV and ego-view videos adds versatility to the dataset, supporting vision-language research and training for autonomous systems. 3. The model fine-tuned on WOMD-Reasoning shows significant improvements in interaction prediction and traffic rule-compliant planning, key tasks for autonomous driving. 4. The application of CoT strategies reduces hallucination and improves reasoning quality, a practical enhancement for language-based models. Weakness: 1. The paper focuses on WOMD data. It’s unclear how well Motion-LLaVA generalizes to other datasets or real-world cases without vectorized motion inputs. 2. The BEV inputs used in baseline comparisons with LLaVA are not well-detailed. Clarifying their preparation and limitations—especially in comparison to motion vectors—would be helpful. Other Comments Or Suggestions: 1. Some figures are cluttered, with overlapping numbers and diagrams that reduce readability. Simplifying the visuals would improve clarity and accessibility. 2. Consider running scaling studies to analyze how model performance evolves with different dataset sizes (e.g., 0.5M → 1M → 3M Q&A pairs). This would provide insights into the data efficiency of WOMD-Reasoning. 3. Include more comparisons with different backbone models (e.g., other LLMs or vision-language models) to better understand the advantages of Motion-LLaVA’s architecture and components. Questions For Authors: 1. Clarify how agent IDs or numbers (e.g., surrounding agent #0, #1) are processed when fed into the network. Are they treated as categorical inputs, embedded, or pre-processed in another way? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful feedback. Please see our response below.

> Q1. Details on the BEV input used for LLaVA & Evaluations on real-world scenarios.

We appreciate the reviewer's suggestions: 1) **For the BEV input**, it comes from plotting WOMD data. Road elements are plotted with line segments, while trajectories of agents are plotted with arrow-shaped boxes to indicate position and orientation. Trajectories are labeled with their Agent IDs. Additionally, an XY scale bar is added to help interpret position and velocity. 2) **For real-world evaluations**, we first note that WOMD is a large real-world dataset, so testing our model on WOMD's cases can reflect real-world situations. To provide more evals on real-world tasks, we test introducing language outputs of Motion-LLaVA into the trajectory prediction model Multipath++, seeing ~7% improvements on minFDE_6 and MR_6 in real-world WOMD cases. Details can be found in our answer to reviewer BaWk's Q2.

> Q2. Generalization to other datasets or cases w/o motion inputs.

We appreciate the question on generalization. - **For other motion datasets**, our translation program converts motion data into raw language suitable for feeding into LLMs to produce data in our dataset's format. With minor modifications in the translation program's IO part, it can be used for any motion dataset. Our translation program is included in the supplementary materials and will be open-sourced. - **For non-vectorized motion inputs**, we believe that well-developed perception models can convert non-vectorized inputs into vectorized motion data. Furthermore, with our simulated visual modality, users can directly train VLMs with WOMD-Reasoning to take vision inputs.

> Q3. BEV preparations and limitations

We thank the reviewer for suggesting the comparison between BEV and motion vectors. The BEV preparations are detailed in Q1.
BEV and motion vectors are equivalent representations of the same scenario. However, BEV images convey the coordinate and velocity info more implicitly, which makes this info harder to extract. Therefore, our Motion-LLaVA chooses motion data as inputs.

> Q4. Simplifying the visuals.

We appreciate the suggestions on simplifying the visuals to improve readability. We will fix this in the next version.

> Q5. Data efficiency.

We thank the reviewer for suggesting that we show the data efficiency of WOMD-Reasoning by fine-tuning Motion-LLaVA with different data sizes. The results of the suggested experiments are listed below.

For the **factual** questions:

| Dataset Size | ROUGE (↑) | BLEU (↑) | METEOR (↑) | CIDEr (↑) | SPICE (↑) | GPT Score (↑) |
|-|-|-|-|-|-|-|
| 0.5M | 0.806 | 0.683 | 0.477 | 5.77 | 0.760 | 6.69 |
| 1M | 0.818 | 0.701 | 0.491 | 5.90 | 0.773 | 6.74 |
| 3M | 0.840 | 0.736 | 0.516 | 6.35 | 0.794 | 7.09 |

For the **interaction reasoning** questions:

| Dataset Size | ROUGE (↑) | BLEU (↑) | METEOR (↑) | CIDEr (↑) | SPICE (↑) | GPT Score (↑) |
|-|-|-|-|-|-|-|
| 0.5M | 0.562 | 0.410 | 0.339 | 1.94 | 0.513 | 5.67 |
| 1M | 0.589 | 0.447 | 0.354 | 2.23 | 0.547 | 6.36 |
| 3M | 0.614 | 0.474 | 0.366 | 2.52 | 0.571 | 6.76 |

We find that with more WOMD-Reasoning data involved, the fine-tuned model first gains the ability to answer factual questions, reaching a good GPT score with only 0.5M data. As the dataset grows, the model gradually learns to answer interaction reasoning questions. These results justify the size of WOMD-Reasoning, as well as its data efficiency in helping models answer hierarchical driving-related questions.

> Q6. Comparisons with different backbone models.

We thank the reviewer for suggesting the use of more LM baselines. We provide this comparison in our response to reviewer BaWk's Q1. The results show that without fine-tuning on WOMD-Reasoning, all baselines can hardly answer driving-related questions, supporting our dataset's motivation.
We then fine-tune LLaVA and LLaMA-Adapter on WOMD-Reasoning, and both significantly benefit from the fine-tuning. Finally, we find that fine-tuned Motion-LLaVA works better than other fine-tuned models, supporting its superiority.

> Q7. Processing of Agent IDs when fed into the network.

We appreciate the question on agent IDs. These IDs are neither categorical inputs nor embedded. Instead, they are used as textual identifiers within the prompt, enabling the LLM to distinguish agents through natural language understanding. The prompt format is: `Ego agent: <motion>\nAgent #0: <motion>\n...\nNow, please answer: {Question}`, where `<motion>` represents the encoded motion data from each agent's own viewpoint. These IDs only serve as range allocations, i.e. #0-#99 indicate vehicles, #100-#199 indicate bicycles, and #200-#299 indicate pedestrians. We transform WOMD agent IDs to these local IDs to avoid downstream models overfitting to specific agents' behaviors. Beyond this, specific agent IDs do not carry any additional information.
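The local-ID allocation and prompt layout described in Q7 can be illustrated with a short sketch. The function and variable names here are hypothetical; only the ID ranges and the prompt format come from the rebuttal:

```python
# Sketch: remap raw WOMD agent IDs to type-based local ID ranges and lay
# out the prompt text as described (vehicles #0-#99, bicycles #100-#199,
# pedestrians #200-#299).
ID_BASE = {"vehicle": 0, "bicycle": 100, "pedestrian": 200}

def assign_local_ids(agents):
    """agents: list of (womd_id, agent_type) pairs -> {womd_id: local_id}."""
    counters = {t: 0 for t in ID_BASE}
    local = {}
    for womd_id, agent_type in agents:
        local[womd_id] = ID_BASE[agent_type] + counters[agent_type]
        counters[agent_type] += 1
    return local

def build_prompt(ego_motion, agents, motions, question):
    local = assign_local_ids(agents)
    lines = [f"Ego agent: {ego_motion}"]
    for (womd_id, _), motion in zip(agents, motions):
        lines.append(f"Agent #{local[womd_id]}: {motion}")
    lines.append(f"Now, please answer: {question}")
    return "\n".join(lines)

agents = [(9231, "vehicle"), (552, "pedestrian"), (77, "vehicle")]
prompt = build_prompt("<motion>", agents, ["<motion>"] * 3,
                      "What will the ego agent do?")
print(prompt)
```

Here the two vehicles receive #0 and #1 and the pedestrian #200, regardless of their raw WOMD IDs, so a downstream model cannot latch onto any particular global identifier.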
Summary: The paper introduces WOMD-Reasoning, a large-scale dataset designed for interaction reasoning in autonomous driving, built upon WOMD. The dataset addresses a critical gap in understanding traffic rule-induced and human intention-induced interactions, which are often overlooked in existing driving datasets that primarily focus on proximity-based interactions. WOMD-Reasoning contains 3 million Q&A pairs spanning scene descriptions, motion predictions, and planning tasks for autonomous driving. To validate its effectiveness, the authors develop Motion-LLaVA, a motion-language model fine-tuned on WOMD-Reasoning, which demonstrates improved performance in interaction prediction and traffic rule-compliant planning. Claims And Evidence: The submission makes strong claims about WOMD-Reasoning as the largest multi-modal dataset for interaction reasoning in autonomous driving, with well-supported evidence from dataset comparisons, language model benchmarks, and fine-tuning evaluations of Motion-LLaVA. However, some claims require stronger validation, particularly the high dataset accuracy. More details on evaluation criteria, distributional diversity, and human annotation reliability. The claim that WOMD-Reasoning enables end-to-end traffic rule-compliant planning is not fully substantiated, as planning evaluations are mostly qualitative rather than tested in real-world driving or simulations. Additionally, while the dataset was automatically generated, its reliability and biases are not thoroughly analyzed. To strengthen the paper, the authors should provide error analysis, real-world validation, and detailed evaluation metrics for reasoning quality and planning effectiveness. 
Methods And Evaluation Criteria: The proposed WOMD-Reasoning dataset and evaluations generally make sense for the problem of interaction reasoning in autonomous driving, but some points could be improved: 1) Lack of real-world / closed-loop evaluation: Using closed-loop evaluation, where the model's predictions influence simulated vehicle behavior, would better demonstrate its practical impact. 2) The claim that WOMD-Reasoning enhances traffic rule-compliant planning is mostly based on qualitative results. Metrics such as collision avoidance rate, rule violation rate, or trajectory efficiency in a planning framework could provide stronger evidence. Theoretical Claims: N/A. Experimental Designs Or Analyses: While the overall experimental design is appreciated, some issues remain. There is a lack of systematic error analysis for the generated Q&As, particularly regarding potential issues (hallucinations, inconsistencies, and ambiguous cases) introduced by the automated pipeline. Without an in-depth breakdown of failure modes, it is unclear how reliable the dataset is for training and evaluating reasoning-based autonomous driving models. Motion-LLaVA's failure cases are not analyzed, making it difficult to understand when and why the model makes incorrect predictions. A detailed error analysis of both the dataset and model outputs would provide stronger evidence of robustness and highlight areas for further improvement. Also, more transparent human evaluation is needed to ensure the reliability of the dataset and model outputs. Supplementary Material: I have gone through all the supplementary material. Relation To Broader Scientific Literature: The dataset and model enable better reasoning capabilities for language-driven autonomous driving systems, facilitating safer and more interpretable decision-making.
Essential References Not Discussed: There is some literature that lacks detailed comparison and discussion to outline the novelty: [1] Sima, C., Renz, K., Chitta, K., Chen, L., Zhang, H., Xie, C., ... & Li, H. (2024, September). Drivelm: Driving with graph visual question answering. In European Conference on Computer Vision (pp. 256-274). Cham: Springer Nature Switzerland. [2] Zhang, S., Huang, W., Gao, Z., Chen, H., & Lv, C. (2024). WiseAD: Knowledge Augmented End-to-End Autonomous Driving with Vision-Language Model. arXiv preprint arXiv:2412.09951. Other Strengths And Weaknesses: Refer to above. Other Comments Or Suggestions: Refer to above. Questions For Authors: 1. How do you ensure that the dataset accurately reflects real-world traffic interactions, especially for complex or rare cases? 2. What criteria were used to determine the correctness of Q&A pairs, and how was reasoning correctness evaluated? 3. How does Motion-LLaVA compare against non-language-based trajectory prediction or planning models in real-world performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and insightful suggestions. Please see our responses below:

> Q1. Dataset accuracy: Eval criteria, diversity and annotation reliability.

We thank the reviewer for the questions on validation; please see our responses below: 1) **Evaluation Criteria**: We perform 3 evals on WOMD-Reasoning and Motion-LLaVA fine-tuned on it: - For the human assessment of WOMD-Reasoning, the criteria are detailed in Q7. - For evaluating the outputs of Motion-LLaVA, we use language metrics (ROUGE, BLEU, METEOR, CIDEr, and SPICE) along with a GPT Score which assesses semantic understanding beyond mere text similarity, shown in Table 11. - For evaluating WOMD-Reasoning in real-world applications, we add a trajectory prediction task, whose prediction metrics are minFDE_6 and MR_6. 2) **Distributional Diversity**: WOMD-Reasoning's scenario diversity is backed by that of WOMD, a large-scale, highly interactive dataset. Also, our paper's Figure 2 shows that among the 63k scenarios covered in our dataset, there are ~120k `yields`, ~74k `lights`, ~74k `stops`, showing our dataset's extensive coverage of traffic rule-induced interactions beyond just near-end events. 3) **Human Annotation Reliability**: To justify the size of our human eval, we scale it up from ~1.6k to ~2.2k Q&As, and the correct rate remains stable (91.99% -> 91.80%), supporting the current eval scale. 4) **More Validations**: We are tuning our data generation pipeline and incorporating WOMD's recent upgrades to offer new versions of the dataset with even better accuracy soon. We have also completed several human verification demos and are now proceeding with the human labeling.

> Q2. Quantitative tests in real-world driving.

We appreciate the questions on real-world testing. First, we note that testing Motion-LLaVA results on WOMD's val set can reflect real-world interactions, as WOMD is a real-world dataset.
To provide more quantitative evals, we introduce language outputs of Motion-LLaVA into the trajectory prediction model Multipath++, seeing ~7% improvements on minFDE_6 and MR_6 in real-world WOMD cases. Details are included in our answer to reviewer BaWk's Q2.

> Q3. Dataset's reliability and biases.

We appreciate the question on reliability. We analyze the eval criteria, distributional diversity, and human eval in Q1. Besides, in our response to reviewer BaWk's Q2, we show that WOMD-Reasoning helps a trajectory prediction model achieve ~7% improvements, which supports its overall reliability.

> Q4. Error analysis.

We appreciate the suggestions on error analysis. In our evaluations, we find that 1) Some scenarios are too complicated to describe in text. 2) Some errors still come from the automatic translation program despite our efforts. 3) We also see occasional errors by the LM due to imprecise attention (e.g., confusing "following" when a car is behind but not in the same lane). Therefore, we are continuously updating and human-verifying the dataset to further improve its quality.

> Q5. Evidence on traffic rule-compliant planning.

We appreciate the suggestions. As a first step, we add a trajectory prediction task in our answer to reviewer BaWk's Q2, which demonstrates the overall effectiveness of our dataset on real-world tasks. We will follow the suggestions to quantify traffic rule-compliance metrics to give more specific evidence.

> Q6. Literature Comparison.

We thank the reviewer for these important references. We discuss them below and will cite them in our next version. - **DriveLM** (ECCV 2024): We have cited and compared to it in Table 1 and in the introduction (line 28, right column). We will cite the updated version in our next revision. - **WiseAD** (arXiv 2024): WiseAD appears to be a driving-model work rather than a new dataset. Their Tab. 1 lists the datasets they use for training. We hope our dataset will help encode traffic rule knowledge in models like WiseAD.
> Q7. Criteria for evaluating Q&A pairs, and for reasoning.

We appreciate the questions on the evaluation criteria for Q&A pairs. Generally, we ask evaluators to judge whether the correct answer is included in the dataset's answers. Practically:
- For **facts**, the answer must be entirely correct (e.g., if the number of lanes is asked, that number must be correct).
- For **interaction reasoning**, the correct interaction or intention keyword must be included (e.g., if a human judges that Agent #0 is required by the rules to yield to Agent #1, then the keywords `Agent #0 yields to Agent #1` must appear).

These standards help us maximize evaluation accuracy while minimizing human labor. We are performing human verification to further boost data quality.

> Q8. Motion-LLaVA vs. non-language-based models.

We thank the reviewer for the comparison request. However, rather than competing with non-language-based models, the outputs of Motion-LLaVA can complement them: they help real-world trajectory prediction models perform better, as in the ~7% improvement we observe (see our response to reviewer BaWk's Q2).

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed response and the additional experiment with Multipath++. I will maintain my rating, with a positive inclination toward acceptance.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for reading our response and acknowledging the further evidence we provide in it. We also highly appreciate the reviewer's constructive suggestions, which help us improve our work. Should you have any further questions, please let us know and we are more than happy to provide more!
Summary: This paper introduces the Waymo Open Motion Dataset-Reasoning (WOMD-Reasoning), a comprehensive question-and-answer dataset designed to articulate and assess the interactions prompted by traffic rules within driving scenarios. To demonstrate the utility of WOMD-Reasoning, the paper proposes Motion-LLaVA, a motion language model specifically fine-tuned on this dataset. The quantitative and qualitative evaluations confirm the dataset's quality and its usefulness in autonomous driving.

Claims And Evidence:
* This paper proposes a large-scale dataset with 3 million Q&A pairs centered on interaction reasoning in driving, which is valuable for the community if released.

Methods And Evaluation Criteria:
* This paper introduces Motion-LLaVA, which provides interaction prediction for driving scenarios. The model utilizes a motion prediction model as a motion encoder, followed by LLaVA, which generates language outputs.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
* There is no comparison with baselines other than LLaVA in the manuscript. Some multi-modal VLMs could also achieve this, like Qwen [1], VITA [2], or LLaMA-Adapter [3]. As this is a dataset paper, the reviewer believes a thorough study of baselines should be conducted.

[1] Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., ... & Zhu, T. (2023). Qwen technical report. arXiv preprint arXiv:2309.16609.
[2] Fu, C., Lin, H., Wang, X., Zhang, Y. F., Shen, Y., Liu, X., ... & He, R. (2025). VITA-1.5: Towards GPT-4o level real-time vision and speech interaction. arXiv preprint arXiv:2501.01957.
[3] Gao, P., Han, J., Zhang, R., Lin, Z., Geng, S., Zhou, A., ... & Qiao, Y. (2023). LLaMA-Adapter V2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010.

Supplementary Material: Data release in the supplementary.
Relation To Broader Scientific Literature:
* As a language dataset designed for the driving setting, it is essential to demonstrate its impact on downstream autonomous driving tasks like perception, motion prediction, or planning, which is what most existing research focuses on, e.g., DriveLM [9] or Hint-AD [10]. Thus I recommend the authors conduct some experiments on applications of the dataset.

[9] Sima, C., Renz, K., Chitta, K., Chen, L., Zhang, H., Xie, C., ... & Li, H. (2024, September). DriveLM: Driving with graph visual question answering. In European Conference on Computer Vision (pp. 256-274). Cham: Springer Nature Switzerland.
[10] Ding, K., Chen, B., Su, Y., Gao, H. A., Jin, B., Sima, C., ... & Zhao, H. (2024). Hint-AD: Holistically aligned interpretability in end-to-end autonomous driving. arXiv preprint arXiv:2409.06702.

Essential References Not Discussed:
* Table 1 lists previous real-world language datasets for driving. However, some datasets like OmniDrive [4], NuInstruct [5], nuCaption [6], Rank2Tell [7], and TOD3Cap [8] are not included.

[4] Wang, S., Yu, Z., Jiang, X., Lan, S., Shi, M., Chang, N., ... & Alvarez, J. M. (2024). OmniDrive: A holistic LLM-agent framework for autonomous driving with 3D perception, reasoning and planning. arXiv preprint arXiv:2405.01533.
[5] Ding, X., Han, J., Xu, H., Liang, X., Zhang, W., & Li, X. (2024). Holistic autonomous driving understanding by bird's-eye-view injected multi-modal large models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13668-13677).
[6] Yang, S., Liu, J., Zhang, R., Pan, M., Guo, Z., Li, X., ... & Zhang, S. (2023). LiDAR-LLM: Exploring the potential of large language models for 3D LiDAR understanding. arXiv preprint arXiv:2312.14074.
[7] Sachdeva, E., Agarwal, N., Chundi, S., Roelofs, S., Li, J., Kochenderfer, M., ... & Dariush, B. (2024). Rank2Tell: A multimodal driving dataset for joint importance ranking and reasoning.
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 7513-7522).
[8] Jin, B., Zheng, Y., Li, P., Li, W., Zheng, Y., Hu, S., ... & Zhao, H. (2024, September). TOD3Cap: Towards 3D dense captioning in outdoor scenes. In European Conference on Computer Vision (pp. 367-384). Cham: Springer Nature Switzerland.

Other Strengths And Weaknesses: None

Other Comments Or Suggestions: None

Questions For Authors:
* The method part primarily applies a motion encoder to replace the image encoder in LLaVA. Is there any adaptation specifically designed for this setting?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and insightful suggestions, and we address the comments and concerns below.

> Q1. More LM baselines.

We are grateful for the suggestion. While our main goal in fine-tuning Motion-LLaVA is to show the effectiveness of WOMD-Reasoning, we agree that testing and fine-tuning more baselines makes our claims more reliable. Therefore, we test the quality of the answers provided by several vanilla and WOMD-Reasoning-fine-tuned baselines, including LLaMA-Adapter, VITA, and Qwen, all using the 7B versions. Results are shown below:

| Model | Fine-tuned on WOMD-Reasoning | ROUGE (↑) | BLEU (↑) | METEOR (↑) | CIDEr (↑) | SPICE (↑) | GPT Score (↑) |
|-|-|-|-|-|-|-|-|
| LLaVA | ❌ | 0.512 | 0.211 | 0.275 | 1.36 | 0.455 | 2.31 |
| LLaMA-Adapter v2.1 | ❌ | 0.413 | 0.174 | 0.235 | 0.91 | 0.372 | 1.62 |
| VITA-1.5 | ❌ | 0.278 | 0.121 | 0.190 | 0.40 | 0.227 | 1.77 |
| Qwen2.5-VL | ❌ | 0.384 | 0.156 | 0.247 | 0.62 | 0.379 | 2.36 |
| LLaVA | ✅ | 0.779 | 0.581 | 0.439 | 5.51 | 0.735 | 6.88 |
| LLaMA-Adapter v2.1 | ✅ | 0.722 | 0.470 | 0.375 | 4.72 | 0.691 | 5.14 |
| Motion-LLaVA (Ours) | ✅ | **0.792** | **0.616** | **0.449** | **5.69** | **0.744** | **7.02** |

We observe that without fine-tuning, all models can hardly answer driving-related questions, supporting the motivation for building WOMD-Reasoning. We then fine-tune LLaMA-Adapter on WOMD-Reasoning, and it also benefits significantly from the fine-tuning. Due to limited resources, fine-tuning of the other baselines is still ongoing. Besides, we find that fine-tuned Motion-LLaVA works better than fine-tuned LLaVA or LLaMA-Adapter, demonstrating that its structure is well designed for utilizing WOMD-Reasoning information.

> Q2. Downstream Applications

We thank the reviewer for this suggestion. To demonstrate WOMD-Reasoning's value for downstream tasks, we perform a trajectory prediction experiment to measure the influence of using the language outputs of Motion-LLaVA fine-tuned on WOMD-Reasoning.
Multipath++ is used as the trajectory prediction baseline. As interaction reasoning is the most significant part of our dataset, we feed the interaction portion of the language outputs into Multipath++ by cross-attending their T5 embeddings with the corresponding agents' features. Each experiment is run with 3 seeds for reliability. Averaged results are shown below:

| Model | minFDE_6 (↓) | MR_6 (↓) |
|-|-|-|
| Multipath++ | 1.27 | 12.59% |
| Multipath++ w/ Language | **1.18** | **11.69%** |
| relative Δ | -7.35% | -7.10% |

We observe a significant boost from the language outputs, which strongly supports WOMD-Reasoning's ability to help downstream tasks like prediction. It can also help explain driving behaviors.

> Q3. More comparisons.

We thank the reviewer for these important papers to compare against! We will cite and compare to them; a brief comparison in the format of our Table 1 is listed here:

| Dataset | Data Source | Total Scenes | Total Q&As | Interaction Q&As | Distance-induced | Traffic Rule-induced | Human Intention-induced | Scene Descriptions | Motion Prediction | Motion Planning |
|-|-|-|-|-|-|-|-|-|-|-|
| OmniDrive | nuScenes | <1k | N/A | N/A | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| NuInstruct | nuScenes | 850 | 91k | <46k | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ |
| nuCaption | nuScenes | <1k | 420k | ~140k | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ |
| Rank2Tell | Rank2Tell | 116 | N/A | N/A | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Tod3Cap | nuScenes | 850 | ~2,300k | 0 | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Ours | WOMD | **63k** | **2,940k** | **409k** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

The statistics further support that WOMD-Reasoning was the largest language Q&A dataset for driving at the time of its release, with a unique focus on analyzing traffic rule-induced interactions.

> Q4. The method part applies a motion encoder to replace the image encoder in LLaVA. Is there any adaptation?

We appreciate the question on the design of Motion-LLaVA.
Yes, we thoroughly adapt LLaVA to better encode motion information for answering driving-related questions, as discussed in Appendix A.7. Besides introducing a motion encoder from Multipath++ to replace the vision encoder in LLaVA, we have also made the following adaptations:

1) We use the encoder of a *pre-trained* Multipath++ as the motion encoder. Motion prediction pre-training ensures the quality of the features extracted from motion data, which is ablated in our Table 13.
2) We remove the projector alignment stage of LLaVA and keep the motion encoder unfrozen during fine-tuning, to help the model better encode information for answering driving-related questions. The benefit of this is ablated in Table 14.
3) In Motion-LLaVA, we design a prompt for feeding motion embeddings from the Multipath++ encoder into the LLM. The format is: `Ego agent: <motion>\nAgent #0: <motion>\n...\nNow, please answer: {Question}`. Here, `<motion>` represents the encoded motion data from each agent's ego-centric viewpoint. The agent IDs are randomly assigned and serve only to differentiate agents in downstream Q&As.

---

Rebuttal Comment 1.1: Comment: The authors added comparisons with strong baselines (LLaMA-Adapter, Qwen, VITA), showing improvements after fine-tuning on WOMD-Reasoning. They also demonstrated the dataset's utility in downstream trajectory prediction, and added comparisons to related datasets I previously mentioned. Clarifications on Motion-LLaVA's design were also helpful. I remain WA.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for reading our response and acknowledging the further evidence we provide in it. We also highly appreciate the reviewer's constructive suggestions, which help us improve our work. Should you have any further questions, please let us know and we are more than happy to provide more!
DictPFL: Efficient and Private Federated Learning on Encrypted Gradients
Reject
Summary: This paper studies the problem of using homomorphic encryption in federated learning. The idea is to use a Lookup-based Convolutional Neural Network (LCNN) and encrypt only a small fraction of the model weights. Furthermore, positions with small gradient magnitudes are pruned from uploading. Experiments are conducted to show that the proposed method can significantly reduce communication and training time.

Claims And Evidence: The paper claims that privacy is not compromised, which I am not sure is right. Due to the usage of pruning, the server can at least know the distribution of large entries in the gradients. One may argue that this does not leak meaningful information, but we still need to make the privacy claim clearer: under the semi-honest setting, which information is protected?

Methods And Evaluation Criteria: I am not sure if the evaluation is fair. The proposed method needs pretrained weights, and it is not clear how the pretrained weights affect utility.

Theoretical Claims: This paper does not contain theoretical claims.

Experimental Designs Or Analyses: The experimental design looks good to me.

Supplementary Material: Yes, and I think it is great to include the source code.

Relation To Broader Scientific Literature: I think practical HE-based federated learning is an important topic, and this paper makes good progress on it.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: N/A.

Other Comments Or Suggestions:
- Figure 2 only has results for existing works. How does your algorithm fit into the figure?
- I am not sure if the use of LCNN restricts utility, as essentially we are working with a smaller model. Also, since LCNN is not a popular model structure, will this affect the applicability of the proposed method?

Questions For Authors:
- You mentioned secure aggregation [1]. How do you compare your work to secure aggregation? Is the goal to keep the server from knowing aggregated weights?
If so, does the following simple alteration of secure aggregation work? The clients first use some crypto-safe protocol to share a random number/matrix $W_{random}$, and perform secure aggregation on $W_{random} + W$ by uploading $W_i + W_{random}/n$. In this way we may not need to perform costly homomorphic encryption.

[1] Bonawitz, Keith, et al. "Practical secure aggregation for privacy-preserving machine learning." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer C4sK for providing constructive comments.

**Q1. Why is the gradients' privacy not compromised? Pruning may let the server know the distribution of large entries in the gradients. Please clarify what information is protected under the semi-honest setting.**

The gradients' privacy in our DictPFL is not compromised, since all model gradients shared with the server are already encrypted, and unencrypted gradients are kept local without sharing. The pruning does not impact privacy since it is performed before encryption. After encryption, the ciphertexts of the unpruned gradients reveal neither their values nor their distribution to the server. This ciphertext privacy is guaranteed by FHE. Thus, our method provides the same privacy protection as the baseline with fully encrypted gradients (the privacy of clients' gradients and model weights is protected), and is more secure than the prior work FedML-HE, which still shares partially unencrypted gradients with the server.

**Q2: It is not sure if the evaluation is fair. The proposed method needs pretrained weights, and it is not clear how the pretrained weights affect utility.**

Our evaluations are fair: all methods (DictPFL and our baselines) use the same pre-trained weights. In current ML practice, pre-trained models are used more and more, and it is practical to use them as initial models, especially in privacy-sensitive fields like healthcare where data is scarce. Starting from a pre-trained model and fine-tuning it on privacy-sensitive data is very promising and practical. In addition, our proposed methods, like Decompose-for-Partial-Encrypt (DePE), work well for cases where no pre-trained model is used. Instead, one could use randomly initialized model parameters, just as the LoRA work [1] (which decomposes W = A*B) demonstrates strong performance when A is initialized randomly and fixed, and only B is trained.
We performed experiments to validate this: to achieve the same level of accuracy (80% on the GTSRB dataset with ViT-16), using a randomly initialized dictionary requires 11.81 minutes of training, while the baseline requires 295 minutes. We will clarify this point in the next version of our manuscript.

**Q3. Figure 2 only has results for existing works. How does your algorithm fit into the figure?**

Thanks for asking this question. We used Figure 2 as a motivating example to analyze the communication and computation latency breakdown. In Figure 9 of the results section, we put our method and all prior work into one figure for comparison and show that our work significantly reduces the communication and encrypted operations. We will link Figure 2 and Figure 9 more explicitly in the next manuscript.

**Q4. Does the use of LCNN restrict utility, as essentially we are working with a smaller model? Also, since LCNN is not a popular model structure, will this affect the applicability of the proposed method?**

LCNN works more like an advanced weight decomposition technique (not a new architecture); when the decomposition is full-rank, it neither reduces the model size nor loses representational ability, as shown in Equation 1 of the paper. As an option, users can tune the rank to control the trade-off between utility and efficiency. Since LCNN is not a new model structure or architecture, it can be applied to any linear projections, convolutions, and Transformers [2][3]. Our experiments show that DictPFL works well on models of different scales (from LeNet to Llama2-7B).

**Q5. How does your work compare to secure aggregation? Is the goal to keep the server from knowing aggregated weights? If so, does the following simple alteration of secure aggregation work? The clients first use some crypto-safe protocol to share a random number/matrix $W_{random}$, and perform secure aggregation on $W_{random}+W$ by uploading $W_{random}/n+W_i$.
In this way we may not need to perform costly homomorphic encryption.**

Yes, one advantage of HE-based FL over secure aggregation is keeping the server from knowing the aggregated weights (HE ensures the aggregated weights remain encrypted to the server). The reviewer's proposed simple alteration of secure aggregation is insightful but introduces additional vulnerabilities. For instance, the server can compute differences between different rounds' masked weights, i.e., $(W_2+W_{random}/n)-(W_1+W_{random}/n)=W_2-W_1$, to reveal the model update $\Delta W$ and perform gradient inversion attacks. Additionally, client dropouts disrupt mask cancellation, as missing clients' shares prevent proper removal of $W_{random}$, corrupting the aggregated result. While promising, this approach requires further innovations to strictly protect the weights. This would be interesting future work.

[1] Improving LoRA in privacy-preserving federated learning
[2] DictFormer: Tiny transformer with shared dictionary
[3] Lite-MDETR: A lightweight multi-modal detector

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. Some of my questions have been addressed; however, I remain unconvinced about the necessity of using homomorphic encryption (HE) in the context of federated learning. If the primary goal is merely to prevent the server from accessing the aggregated model weights, then HE may appear excessive. I would appreciate further insights into why HE is essential in this setting. Additionally, I have concerns regarding the reliance on a pre-trained model, as discussed in [1].

[1] Tramèr, Florian, Gautam Kamath, and Nicholas Carlini. "Position: Considerations for differentially private learning with large-scale public pretraining." arXiv preprint arXiv:2212.06470 (2022).

---

Reply to Comment 1.1.1: Comment: We thank Reviewer C4sK for the valuable questions and comments.
**Question 1: Necessity of using homomorphic encryption (HE) in federated learning (FL). HE might be excessive if the primary goal is simply preventing the server from accessing aggregated model weights.**

We clarify that using HE to secure federated learning is not originally proposed by us; it has been widely explored and shown to be practical and beneficial in previous literature, including our baselines: [a] (Zhang et al., 2020), published at USENIX ATC; [b] (Roth et al., 2022) from NVIDIA; and [c] (Jin et al., 2023) from FedML Inc. Here we illustrate several important reasons:

HE provides comprehensive end-to-end protection, covering the transmission of model weights/gradients, computation (aggregation), and server-side storage. This protection addresses multiple security threats, including adversaries in network communications, multi-tenant vulnerabilities during computation on servers, and insider attacks on stored data. HE safeguards not only the confidentiality and intellectual property of model weights and gradients but also protects training data against inversion attacks.

Additionally, the overhead of HE in federated learning is often **LESS** significant than commonly assumed. Typically, secure aggregation in FL primarily involves fast HE additions rather than expensive non-linear operations or multiplications. The prior work FedML-HE has demonstrated HE-based FL overheads below 10× compared to plaintext FL (please see Figure 2). In contrast, our approach further reduces the latency overhead to less than 2× (please see Figures 2 and 9). Hence, we argue that incorporating HE into federated learning is not excessive but rather highly practical and promising.
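To make the "fast HE additions" point above concrete, the sketch below uses a toy additively homomorphic scheme (a one-time-pad-style stand-in; this is not CKKS and is not secure in practice) to show how a server can sum ciphertexts without learning any individual client's gradient:

```python
import random

P = (1 << 61) - 1  # modulus for the toy scheme

def keygen(rng, num_ciphertexts):
    """One random pad per ciphertext; held by the key holder, never the server."""
    return [rng.randrange(P) for _ in range(num_ciphertexts)]

def enc(m, k):
    # Enc(m) = m + k (mod P): additively homomorphic, since
    # Enc(m1) + Enc(m2) = Enc(m1 + m2) under the summed pad.
    return (m + k) % P

def dec(c, k):
    return (c - k) % P

rng = random.Random(42)
grads = [5, 17, 23]               # one scalar gradient per client (illustrative)
keys = keygen(rng, len(grads))
cts = [enc(m, k) for m, k in zip(grads, keys)]

# Server side: only additions on ciphertexts; the result is an encryption
# of sum(grads) under the summed pad, and nothing else is learned.
agg_ct = sum(cts) % P

# Key holder decrypts the aggregate with the summed pad
agg = dec(agg_ct, sum(keys) % P)
print(agg)  # 45
```

Real systems replace the pads with a proper scheme such as CKKS or Paillier, but the server-side work stays the same shape: ciphertext additions only, which is why aggregation-only HE workloads are comparatively cheap.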
**Question 2: Reliance on a pre-trained model, as discussed in the position paper [1].**

The referenced position paper [1] primarily discusses potential privacy risks associated with pre-trained models derived from public datasets, noting situations where publicly available data might inadvertently contain private information. However, it does not argue against the use of pre-trained models entirely. Indeed, initializing models with pre-trained weights offers considerable benefits, most notably substantial reductions in training time compared to training from scratch with randomly initialized weights.

Moreover, our proposed methods, such as Decompose-for-Partial-Encrypt (DePE), do not inherently require pre-trained weights and effectively accommodate various initialization strategies, including random initialization. For instance, we evaluated FedHE-Full, FedML-HE, and our DictPFL using randomly initialized weights, obtaining training times of 294.6 min, 56.7 min, and 11.8 min, respectively. Although these durations are longer than training initialized with pre-trained weights (187.0 min for FedHE-Full, 29.7 min for FedML-HE, and 3.1 min for our DictPFL, as depicted in Figure 9), the results demonstrate our method's consistent advantage over the baselines regardless of initialization. Given the widespread adoption of pre-trained models in current research, relying solely on random initialization might create artificially weak comparisons.

Nevertheless, we acknowledge the privacy considerations highlighted in [1] and will incorporate a thorough discussion in the revised manuscript, clearly outlining both the advantages of pre-trained models in reducing training time and the related privacy concerns.
Summary: The paper introduces DictPFL, a novel framework for privacy-preserving federated learning that fully encrypts shared gradients while maintaining efficiency.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: The paper presents theoretical claims regarding the relationship between the dictionary D and lookup table T in representing the weight matrix W, the reduction in communication overhead, and the privacy preservation of its fully encrypted method. These claims are supported by algorithmic descriptions, equations (such as Equations 1-5), and empirical results, but lack formal proofs with lemmas and theorems. While the weight matrix construction and SVD factorization are described mathematically, there are no formal proofs of their optimality or convergence. Similarly, the Temporal Inactivity Pruning and Holistic Reactivation Correction mechanisms include equations for pruning masks and reactivation probabilities but do not provide formal theoretical validation.

Experimental Designs Or Analyses:
1. Privacy metrics (e.g., LPIPS-based recovery scores) are only applied to image datasets (CIFAR-10, GTSRB, Diabetic Retinopathy). Text tasks (AG's News, MetaMathQA) lack privacy evaluation, despite gradient inversion attacks potentially leaking sensitive textual data.
2. Results for Llama2 (7B) in Figure 8b raise practicality concerns. Training such models with HE is computationally prohibitive, yet execution details (e.g., rounds, approximations) are omitted.
3. Baselines focus on HE-based methods (FedHE-Full, FedHE-Top2, FedML-HE) but omit non-HE approaches (e.g., sparsity or adaptive pruning techniques). This limits the scope of the efficiency comparisons.
4. While ablation studies explore key hyperparameters (e.g., \( r, \tau, s\% \)), HE-specific parameters (e.g., CKKS polynomial degree, scaling factors) are relegated to an inaccessible appendix. Improper settings could compromise security or efficiency.
5.
Accuracy drops in Table 3 (e.g., 82.67% → 81.82% as clients increase from 3 to 20) are presented without statistical testing. Small standard deviations (±0.4) suggest robustness, but significance tests (e.g., t-tests) are absent.
6. Key details like the total number of training rounds (main experiments) and Dirichlet \( \alpha \) values (default settings) are unspecified. For example, Table 4 tests \( \alpha \in [0.3, 0.9] \), but the main experiments' \( \alpha \) is unclear.
7. Data heterogeneity is simulated via Dirichlet sampling, which may not capture real-world non-IID distributions (e.g., user-specific patterns in medical imaging).

Supplementary Material: No

Relation To Broader Scientific Literature: The key contributions of the paper are directly related to existing challenges in Federated Learning (FL) and privacy-preserving techniques, particularly regarding the trade-off between privacy and efficiency. Previous works, such as FedML-HE, utilize Homomorphic Encryption (HE) for secure aggregation but suffer from high communication and computational overheads. The paper's proposed DictPFL framework builds upon these ideas by enhancing HE's efficiency without sacrificing privacy. It advances the state of the art by introducing two novel modules, DePE and PrME, that reduce the number of encrypted gradients, a key limitation of prior methods.

Essential References Not Discussed: No

Other Strengths And Weaknesses: The security analysis is incomplete, particularly regarding the potential leakage of the globally shared dictionary \( D \) and reactivation patterns in pruning, which could compromise privacy. The technical soundness of the pruning strategy is questionable, as it may be misaligned in non-IID settings due to its reliance on historical gradients. Experimental rigor is also a concern, as comparisons with non-HE methods and large-scale evaluations, particularly for text generation tasks, are lacking.
The paper does not address key limitations such as the static nature of the dictionary, the impact of pruning on convergence, and the need for clearer algorithmic details and reproducibility.

Other Comments Or Suggestions: No

Questions For Authors:
1. Privacy metrics (e.g., LPIPS-based recovery scores) are only applied to image datasets (CIFAR-10, GTSRB, Diabetic Retinopathy). Text tasks (AG's News, MetaMathQA) lack privacy evaluation, despite gradient inversion attacks potentially leaking sensitive textual data.
2. Results for Llama2 (7B) in Figure 8b raise practicality concerns. Training such models with HE is computationally prohibitive, yet execution details (e.g., rounds, approximations) are omitted.
3. Baselines focus on HE-based methods (FedHE-Full, FedHE-Top2, FedML-HE) but omit non-HE approaches (e.g., sparsity or adaptive pruning techniques). This limits the scope of the efficiency comparisons.
4. While ablation studies explore key hyperparameters (e.g., \( r, \tau, s\% \)), HE-specific parameters (e.g., CKKS polynomial degree, scaling factors) are relegated to an inaccessible appendix. Improper settings could compromise security or efficiency.
5. Accuracy drops in Table 3 (e.g., 82.67% → 81.82% as clients increase from 3 to 20) are presented without statistical testing. Small standard deviations (±0.4) suggest robustness, but significance tests (e.g., t-tests) are absent.
6. Key details like the total number of training rounds (main experiments) and Dirichlet \( \alpha \) values (default settings) are unspecified. For example, Table 4 tests \( \alpha \in [0.3, 0.9] \), but the main experiments' \( \alpha \) is unclear.
7. Data heterogeneity is simulated via Dirichlet sampling, which may not capture real-world non-IID distributions (e.g., user-specific patterns in medical imaging).
8. Claiming "full encryption" is misleading: only lookup tables are encrypted, while the dictionary is shared in plaintext.
If \( D \) contains sensitive information (e.g., in medical models), privacy breaches may persist.
9. DictPFL's efficiency gains depend on pretrained models. Clients without access to such models (e.g., small institutions) cannot participate fairly, exacerbating resource disparities in FL.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer Y4dC for the thorough reading of our manuscript and for providing constructive comments.

**Q1. Misleading "full encryption": dictionary D and pruning patterns may leak privacy.**

Our "full encryption" refers explicitly to encrypting all information that clients share with the server. The dictionary D is static, untrainable, and globally shared only among clients; it is never shared with or accessible to the server. Reactivation patterns from pruning are based solely on each client's local gradient history, which is also never revealed to the server. The server receives only encrypted pruned gradients, and thus cannot infer pruning or reactivation patterns, ensuring no privacy leakage.

**Q2. Privacy metrics are only applied to image datasets?**

We used Figure 8(a) and the privacy metrics (e.g., LPIPS-based recovery scores) as an example to illustrate that FedML-HE leaks sensitive information because its partially plaintext gradients are vulnerable to inversion attacks. In contrast, DictPFL encrypts all gradients shared with the server, preventing such privacy leakage for any data type, including images and text.

**Q3. Results for Llama2 (7B) in Figure 8b: execution details (e.g., rounds).**

For Llama2 (7B) in Figure 8b, we report the training time for the different models, where the training time equals the number of training rounds multiplied by the per-round time. When comparing the different methods, we calculated the training time required to achieve the same level of performance (60% accuracy on the MetaMathQA task). We will clarify these details in the new manuscript.

**Q4. Efficiency comparison with non-HE sparsity/pruning methods.**

We compared DictPFL's efficiency with plaintext FL in Figures 2 and 9, showing roughly a 3× overhead for the privacy guarantees. Non-HE sparsity or adaptive pruning methods excel in plaintext efficiency but do not provide privacy protection, limiting direct comparison.
If the reviewer suggests specific methods, we can include additional comparisons.

**Q5. HE parameters: current compilers ensure the security level for the given HE parameters.**

Our HE parameters are secure and are detailed in Appendix A.

**Q6. Table 3 statistical testing.**

We performed Welch's t-test between the 3-client and 20-client settings in Table 3. The results confirm a statistically significant difference: $t(7.0) = 2.50, p = 0.040$.

**Q7. Table 4 tests $\alpha \in [0.3, 0.9]$, but the main experiments' $\alpha$ is unclear.**

The main experiments in Figure 7 were conducted under the IID setting (α = ∞). These experiments ran for a maximum of 50 training rounds.

**Q8: Formal proofs with lemmas and theorems.**

Our DictPFL framework includes two HE-aware modules (DePE, PrME) that generalize decomposition and pruning techniques previously grounded theoretically in plaintext FL. Our primary innovation adapts these established methods to HE constraints rather than developing new theoretical foundations. The practical benefit of our approach lies in significantly reducing ciphertext volume, thus mitigating the high training cost of HE-based FL. We will clarify explicit links to the prior theoretical work in the revised manuscript.

**Q9. Pretrained models**: Please refer to C4sK Q2.

**Q10. Study on client numbers**: Please refer to 4p9J Q2.

**Q11. Text generation tasks.**

In Appendix B.2, we evaluated DictPFL on text generation by fine-tuning the TinyLlama model on the MetaMathQA mathematical reasoning task. DictPFL reduces training time by 94.2% compared to the previous state of the art, FedML-HE.

**Q12. Pruning strategy convergence (it may be misaligned in non-IID settings due to reliance on historical gradients).**

DictPFL addresses pruning misalignment in non-IID settings using Holistic Reactivation Correction (HRC), which reactivates pruned parameters by incorporating client-specific accumulated gradients.
This preserves essential local information despite divergence from global patterns. Table 4 shows that HRC effectively maintains stable convergence across varying non-IID settings. **Q13. Static nature of the dictionary, the impact of pruning on convergence.** Dictionary decomposition leverages the insight that correlated model weights can be compactly represented as linear combinations of key vectors (the dictionary). By using a static dictionary and training only the combination coefficients (lookup tables), we drastically reduce HE-related overhead. Our experiments confirm that this approach achieves higher accuracy and lower training costs compared to previous HE-based methods. **Q14. Dirichlet sampling may not capture real-world non-IID distributions.** Dirichlet sampling is widely used in FL to simulate real-world non-IID data distributions. It provides reproducible conditions reflecting practical scenarios like medical imaging [1]. [1] Improving performance of federated learning based medical image analysis in non-iid settings using image augmentation.
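As context for the Dirichlet-based non-IID simulation discussed in Q14, here is a minimal sketch of the standard partitioning recipe (not from the paper; the toy dataset, client count, and $\alpha$ value are invented). For each class, client shares are drawn from a symmetric Dirichlet; smaller $\alpha$ yields more skewed (more non-IID) client label distributions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients; smaller alpha -> more skew (non-IID)."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Per-class client proportions drawn from Dirichlet(alpha, ..., alpha)
        props = rng.dirichlet([alpha] * n_clients)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx

labels = np.repeat(np.arange(10), 100)  # toy dataset: 10 classes x 100 samples
parts = dirichlet_partition(labels, n_clients=5, alpha=0.3)
assert sorted(i for p in parts for i in p) == list(range(len(labels)))
```

Setting $\alpha = \infty$ (in practice, a very large $\alpha$) recovers near-uniform shares per class, i.e., the IID setting referenced in Q7.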
Summary: The paper proposes DictPFL, a novel framework for federated learning that addresses the trade-off between privacy and efficiency in homomorphic encryption (HE)-based FL. By decomposing model weights into a fixed dictionary and a trainable lookup table (DePE) and further pruning gradients via encryption-aware pruning (PrME), DictPFL significantly reduces communication and computational overhead while ensuring full privacy protection. Experiments demonstrate substantial improvements over fully encrypted and selectively encrypted baselines, achieving up to 748× lower communication overhead and 65× faster training. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: see below Essential References Not Discussed: see below Other Strengths And Weaknesses: Pros: 1. The idea of dictionary decomposition is novel, and the method seems to be logical and effective. 2. The experiment part is well written, and the way data is shown provides insightful information, which demonstrates the advantage of DictPFL in speed, accuracy, and privacy protection. Cons: 1. From Section 4.1, DePE is established on the assumption that a public model is shared at the start of FL, which may not always hold. 2. TIP and HRC are empirical and lack novelty. 3. Since the experiments consider web simulation, it would be better to attach code for more convenient reproduction. Other Comments Or Suggestions: see below Questions For Authors: 1. Is it necessary to encrypt indices? What attack would be enabled by transmitting encrypted content with plaintext indices? 2. In DePE, is SVD decomposition the only method to reduce matrix rank? Why is T set as V in the SVD decomposition? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer jD6X for the thorough reading of our manuscript and for providing constructive comments. **Q1. From Section 4.1, DePE is established on the assumption that a public model is shared at the start of FL, which may not always hold.** Please refer to Reviewer C4sK's Q2. **Q2. TIP and HRC are empirical and lack novelty.** The proposed TIP and HRC aim to address novel and tricky challenges of pruning in HE-based FL. While numerous pruning methods exist in plaintext FL, their direct application to HE-based FL is fundamentally infeasible due to unique cryptographic constraints. **Key Challenges in HE-Based Pruning** 1. Server-Side Pruning Limitations: Existing server-side methods [4] require access to plaintext gradients for pruning, which is impossible in HE-based FL, where the server only accesses ciphertexts. 2. Client-Side Pruning Misalignment: Client-side methods [5] allow clients to prune locally but lead different clients to prune different positions. HE’s SIMD packing mechanism, which encrypts multiple plaintext gradients into a single ciphertext, demands strict alignment of pruned positions; mismatched indices render aggregated ciphertexts unusable. To address these issues and enable privacy-preserving, HE-compatible pruning, we propose TIP and HRC. **Novelty of TIP and HRC** 1. Temporal Inactivity Pruning (TIP) solves the alignment challenge by leveraging global gradient history, which is identical across all clients, to derive a shared pruning mask. This ensures clients prune identical positions, maintaining HE-compatible aggregation. 2. Holistic Reactivation Correction (HRC) addresses a critical limitation of irreversible pruning: permanently excluding parameters risks losing valuable gradients. HRC dynamically reintroduces pruned parameters based on global importance, preserving model utility without compromising efficiency. 
These mechanisms are the first to enable privacy-preserving, HE-compatible pruning, bridging a critical gap between plaintext efficiency techniques and encrypted FL’s constraints. Prior work cannot operate under HE’s limitations, which demand novel solutions to align pruning decisions without exposing sensitive data. **Q3. Since the experiments consider web simulation, it would be better to attach code for more convenient reproduction.** In our submission, we have included the code for the main results as supplementary material to facilitate verification of the results. Upon acceptance, we will release a public GitHub repository with detailed documentation, step-by-step execution guides, and pre-configured environments to ensure the replication of all results. **Q4. Is it necessary to encrypt indices? What attack would be enabled by transmitting encrypted content with plaintext indices?** In the proposed pruning method PrME, no indices are transmitted, plaintext or ciphertext. Pruning decisions are derived from the shared global gradient history, ensuring clients prune identical positions. This eliminates the risk of attacks via plaintext indices (e.g., inferring sensitive patterns from pruned parameter locations). Thus, DictPFL’s design inherently avoids vulnerabilities associated with index transmission while maintaining full encryption of sensitive data. **Q5. In DePE, is SVD decomposition the only method to reduce matrix rank? Why is T set as V in the SVD decomposition?** While other decomposition methods such as PCA and truncated QR can reduce matrix rank, SVD provides the best rank-$k$ approximation according to the Eckart-Young theorem, preserving the important information in the pre-trained weights. Moreover, SVD is widely used to extract low-dimensional, information-sensitive representations from model weights to compress the model [1][2][3]. The lookup table $T$ is set to $V$ (right singular vectors) to exploit the orthogonality inherent to SVD. 
The columns of $V$ form a decorrelated basis for the row space of the original weight matrix, where each column represents an independent direction of variation. This orthogonality minimizes redundancy and captures the most significant parameter correlations, enabling efficient gradient compression. The choice of $T = V$ and $D = U\Sigma$ (left singular vectors scaled by singular values) aligns with established practices in model compression [3]. [1] Asvd: Activation-aware singular value decomposition for compressing large language models. [2] Dictformer: Tiny transformer with shared dictionary. [3] Lite-mdetr: A lightweight multi-modal detector. [4] Fedmef: towards memory-efficient federated dynamic pruning. [5] Zerofl: Efficient on-device training for federated learning with local sparsity.
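For concreteness, the rank-$k$ decomposition defended in Q5 can be sketched as follows. This is a minimal numpy illustration under our own assumptions (toy matrix sizes, arbitrary rank $k$), not the authors' implementation; it shows the split into a static dictionary $D = U\Sigma$ and a lookup factor built from the top right singular vectors, plus the Eckart-Young error identity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))  # stand-in for a pretrained weight matrix
k = 8                              # target rank (illustrative)

U, S, Vt = np.linalg.svd(W, full_matrices=False)
D = U[:, :k] * S[:k]               # static dictionary: left singular vectors scaled by singular values
T = Vt[:k, :]                      # lookup factor: rows are the top right singular vectors

W_k = D @ T                        # rank-k approximation of W

# Eckart-Young: the Frobenius error of the best rank-k approximation
# equals the norm of the discarded singular values.
err = np.linalg.norm(W - W_k)
assert np.isclose(err, np.sqrt((S[k:] ** 2).sum()))
```

In the DePE setting only the small trainable factor would be encrypted and communicated, which is where the ciphertext-volume savings come from.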
Summary: This paper proposes a strategy that selectively encrypts only important weights using Dictionary-based Pruning and Holistic Reactivation Correction (HRC) techniques. This approach maintains the strong security of homomorphic encryption while reducing communication costs and improving training speed. Experimental results show that the proposed method achieves significant improvements compared to conventional methods, with communication cost reductions of 402–748 times and training speed improvements of 28–65 times. Additionally, the method maintains high performance even in WAN environments. Claims And Evidence: Unlike conventional methods that encrypt all model parameters, this paper proposes a strategy that selectively encrypts only important weights to reduce communication and computational costs. To achieve this, the authors employ Dictionary-based Pruning and Holistic Reactivation Correction to enhance computational efficiency while maintaining security. The server collects encrypted model updates from clients and updates the global model while preserving data privacy and training performance. Experimental results show that the proposed method reduces communication costs by 402–748 times and improves training speed by 28–65 times compared to conventional HE-based federated learning methods while maintaining stable performance in WAN environments. Methods And Evaluation Criteria: This paper proposes the following methodologies to address the high computational and communication costs in conventional HE-based federated learning: 1. Decompose-for-Partial-Encrypt (DePE): Traditional methods encrypt and transmit the entire model’s weights, which results in high computational and communication overhead. To address this issue, DePE decomposes the model weights into a fixed dictionary and a learnable lookup table, encrypting only the lookup table for transmission. This reduces the amount of encrypted data while maintaining security. 2. 
Prune-for-Minimum-Encrypt (PrME): Existing methods encrypt a fixed percentage of weights without considering their importance, which may degrade model performance. To improve this, PrME utilizes global gradient information from previous training rounds to prune less important weights first and selectively encrypt only the more critical weights. This reduces unnecessary computations while maintaining model performance. However, in environments with highly imbalanced data distributions (Non-IID settings), the weight selection strategy may introduce model bias, potentially disadvantaging certain clients. Additional measures may be required to address this issue. The paper evaluates the proposed method’s performance through various experiments, comparing communication cost reduction, training speed improvement, and security maintenance against FedHE-Full and FedML-HE methods. It also experimentally confirms that the approach is scalable to large models, including ViT, BERT, and TinyLlama. Additionally, experiments consider factors such as the number of clients, data imbalance, and pruning ratio, analyzing their impact on model performance and communication efficiency. Theoretical Claims: This paper presents two key theoretical claims: 1. Decompose-for-Partial-Encrypt (DePE): This method decomposes model weights into a fixed dictionary and a lookup table, encrypting only the lookup table to maintain security while reducing communication costs. The paper claims that this approach significantly reduces encrypted data size compared to traditional methods while making it difficult for attackers to recover the original weights. 2. Prune-for-Minimum-Encrypt (PrME): This method prunes less important weights based on global gradient information from previous training rounds, selectively encrypting only the most important weights to reduce computational and communication costs. 
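To make the history-based pruning idea in point 2 concrete, here is a minimal sketch under our own assumptions (the function name, keep ratio, and toy gradient history are all invented, not the paper's implementation). Because the mask is derived from the same global gradient history on every client, all clients prune identical positions, which is what keeps HE's packed ciphertexts aligned:

```python
import numpy as np

def shared_prune_mask(global_grad_history, keep_ratio=0.25):
    """Derive one pruning mask from accumulated *global* gradient magnitudes.

    Every client evaluates this on the same shared history, so the
    resulting mask (and hence the ciphertext layout) is identical
    across clients.
    """
    score = np.abs(np.sum(global_grad_history, axis=0))  # accumulated magnitude per parameter
    k = max(1, int(keep_ratio * score.size))
    keep = np.argsort(score)[-k:]                        # indices of the most active parameters
    mask = np.zeros(score.size, dtype=bool)
    mask[keep] = True
    return mask

history = np.random.default_rng(1).standard_normal((10, 1000))  # 10 rounds x 1000 params (toy)
mask_a = shared_prune_mask(history)   # "client A"
mask_b = shared_prune_mask(history)   # "client B": same history, same mask
assert np.array_equal(mask_a, mask_b)
```

Only the parameters where the mask is `True` would then be encrypted and transmitted, which is the source of the communication savings claimed for PrME.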
Experimental Designs Or Analyses: The paper empirically validates that DictPFL achieves superior performance compared to existing HE-based FL methods in terms of communication cost reduction and training speed improvement. The study evaluates performance by comparing it to FedHE-Full and FedML-HE, using key metrics such as communication cost, training speed, model accuracy, security robustness, scalability to large models, and the impact of data distribution and pruning ratio. The experimental design effectively demonstrates the feasibility of DictPFL. However, as the number of clients increases, accuracy slightly decreases. Additionally, security evaluation is limited to Gradient Inversion Attacks, and the paper does not assess its resistance to other security threats. The lack of scalability testing in large-scale client environments and additional security evaluations remain limitations. Future research incorporating experiments with hundreds or thousands of clients and broader security threat evaluations would strengthen the empirical validity of this work. Supplementary Material: I did not check the supplementary material. Relation To Broader Scientific Literature: Existing HE-FL methods provide strong security but suffer from high computational costs and communication overhead. This paper introduces DePE and PrME to implement selective encryption, reducing communication costs by 402–748 times and improving training speed by 28–65 times compared to previous methods, thereby enhancing the practicality of HE-FL. Additionally, the use of lookup table encryption improves the security of conventional methods and enhances resistance to data reconstruction attacks, as demonstrated by experimental results. Essential References Not Discussed: Not applicable. 
Other Strengths And Weaknesses: This paper introduces a selective encryption strategy using DePE and PrME to address the high computational and communication costs of HE-based federated learning, significantly improving both security and efficiency. The paper experimentally verifies that this method is applicable to large-scale models such as ViT, BERT, and TinyLlama, demonstrating its scalability. Compared to FedHE-Full, DictPFL achieves 402–748 times lower communication costs and 28–65 times faster training speeds, making it highly practical. The lookup table encryption method also enhances security by reducing the possibility of original data reconstruction. However, the experiments do not fully evaluate scalability in large federated learning environments. The security assessment is limited to Gradient Inversion Attacks, and additional evaluations of Membership Inference Attacks or Model Inversion Attacks have not been conducted. These areas require further validation to strengthen the study’s findings. Other Comments Or Suggestions: The proposed selective encryption strategy effectively reduces communication and computational overhead in federated learning, as experimentally demonstrated. However, the paper lacks an evaluation of performance variations in large-scale client environments, which is necessary to assess real-world applicability. Further analysis is needed to determine whether the communication cost reduction and training speed improvements persist as the number of clients increases. Additionally, security evaluation is limited to Gradient Inversion Attacks, and further investigation of resistance to Membership Inference Attacks and Model Inversion Attacks would enhance the paper’s contribution. Evaluating the effectiveness of lookup table encryption against diverse attack methods would further strengthen its impact. 
This paper proposes an effective selective encryption method that significantly reduces the computational and communication costs associated with HE-based federated learning. The experimental results demonstrate substantial performance improvements, particularly in large models, while maintaining security. The method’s scalability and applicability to ViT, BERT, and TinyLlama strengthen its contribution. However, the lack of scalability testing in large federated learning environments and the limited security evaluation focused only on Gradient Inversion Attacks remain areas that require further investigation. Addressing these limitations would further improve the practicality of this approach. Questions For Authors: 1. Do you have plans for additional experiments to evaluate scalability in large-scale client environments? 2. Do you plan to conduct additional security evaluations against other attack methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer 4p9J for the thorough reading of our manuscript and for providing constructive comments. **Q1. Security evaluation is limited to Gradient Inversion Attacks, and the paper does not assess its resistance to other security threats. Do you plan to conduct additional security evaluations against other attack methods?** We focus on gradient inversion attacks because they represent the primary privacy risk unique to federated learning where attackers exploit gradients during training to reconstruct client data. In contrast, model inversion and membership inference attacks target the final trained model during inference, which is a post-training concern not directly related to the FL process itself. DictPFL’s encryption ensures no gradients are exposed during training, directly addressing the FL-specific threat. Other attacks are model-level risks applicable to any trained model, regardless of how it was trained. These are not influenced by FL’s training protocol. This aligns with prior SOTA HE-based FL work FedML-HE, which similarly focuses on gradient inversion. We will clarify this point in the next manuscript. **Q2. Do you have plans for additional experiments to evaluate scalability in large-scale client environments?** We appreciate the reviewer’s suggestion regarding scalability evaluation. In response, we conducted new experiments with 50, 100, and 200 clients, demonstrating that DictPFL consistently outperforms baselines in terms of efficiency. We will incorporate these new results in our new revision. | Clients | FedML-HE (insecure) | DictPFL (secure) | |----------|----------------------|-------------------| | 50 | 18.23 min | 0.75 min | | 100 | 25.66 min | 1.22 min | | 200 | 47.05 min | 1.96 min | The training time is evaluated to achieve 81% accuracy on GTSRB using ViT-16 models.
Branches: Efficiently Seeking Optimal Sparse Decision Trees via AO*
Accept (poster)
Summary: The paper introduces BRANCHES, a search method for decision trees. Using an AND/OR graph formulation for the decision tree search, the method relies on an AO*-like exploration strategy with purification bounds for the heuristic. The introduced method provably recovers the optimal decision tree, and its efficiency compares favorably with respect to the efficiency bound from OSDT in the literature. The method shows significantly improved empirical performance and efficiency on a wide range of UCI classification tasks. Claims And Evidence: I did not find any problematic claims. Methods And Evaluation Criteria: * Analyzing the method in terms of optimality and efficiency makes sense. I also appreciated the numerical comparison of the efficiency bound to the one from OSDT in order to give some perspective on how they compare to each other. * The method is empirically evaluated on standard classification tasks from the UCI repository (11 in total) and compared to three baselines (including a relatively recent one from 2024). Using accuracy/number of splits/time (and number of iterations, where applicable) as evaluation criteria also makes sense. Theoretical Claims: I did not check the proofs in Appendix G. Experimental Designs Or Analyses: I checked the experiment in Section 5 and did not find an issue with it. Supplementary Material: I read Appendix B and Appendix F. Relation To Broader Scientific Literature: * the optimization objective (including the penalty term) is the same as Bertsimas & Dunn (2017), Chaouki et al. (2024), Hu et al. (2019) and in case of binary classification, Lin et al. (2020). * the work is most closely related to Lin et al. (2020), but a detailed list of differences is given in the Appendix (namely: additional support for ordinal encoding, additional complexity analysis, improved empirical performance, value estimate update only along selection path, multiple local priority queues instead of a global one). 
Essential References Not Discussed: The related work section focuses on DFS, BFS, and AO* approaches. I believe there is also a line of work applying MCTS to find optimal decision trees: * Online Learning of Decision Trees with Thompson Sampling; Chaouki et al. (2024) [This reference is already included, but it is not discussed how it relates to the proposed method] * Learning decision trees through Monte Carlo tree search: An empirical evaluation; Nunes et al. (2020). Other Strengths And Weaknesses: I appreciated the relatively detailed problem formulation which was helpful even for readers without extensive background on decision trees. Other Comments Or Suggestions: I think it could help to add a remark for which nodes are the "AND" nodes and which nodes are the "OR" nodes. The notation is a bit unusual (not wrong, though) in some cases: * Section 3, first paragraph: I think the following notation is a bit confusing: $D=\{X_m, Y_m\}_{m=1}^n$. Letting the index run from $m=1,...,M$ or from $n=1,...,N$ might be a bit easier to parse. * Denoting the transitions in a MDP with $F$ instead of $T$ is also a bit unusual. Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your reviews; please find our response below: ## Missing literature - Chaouki et al. (2024)'s TSDT algorithm is tailored to online classification where a data stream is observed instead of a batch of data, which is different from the batch setting we consider. Due to the online consideration, TSDT does not have a natural termination condition like Branches and GOSDT; it has to keep adjusting the prior distributions of the node values until reaching a prespecified number of iterations. For these two reasons we decided not to compare with TSDT. However, we thank you for this suggestion, and we can definitely add such a comparison in the appendix for illustration purposes. - Nunes et al. (2020) do not solve the sparsity problem (minimising $\mathcal{H}_\lambda$). In addition, we are not aware of a public code for the authors' algorithm, which prevents us from directly comparing with their algorithm. Nevertheless, (Table 4, Nunes et al. 2020) seems to indicate that the algorithm is significantly slower than the state of the art as it runs in hours. For example, car-evaluation takes 3 hours and 19 minutes and tic-tac-toe takes 4 hours and 59 minutes. Branches and GOSDT terminate and are optimal in under 2 minutes in both of these experiments. ## AND/OR representation Thank you for this remark. In fact, in section 3.3, we state that we follow the hypergraph convention in (Nilsson, 2014, Section 3.1), which replaces the notions of AND/OR nodes with nodes and connectors. The reason we chose this convention over the AND/OR nodes is to make the graph illustration more compact. With AND/OR nodes, Figure 1 would not be able to fit in the paper. We note that both conventions are equivalent. With AND/OR nodes, OR nodes would represent states (branches) and AND nodes would represent actions (split actions and the terminal action). 
Furthermore, the search space of branches is represented in several papers with the hypergraph convention (albeit without specifying a link to AND/OR search), e.g: - "Aglin, G., Nijssen, S., and Schaus, P. (2020). Learning optimal decision trees using caching branch-and-bound search. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 3146–3153." Figure 2. - "Nijssen, S. and Fromont, E. (2007). Mining optimal decision trees from itemset lattices. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 530–539." Figure 2. - "Nijssen, S. and Fromont, E. (2010). Optimal constraint-based decision tree induction from itemset lattices. Data Mining and Knowledge Discovery, 21:9–51." Figure 1. We thank you for this suggestion, we can try to include an equivalent representation of Figure 1 in terms of AND/OR nodes in the Appendix. ## Notation Thank you for this recommendation. In fact, we chose $F$ for transitions instead of $T$ to avoid confusion because we already use $T$ substantially with sub-DTs and DTs. The reason we chose $F$ specifically is because it is used in this context in some literature, e.g.: - "Learning Depth-First Search: A Unified Approach to Heuristic Search in Deterministic and Non-Deterministic Settings, and its application to MDPs" in section **Models**. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. My questions were addressed. I don’t object acceptance.
Summary: The paper presents Branches, a new approach for computing optimal decision trees by formulating the problem as AND/OR graph search and proposing an AO*-type algorithm that solves the problem. The proposed approach learns non-binary trees (i.e., trees with multiway splits) for non-binary features. The authors provide a theoretical characterization of the algorithm complexity and experimental evaluation against three popular optimal decision tree baselines. ## after rebuttal I thank the authors for their response. I think the paper would benefit from a more detailed discussion on interpretability (as noted in the author response, the number of branches in multiway splits can definitely impact interpretability). I also think a more detailed comparison of testing accuracy including approaches focused on continuous features that are very common is important (very granular discretization will lead to many categories which may hurt performance or interpretability). Overall, I maintain my evaluation. Claims And Evidence: - See specific concerns below regarding the experimental evaluation and analysis. - In addition, the claim of interpretability requires further justification: in particular, different from previous work this work proposes a non-binary tree, i.e., trees with multiway splits. It is not clear whether they should be perceived as equally interpretable to people and there is no clear discussion on this. Methods And Evaluation Criteria: There are several concerns regarding the experimental evaluation setting: - The reporting of the results seems to focus on *training* accuracy rather than *test* accuracy (it is not clearly stated what accuracy is reported, but likely training accuracy as there is no mention of splitting the dataset into train and test sets). 
Given that some algorithms time out (TO) and some reach different optimal solutions (as indicated by different objectives while both algorithms have run to completion, perhaps due to the setting of max depth), it is particularly useful to compare test accuracy in addition to training accuracy. - No comparison with simple baselines like CART. - No comparison with anytime approaches for optimal trees like Blossom [1]. - Benchmark datasets: many of the benchmarks on the list (monks and tic-tac-toe which together account for half of the results table) are synthetic. - Most datasets have low dimensionality. The only one whose dimensionality is above 100 is mushroom where it seems MurTree and STreeD are significantly faster. - No analysis for datasets with continuous variables and in particular comparison with approaches designed for such datasets like Quant-BnB [2]. - It's not clear why running times and splits that are the lowest are sometimes not in bold font (if the bold font is based purely on objective, then the only thing that should be bold is the objective) [1] Demirović, Emir, Emmanuel Hebrard, and Louis Jean. "Blossom: an anytime algorithm for computing optimal decision trees." International Conference on Machine Learning. PMLR, 2023. [2] Mazumder, Rahul, Xiang Meng, and Haoyue Wang. "Quant-BnB: A scalable branch-and-bound method for optimal decision trees with continuous features." International Conference on Machine Learning. PMLR, 2022. Theoretical Claims: I did not carefully check the correctness of the proofs in the supplementary material. Experimental Designs Or Analyses: See relevant points under "methods and evaluation" above. Supplementary Material: I did not thoroughly check the supplementary material, and have only reviewed the Appendix D (implementation details). Relation To Broader Scientific Literature: Overall the paper includes a reasonable review of previous work, with some notable works missing (examples provided above). 
It provides a new approach for optimal decision trees which is an active research area with significant interest in recent years. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Novel approach for optimal decision trees based on AND/OR graphs and AO*-style algorithm. - Experiments show significant gains in performance compared to the baselines. - Theoretical characterization of complexity is provided Other weaknesses: - No discussion of tie-breaking in search (e.g., if multiple actions have similar value in Eq. 15) and its impact on performance (also relevant for steps like "choose one of them arbitrarily", p.6). - It would be useful to provide a brief description of the base AO* algorithm. Other Comments Or Suggestions: N/A Questions For Authors: See above for my concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your review, please find below our response: ## Interpretability of multi-way splits - Thank you for raising this point. The interpretability of DTs is due to their simple decision rules; it is not specific to binary DTs. On the other hand, we recognise that DTs where each node has a large number of children can become less interpretable than binary DTs. We alleviate this issue by penalising the number of splits, which leads to DTs with a small number of splits and hence more interpretability. - In Appendix E, we directly compare ordinal encoding with binary encoding in terms of interpretability, especially when we introduce the notion of collapse. We argue in these examples that DTs that stem from ordinal encoding can be more interpretable than binary encoded DTs. This is especially the case when comparing Figure 7 and Figure 8; and also Figure 12 and Figures 9 and 10. - In addition, Branches is not restricted to ordinal encoding; it can be applied to binary encoding in similar fashion to the state of the art. In which case, if we have a good binary encoding that we think yields interpretable DTs, we can choose to use Branches with it. The benefits of Branches compared to the other algorithms are still satisfied in this case. ## Methods And Evaluation Criteria: - Yes, Table 2 reports the training accuracy with no train/test split. The objective is to find the most accurate DT with the least complexity (the most sparse DT). Quoting (Lin et al., 2020, Section 5) [2]: "Learning theory provides guarantees that training and test accuracy are close for sparse trees". However, we recognise that a direct comparison of test performance would be insightful as well. In Appendix H.2, we compare Branches with CART and DL8.5 in terms of Pareto fronts (accuracy vs number of splits) within a 10-fold cross-validation. 
The reason we restrict this comparison to Branches, CART and DL8.5 is to compare different types of DT construction algorithms (Branches seeks optimal sparse DTs, CART seeks DTs greedily, and DL8.5 seeks optimal DTs subject to a hard constraint on depth); comparing Branches with GOSDT and STreeD in this context would yield the same solutions when they terminate. - We compared with CART in Appendix H.2. - Blossom does not solve the problem of sparsity that Branches, GOSDT and STreeD solve. It rather seeks optimal DTs subject to a hard constraint on depth in a similar fashion to DL8.5. Nevertheless, we included an illustrative comparison with these types of DTs in Appendix H.2, where we chose the popular DL8.5 algorithm. We note that DL8.5 is anytime as well. - Indeed Branches struggles with highly dimensional data such as mushroom, but we showed that its handling of ordinal encoding alleviates this issue significantly, with a fast termination in only $0.15s$ for mushroom-o. Moreover, in Appendix H.4, we investigated the reason STreeD performs exceptionally well on mushroom; we found that it is mainly due to the depth 2 solver, a technique introduced in Demirovic et al. 2022 [1]. This leads us to believe that a future incorporation of the depth 2 solver in Branches could further improve its current performance. - Branches handles categorical features. Any type of discretisation preprocessing can be applied to numerical features before feeding the dataset to Branches. This is similar to other algorithms such as GOSDT, STreeD and MurTree, with the additional benefit that Branches handles multiway splits and thus does not necessitate binary preprocessing. We did not compare with Quant-BnB because it is specifically tailored to continuous features, and it does not solve the sparsity problem. Rather, Quant-BnB seeks optimal DTs subject to a hard constraint on depth that is either 2 or 3. Branches and the algorithms we compare with can find optimal sparse DTs of higher depths. 
- We make text bold based on the objective $\mathcal{H}_\lambda$ first. Then, among the methods yielding the highest objective, we compare accuracy, number of splits and runtime, with the corresponding bold text.

## Other weaknesses:
- Thank you very much, this is a very interesting point. In Appendix D.1, page 14, we provide a tie-break strategy: "There is an additional benefit to storing value_complete. When there are multiple split actions that maximise value, then we prioritise the one maximising value_complete." We will update the main paper to refer to this tie-break strategy. Thank you for this recommendation.
- For space concerns, we refer the reader to (Nilsson, 2014, Section 3.2) for the base AO*.

[1] Demirovic, E., Lukina, A., Hebrard, E., Chan, J., Bailey, J., Leckie, C., Ramamohanarao, K., and Stuckey, P. J. MurTree: Optimal decision trees via dynamic programming and search. Journal of Machine Learning Research, 23(26):1–47, 2022.
[2] Lin, J., Zhong, C., Hu, D., Rudin, C., and Seltzer, M. Generalized and scalable optimal sparse decision trees. In International Conference on Machine Learning, pp. 6150–6160. PMLR, 2020.
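The tie-break quoted above can be sketched as follows. This is an illustrative reconstruction, not the actual Branches code; the candidate actions and their numbers are invented:

```python
def select_action(actions):
    """Pick the split action with maximal value; among ties on value,
    prefer the one with the highest value_complete (the tie-break
    described in Appendix D.1). Python's max with a tuple key compares
    value first and falls back to value_complete only on ties."""
    return max(actions, key=lambda a: (a["value"], a["value_complete"]))

candidates = [
    {"name": "split_on_A", "value": 0.92, "value_complete": 0.88},
    {"name": "split_on_B", "value": 0.92, "value_complete": 0.90},  # wins the tie-break
    {"name": "split_on_C", "value": 0.85, "value_complete": 0.95},  # higher value_complete, lower value
]
print(select_action(candidates)["name"])  # split_on_B
```

Note that split_on_C is never chosen despite its higher value_complete: the tie-break only applies among actions that already maximise value.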
Summary: The paper considers the problem of learning an optimal decision tree for a given dataset. Specifically, the DT learning problem is formulated as a heuristic search problem over an AND/OR graph representing the space of all possible DTs. Consequently, an efficient best-first search algorithm (aka AO* search) called Branches is developed to find an optimal DT in terms of splits. An empirical evaluation is carried out on standard machine learning datasets from the UCI repository. The results demonstrate that the proposed AO* search algorithm outperforms existing state-of-the-art approaches based on depth-first branch-and-bound search.

Claims And Evidence: The claims are supported by experimental results.

Methods And Evaluation Criteria: The evaluation makes sense.

Theoretical Claims: The theoretical claims appear to be sound.

Experimental Designs Or Analyses: The experimental evaluation is sound.

Supplementary Material: I briefly looked at the examples.

Relation To Broader Scientific Literature: The related work seems to be addressed well.

Essential References Not Discussed: The related work seems to be addressed well.

Other Strengths And Weaknesses:

Main strengths:
The paper considers an important yet quite challenging problem in machine learning and AI, namely learning optimal decision trees for given datasets. Despite its difficulty, the problem has many real-world applications, and therefore more efficient algorithms for solving it are warranted. DTs are interpretable and therefore well suited for situations where the model's decisions must be explained, such as healthcare applications. The empirical evaluation is sound and is conducted in a principled manner. The results are presented in a relatively clear manner, and therefore it is fairly easy to get the big picture and appreciate the good performance of the proposed method.

Main weakness:
I found the quality of the presentation quite poor.
The presentation of the method is quite dense, and it is not easy to follow the details. I think Sections 3.1 and 3.2 need a good running example that would illustrate the technical details described, such as the branches and sub-DTs.

Section 3.2 and especially Section 4 are not easy to follow because the presentation mixes concepts common to the RL literature, such as policies, actions and value functions, with concepts common to the heuristic search community, such as search nodes, node expansion, node value updates, etc. Since the main contribution of the paper is a heuristic best-first search, I suggest adopting a description closer to the search community. Specifically, I think it is important to describe clearly the search space in terms of OR nodes and AND nodes, as well as the values associated with the nodes, what these values represent and the way they are computed during search. As far as I can see, an OR node maximises the values of its children, while an AND node combines the values of its children by summation. The connector representation from Fig. 1 is not very common and therefore is not easy to digest. Instead, I would represent the OR and AND nodes explicitly. Also, it is important to articulate clearly what the solution graph represents and perhaps illustrate it with an example.

The observation that AO*-like algorithms are more efficient than depth-first branch-and-bound algorithms (of course, at the expense of using additional memory) is well known in the heuristic search community. Therefore, the experimental results are not very surprising.

Other Comments Or Suggestions: See previous section.

Questions For Authors:
1. Regarding the heuristics used, is the proposed purification bound heuristic admissible?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Thank you very much for your reviews, please find our response below:

## Presentation:
Thank you for this feedback. We have included Figure 2 in the Appendix for the purpose of illustrating the notions of branches and sub-DTs. Do you have any recommendation that would improve this figure's quality? Thank you.

## Mixed concepts from the RL and heuristic search communities
RL and heuristic search share many common notions and terminology, and we do not think that notions like policies, actions and values are exclusive to RL. In fact, several heuristic search papers employ these concepts, such as:
- "LAO*: A heuristic search algorithm that finds solutions with loops".
- "Learning Depth-First Search: A Unified Approach to Heuristic Search in Deterministic and Non-Deterministic Settings, and its application to MDPs".

The RL community also employs many concepts that are traditionally linked to heuristic search. The notions of node expansion, update, etc. are extensively employed in the context of Monte Carlo Tree Search, e.g. "Browne, C. B., Powley, E., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., and Colton, S. (2012). A survey of Monte Carlo Tree Search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43". These unified concepts are very helpful for us in deriving the proofs of our theoretical results. If we failed to define clearly some of these notions, we kindly ask if you could refer us to the ones in question so that we can incorporate the necessary modifications. Thank you very much.

## AND/OR Search Graph
In Section 3.3, we state that we follow the hypergraph convention in (Nilsson, 2014, Section 3.1). The reason we chose this convention over AND/OR nodes is to make the graph illustration more compact. With AND/OR nodes, Figure 1 would not be able to fit in the paper.
We note that both conventions are equivalent: with AND/OR nodes, OR nodes would represent states (branches) and AND nodes would represent actions (split actions and the terminal action). Furthermore, the search space of branches is represented in several papers with the hypergraph convention (albeit without specifying a link to AND/OR search), e.g.:
- "Aglin, G., Nijssen, S., and Schaus, P. (2020). Learning optimal decision trees using caching branch-and-bound search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3146–3153." Figure 2.
- "Nijssen, S. and Fromont, E. (2007). Mining optimal decision trees from itemset lattices. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 530–539." Figure 2.
- "Nijssen, S. and Fromont, E. (2010). Optimal constraint-based decision tree induction from itemset lattices. Data Mining and Knowledge Discovery, 21:9–51." Figure 1.

Thank you for the recommendation regarding a solution graph. A solution graph is the DT of a policy; for example, the graph constituted of the red connectors in Figure 1 is a solution graph. We can add this remark to the paper, thank you.

## Observation regarding AO* vs DFS:
Indeed, but to our knowledge, this advantage has not been explored for seeking optimal sparse decision trees. The contribution of our paper was to explore this and develop Branches.

## Questions
1. Yes, Proposition 4.2 proves that the purification bound is admissible, as it always overestimates the optimal value of a node (overestimation as opposed to the traditional underestimation, because our formulation is based on objective maximisation instead of cost minimisation). Moreover, Theorem 4.3 proves the optimality of Branches upon termination.

---

Rebuttal Comment 1.1: Thanks for your response. It definitely clarified my concerns.
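As a toy illustration of the value semantics discussed in this exchange (an OR node, i.e. a branch, backs up the maximum over its connectors; an AND connector, i.e. a split, sums the values of its child branches), here is a hypothetical sketch. The graph, its leaf values, and the dictionary encoding are all invented for illustration and are not the paper's implementation:

```python
def backup(node):
    """Back up values through a tiny AND/OR graph.
    An OR node (a branch) maximises over its connectors (actions);
    each connector (a split) sums the values of its child branches.
    A "leaf" entry stands in for the value of the terminal action."""
    if "leaf" in node:
        return node["leaf"]
    return max(sum(backup(child) for child in children)
               for children in node["connectors"].values())

root = {
    "connectors": {
        "split_x1": [{"leaf": 4}, {"leaf": 3}],            # connector value: 4 + 3 = 7
        "split_x2": [{"leaf": 2}, {"connectors": {         # connector value: 2 + 6 = 8
            "split_x3": [{"leaf": 5}, {"leaf": 1}],
        }}],
    }
}
print(backup(root))  # 8: the root maximises over its two connectors
```

The connectors attaining the maxima at each OR node form a solution graph, i.e. a DT of a policy, in the sense described above.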
Summary: This paper presents "BRANCHES," a novel algorithm for learning optimal decision trees (DTs) by integrating Dynamic Programming (DP) and Branch & Bound (B&B) techniques. The study addresses the trade-offs in existing methods, where some approaches provide efficient DP strategies but lack strong pruning bounds, and others excel in pruning but sacrifice DP efficiency. The authors introduce a new analytical pruning bound, called the "Purification Bound," to enhance search-space reduction. Empirical evaluations demonstrate that BRANCHES surpasses state-of-the-art methods in speed, optimality, and iteration efficiency while supporting non-binary features. Theoretical analysis confirms its computational superiority.

Claims And Evidence: See Strengths And Weaknesses.

Methods And Evaluation Criteria: See Strengths And Weaknesses.

Theoretical Claims: See Strengths And Weaknesses.

Experimental Designs Or Analyses: See Strengths And Weaknesses, and Essential References Not Discussed.

Supplementary Material: Read the code but didn't run it.

Relation To Broader Scientific Literature: Relates to new methods for optimal decision trees.

Essential References Not Discussed: Several peer works that can handle non-binary features and large datasets are missing:

[1] McTavish, H., Zhong, C., Achermann, R., Karimalis, I., Chen, J., Rudin, C., and Seltzer, M. (2022). Fast sparse decision tree optimization via reference ensembles. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, No. 9, pp. 9604–9613.
[2] Mazumder, R., Meng, X., and Wang, H. (2022). Quant-BnB: A scalable branch-and-bound method for optimal decision trees with continuous features. In International Conference on Machine Learning, pp. 15255–15277. PMLR.
[3] Hua, K., Ren, J., and Cao, Y. (2022). A scalable deterministic global optimization algorithm for training optimal decision tree. Advances in Neural Information Processing Systems, 35, 8347–8359.
The authors should cite and compare with the newest literature to validate the claimed benefits of their work.

Other Strengths And Weaknesses:

Strengths:
- The paper successfully integrates the strengths of both methodologies, providing an effective pruning strategy while maintaining computational efficiency.
- The new analytical bounding strategy significantly improves search-space reduction, leading to faster convergence.
- The authors offer rigorous theoretical proofs and extensive empirical evaluations demonstrating the superiority of BRANCHES over existing methods.

Weaknesses:
1. The method currently only supports categorical data, necessitating pre-processing for numerical datasets.
2. The Python-based implementation, while effective, lags behind C++-based state-of-the-art implementations in execution speed.
3. Although BRANCHES is inherently parallelizable, the current implementation does not exploit multithreading capabilities.

Other Comments Or Suggestions: While BRANCHES performs well, its complexity may still pose challenges for extremely high-dimensional datasets. The study does not compare BRANCHES to ensemble methods such as Random Forests, which could provide additional insights into practical performance trade-offs. The reliance on Python may hinder adoption in environments where execution speed is critical. Overall, this work presents a significant contribution to interpretable machine learning by improving decision tree optimization. However, addressing its limitations, particularly scalability, native numerical feature support, and parallelization, would further strengthen its impact. Minor clarifications on the empirical benchmarking methodology and additional comparisons with alternative optimization techniques would be beneficial.

Questions For Authors:
1. How does BRANCHES handle large-scale datasets compared to existing methods in terms of memory usage and computational time?
2. Would extending the method to numerical features via native handling (rather than discretization) significantly impact its efficiency?
3. How does BRANCHES perform when tested on real-world applications beyond benchmark datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Thank you very much for your reviews, please find our response below.

## Missing Literature:
- (McTavish et al., 2022): We cite these authors; in fact, we use their implementation of GOSDT, as mentioned in Appendix H. We also directly quote them in Appendix H.4. However, we only compared with GOSDT and not with the additional guided guesses that the authors introduced. The reason is that this loses GOSDT's optimality guarantee (with respect to $\mathcal{H}_\lambda$) upon termination, and our main objective in this paper is to compare methods satisfying this optimality guarantee, hence the choice of STreeD, MurTree and GOSDT. Furthermore, the guesses strategy is not exclusive to GOSDT; it can be incorporated within Branches' search strategy as well, in a similar fashion to how the authors incorporated it in GOSDT and DL8.5. For this reason, it makes sense to compare the base algorithms and to consider an incorporation of the guesses strategy in Branches in the future.
- (Mazumder et al., 2022) is specifically tailored to continuous features. Moreover, the main objective of this work, which is (1), does not incorporate sparsity concerns in terms of actively seeking to minimise the complexity of the solution while simultaneously maximising its accuracy. Quant-BnB rather optimises its objective subject to a hard constraint on depth of either $2$ or $3$. This is different from our work, which aligns more with literature that jointly optimises accuracy and sparsity via the objective $\mathcal{H}_\lambda$.
- (Hua et al., 2022) is also specifically tailored to continuous features and large-scale applications, as demonstrated by the costly experiments with 1000 cores and two-hour runtimes. Our experiments, on the other hand, are less costly; we run them on a personal machine (2.6 GHz 6-Core Intel Core i7), as stated in Appendix H, and for 5 minutes, which makes them easily reproducible.
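For intuition on the joint accuracy-and-sparsity objective discussed here: assuming $\mathcal{H}_\lambda$ takes the accuracy-minus-complexity-penalty form common to this literature (an assumption for illustration; the paper's exact definition should be consulted, and the numbers below are invented), a slightly less accurate but much sparser tree can score higher:

```python
def h_lambda(accuracy, n_splits, lam):
    """Illustrative stand-in for a regularised objective H_lambda:
    reward accuracy, charge a penalty lam for every split. This is an
    assumed form, not necessarily the paper's exact definition."""
    return accuracy - lam * n_splits

deep_tree    = h_lambda(accuracy=0.95, n_splits=10, lam=0.01)  # ~0.85
shallow_tree = h_lambda(accuracy=0.90, n_splits=2,  lam=0.01)  # ~0.88
print(shallow_tree > deep_tree)  # the sparser tree wins
```

This is why optimising such an objective differs from optimising accuracy under a hard depth constraint, as Quant-BnB and DL8.5 do: the penalty trades accuracy against complexity rather than capping complexity outright.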
## Weaknesses:
1. Indeed, Branches necessitates a discretisation of numerical features; this is common among many works in the literature on DT optimisation. Moreover, contrary to the state of the art, Branches is not restricted to binary encodings but can deal with ordinal encodings as well.
2, 3. Indeed, we have mentioned these limitations in Section 6 as avenues for future work.

## Comparison with Random Forest:
Ensemble methods such as Random Forest focus on performance in terms of accuracy (for example) but forgo interpretability. The objective of this paper is to jointly optimise for both concerns, in a similar fashion to papers from this literature such as:
- Demirovic, E., Lukina, A., Hebrard, E., Chan, J., Bailey, J., Leckie, C., Ramamohanarao, K., and Stuckey, P. J. MurTree: Optimal decision trees via dynamic programming and search. Journal of Machine Learning Research, 23(26):1–47, 2022.
- Hu, X., Rudin, C., and Seltzer, M. Optimal sparse decision trees. Advances in Neural Information Processing Systems, 32, 2019.
- Lin, J., Zhong, C., Hu, D., Rudin, C., and Seltzer, M. Generalized and scalable optimal sparse decision trees. In International Conference on Machine Learning, pp. 6150–6160. PMLR, 2020.

For this reason, works from this literature generally do not compare with ensemble methods.

## Questions:
1. Being a BFS (best-first search) method, Branches is more memory-consuming than DFS methods such as MurTree and STreeD. On the other hand, its informed BFS strategy allows it to achieve optimality faster, thus alleviating the issue. This is a general trade-off between BFS and DFS, and it is thus valuable to devise a BFS strategy that terminates optimally quickly before consuming too much memory. This is what we achieved with Branches; Theorem 4.4 and Table 1 analyse its time complexity (which is directly linked to its memory complexity, as the search graph grows with each iteration) and show its superiority compared to the literature.
Furthermore, Table 2 shows that the number of iterations to termination of Branches is significantly smaller than that of GOSDT (a BFS method as well), which allows it to terminate in smaller runtimes even though Branches is currently implemented in Python and GOSDT in C++. The memory concerns of BFS and DFS are discussed in Section 2.
2. Perhaps, but we have to keep in mind the optimality concerns with respect to sparsity as well. It is unclear whether a native handling of numerical features would keep this guarantee.
3. As shown with the mushroom dataset, high-dimensional data can be hard to deal with, as the degree of each node in the search graph becomes too large. On the other hand, Branches seems to work well on large datasets in terms of rows.
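On the ordinal-encoding point raised above (cf. the mushroom vs mushroom-o discussion): here is a hypothetical sketch, not taken from the Branches implementation, of why a k-valued categorical feature expands into k columns under binary (one-hot) preprocessing but stays a single column under ordinal encoding, which is what lets multiway splits skip binarisation. The feature name and values are invented:

```python
def one_hot(column):
    """Binary (one-hot) preprocessing: k categories become k 0/1 columns."""
    levels = sorted(set(column))
    return [[int(v == level) for level in levels] for v in column]

def ordinal(column):
    """Ordinal encoding: the same column stays a single integer column."""
    index = {level: i for i, level in enumerate(sorted(set(column)))}
    return [index[v] for v in column]

cap_shape = ["bell", "conical", "flat", "bell"]  # invented mushroom-style feature
print(one_hot(cap_shape))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(ordinal(cap_shape))  # [0, 1, 2, 0]
```

With many such features, the one-hot blow-up multiplies the number of candidate binary splits, whereas a multiway split on the ordinal column considers the feature once.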