Dataset columns (types and value statistics):

| Column | Type | Lengths / distinct values |
|---|---|---|
| title | string | lengths 15–163 |
| paper_decision | string (class) | 4 values |
| review_1 | string | lengths 853–32.6k |
| rebuttals_1 | string | lengths 0–15.1k |
| review_2 | string | lengths 1.03k–35.6k |
| rebuttals_2 | string | lengths 0–15.1k |
| review_3 | string | lengths 807–27.4k |
| rebuttals_3 | string | lengths 0–15k |
| review_4 | string | lengths 780–22.2k |
| rebuttals_4 | string | lengths 0–15.1k |
| review_5 | string (class) | 171 values |
| rebuttals_5 | string (class) | 166 values |
| review_6 | string (class) | 25 values |
| rebuttals_6 | string (class) | 24 values |
| review_7 | string (class) | 4 values |
| rebuttals_7 | string (class) | 4 values |
SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability
Accept (poster)
Summary: This paper introduces SAEBench, a new benchmark to evaluate sparse autoencoders (SAEs). The authors point out that current SAE research may over-index on the sparsity-reconstruction tradeoff, a metric that may not be a good proxy in practice. SAEBench comprises seven metrics spanning interpretability, feature separation, and practical uses like unlearning. The authors tested several SAEs with different designs. Their key finding is that improvements on common metrics don't always lead to better practical performance. Notably, Matryoshka SAEs perform better on feature separation metrics despite scoring worse on traditional metrics - and this advantage grows with SAE scale. The authors provide an interactive website (saebench.xyz) for exploring relationships between metrics. Claims And Evidence: - The main claims are well-supported by extensive testing of several SAE methods. - The authors demonstrate that standard metrics don't always predict practical performance. - The finding about Matryoshka SAEs excelling at feature separation is interesting, with clear evidence that this advantage increases with scale. Methods And Evaluation Criteria: - The benchmark is well-designed with five key goals: diversity, extensibility, speed, automation, and reproducibility. - The metrics cover important areas: concept detection, interpretability, reconstruction, and feature separation. - The unlearning and spurious correlation metrics are valuable for testing practical control over model behavior. There are some limitations though: - **The "feature absorption" metric lacks a clear definition and motivation.** The paper doesn't provide sufficient detail on how this metric functions or why it specifically focuses on first-letter classification tasks. Without a clear mathematical definition and justification, it's difficult to determine whether this metric genuinely captures meaningful properties of SAEs or whether it's simply an ad-hoc measurement. 
The arbitrary nature of focusing on first-letter classification raises questions about whether this metric generalizes to other types of concepts that SAEs might learn. The current draft needs to (a) define how the feature absorption metric works and (b) motivate the focus on first-letter classification tasks. - **Several metrics are described vaguely without precise definitions.** The paper presents multiple evaluation metrics without providing rigorous mathematical formulations or clear procedural descriptions. This vagueness makes it difficult for readers to fully understand how these metrics are calculated and what exactly they measure. For better reproducibility and comparison, metrics should be defined with sufficient precision that another researcher could independently implement and verify them. - **The interpretability metric may be measuring the underlying model's properties rather than the SAE itself.** This is a fundamental conceptual issue with the benchmark's approach to measuring interpretability. If an underlying language model learns features that are inherently uninterpretable to humans, then a faithful SAE should accurately reflect this "uninterpretability". The automated interpretability metric appears to penalize SAEs that faithfully capture uninterpretable features, which creates a misaligned incentive. Similarly, for the concept detection metric, if a model does not learn a given concept (e.g., sentiment), then it is unclear why the SAE should encode it either. The list of concepts used for evaluation should be filtered based on whether the model's internal activations encode them or not. 
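For concreteness, the sparsity-fidelity proxy metrics under discussion can be pinned down in a few lines. This is a minimal sketch using an untrained toy "SAE" with random weights; all shapes and variable names are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: model activations and a random, untrained "SAE".
d_model, d_sae, n_tokens = 16, 64, 1000
acts = rng.normal(size=(n_tokens, d_model))
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)

# Encode with a ReLU nonlinearity, decode linearly.
latents = np.maximum(acts @ W_enc, 0.0)
recon = latents @ W_dec

# L0 sparsity: mean number of active latents per token.
l0 = (latents > 0).sum(axis=1).mean()

# Fidelity proxy: fraction of activation variance explained by the reconstruction
# (an untrained SAE will score poorly here, as expected).
fve = 1.0 - ((acts - recon) ** 2).sum() / ((acts - acts.mean(axis=0)) ** 2).sum()

print(f"mean L0 = {l0:.1f}, fraction of variance explained = {fve:.3f}")
```

Defining each benchmark metric at this level of precision would let another researcher independently implement and verify it.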
Theoretical Claims: N/A Experimental Designs Or Analyses: - Strengths: - Comprehensive testing across many SAE architectures and training methods - Scaling analysis showing how different designs behave as they grow - Interesting results on optimal sparsity of SAEs and how it impacts proposed metrics - Weaknesses: - **The experiments would benefit from non-SAE baselines for editing and spurious correlation removal.** The paper lacks comparisons with established baselines like those in recent works (e.g., [https://arxiv.org/abs/2410.12949](https://arxiv.org/abs/2410.12949), [https://arxiv.org/abs/2204.02937](https://arxiv.org/abs/2204.02937), [https://arxiv.org/abs/2404.11534](https://arxiv.org/abs/2404.11534)). Without these comparisons, it's difficult to contextualize how well SAEs perform in an absolute sense. A model editing approach might be much simpler and more effective than using SAEs for certain tasks, but without these baselines, readers cannot make this determination. Including these comparisons would provide crucial context about the difficulty of the tasks and whether SAEs are the most effective approach. - **The benchmark includes limited tasks demonstrating SAEs being used for practical model editing.** While the paper emphasizes the importance of practical utility, the actual benchmark contains relatively few tasks that demonstrate SAEs being used to solve real-world problems. More diverse editing tasks would better showcase the practical applications of SAEs in model control and modification scenarios. For instance, adding tasks related to factual knowledge editing or toxicity mitigation, or evaluating SAEs on tasks from MUSE (https://arxiv.org/abs/2407.06460), would make the benchmark evaluation more robust to task specifics. 
- **The use of AI judges for interpretability evaluation introduces potential biases without adequate controls.** Recent work ([https://arxiv.org/abs/2502.04313](https://arxiv.org/abs/2502.04313)) shows that language models tend to favor outputs from models similar to themselves, raising concerns about using them as evaluators. The benchmark might benefit from accounting for this bias in the automated interpretability metric. Without proper controls or human verification, the interpretability scores may reflect LLM preferences rather than true human interpretability. The paper should discuss these limitations and ideally validate the LLM judgments against human evaluations on a subset of the data. Supplementary Material: Skimmed it. Relation To Broader Scientific Literature: - The work addresses a timely need in neural network interpretability / SAE research. - **Aligns with recent work (https://arxiv.org/abs/2501.17727, https://arxiv.org/abs/2501.16615) that sanity-checks the usefulness of SAEs.** For example, recent work examining SAEs applied to random models highlights concerns about whether these methods actually capture meaningful representations or merely create post-hoc explanations that may not reflect true model reasoning processes. This benchmark helps address these concerns by evaluating SAEs across multiple dimensions of practical utility. - **Comparison with AxBench benchmark (https://arxiv.org/abs/2501.17148)**. A discussion on differences and similarities with a concurrent benchmark on LM steering is needed. Essential References Not Discussed: No, but see sections above re: description of evaluation metrics introduced in previous papers. Other Strengths And Weaknesses: Please check my review above Other Comments Or Suggestions: None Questions For Authors: Please check my review above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive review. We address each of your points below and will incorporate corresponding clarifications and improvements in the camera-ready version, if accepted. **Metric Definitions** We appreciate your emphasis on clear metric definitions. We already provide detailed descriptions for all seven evaluation metrics in Appendix D, including mathematical definitions, implementation details, and hyperparameter settings. We will make the cross references to the detailed descriptions more prominent in the main text. **Feature Absorption Metric** Our approach builds directly on [1], who introduced the feature absorption phenomenon and proposed first-letter classification as a testbed. We retain this setting because (1) it provides dense ground truth labels and (2) it targets a nontrivial behavior: identifying a token’s first letter is difficult for language models and emerges with scale. In manual inspections, our absorption score correlates with qualitative analysis of gender related features. Future work could expand our absorption dataset using hierarchical relations from WordNet, such as scientific subfields. **Faithfulness vs Interpretability Concern** We agree this is a real concern. A faithful SAE may inherit uninterpretability if the model’s features are themselves uninterpretable. However, interpretability is desirable in its own right for applications such as debugging, auditing, or analysis. SAEBench does not claim interpretability equals faithfulness, but includes it as a distinct evaluation axis. Due to this tension, our benchmark emphasizes nuanced evaluation across multiple axes rather than optimizing a single objective. **Controlling for Underlying Model Confounds** SAEBench’s primary objective is to systematically evaluate SAE architectures and training methods within a fixed model and layer, rather than comparing across models. Some metrics inherently reflect the underlying model’s internals. 
To rigorously control this, we trained two comprehensive sweeps: one on layer 12 of Gemma-2-2B and another on layer 8 of Pythia-160M. This setup ensures fair SAE-to-SAE comparisons. **Concept Detection and Model-Encoded Features** We confirm that the model does encode the benchmarked concepts: linear probes on raw model activations achieve >90% accuracy on all sparse probing tasks in Gemma-2-2B. We’ll add this result in the paper to justify our concept selection. **Practical Editing Tasks** We agree that editing-focused evaluations are valuable. Since submission, we added the RAVEL evaluation, which tests whether SAEs can edit factual attributes without side effects. Our Unlearning and Spurious Correlation Removal metrics also assess model control. SAEBench is modular and extensible, and we welcome future community additions (e.g., from MUSE). **Lack of Baselines** We appreciate this suggestion and will incorporate the following baselines into our manuscript: - For RAVEL, we implemented the MDAS baseline from [3] and found it outperformed SAE-based editing. - For Unlearning, we build on [2], who benchmarked SAE methods against RMU and found RMU performed comparably or better depending on evaluation details. For Spurious Correlation Removal, we adapt the SHIFT method from Sparse Feature Circuits, where SAEs outperformed other baselines under realistic constraints—i.e., using human-guided feature selection without access to disambiguating data. However, human or LLM-based feature selection introduces variability and complexity in a benchmark. To remove this variable, we instead use disambiguating data only to select which latents to ablate, and found this correlates well with LLM-based selection. Giving full access to this data would trivialize the task for simple baselines like linear probes, making fair comparison difficult. 
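The probing sanity check described above (verifying that benchmarked concepts are linearly encoded in raw model activations) can be sketched as follows. This is a toy illustration on synthetic data; a simple class-mean linear probe stands in for the probes we use, and all shapes and names are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for residual-stream activations in which a binary
# concept (e.g., sentiment) is linearly encoded along one hidden direction.
d_model, n = 32, 2000
direction = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d_model)) + np.outer(2 * labels - 1, direction)

train, test = slice(0, 1500), slice(1500, n)

# Minimal linear probe: project onto the difference of class means and
# threshold at the midpoint between the two class centroids.
mu1 = acts[train][labels[train] == 1].mean(axis=0)
mu0 = acts[train][labels[train] == 0].mean(axis=0)
w, b = mu1 - mu0, (mu1 + mu0) / 2
preds = ((acts[test] - b) @ w > 0).astype(int)
acc = (preds == labels[test]).mean()
print(f"probe accuracy: {acc:.2f}")  # near 1.0 when the concept is linearly encoded
```

High probe accuracy on raw activations justifies including a concept in the evaluation set, since the model demonstrably encodes it.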
SAEBench is designed to provide a more nuanced understanding of SAE behavior across diverse settings, even settings where SAEs are not the best solution. We will clarify this in the paper. **AI Judge Bias Concerns** We follow the protocol from [4], who compared human and LLM judge accuracy and found only small differences (Appendix A.6.7). We’ll clarify this in the paper. **AxBench** SAEBench is designed for benchmarking SAE variants across concept detection, interpretability, disentanglement, and reconstruction. In contrast, AxBench specifically focuses on downstream steering. We aim to establish SAEBench as the comprehensive platform for SAE evaluations and AxBench-style steering tasks would be a valuable addition to our suite. We see SAEBench and AxBench as complementary efforts. Thank you again for the feedback! Do these changes address your concerns with the paper? If not, what further clarification or modifications could we make to improve your score? Citations [1] https://arxiv.org/abs/2409.14507 [2] https://arxiv.org/abs/2410.19278 [3] https://arxiv.org/abs/2402.17700 [4] https://arxiv.org/abs/2410.13928v2 --- Rebuttal Comment 1.1: Comment: Thanks for the thorough rebuttal, I have increased my score.
Summary: The paper introduces SAEBench, a benchmark for evaluating SAEs across various design choices. It proposes a unified evaluation suite that uses diverse metrics—concept detection, automated interpretability, reconstruction fidelity, and feature disentanglement—to assess SAE performance. The authors train and benchmark over 200 SAEs across seven architectures and various sparsity levels. This paper reveals that gains on conventional proxy metrics do not necessarily lead to better practical interpretability, with hierarchical architectures like Matryoshka excelling in feature disentanglement despite lower sparsity-fidelity scores. They also find that optimal sparsity levels vary by task, with moderate sparsity often striking the best balance between reconstruction and interpretability. Claims And Evidence: Most of the paper's claims are supported by comprehensive experimental results and detailed comparisons across multiple evaluation metrics. - The claim that traditional sparsity‐fidelity metrics do not reliably predict practical interpretability is well backed by evidence showing that architectures like Matryoshka excel in feature disentanglement despite lower proxy scores, and the scaling experiments clearly illustrate nuanced trade-offs in performance. However, some claims—such as those regarding the unlearning evaluation—are less convincingly supported due to limitations in ground truth data and inherent noise in certain automated metrics, which may warrant further investigation. Methods And Evaluation Criteria: The proposed evaluation metrics for SAEs make sense to me. They address both traditional metrics (like the sparsity-fidelity trade-off) and novel criteria (such as feature disentanglement through spurious correlation removal and targeted probe perturbation). Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: Yes. They look valid and sound to me. Supplementary Material: No. 
Reviewers are not required to review the supplementary materials. Relation To Broader Scientific Literature: There are many proposed SAE variants out there, and this paper tries to have a unified story on which is better. Essential References Not Discussed: None, as far as I am aware of. Other Strengths And Weaknesses: - The writing style is informal and, at times, reads more like a blog post than a rigorously written scientific paper, which undermines its academic tone. - The paper's contribution is highly niche, focusing narrowly on a specific benchmark for sparse autoencoders; this raises questions about whether ICML is the right venue since the work appears more like a course project or blog post rather than addressing a significant research question. - While the paper introduces novel evaluation metrics, its heavy reliance on proxy metrics leaves some doubt about the real-world impact and generalizability of the findings, potentially limiting its broader relevance. And it fails to provide sound explanations of every finding it has. Other Comments Or Suggestions: - Reformatting some of the figures and tables for clarity could improve the readability and overall presentation. For example, noting that the L0 sparsity is measured rather than pre-determined would help readers understand the figures much better. - While the discussion of limitations is valuable, expanding it to include actionable future directions and a more structured comparison with related work could strengthen the paper further. Questions For Authors: 1. Is SAEBench generalizable to other language modeling architectures, for example, RWKV and Mamba? 2. What is the central research question your work aims to address, and how do you see SAEBench advancing our fundamental understanding of model interpretability, rather than serving as a narrowly focused blog post? A clearer articulation of the research question and its broader implications would help contextualize the work’s contributions and impact. 
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer 55S8 for their thoughtful and detailed review. We’re glad you found the evaluation criteria sound and appreciated the comprehensive comparisons across sparse autoencoder (SAE) variants. First, we highlight the addition of RAVEL, a metric for feature disentanglement and model editing, to our evaluation suite and invite you to compare SAE architectures in our existing interactive result browser at www.saebench.xyz We appreciate your suggestions regarding unlearning evaluation, writing tone, and figure clarity, and respond to these below. **Q: What is the central research question your work aims to address?** Our central question is: How can we rigorously evaluate sparse autoencoders for interpreting large language models in a way that reflects their real-world utility? While over a dozen new SAE variants have recently been proposed, evaluation practices have lagged behind. Most papers still rely heavily on convenient proxy metrics like the sparsity-fidelity trade-off, which, as we show, fails to reliably predict downstream interpretability or performance. SAEBench addresses this gap with a unified, extensible evaluation suite that captures both diagnostic and task-grounded metrics. Beyond benchmarking, our evaluations have yielded new insights, such as the strong feature disentanglement properties of Matryoshka SAEs despite their weak proxy scores. This illustrates how rigorous, multifaceted evaluation can shift our understanding of what makes an SAE effective. While SAEBench focuses on a specific methodology (SAEs), we respectfully disagree that the contribution is niche. 
SAEs have rapidly become one of the most popular tools in mechanistic interpretability research, with: - multiple oral presentations at ICLR 2025 [1, 2, 3] - active research across both academia and industry, including interpretability teams at OpenAI, DeepMind, and Anthropic - interpretability startups, such as Goodfire, who recently collaborated with ARC institute to apply SAEs to protein generation models [4] Their use spans concept discovery, editing, unlearning, and circuit analysis. Providing a standardized and extensible benchmark for evaluating this fast-growing class of methods fills a timely and important gap in the literature. **Q: Is SAEBench generalizable to other language modeling architectures, for example, RWKV and Mamba?** Yes. SAEBench evaluates the quality of sparse autoencoders applied to model activations, and is in principle agnostic to model architecture. We constructed the codebase with extensibility in mind – as long as internal activations can be extracted, SAEs and our evaluations can be applied. While our current codebase is tailored to transformer models, supporting other architectures (e.g., RWKV or Mamba) would primarily require adapting the activation-extraction interface, not major architectural changes. **Q: Concerns around proxy metrics and real-world relevance** To clarify, a key contribution of our work is moving beyond traditional proxy metrics (such as reconstruction loss and L0 sparsity) to evaluations that more directly reflect real-world interpretability and application. While prior work has relied heavily on these proxies, we show they often fail to correlate with meaningful downstream outcomes. For example, Matryoshka SAEs underperform on proxy metrics yet outperform on multiple real-world metrics like disentanglement and absorption. To capture more meaningful desiderata, our benchmark incorporates a diverse set of evaluations targeting concept disentanglement, interpretability, editability, and knowledge removal. 
To further strengthen this, we have added the RAVEL evaluation from [7], which tests whether interventions on SAE latents can reliably modify model behavior in controlled and interpretable ways. **Q: Concerns around unlearning evaluation and noise in automated metrics** We agree that unlearning is a challenging setting to evaluate and that some automated metrics are noisy. We explicitly acknowledge these limitations in Section 6. Our current unlearning setup uses the strongest available dataset for Gemma-2-2B (WMDP-bio). Extending this evaluation to larger models or new domains is a promising future direction. **Q: Informal writing style and figure clarity** We appreciate this feedback and will revise the paper to adopt a more formal academic tone. We will also reformat figures and improve captions for clarity. Thank you again for your constructive review. Do these changes address your concerns with the paper? If not, what further clarification or modifications could we make to improve your score? Citations [1] https://openreview.net/forum?id=tcsZt9ZNKD [2] https://openreview.net/forum?id=I4e82CIDxv [3] https://openreview.net/forum?id=WCRQFlji2q [4] https://www.goodfire.ai/blog/interpreting-evo-2 [5] https://arxiv.org/abs/2410.22366 [6] https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1.full [7] https://arxiv.org/abs/2402.17700v2
Summary: The paper introduces SAEBench, a new benchmarking framework for sparse autoencoders (SAEs) in language model interpretability. The authors identify limitations in existing evaluation approaches, which often rely solely on unsupervised metrics like the sparsity-fidelity tradeoff with limited practical relevance. SAEBench addresses this by measuring SAE performance across seven diverse metrics spanning interpretability, feature disentanglement, and downstream applications like unlearning. The authors evaluate over 200 SAEs across seven architectures (ReLU, TopK, BatchTopK, Gated, JumpReLU, P-annealing, and Matryoshka) with varying widths (4k, 16k, and 65k latents) and sparsities. Key findings include: (1) gains on proxy metrics (e.g., sparsity and fidelity) don't reliably translate to better practical performance, (2) Matryoshka SAEs substantially outperform other architectures on feature disentanglement metrics despite underperforming on traditional proxy metrics, and (3) this advantage grows with SAE scale. Claims And Evidence: The claims in the paper are generally well-supported by empirical evidence. This work: - demonstrates that the sparsity-fidelity trade-off doesn't reliably predict performance on downstream tasks. - shows that Matryoshka SAEs excel at feature disentanglement despite underperforming on traditional metrics. - provides evidence for scaling behaviors across dictionary sizes (4k to 65k), with inverse scaling effects for most architectures on certain metrics. - supports the claim that no single sparsity level is optimal for all tasks with detailed experiments across L0 values from 20 to 1000. Methods And Evaluation Criteria: The proposed evaluation methods and benchmark criteria are thoughtfully designed and appropriate for the problem. The authors: - develop a diverse set of metrics covering four fundamental capabilities: concept detection, interpretability, reconstruction, and feature disentanglement. 
- include both established metrics from prior work and novel metrics for comprehensive evaluation. - ensure computational tractability (65 minutes per SAE) while maintaining reproducibility. - present a standardized framework that can be easily extended with new evaluation methods. - Their choice to evaluate across a much wider range of sparsities (L0 from 20 to 1000) than typically studied (20 to 200) provides valuable insights. Theoretical Claims: The paper doesn't make substantial theoretical claims Experimental Designs Or Analyses: The experimental designs and analyses appear sound and carefully conducted: - The authors control for variables by using identical data, training procedures, and hyperparameters across architectures. - They appropriately sweep across different intervention sizes to ensure robustness of results. SAE training dynamics are analyzed over token counts from 0 to 500M tokens, providing insights into when performance plateaus. - Performance differences are analyzed across both model scales (Pythia-160M vs. Gemma-2-2B) and SAE scales (4k to 65k latents). - Appropriate ablation studies and control conditions are included, such as comparing against PCA baselines. - The analyses consider limitations of the metrics, such as unexpected TPP performance at higher L0 values, and the authors discuss these findings. Supplementary Material: The supplementary material provides extensive additional results that support the main claims: - Detailed analyses of scaling behaviors across all sparsities (Figure 4) - Training dynamics (Figure 6) - Intervention set size analyses (Figures 7-9) - Results across different model scales (Pythia-160M) - Evaluation of Gemma-Scope SAEs (16k to 1M latents) Relation To Broader Scientific Literature: The paper effectively situates itself within the broader literature on sparse autoencoder evaluation. It builds on: - Sparse probing techniques (Gurnee et al. 2023, Gao et al. 
2024) - Prior work on automated interpretability using LLMs (Paulo et al. 2024) - Feature absorption phenomena (Chanin et al. 2024) - SAE-based unlearning approaches (Farrell et al. 2024) Essential References Not Discussed: The paper covers all relevant literature. Other Strengths And Weaknesses: Strengths: - The benchmark's multi-faceted approach reveals important trade-offs that would be missed with traditional metrics, providing a more nuanced understanding of SAE performance. - The discovery that Matryoshka SAEs substantially outperform other architectures on feature disentanglement metrics despite underperforming on traditional proxy metrics is an interesting finding with practical implications. - The analysis of scaling behavior across dictionary sizes reveals inverse scaling effects for most architectures on certain metrics. - 200+ SAEs to be open sourced will likely accelerate progress in the field. Weaknesses: - As acknowledged by the authors, the paper evaluates on only two model architectures (Gemma and Pythia); including more diverse architectures would improve extrapolation of the results. - The ground truth feature sets used for supervised metrics could be limited, raising questions about how well the results would generalize to other types of features. - The unlearning evaluation was constrained by model capabilities, with only one test set showing sufficient performance. Other Comments Or Suggestions: The paper would benefit from a more detailed discussion of how the metrics relate to other real-world use cases of SAEs in model interpretability. A discussion of how SAEBench could be extended to other modalities (e.g., vision models) would strengthen the paper's impact. Questions For Authors: Have you identified specific architectural components of Matryoshka SAEs that contribute to their superior feature disentanglement? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer HzWd for their thoughtful and encouraging review. We’re glad you found the benchmark design, breadth of evaluations, and insights on SAE scaling and architecture choices valuable. First, we highlight the addition of RAVEL, a metric for feature disentanglement and model editing, to our evaluation suite and invite you to compare SAE architectures in our existing interactive result browser at www.saebench.xyz Below, we address your comments and suggestions: **Q: Have you identified specific architectural components of Matryoshka SAEs that contribute to their superior feature disentanglement?** Yes — we believe the key factor is Matryoshka’s nested loss function, which encourages learning at multiple levels of abstraction. Unlike standard SAEs that optimize a single dictionary (often leading to oversplitting of features into overly specific latents), Matryoshka SAEs optimize a sequence of nested dictionaries. This hierarchical structure likely helps preserve more coherent, disentangled features and may explain their strong performance on disentanglement metrics. **Q: How do the metrics relate to real-world use cases of SAEs?** SAEBench includes both real-world-motivated tasks and diagnostic evaluations. For example, our Spurious Correlation Removal metric builds on the SHIFT method from Sparse Feature Circuits — a compelling real-world case where researchers used SAEs for white-box, human-guided model editing that outperformed other baseline methods. Our Unlearning evaluation also reflects a real downstream goal, though we find that SAEs are not yet the strongest-performing method for this task. As noted above, we've incorporated the RAVEL evaluation from Huang et al. [1] into SAEBench to further enhance our coverage of practical SAE capabilities—particularly for tasks involving selective editing of model knowledge. 
Findings on RAVEL support the hypothesis that Matryoshka SAEs outperform other architectures in the L0 ∈ [40, 160] range. Other metrics, such as Sparse Probing and Feature Absorption, are more diagnostic in nature, designed to measure underlying properties of SAE representations. Together, these metrics give a more holistic picture of SAE performance across the four desiderata: Concept Detection, Human Interpretability, Reconstruction, and Feature Disentanglement. We appreciate the reviewer’s suggestion and agree that this discussion of real-world applicability versus diagnostic utility should be made more explicit in the paper. We will incorporate this clarification in our final submission. **Q: Suggestions on extending to other modalities (e.g., vision models)** We appreciate this suggestion. Extending SAEBench to other modalities is an exciting direction for future work, and we purposefully standardized the SAEBench codebase to be easily extensible. In this paper, we focused on the language domain to enable systematic comparison across a broad range of SAE architectures on a shared model. However, we believe many of the evaluation principles and metrics—especially those related to disentanglement and white-box editing—could be adapted to multi-modal or vision-language models. For instance, the specificity score for SAE features in the vision model SDXL Turbo judged by a VLM [2], and a supervised sparse probing evaluation for matching SAE latents trained on protein LMs to known biological concepts from [3], would be useful additions to SAEBench. We will include these pointers for future work in the final manuscript. **On concerns around supervised metrics and model diversity** We agree with these limitations, which are already discussed in Section 5. 
Expanding the benchmark to cover more diverse models, layers, and ground-truth concepts is a natural next step, and we hope the open-sourced SAEs and codebase will enable the community to contribute further evaluations. We appreciate the reviewer’s thoughtful suggestions and are glad that SAEBench was seen as a meaningful step forward for SAE evaluation. We hope it will be a valuable resource for future research in interpretability. Citations [1] https://arxiv.org/abs/2402.17700v2 [2] https://arxiv.org/abs/2410.22366 [3] https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1.full
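The nested objective behind Matryoshka SAEs described in this rebuttal can be sketched as follows. This is a toy illustration with an untrained SAE and hypothetical prefix sizes, not the authors' implementation; each loss term reconstructs using only a prefix of the latent dictionary, so early latents must capture coarse, reusable structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes; prefix sizes are hypothetical, not the paper's settings.
d_model, d_sae, n = 16, 64, 200
prefix_sizes = [8, 16, 32, 64]  # nested dictionaries, smallest first

acts = rng.normal(size=(n, d_model))
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)

latents = np.maximum(acts @ W_enc, 0.0)

# Matryoshka-style loss: sum of reconstruction errors where each term uses
# only the first m latents of the dictionary.
loss = 0.0
for m in prefix_sizes:
    recon_m = latents[:, :m] @ W_dec[:m]
    loss += ((acts - recon_m) ** 2).mean()

print(f"nested reconstruction loss: {loss:.3f}")
```

Because small-prefix reconstructions are penalized alongside the full one, the optimizer cannot rely on splitting a concept across many late, overly specific latents, which is one plausible mechanism for the improved disentanglement scores.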
Summary: This paper introduces a benchmark framework for evaluating SAEs. SAEBench introduces a comprehensive evaluation suite with seven metrics across four capabilities: - Concept Detection: Measuring how precisely latents correspond to concepts - Interpretability: Evaluating human-understandability of latents - Reconstruction: Quantifying faithful preservation of model behavior - Feature Disentanglement: Assessing proper separation of independent concepts The authors proposed three new metrics under this framework and evaluated over 200 SAEs across seven architectures with varying dictionary sizes (4k, 16k, 65k latents) on Gemma-2-2B and Pythia-160M models. Claims And Evidence: Yes, most of the claims in the paper are properly validated through experiments, though there are some hypotheses which are hard to validate. Methods And Evaluation Criteria: Yes, the authors proposed three extra metrics to evaluate the capability of SAEs. More details of these metrics could be provided beyond the three paragraphs in the paper, since I regard them as the main contribution of this paper. Theoretical Claims: No theoretical claims have been made in this paper. Experimental Designs Or Analyses: Yes, the experiment design is quite solid. Supplementary Material: There is no supplementary material for this paper. I briefly reviewed the appendix attached in the given PDF, since it is quite long. Relation To Broader Scientific Literature: The authors proposed a benchmark which is capable of evaluating most of the existing works on SAE-based mechanistic interpretability methods. Essential References Not Discussed: No Other Strengths And Weaknesses: This work is already gaining impact in the open-source and interpretability research communities and is already used by various works on improving SAEs. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and are glad that the benchmark's goals and structure came through clearly. First, we highlight the addition of RAVEL, a metric for feature disentanglement and model editing, to our evaluation suite and invite you to compare SAE architectures in our existing interactive result browser at www.saebench.xyz. We will address your questions and suggestions below: Regarding the suggestion to provide more detail on the three novel metrics: we agree this is a central contribution of our work, and we include detailed implementation descriptions for all metrics (including the new ones) in Appendix D. We'll improve the cross-references from the main text to make this easier to find. We also agree that some of the hypotheses in the paper are inherently hard to validate — interpretability is challenging to evaluate due to the lack of ground-truth labels for model internals. This is one of the main motivations behind SAEBench: to test SAE quality across a diverse range of practical, measurable tasks that reflect different aspects of interpretability in practice. As noted above, we've incorporated the RAVEL evaluation from Huang et al. [1] into SAEBench to further enhance our coverage of practical SAE capabilities—particularly for tasks involving selective editing of model knowledge. Findings on RAVEL support the hypothesis that Matryoshka SAEs outperform other architectures in the L0 in [40, 160] range. We also want to highlight a key takeaway enabled by our benchmark: architectures like Matryoshka, which perform worse on standard sparsity-fidelity metrics, actually outperform others across several concept detection and feature disentanglement tasks. We believe this underscores the value of evaluating SAEs on a broader range of criteria. Thank you again for the feedback! Do these changes address your concerns with the paper?
If not, what further clarification or modifications could we make to improve your score? Citation [1] https://arxiv.org/abs/2402.17700v2
Policy Design for Two-sided Platforms with Participation Dynamics
Accept (poster)
Summary: This paper studies the effect of matching policies in two-sided platforms, taking the evolution of both the viewer and provider sides into consideration. The authors show that myopic matching policies are optimal only under strong assumptions and appear to be suboptimal in other cases. The authors propose a new matching policy, called "look-ahead", that appears to perform better and more stably than myopic and uniform policies in experiments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I find some steps confusing in the proof of Proposition 2. * In lines 645-646, what do you mean by $\nabla _\lambda S$ and why can you say $\lambda _{t+1} - \lambda _{eq} = (\nabla _\lambda S) (\lambda _t - \lambda _{eq})$? I can only derive $\lambda _{t+1} - \lambda _{eq} = S(\lambda _t ) - S(\lambda _{eq})$ from the information given up to there. I also note that $S(\lambda)$ is a vector instead of a real number, so one cannot directly apply the statement $S(\lambda _t ) - S(\lambda _{eq}) = (\nabla _\lambda S(\lambda^*)) (\lambda _t - \lambda _{eq})$ for some $\lambda^*$. * In lines 654-658, I'm confused about the derivation; it seems that $\{A _{1,1} \} _{k,k'} = -\mathbb{I} \{k = k'\}$, and the same for $\{A _{2,2}\}$. For $\{A _{1,2}\} _{k,l}$, it should be $\{A _{1,2}\} _{k,l} = \nabla _{\lambda _l} \bar{\lambda _k}(\sum _{l'} \pi _{k, l'} (b _{k,l'} + f _{k,l'}(\lambda _{l'})))$, and no $\eta _k$ ($\eta _l$) term should appear in this term. Experimental Designs Or Analyses: Yes. The experiment results support the main findings of this paper (myopic matching policies are suboptimal). Supplementary Material: N/A Relation To Broader Scientific Literature: The authors mention that this paper is the first to study matching policies in two-sided platforms while taking the evolution of both the viewer and provider sides into consideration. Unfortunately, I'm unfamiliar with the literature and thus unqualified to evaluate this statement.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: * Strengths: S1: The presentation is good and easy to follow. S2: The dynamics provided in Equations (4), (5), and the game-theoretic interpretation in Equation (6), are novel and interesting. * Weaknesses: W1: Many results stated in this paper are straightforward and not surprising. For example, 1) Proposition 1 is a straightforward statement of fact. 2) In Theorem 1, the existence and non-uniqueness of NE are well-established throughout game theory and not surprising. The second statement of Theorem 1 is standard in optimization theory. 3) The stability in Proposition 2 is also straightforward (sharing the same intuition as Theorem 1). 4) The derivation of Theorem 2 seems to be straightforward. The theorem actually states some intuition of "decomposition". 5) In Theorem 3, the assumption in the first statement is strong (K=1). For the second statement of Theorem 3, the result says nothing about whether $R$ is monotone decreasing over $\varepsilon$, even if $h(\varepsilon)$ can be arbitrarily close to 1. In addition, Theorem 3 only considers the interpolation between the greedy policy and the uniform policy, and ignores the high-dimensional policy space. 6) In Proposition 3, the suboptimality of the myopic policy is standard in almost every field. W2: On the RHS of line 176, there is no definition of the concept used in "$\lambda_{eq}$ is stable". W3: In the paragraph below Proposition 2, the explanation about the polarized equilibrium of the exposure-concentrated policy is hard to believe, with no theoretical or empirical evidence. W4: The proposed "look-ahead policy" is straightforward to design. Besides, no theoretical guarantees are provided for this policy. W5: In the experiments, the only baselines considered are the uniform and myopic policies, both of which are weak.
On the other hand, the only contribution the authors claim seems to be that this paper is the first to consider two-sided dynamics in this literature, and I believe it indeed is. However, the results derived in this paper (e.g., non-uniqueness and stability of Nash equilibria, suboptimality of the myopic policy) are likely to hold and be discovered in many other settings. So it seems that the only contribution is to extend these straightforward results to a new setting, and such a contribution, I think, does not warrant an ICML acceptance. Other Comments Or Suggestions: C1: In line 194, it seems that "Theorem 4" should be "Theorem 1". C2: In Equation (3), it seems that the introduction of $b_{k,l}$ is actually unnecessary because it can be incorporated into $f_{k,l}(\lambda_l)$. Questions For Authors: I have no questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and effort on the review. We respond to the key comments and questions below. --- **(factual discussion)** --- > **W3 [polarized equilibrium]** We clarify the reasoning behind the paragraph below Proposition 2 step by step. First, consider the case where $f$ and $\bar{\lambda}$ are monotonically increasing concave functions. This means that the upper bounds of the first-order derivatives, $C_1$ and $C_2$, are monotonically decreasing in the viewer/provider populations. Then, let us assess whether a polarized equilibrium can be allowed (i.e., satisfies Ineq. (8)) under some specific policies. When the equilibrium population is polarized, some population diminishes, and thus $C_1$ and $C_2$ can be large. Therefore, the RHS of Ineq. (8) becomes small (e.g., near-zero). While the exposure-fair policy may exclude this point due to the violation of Ineq. (8), the exposure-concentrated policy can allow this point (Ineq. (8) is satisfied) by letting the LHS of Ineq. (8) be (near-)zero or smaller than the RHS. --- > **Theoretical guarantees of look-ahead policy**. Please also refer to the rebuttals for Reviewer E67u. --- > **Baselines in experiments**. Please refer to the rebuttals for Reviewer 54YD. --- **(contributions)** --- > **W1 [why our results are not trivial]**. While our results may appear straightforward, we respectfully disagree that they are *trivial* and would like to clarify why they are novel. **Theorem 1**: a notable insight of our result is the guaranteed existence of pure NE in a two-sided market setting. This contrasts with findings in relevant prior works, such as [1], [2], and [3], which study one-sided markets involving only strategic content creators and demonstrate that pure NEs may fail to exist or only exist under restrictive conditions.
Our result highlights a meaningful and somewhat surprising message: when the passive resource (e.g., user attention) in previous models becomes an active participant (as in our two-sided market), the game structure always ensures the existence of pure NE. We recognize this as an important conceptual contribution and will highlight it more clearly in the revised version. **Proposition 2**: we would like to emphasize two non-trivial messages. First, it is well-established (e.g., [4], [5]) that the existence of pure NE does NOT automatically imply convergence under multi-agent gradient dynamics; in fact, such dynamics may get stuck in local NEs [5] or even non-Nash stationary points [4]. In contrast, our model establishes that gradient dynamics provably converge under certain conditions, which is an important theoretical contribution. Second, the sufficient conditions we identify for convergence—namely, weak population effects and more equitable exposure allocation—are not only intuitive but also offer actionable insights for policy design. We briefly discussed these points in the paragraph following Eq. (8) and will make them more explicit in the revision. **Theorem 3**: we agree that fully characterizing the optimality of myopic-greedy policies is technically challenging. However, our synthetic experiments in Section 6 do support our theoretical insight in Section 4. In fact, the main purpose of Section 4 is to conceptually illustrate the scenarios in which myopic-greedy policies perform well versus when they fall short. As a result, Section 4 serves as a motivating example that leads naturally into our exploration of optimal policy design in subsequent sections. [1] Modeling content creator incentives on algorithm-curated platforms. [2] Supply-side equilibria in recommender systems. [3] How Bad is Top-$K$ Recommendation under Competing Content Creators? [4] On finding local Nash equilibria (and only local Nash equilibria) in zero-sum games.
[5] User welfare optimization in recommender systems with competing content creators. --- > **Additional clarifications about our contributions**. We would like to kindly argue that these contributions are actually acknowledged by reviewers KANq and 54YD as “this paper studied an interesting problem ... The theoretical and numerical results both appear sound and provide interesting insights.” “the model is interesting, as are the results.” If the proposed method appears to be straightforward, it means that our method is well-motivated by the theoretical analysis provided in advance in the paper (especially Theorem 2). Moreover, as we have discussed about Theorem 1 and Proposition 2, non-uniqueness and stability of NEs do NOT actually hold in many other similar settings. Providing theoretical evidence for a seemingly reasonable hypothesis, highlighting the overlooked problem, and giving a reasonable explanation is crucial for scientific research. We would appreciate it if the reviewer could acknowledge our contribution regarding this point. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I apologize for my inappropriate words in my initial comments, and I have changed my words. I would like to give further comments. Regarding the results about the existence and non-uniqueness of pure NEs, since I'm not familiar with this field, I do not know why these results can not be established in other settings. But these results are well-established in concave games [1]. The game considered in this submission is exactly a (strongly) concave game. For this reason, the results about the existence and non-uniqueness of pure NEs seem to be straightforward from my point of view. [1] Rosen, J. Ben. "Existence and uniqueness of equilibrium points for concave n-person games." Econometrica: Journal of the Econometric Society (1965): 520-534. In addition, I found some proof details confusing, and I have updated my initial review. 
Finally, I believe that the problem studied in this paper is important and novel. But my main concern lies in the fact that the results seem to be straightforward and not strong (from my general knowledge of game theory). Furthermore, I believe this concern should not be resolved only because other reviewers think the results are strong. All things considered, I will maintain my initial score.
Summary: In this paper, the authors formulated and studied the dynamics of population effects on two-sided platforms, where viewer and provider populations evolve based on certain rules. Theoretically, the authors show that the myopic-greedy policy can fail to perform well when the population effects are heterogeneous across providers. They further investigated the shortcomings of the myopic policy by decomposing the overall welfare regret into both the policy regret and the population regret, where the latter captures the long-term welfare loss that is not captured by the myopic policy. In response to this, the authors proposed an algorithm that balances policy and population regrets, and show the effectiveness of this approach via synthetic and real-world numerical experiments. Claims And Evidence: Overall, I think the claims made in this paper are quite convincing. I think the key assumptions used in the model are quite realistic and could contribute to understanding how to best balance short-term and long-term welfare in two-sided platforms. The authors also justified their results using both solid theoretical and numerical evidence. Methods And Evaluation Criteria: I think it's good that the authors evaluate their method on both synthetic and real-world data. However, I do think they could consider more baseline methods in addition to the myopic and uniform random policies. For example, could some related works that model population departure be included as baselines? Theoretical Claims: The theoretical results appear sound, though I did not check the proofs in the appendix. Experimental Designs Or Analyses: - Overall, the experiment results appear sound. I also appreciate that the authors conduct both synthetic and real-world experiments to validate the effectiveness of their approaches. - I have some questions related to the practicality of the algorithm, which I detail below.
Supplementary Material: N/A Relation To Broader Scientific Literature: - I believe this work contributes to the broad literature on two-sided platforms. It touches on important subjects, including balancing the interests of strategic agents and balancing the tradeoff between short-run and long-run welfare. In particular, the topic of how to best evaluate and achieve long-term fairness has not been studied much. - That being said, I think the paper should have a more in-depth discussion of the related works, which appears to be missing from the main body as of now. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: - As I mentioned above, I think this paper studied an interesting problem that formulates the population dynamics in two-sided platforms. The theoretical and numerical results both appear sound and provide interesting insights. Weakness: - It'd be good if the authors could provide more details for the look-ahead policy in Section 5. For example, (1) The authors proposed to use the softmax policy as the approximation of $\pi_t^1$; would this impact the performance of the policy? (2) It appears that one needs to know the viewer satisfaction for each viewer/provider pair in order to solve the optimization problem for the look-ahead policy. What if the platform lacks such knowledge in a realistic setting? - I also think the authors should offer more discussion of how to best select $\beta$ in real-world applications and whether any theoretical results can be obtained. While the authors claimed that setting $\beta = 1.0$ already appears to work well and thus minimizes the effort to tune this parameter, it's somewhat unrealistic to adopt a fully look-ahead policy in real-world applications, especially when platforms might want to place more emphasis on short-term outcomes. I wonder if the authors can provide more comments on this aspect. Could you establish any guarantees on short-term outcomes under the fully look-ahead policy?
Or, if a platform would like to understand how to select the best $\beta$, is there a reasonable way to do so? I suspect that simple A/B tests won't be helpful because they only capture short-term impacts. Other Comments Or Suggestions: N/A Questions For Authors: See my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for valuable feedback and the acknowledgment of the contributions. We respond to the key comments and questions below. --- > **Could some related works that model the population departure be included as baselines?** Thank you for the great point. Unfortunately, the related works, which model the population departure, cannot be directly applied to our setting. There are two reasons: (1) The existing papers assume that the system dynamics are linear, but we consider the non-linear dynamics. Therefore, their linear programming approach is not solvable in our setting. (2) The existing papers do not model the population increase, and even if the dynamics are linear, it is not trivial how to apply the existing work taking the population increase into account (because we do not have a binary indicator of the population). Our paper proposes the first reasonable baseline for optimizing long-term outcomes under flexible and non-linear dynamics of the population. --- > That being said, I think the paper should have a **more in-depth discussion on the related works** which appear to be missing from the main body as of now. Thank you for the thoughtful suggestion, and we appreciate your point. We will have an additional page in the camera-ready version, and we plan to move the discussion of related work that is currently in the Appendix to the main text upon publication. --- > **The authors proposed to use the softmax policy as the approximation of $\pi_t^1$; would this impact the performance of the policy?** This is a great point. By using the softmax policy as the approximation of $\pi_t^1$, we expect some underestimation of the objective function ($R(\cdot)$ in Eq. (10)). However, as long as the maximizer of this objective function ($\pi$) is the same (which is often the case), the performance of the policy should not change by a large amount. 
--- > **It appears that one needs to know the viewer satisfaction between each viewer/provider pair in order to solve the optimization problem for the look-ahead policy. What if the platform lacks such knowledge in a realistic setting?** This is also a great question. We consider the scenario where we work at the sub-group level, and thus the (expected) satisfaction is usually accessible at the subgroup level; this is also a standard assumption in existing work [1]. However, when this information is unavailable, we need an additional estimation process for viewer satisfaction using past interaction data. We may then need additional consideration of exploration-exploitation tradeoffs, which should be an interesting future direction. [1] Optimizing Long-term Social Welfare in Recommender Systems: A Constrained Matching Approach. --- > I also think the authors should offer more discussions **how to best select $\beta$ in real-world applications and whether any theoretical results can be obtained**. > **Could you establish any guarantees on short-term outcomes under the fully look-ahead policy?** These are indeed important points. While we cannot provide a guarantee on the short-term outcomes, it is possible to validate the short-term outcome before deploying the look-ahead policy. Throughout the paper, we consider the situation where the immediate outcome is accessible for both the baseline and the proposed methods (following existing work). Thus, once we identify $\pi_t^{(d)}$ using Eq. (10), we can compare the short-term (i.e., myopic) outcome of the proposed method and myopic policies. If the platform aims to guarantee a certain short-term outcome, the platform can adjust the value of $\beta$ to satisfy such constraints.
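To make the tempered-softmax relaxation discussed above concrete, here is a minimal sketch (our own illustration, not the authors' implementation; the satisfaction matrix, function name, and temperature `tau` are hypothetical):

```python
import numpy as np

def softmax_policy(satisfaction, tau=1.0):
    """Row-stochastic matching policy over provider groups, obtained by
    applying a tempered softmax to a (viewer groups x provider groups)
    satisfaction matrix. Small tau approaches the myopic-greedy (argmax)
    matching; large tau approaches the uniform policy."""
    z = (satisfaction - satisfaction.max(axis=1, keepdims=True)) / tau
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

# Hypothetical satisfaction values for 2 viewer groups x 3 provider groups.
sat = np.array([[1.0, 0.2, 0.1],
                [0.3, 0.9, 0.5]])
near_greedy = softmax_policy(sat, tau=0.05)    # concentrates on the argmax
near_uniform = softmax_policy(sat, tau=100.0)  # close to 1/3 everywhere
```

Because the softmax is differentiable in its inputs, such a relaxation keeps the look-ahead objective amenable to gradient-based optimization while interpolating smoothly between the greedy and uniform extremes.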
Summary: The paper models the dynamics of a two-sided platform with viewers and providers. Viewers receive satisfaction from watching content assigned to them from providers that they like, and providers receive exposure from having their content assigned to viewers. The populations of both the viewers and providers are dynamic: higher satisfaction for one group of viewers leads to more viewers of that group joining the platform, and higher exposure for one group of providers leads to more providers from that group joining the platform. The latter also improves the quality of the content of that platform. They argue that myopic policies that only seek to optimize welfare according to the current population can fail to be optimal long-term. They provide an alternative "look-ahead" policy to do this that is solvable (approximately) via gradient ascent. They show via experiments that their policy performs as well as (or better than) the myopic policy in certain scenarios. Claims And Evidence: Their primary claim is that their look-ahead policy performs better than the myopic policy, hence justifying their model and results. This is convincing and believable. Their first set of experiments appears promising, but their second experiment shows that the myopic policy works just as well, if not slightly better. I am not sure why they have not discussed in more depth why this is the case. Methods And Evaluation Criteria: Yes. Theoretical Claims: I only briefly skimmed some of the proofs in the appendix. Experimental Designs Or Analyses: The experiments appear to make sense within their framework. Supplementary Material: I checked C.5 in particular. Although I think it should be the case, the authors should have explicitly clarified whether or not the look-ahead policy works better than the greedy policy in this example.
Relation To Broader Scientific Literature: The paper mentions several related works, such as Mladenov et al. (2020), Huttenlocher et al. (2023), and Hashimoto et al. (2018), that consider dynamics under population change of viewers or providers, but not both at the same time. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well-written and the model is interesting, as are the results. I am a bit wary about some of their claims, though. For example, they state, after Theorem 3 and Proposition 3, >"these results demonstrate that the myopic-greedy policy is optimal only under highly restrictive conditions and emphasize the need for practical solutions accounting for the long-term effect." Their results, in fact, do not show this. I agree that the situation in Theorem 3 is restrictive. But that is only one scenario where it is optimal. The real-world experiments in fact also show a scenario where the myopic policy is near-optimal. Proposition 3 also says nothing about whether greedy is optimal only in such settings. I do, however, believe that a stronger statement can be shown that would suggest something more substantial. Other Comments Or Suggestions: It would be good to have some explanation as to why the greedy policy worked so well in the real-world experiments. In fact, the greedy policy seems to have the highest (or almost the highest) welfare throughout all the timesteps. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for valuable feedback and the acknowledgment of the contributions. We respond to the key comments and questions below. --- > **I am a bit wary about some of their claims**, though. For example, they state, after Theorem 3 and Proposition 3, (overstatement) Thank you for the thoughtful feedback. We appreciate your point, and we plan to rephrase the pointed-out sentence in the revision as follows. - (Current) ".. the myopic-greedy policy **is optimal** only under highly restrictive ..". - (Revision) ".. the myopic-greedy policy **is guaranteed optimal** only under highly restrictive .." --- > It would be good to have some explanation as to why the greedy policy worked so well in the real-world experiments. In fact, the greedy policy seems to have the highest (or almost the highest) welfare throughout all the timesteps. Thank you for the great point. Throughout the experiments (including ones not presented in the paper), we found that there are tradeoffs between concentrating and distributing exposure. Which is better often depends on the problem instance. For example, suppose the special case where viewers do not change their population ($\eta_k = 0, \forall k \in [K]$) and the total viewer population is fixed at 100. Then, consider a scenario with 100 provider groups. In this situation, distributing exposure among the different subgroups results in an expected exposure of 1 for each provider group. In such cases, concentrating the exposure on one provider group can be a better strategy for total population growth than distributing the allocation. This is why the myopic policy performs well in some scenarios, and our argument is that the proposed method works adaptively well (i.e., at least competitive with or better than both myopic and uniform) regardless of whether the myopic policy succeeds or falls short.
Note that the reason for the small gap between the myopic policy and our proposal in the real-data experiment is that our method involves an optimization process (e.g., estimating dynamics and optimizing the policy) and thus has small modeling errors. We hope this answer resolves your question.
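The concentration-vs-distribution arithmetic above can be illustrated with a toy calculation (our own sketch; the logistic growth response and its threshold are hypothetical stand-ins for the paper's actual dynamics, chosen only to show how per-group exposure falling below a growth threshold makes concentration favorable):

```python
import math

def growth_response(exposure, threshold=10.0):
    # Hypothetical saturating response: a provider group's population
    # growth stays near zero until its exposure clears the threshold.
    return 1.0 / (1.0 + math.exp(threshold - exposure))

K_GROUPS = 100
TOTAL_EXPOSURE = 100.0

# Uniform allocation: each group receives exposure 1, far below threshold,
# so total growth across all 100 groups is tiny.
uniform_total = K_GROUPS * growth_response(TOTAL_EXPOSURE / K_GROUPS)

# Concentrated allocation: one group receives all the exposure and clears
# the threshold; the other 99 groups receive none.
concentrated_total = growth_response(TOTAL_EXPOSURE) \
    + (K_GROUPS - 1) * growth_response(0.0)
```

Under this hypothetical response, `concentrated_total` exceeds `uniform_total`, mirroring the rebuttal's point that concentration can beat distribution when spreading exposure leaves every group below the level at which growth kicks in.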
Summary: This paper proposes and analyzes a theoretical model of population effects of the (e.g. recommendation) policies for two-sided platforms, where there are consumers and producers. The basic intuition is that a recommendation system influences both the short-term satisfaction of the content consumer but also the success of the producers, which may influence the size of the producer pool, creating long-term feedback loops. The model consists of finite sets of consumer and producer types, for which consumers derive utility heterogeneously across types of producers. They compare a “myopic” policy that only optimizes for consumer satisfaction with a “look-ahead” policy that also estimates feedback loops. Claims And Evidence: The claim that the myopic policy can be sub-optimal is a mild one, since all that is necessary is for there to exist some problem instance where it performs poorly. There is no theoretical evidence in support of the look-ahead policy. Instead, there are simulations showing that it performs well. My main criticisms are two-fold: (1) the main conclusion, that platforms should attend to producer welfare as well as consumer welfare, is reasonably intuitive and the paper fails to explore more implications of their model beyond this basic point. (2) there is a lack of theoretical results in support of their proposed look-ahead policy. Given the intuitive and reasonably simple model, I might have expected this kind of result to be tractable. Methods And Evaluation Criteria: The lack of theoretical results in support of their main conclusion is a weakness. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: The focus on population effects in platforms is interesting, well-motivated and deserving of more study. Their model is similar to others in the prior literature. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: What additional insight do we get from multiple simulation experiments? I would have preferred to just have one and then have expanded theoretical analysis. (Or an actual implementation of the policies in a lab/crowdsourced setting to see how they each do.) Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and effort on the review. We respond to the key comments and questions below. --- **(factual discussion)** --- > **there is a lack of theoretical results in support of their proposed look-ahead policy** Thank you for the valuable feedback. We would like to kindly argue that providing a theoretical guarantee for the look-ahead policy is challenging for two reasons: (1) the interactive dynamics between viewers and providers, and (2) the non-linearity and non-convexity of the system. First, one of the related works, [1], considers the departure of viewers and providers in two-sided platforms. While [1] considers a more restrictive setting than ours, modeling linear dynamics and not modeling the increase in viewer and provider participation, identifying the optimal policy is proven to be an NP-hard problem. Second, bounding the sub-optimality in non-linear systems is in general considered a hard problem, and without assuming convexity, we often cannot obtain a theoretical guarantee [2]. The most closely related theoretical analysis to our method is [3]. This paper studies minimizing a loss function for predictions when the population gradually reacts to the model in a single-sided platform. It provides some theoretical guarantees when replacing the reference point with the (estimated) fixed point and under a convexity assumption. Note that many existing papers empirically demonstrate the performance of an algorithm when some assumption (e.g., convexity) is violated. Our paper also provides empirical evidence that leveraging the reference point (instead of the fixed point) performs reasonably well (i.e., at least competitive with or better than both myopic and uniform baselines) across multiple configurations. We will clarify these points and provide additional discussion of the theoretical connection to existing work in the revision.
[1] Matching of Users and Creators in Two-Sided Markets with Departures. [2] Performative Prediction in a Stateful World. [3] How to Learn when Data Gradually Reacts to Your Model. --- > **What additional insight do we get from multiple simulation experiments?** This is to show that the look-ahead policy performs adaptively well both in cases where the myopic policy succeeds and in cases where it fails. As the experimental results show, each of the uniform and myopic policies can be a good choice in one of the scenarios but fails catastrophically in the opposite one. In contrast, because the proposed approach by design accounts for both the myopic outcome (i.e., viewer satisfaction) and investment in provider population growth (i.e., exposure), it works adaptively well across the two opposite situations. Our real-data experiment also provides an example of how actual population effects can arise in real data. Reviewer 54YD acknowledges our contribution on this point: “it's good that the authors evaluate their method on both synthetic and real-world data.” **(contributions)** --- > **that platforms should attend to producer welfare as well as consumer welfare, is reasonably intuitive and the paper fails to explore more implications of their model beyond this basic point** Thank you for sharing your concerns. We would like to kindly emphasize that, even if the outcome seems reasonably intuitive, highlighting an overlooked problem is crucial for scientific research. On this point, our contribution is similar in kind to existing papers: e.g., [4], which points out the importance of guaranteeing provider exposure under a model of viewer/provider departure. We generalize the discussion to a setting where viewer and provider populations can both grow and shrink (i.e., a more complex situation), and demonstrate that caring about producer welfare is indeed beneficial. 
Our paper provides theoretical implications and evidence for platform designers that caring for provider welfare is crucial for the long-term success of the platform, not only from an ethical perspective. Reviewer 54YD also acknowledges this point: “I believe this work contributes to the broad literature on two-sided platform. It touches on important subjects including balancing the interests of strategic agents and balancing the tradeoff between short-run and long-run welfare.” [4] Optimizing Long-term Social Welfare in Recommender Systems: A Constrained Matching Approach.
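The adaptivity argument in this rebuttal (investing exposure in provider growth versus maximizing immediate viewer satisfaction) can be illustrated with a deliberately toy simulation. This is **not** the paper's actual model: the dynamics, the parameters `growth` and `churn`, and the scalar policy knob `alpha` below are all invented for illustration.

```python
# Toy illustration (NOT the paper's model): providers grow with the
# exposure they receive; viewers are best served by spending all
# exposure on immediate satisfaction.
def simulate(alpha, steps=50, growth=0.15, churn=0.05):
    """alpha = fraction of exposure invested in provider growth.
    alpha=0.0 is a purely myopic policy."""
    providers = 1.0        # relative provider population
    total_welfare = 0.0
    for _ in range(steps):
        total_welfare += (1.0 - alpha) * providers      # immediate viewer value
        providers += (growth * alpha - churn) * providers  # population update
        providers = max(providers, 0.0)
    return total_welfare

# When provider growth is elastic, investing beats pure myopia; when
# growth is weak, the myopic policy wins. Neither extreme is uniformly
# best, which is the motivation for an adaptive look-ahead policy.
investing_wins = simulate(0.5, growth=0.15) > simulate(0.0, growth=0.15)  # True
myopic_wins = simulate(0.0, growth=0.02) > simulate(0.3, growth=0.02)     # True
```

Under these toy dynamics, each fixed policy fails in one regime, mirroring the rebuttal's point that uniform and myopic baselines each fail catastrophically in the opposite scenario.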
Direct Motion Models for Assessing Generated Videos
Accept (poster)
Summary: This paper proposes TRAJAN, an architecture designed to obtain dense high-level motion features using tracks predicted by the BootsTAPIR model. The authors demonstrate that these extracted features can effectively measure pairwise distances between videos in terms of motion, as well as evaluate the temporal distortions within individual videos. While the authors conduct extensive experiments to showcase the effectiveness of the learned TRAJAN features, several claims remain inadequately supported. Additionally, the paper's presentation requires substantial improvement. Claims And Evidence: The claims in Section 5.2 are not all well-supported: - The authors claim that "TRAJAN latent space is capturing something fundamentally different from pixel error" (line 352). This claim is supported by the evidence presented in Table 2. - However, there appears to be a contradiction between interpretations of Figure 5. In Figure 5 (left), the authors claim "since the camera motion is correct, the distance between their TRAJAN embeddings is small." Yet in Figure 5 (right), even when camera motion is identical in two videos (both static), the TRAJAN difference is large. This raises an important question: is the TRAJAN embedding more sensitive to camera motion or subject motion? Furthermore, Table 3 shows that the RAFT score outperforms TRAJAN-Len. in measuring object speed and performs comparably in evaluating camera speed. This finding potentially diminishes the value of the proposed metric. Methods And Evaluation Criteria: The authors conducted comprehensive experiments demonstrating correlations between the TRAJAN metric and human evaluation. However, they did not provide results on using the TRAJAN metric to compare existing video generation models. Such analysis would provide valuable insights into the performance of current methods and help researchers better understand the proposed metric's effectiveness. 
Theoretical Claims: N/A Experimental Designs Or Analyses: Please refer to the sections above. Supplementary Material: The reviewer checked the attached supplementary texts and notes that the authors did not submit separate supplementary material. Relation To Broader Scientific Literature: The paper builds upon prior work in evaluating video generation models, focusing on measuring temporal distortions in generated videos. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see comments above and below. Other Comments Or Suggestions: The paper is poorly written. Below is a non-exhaustive list of issues identified during review: - Figure Quality: Figure 1 is blurry and difficult to interpret, even after zooming in and carefully reading the captions. I suggest embedding videos into the PDF or including them in supplementary material for better clarity. - Citation Inconsistency: The citation format varies throughout the paper. For example, one venue is listed as "ICLR," while another appears as "ICLR 2024-Twelfth International Conference on Learning Representations." A consistent citation format should be maintained. - Inconsistent Capitalization: Some subsection titles have the first letter capitalized (e.g., Sec. 3.2), while others do not (e.g., Sec. 5.1-5.3). Consistency in formatting is necessary. - References: The SSv2 dataset is mentioned in the main paper but is only cited in the supplementary material. All references should be properly included in the main text. - Formatting Issues: The spacing between Section 3.2 and Section 3.2.1 titles is irregular. - Metric Descriptions: For TRAJAN-Len. and TRAJAN-Radii metrics, clear descriptions or direct links to their corresponding explanations should be provided rather than placing this information randomly in the Appendix. Questions For Authors: The authors should provide more insights on their motivation. 
Is the proposed TRAJAN feature intended to serve as a standard metric to evaluate motion magnitude or plausibility for video generation models? The paper would benefit from clearer articulation of the broader purpose and application of this work. This paper potentially offers insights to the community. However, based on the poor presentation and unsupported claims noted above, I recommend a weak reject. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive comments and feedback. > there appears to be a contradiction between interpretations of Figure 5. TRAJAN is sensitive to differences in motion between videos (whether they be from a camera, or from an object). In Figure 5(a), the generated and reference video motions are the same for both the camera and the objects – the camera pans to the left in both cases, and none of the objects move independently of the camera, even though they have different appearances between the two videos. In Figure 5(b), the camera does not move, but the man moves in a distinctly different way. As a result, TRAJAN predicts a small distance in the first case (all motions are the same), and a large distance in the second case (the motion of the object is different). *There is therefore no contradiction in Figure 5.* > is the TRAJAN embedding more sensitive to camera motion or subject motion? We separately analyze how well TRAJAN captures human evaluations of generated videos when they have either high camera motion (but low subject motion) or high subject motion (but low camera motion) for the EvalCrafter dataset. The results are provided [here](https://sites.google.com/view/trajan-videos-anonymous/home) in the last section. __In both settings, TRAJAN outperforms all baselines__. TRAJAN also captures human judgments better in the high camera motion vs. high object motion setting (motion consistency 0.53 vs. 0.30; appearance consistency 0.56 vs. 0.29; realism 0.45 vs. 0.24). However, we also found that human raters were less consistent within the high object motion setting, suggesting that this may be a generically harder task (for both humans and models). > Table 3 shows that the RAFT score outperforms TRAJAN-Len. in measuring object speed (...). This finding potentially diminishes the value of the proposed metric. > The authors should provide more insights on their motivation. 
Is the proposed TRAJAN feature intended to serve as a standard metric to evaluate motion magnitude or plausibility for video generation models? The paper would benefit from clearer articulation of the broader purpose and application of this work. The proposed metric was not primarily intended for measuring object and camera speed. We included this result as an interesting addition to the __main arguments of the paper__: __TRAJAN is useful as a metric for measuring motion quality and motion differences in videos by simultaneously providing a compact latent space representation of motion and reconstruction errors for evaluating individual videos__. Importantly, in this work, and for the first time, we study metrics across distribution-level comparisons, video-video comparisons and evaluations of individual videos in terms of their motions. This involved adapting several prior methods, such as I3D, to the per-video setting as well as exploring new approaches, such as MooG, for evaluating videos. We also conducted a human study that includes fine-grained motion categories for evaluating metrics. All in all we identify numerous shortcomings of existing approaches (including RAFT) and propose a new metric based on point tracks (TRAJAN) that works well across all of the settings we tested. __To our knowledge, TRAJAN is the first metric that can operate across the distribution-, video-video comparison-, and single-video evaluation- levels, providing a compelling metric for motion in all cases.__ > The authors conducted comprehensive experiments [...]. However, they did not provide results on using the TRAJAN metric to compare existing video generation models. Our human evaluations on VideoPhy and EvalCrafter are using video samples from existing video generation models. Based on your feedback we have recomputed the results from Table 1 comparing human judgements and metric scores separately for each model and then aggregating. 
__We found that TRAJAN correlates significantly better with human evaluations of which video model creates the most realistic videos: 0.76 (TRAJAN) 0.53 (RAFT) 0.43 (VideoMAE) 0.51 (I3D) 0.58 (MooG)__. > The paper is poorly written. Figure Quality. Citation Inconsistency. Capitalization. References. Formatting Issues. Metric Description. We were surprised that you found this to be the case and listed “poor presentation” as one of the reasons for recommending weak reject. The examples you listed, such as citation forms, subsection capitalization, and spacing, can be easily addressed for a camera-ready paper. We will ensure the camera-ready version of our manuscript will have each of these addressed. Thank you for pointing these out. We provide videos for Figure 1, with additional experiments, [here](https://sites.google.com/view/trajan-videos-anonymous/home). We hope that by ensuring that the formatting concerns are addressed in a camera-ready version of the manuscript, the “comprehensive evaluations” and “insights to the community” that you highlight might lead you to reconsider your score. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal, which addresses some of my concerns. However, a significant issue remains unaddressed: the absence of TRAJAN metric results comparing existing video generation models. For a metric intended to evaluate video generation models, it is essential to demonstrate how existing models (such as SVD, CogVideoX, Hunyuan, and Cosmos) perform on this benchmark. Without such comparisons, it's difficult to assess whether the proposed metric aligns with users' experience of these models. Furthermore, the metric's inability to differentiate between camera motion and subject motion raises questions about its utility to the video generation community. This limitation significantly impacts its interpretability. 
Consider, for example, the well-known "Tokyo Walk" video generated by Sora, which incorporates both camera and subject motion. I am also **surprised** by the **numerous errors** throughout the manuscript. Such mistakes suggest insufficient proofreading and a lack of attention to detail. In my humble opinion, rigorous scientific work demands a clear presentation. The issues I identified were easily noticeable even upon a cursory review, indicating they could have been readily corrected with careful proofreading. --- Reply to Comment 1.1.1: Comment: > “However, a significant issue remains unaddressed: the absence of TRAJAN metric results comparing existing video generation models. For a metric intended to evaluate video generation models, it is essential to demonstrate how existing models (such as SVD, CogVideoX, Hunyuan, and Cosmos) perform on this benchmark. Without such comparisons, it's difficult to assess whether the proposed metric aligns with users' experience of these models.” We draw your attention to section 5.3 with results on the following video generation models: __within EvalCrafter__: MoonValley, ZeroScope, Floor33, Gen2, HotShot, ModelScope, Pika (versions 1 and 2), Show-1, VideoCrafter-1. __within VideoPhy__: __CogVideo-X-2B__, __CogVideo-X-5B__, LumaAI, Gen2, LaVie, OpenSora, Pika, __SVD__, VC2, ZeroScope. This covers 17 different video generation models. We show in all of our results that the TRAJAN metric captures a significant fraction of the human variance in how these models are rated for individual videos. This is exactly the users’ experience of these models – we directly ask users to rate videos generated by each of these video models, and show that our metric correlates well with their responses. For our rebuttal, we also showed that the metric captures the users’ rankings in which video generation model performs best. Note that this exactly tracks norms in the field: see e.g. EvalCrafter Figure 4 and Table 2. 
> Furthermore, the metric's inability to differentiate between camera motion and subject motion raises questions about its utility to the video generation community. This limitation significantly impacts its interpretability. Consider, for example, the well-known "Tokyo Walk" video generated by Sora, which incorporates both camera and subject motion. We show that TRAJAN can capture human ratings of realism and consistency for __both subject motion and camera motion__, both in isolation and together, going beyond prior work like SIFT-Sora which can only be applied to videos with camera motion. The goal of the metric is not to differentiate what kind of motion is occurring (a classification problem), but rather whether the motion is realistic or not. The metric would directly apply to the Tokyo Walk video, and likely rate it very highly, because the motion of both the subject and the camera is smooth and realistic. > I am also surprised by the numerous errors throughout the manuscript. Such mistakes suggest insufficient proofreading and a lack of attention to detail. The issues that you point to are primarily minor formatting mistakes and “numerous errors” is an exaggeration. For example, there is *only one subsection* with incorrect capitalization. All others are capitalized because they are proper names (of methods, or metrics). Similarly, *only one subsection* has incorrect spacing and there is *only one instance* of a dataset that was only cited in the Appendix but not in the main text. Other remarks such as that information is placed “[...] randomly in the Appendix” is subjective, especially considering its subtitle in the Appendix is “Track motion radii calculation”, an effort to make sure it was easily found by readers. Issues with citation consistency are also common for conference papers in machine learning, for example the references section of this [*this best paper award winner in ICML last year*](https://openreview.net/pdf?id=bJbSbJskOS). 
We are happy to correct these issues for the camera-ready paper, but again, we point to the fact that none of the other reviewers expressed any concern with the presentation or quality of the results. >This calls into question the authors' commitment to their own work. When authors appear not to respect their own research, it becomes difficult for reviewers and readers to accord it the respect it might otherwise deserve. This is an unprofessional and unproductive remark. We put a significant amount of time into the presentation of this work, including the rebuttal, as evidenced by the clear interpretation of the work by the other two reviewers.
Summary: This paper proposes TRAJAN, a novel motion-focused evaluation framework for assessing the quality of generated videos. Unlike traditional metrics like FVD that emphasize appearance, TRAJAN uses auto-encoded point tracks to assess temporally extended motion features. It supports distribution-level, video-pair, and per-video evaluation. Experiments show TRAJAN achieves higher sensitivity to temporal distortions and better alignment with human judgments across benchmarks (EvalCrafter, VideoPhy), outperforming existing motion and appearance-based metrics. Claims And Evidence: The paper makes several key claims: 1. TRAJAN provides better sensitivity to temporal distortions than prior metrics like FVD, VideoMAE, or I3D. 2. TRAJAN embeddings correlate better with human ratings of motion consistency, realism, and interaction quality. All claims are thoroughly validated by extensive empirical evaluation. Methods And Evaluation Criteria: The TRAJAN model architecture is methodologically sound and well-motivated. It uses a Perceiver-style transformer to encode dense point tracks and reconstruct them from query points. Comparisons with other motion-based (e.g., RAFT warp error, motion histograms) and appearance-based (e.g., VideoMAE, I3D) metrics are comprehensive and insightful. Theoretical Claims: The paper does not introduce formal theoretical analysis but provides strong conceptual justification for modeling motion independently from semantics via point track encoding. Experimental Designs Or Analyses: The experiments are rigorous and cover multiple evaluation modalities: 1. Controlled distortions on UCF101 to measure temporal sensitivity. 2. Video-to-video comparisons using WALT. 3. Per-video quality assessments using human ratings from EvalCrafter and VideoPhy. Supplementary Material: This submission did not include any supplementary material. 
Relation To Broader Scientific Literature: This work fills a critical gap in the video generation literature by emphasizing motion quality assessment over frame-based metrics. Essential References Not Discussed: The paper comprehensively reviews prior work. No critical omissions were identified. Other Strengths And Weaknesses: Weaknesses: 1. TRAJAN performance still misses edge cases where semantic context is crucial (e.g., object disappears unrealistically). 2. Reliance on point tracks may limit applicability in occlusion-heavy or textureless scenes. Other Comments Or Suggestions: N.A. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive comments and feedback. We were pleased to see that you found our approach “methodologically sound and well-motivated” and that “All claims are thoroughly validated by extensive empirical evaluation”. You also mentioned how “The paper [...] provides strong conceptual justification for modeling motion independently from semantics via point track encoding.” and how “This work fills a critical gap in the video generation literature by emphasizing motion quality assessment over frame-based metrics.” >“TRAJAN performance still misses edge cases where semantic context is crucial”, “Reliance on point tracks may limit applicability in occlusion-heavy … scenes” We highlight and discuss this first point about semantic context in Figure 6, where __none__ of the metrics can capture the unexpected disappearance of the beer glass. Fully solving this challenge would require learning a fully accurate world model. This is an exciting future direction, but it is outside the scope of our work. Occlusion-heavy scenes are similarly challenging for all existing automated metrics which we compare to here (optical flow, action recognition, and masked auto-encoding will similarly suffer in occlusion-heavy scenes, although possibly less so than point tracking). >“Reliance on point tracks may limit applicability in … textureless scenes” Working with textureless scenes is a general challenge for point tracking. In practice, many of the videos we evaluate are textureless. In such cases, the base point tracker does produce relatively poor point tracks which can drift and be inappropriately marked as occluded. However, because TRAJAN is trained to do point track *reconstruction*, it can *learn* this from data, and the reconstruction errors on textureless scenes are still, on average, reasonably small. 
__Some examples of this can be seen [here](https://sites.google.com/view/trajan-videos-anonymous/home) (second section) where point tracks for textureless parts of the background are still well reconstructed (shown in blue)__. This is an advantage of using a trained autoencoder over the point tracks instead of calculating a metric directly on the point tracks themselves. Thank you for raising this point, we will add a discussion of this to the paper.
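The point made in this rebuttal (a trained autoencoder over tracks yields usable per-track reconstruction errors) can be made concrete with a minimal sketch. The `encode`/`decode` functions below are trivial stand-ins, not TRAJAN's Perceiver-style architecture; they only illustrate how reconstruction error over point tracks is turned into a motion signal.

```python
import numpy as np

# Minimal sketch of reconstruction-error scoring over point tracks.
# encode/decode are toy stand-ins for a trained track autoencoder.
def encode(tracks):
    # Compress each (T, 2) track to a single latent point (its mean).
    return tracks.mean(axis=1)

def decode(latent, num_frames):
    # Expand each latent point back into a constant track.
    return np.repeat(latent[:, None, :], num_frames, axis=1)

def per_track_error(tracks):
    """tracks: (num_points, num_frames, 2) array of (x, y) positions.
    Returns one reconstruction error per track."""
    recon = decode(encode(tracks), tracks.shape[1])
    return np.linalg.norm(tracks - recon, axis=-1).mean(axis=1)

rng = np.random.default_rng(0)
static = np.repeat(rng.uniform(size=(8, 1, 2)), 16, axis=1)   # no motion
moving = static + np.linspace(0, 1, 16)[None, :, None]        # drifting
# This toy "model" reconstructs static tracks perfectly, so the
# reconstruction error localizes on the tracks that actually move.
```

The design point is that the error is computed per track, which is what allows errors to be localized in space and time rather than collapsed into a single video-level number.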
Summary: The authors propose a new video evaluation metric using point tracks instead of pixel reconstruction or recognition features, which can evaluate temporal consistency. Claims And Evidence: The claims seem to be supported well in this paper. Methods And Evaluation Criteria: The evaluation makes sense and demonstrates that the proposed metric is better at capturing motion distortions. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is valid, but the corrupted images demonstrated in Figure 4 do not seem to be low-level distorted (like blurred or locally rotated and shifted). That casts doubt on whether the proposed TRAJAN metric can indeed capture the distortions of real motions. Some other methods also include motion distortion, such as EvalCrafter [1], VBench [2], or SIFT-sora [3]. This paper should compare against those methods. [1] Liu, Yaofang, et al. "EvalCrafter: Benchmarking and evaluating large video generation models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Huang, Ziqi, et al. "VBench: Comprehensive benchmark suite for video generative models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Sora Generates Videos with Stunning Geometrical Consistency. Supplementary Material: Yes. The materials are easy to follow. Relation To Broader Scientific Literature: This work can be impactful if its motion quality assessment method is validated thoroughly. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See above. Some work like [3] uses pure geometry for estimating a motion score, while this work still uses representation learning, which somewhat lacks explainability. Including more analysis on score interpretation would be helpful. Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive comments and feedback. Please see [https://sites.google.com/view/trajan-videos-anonymous/home](https://sites.google.com/corp/view/trajan-videos-anonymous/home) for additional experiments on score interpretation. > “The corrupted images as demonstrated in Figure 4 do not seem to be low-level distorted (like blurred or locally rotated and shifted). That casts doubt on whether this proposed TRAJAN metric can indeed capture the distortions of real motions” The distortions in Figure 4 are taken from prior work by [Ge et al. (2024)](https://arxiv.org/abs/2404.12391), which were chosen to highlight challenges faced by existing metrics (like FVD) in addressing *temporal* above and beyond *appearance-based* distortions. These distortions include local noise (Corruption 2.3, Figure 4e) and local warping (Corruption 1.2, Figure 4b), and were designed to capture realistic distortions from cameras or other sensors. __This experiment primarily highlights the advantage of TRAJAN over prior work (including Ge et al., 2024) in being sufficiently sensitive to temporal rather than appearance-based distortions (i.e., motion biased), and we therefore matched the experimental conditions from prior work__. However, we agree with the reviewer that synthetic distortions are not necessarily representative of real motion distortions. This informed our choice to investigate different models' sensitivities to the kinds of distortions introduced by state-of-the-art video generation methods. Our evaluations on VideoPhy and EvalCrafter (as referenced in your review) do precisely this. For example, in Appendix B.1.3 we describe how “[...] we randomly select 104 generated videos from 11 of the text-to-video models (the original 5 from the EvalCrafter human evaluation dataset, and 6 additional models).” to conduct a human study. 
__In Tables 1 and 2, we report how TRAJAN is the top-performing model that best correlates with human judgements of motion/appearance consistency and realism.__ >”Some other methods also include motion distortion such as EvalCrafter [1], VBench [2], or SIFT-sora [3]. This paper should compare against those methods.” The __RAFT-Warp error__, which we report in our paper, __is the one used in EvalCrafter, with a related RAFT-based metric in VBench for assessing motion quality and distortions (“Temporal Quality - Dynamic Degree”)__. Across all of our experiments, __TRAJAN performs better than or equal to RAFT-Warp for assessing motion quality and realism__, including on the EvalCrafter dataset. SIFT-sora is a compelling method for interpretability, but can *only* be applied to videos where the motion solely comes from the camera. __If any of the objects move independently of the camera (which is the case in many generated videos), SIFT-sora cannot be applied__. > “Some work like [3] works on pure geometry for estimating motion score, but this work still uses representation learning, which lacks explainability a little. Including more analysis on score interpretation will be helpful.” Thank you for this suggestion – we have added a detailed investigation of whether TRAJAN scores can provide more interpretability. Since TRAJAN measures the reconstruction of individual point tracks (Figure 1), it is possible to localize errors in both space and time (with badly reconstructed tracks, shown in red, indicating where the largest errors are). __We provide 3 further examples of this kind of score interpretation [here](https://sites.google.com/view/trajan-videos-anonymous/home)__. In each example, we show on the left the full video, in the center the Average Jaccard across all points for each frame of the video, and on the right a clip from the video centered on the frame with the worst overall Average Jaccard. 
__In the first two cases, there is a major change in object appearance partway through the video, which is picked up in the time course plots. Looking at individual points, the greatest errors are centered on the part of the video that unnaturally changes in appearance, as indicated by the red and white colored point tracks.__ In the last case, this video maintains consistent motion throughout, as evidenced by the consistently high Average Jaccard. Most points are well reconstructed, with the only exceptions being points on the border of the moving object, which are generally ambiguous (their motions could go either way). We will update the paper to discuss this aspect of our approach in greater detail and include more examples of localizing motion errors in this way. --- Rebuttal Comment 1.1: Comment: The authors address my concern well, and I plan to improve my rating.
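The per-frame Average Jaccard time courses described in this rebuttal can be sketched as follows. This is a simplified variant of the TAP-Vid-style metric: it averages, over several pixel thresholds, the fraction of points reconstructed within the threshold, and omits the occlusion-flag bookkeeping of the full metric.

```python
import numpy as np

# Simplified per-frame Average Jaccard between reference and
# reconstructed point tracks (occlusion handling omitted).
def average_jaccard_per_frame(ref, pred, thresholds=(1, 2, 4, 8, 16)):
    """ref, pred: (num_points, num_frames, 2) arrays of (x, y) pixels.
    Returns one score per frame in [0, 1]."""
    dist = np.linalg.norm(ref - pred, axis=-1)              # (P, T)
    within = [(dist < t).mean(axis=0) for t in thresholds]  # per threshold
    return np.stack(within).mean(axis=0)                    # (T,)

ref = np.zeros((4, 3, 2))
bad = ref.copy()
bad[:, 1, :] += 100.0   # corrupt every point in the middle frame
scores = average_jaccard_per_frame(ref, bad)
# scores: 1.0 for the clean frames, 0.0 for the corrupted one -- the
# dip localizes the motion error in time, as in the time course plots.
```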
Latent Imputation before Prediction: A New Computational Paradigm for De Novo Peptide Sequencing
Accept (poster)
Summary: This work aims to design peptide sequences from observed mass spectra, addressing the issue of missing fragmentation. This problem is due to the incomplete fragmentation of precursor peptides or inherent limitations of the tandem mass spectrometer. The authors design a bipartite matching algorithm to impute the missing information. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed algorithm, Latent Imputation before Prediction, generally makes sense and is well-motivated. Theoretical Claims: This work does not have a theoretical claim. Experimental Designs Or Analyses: The experimental results show great performance improvement. The authors design extensive experiments and ablation studies to show the validity. Supplementary Material: The Supplementary Material looks fine. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: 1. The motivation is quite clear and supported by the experimental results. 2. The design of the proposed algorithms is generally reasonable. 3. A weakness of this work may be that the overall algorithm design is not that impressive and cannot be adapted to other fields. However, personally I like such application papers with a simple yet effective approach. 4. The imputation may also introduce additional time consumption. Could the authors also provide the time share of the imputation step in the entire pipeline? Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for the reviewer's appreciation that our work's motivation is quite clear and supported by the experimental results, as well as the recognition that the design of the proposed algorithm is generally reasonable. We address the reviewer's concerns in detail below. **Q1: A weakness of this work may be that the overall algorithm design is not that impressive and cannot be adapted to other fields. However, personally I like such application papers with a simple yet effective approach.** Thanks for the constructive feedback and appreciation of our simple yet effective approach. While our algorithm is specifically designed for peptide sequencing, it may offer insights into other computational biology areas facing similar challenges. For instance, in single-cell transcriptomics [3], expression values can appear undetected due to technical limitations rather than true biological absence, and the number of missing expression values varies across cells. Our method could serve as inspiration for developing analogous solutions in such contexts. **Q2: The imputation may also introduce additional time consumption. Could the authors also provide the time share of the imputation step in the entire pipeline?** Thanks for the valuable suggestion. We evaluate the computational overhead of the imputation module in **Table 4**, which reports the **inference time and proportion** (averaged over the Nine-Species test set) of each module in LIPNovo. The results show that the imputation module accounts for only **0.46%** of the total pipeline runtime. In **Table 5**, we also provide a training time comparison, which shows that LIPNovo adds 8 minutes per training epoch compared to the baseline model. All experiments were conducted on a server equipped with: - CPU: Intel(R) Xeon(R) Gold 5418Y @ 2.00GHz (96 cores, AVX2) - GPU: NVIDIA GeForce RTX 4090 (24GB) - OS: Ubuntu 20.04.6 LTS **Table 4:** Inference time (ms) and proportion of each module in LIPNovo. 
| Module | Spectrum Representation | Imputation | Peptide Prediction | Total |
| -------------- | ----------------------- | ---------- | ------------------ | ------ |
| **Time (ms)** | 6.03 | **3.60** | 770.10 | 779.73 |
| **Proportion** | 0.77% | **0.46%** | 98.77% | 100% |

**Table 5:** Training time (minutes) for an epoch ($499,402$ samples with batch size=$32$) compared to the baseline model.

| Method | Time (minutes) |
| ------------------- | -------------- |
| Baseline (CasaNovo) | 55 |
| **LIPNovo (Ours)** | 63 |

[3]. Deng et al. Scalable analysis of cell-type composition from single-cell transcriptomics using deep recurrent learning, Nature Methods, 2019.
Summary: This paper proposes LIPNovo, which is devised to compensate for missing fragmentation information within observed spectra before executing the final peptide prediction.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed method is novel, and the evaluation is fair.
Theoretical Claims: This paper has no theoretical proofs.
Experimental Designs Or Analyses: I checked the validity of the experiments.
Supplementary Material: I have checked the appendix part.
Relation To Broader Scientific Literature: This work introduces missing-data imputation into de novo peptide sequencing.
Essential References Not Discussed: No essential references are left undiscussed.
Other Strengths And Weaknesses: This paper addresses the key issue of missing fragments in de novo peptide sequencing, which can advance progress in this field.
Other Comments Or Suggestions: No other comments or suggestions.
Questions For Authors: It would be better to demonstrate experimental results showing whether this method is orthogonal to HelixNovo, which is also a method for solving the missing-fragment problem, e.g., by using the complementary spectra as LIPNovo's input.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition that the proposed method is novel and the evaluation fair, that it addresses the key issue of missing fragments in de novo peptide sequencing, and that it can advance progress in this field. We address the reviewer's question as follows.

**Q: It is better to demonstrate experimental results showing whether this method is orthogonal to HelixNovo, which is also a method for solving the missing fragment problem, e.g., using the complementary spectra as LIPNovo's input.**

Thanks for the thoughtful suggestion. $\pi$-HelixNovo and LIPNovo actually **address different aspects of the missing fragmentation issue**. $\pi$-HelixNovo handles the case where at least one ion ($b$ or $y$) in a pair is present, using the **complementary spectrum**. Specifically, if the $b$ ion is observed but the paired $y$ ion is missing, $\pi$-HelixNovo can recover the missing $y$ ion by subtracting the $b$ ion's mass from the precursor mass, which generates a complementary spectrum. However, $\pi$-HelixNovo cannot handle cases where both the $b$ and $y$ ions of a pair are missing, which is how the missing ratio is calculated according to [2]. **In contrast, LIPNovo is designed to impute all types of missing scenarios.** This is achieved by leveraging an imputation module that predicts the theoretical spectrum in a latent space; it thus effectively addresses problems that $\pi$-HelixNovo cannot resolve. This has been further evaluated through ablation experiments (as shown in **Table 3**). Specifically, "Baseline + Complementary Spectrum" improves peptide precision by 0.8%. In contrast, "Baseline + our Imputation mechanism" achieves a **+4.0%** increase. Moreover, "Baseline + Imputation + Complementary Spectrum" leads to a further 1.3% increase, because the complementary spectrum provides additional information for the imputation process.

**Table 3:** Experiments regarding the complementary spectrum used in $\pi$-HelixNovo.
| Method | Peptide Precision | Peptide AUC | Amino-acid Precision | Amino-acid Recall |
| ------------------------------------------------------------ | ----------------- | ------------- | -------------------- | ----------------- |
| Baseline (CasaNovo) | 0.529 | 0.493 | 0.741 | 0.740 |
| Baseline + **Complementary Spectrum** | 0.537 (+0.9%) | 0.500 (+0.7%) | 0.755 (+1.4%) | 0.755 (+1.5%) |
| Baseline + **Imputation Module** | 0.569 (+4.0%) | 0.536 (+4.3%) | 0.782 (+4.1%) | 0.782 (+4.2%) |
| Baseline + **Imputation Module** + **Complementary Spectrum** | 0.582 (+5.3%) | 0.547 (+5.4%) | 0.797 (+5.6%) | 0.797 (+5.7%) |

[2]. Zhou et al. NovoBench: Benchmarking Deep Learning-based De Novo Peptide Sequencing Methods in Proteomics, NeurIPS 2024 D&B track.

---

Rebuttal Comment 1.1: Comment: Thank you for the author's response. Since my original score was positive, I will maintain it.

---

Reply to Comment 1.1.1: Comment: Thank you for your reply and your time and effort in providing a thoughtful review of our work.
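As an illustration of the complementary-spectrum construction discussed in the rebuttal above, here is a minimal sketch. It assumes singly charged $b$/$y$ ions (so that $m/z(b_i) + m/z(y_{n-i}) = M + 2 \cdot m_{\text{proton}}$ for a peptide of neutral mass $M$); this is not $\pi$-HelixNovo's actual implementation.

```python
# Minimal sketch of complementary-spectrum generation.
# Assumption: singly charged b/y ion pairs, so
#   m/z(b_i) + m/z(y_{n-i}) = M + 2 * PROTON
# for a peptide of neutral monoisotopic mass M.
PROTON = 1.00728  # proton mass in Da

def complementary_spectrum(peaks, precursor_neutral_mass):
    """Map each observed fragment m/z to the m/z of its hypothetical partner ion."""
    pair_sum = precursor_neutral_mass + 2 * PROTON
    return [pair_sum - mz for mz in peaks]

# If a b-ion at 300.16 Da is observed for a peptide of neutral mass 799.4 Da,
# the missing paired y-ion is predicted near 501.25 Da.
```

Note that such a construction can only recover a missing ion when its partner was observed, which is exactly the limitation the rebuttal points out.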
Summary: This paper presents a novel computational paradigm called **LIPNovo** for **de novo peptide sequencing**, addressing the problem of missing fragmentation information commonly encountered in mass spectrometry data. Unlike existing methods that rely on incomplete spectra, LIPNovo performs latent-space imputation before prediction. The method formulates the imputation process as a set prediction problem, introducing a set of learnable peak queries to generate latent representations of theoretical peaks. Optimal bipartite matching is employed to align these predictions with ground truths, and an imputation loss function is designed to guide the process. Experiments conducted on the **Nine-species, Seven-species, and HC-PT datasets** demonstrate that LIPNovo significantly outperforms state-of-the-art methods across various metrics, including amino acid-level, peptide-level, and PTM-level performance. The proposed model achieves substantial improvements over the competitive baseline CasaNovo, confirming the effectiveness of the imputation mechanism. Ablation studies and sensitivity analyses further validate the contributions of each module. LIPNovo offers a powerful new approach to enhance peptide sequencing accuracy and robustness.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The method of this paper has been verified through experiments. No theory is proposed that requires additional proof.
Experimental Designs Or Analyses: Yes. The method of this paper has been verified through experiments.
Supplementary Material: I read the appendix carefully.
Relation To Broader Scientific Literature: This paper presents a novel computational paradigm called **LIPNovo** for **de novo peptide sequencing**, addressing the problem of missing fragmentation information commonly encountered in mass spectrometry data.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses:

### **Advantages**

1. **Innovation**:
   - It is novel to fill in missing fragment information in the latent space instead of operating directly on the original mass spectrometry data.
   - Bipartite matching is introduced into mass spectrometry data analysis, borrowing the matching strategy from the field of object detection to effectively solve the alignment of variable-length theoretical peak predictions.
2. **Technical Contribution**:
   - An end-to-end imputation-prediction framework is designed, and the imputation process is guided by the latent representation of the theoretical spectrum, which significantly improves the robustness of the model to incomplete mass spectrometry data.
   - The positive correlation between imputation quality and sequencing performance is verified experimentally, and the necessity of each module is demonstrated through ablation experiments and parameter analysis.
3. **Experimental Results**:
   - On three mainstream datasets (Nine-species, Seven-species, HC-PT), LIPNovo achieves SOTA in amino acid, peptide, and PTM recognition, especially under high fragment-missing rates.
   - The generalization and efficiency of the method are further verified by cross-validation and comparison with GraphNovo.

---

### **Improvement suggestions**

1. **Limitations of theoretical spectrum generation**:
   - The theoretical spectrum relies on the ideal fragments of the target peptide (such as b/y ions), but in real scenarios, unknown peptides or complex PTMs may cause deviations in the theoretical spectrum. It is recommended to add a discussion of how to alleviate such problems.
2. **Computational efficiency analysis**:
   - The imputation module may increase inference time, but the paper does not compare computational overhead (such as inference speed or memory usage) with the baseline model. It is recommended to add such an analysis to evaluate the practicality of the method.
3. **Improvement direction for extreme missing scenarios**:
   - Experiments show that the performance improvement is limited when the fragment missing rate is >70%. It is recommended to further explore possible solutions (such as cross-sample learning or introducing auxiliary information) in the discussion section and design targeted experiments.

Other Comments Or Suggestions: Please see above.
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
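The DETR-style bipartite matching the review refers to (aligning a fixed set of learnable peak queries with a variable number of theoretical peaks) can be sketched as follows. The L2 cost and the function name are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries_to_peaks(pred, target):
    """Optimal one-to-one assignment of N peak queries to M <= N theoretical peaks.

    pred:   (N, d) latent representations produced by the learnable peak queries
    target: (M, d) latent representations of the theoretical peaks
    Returns (query_idx, peak_idx); queries left unmatched would be supervised
    as "no peak" in a full set-prediction model.
    """
    # Pairwise L2 distance as the matching cost (an illustrative choice).
    cost = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)
    return linear_sum_assignment(cost)
```

`linear_sum_assignment` supports rectangular cost matrices, which is what makes the variable-length alignment mentioned in the review straightforward.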
Rebuttal 1: Rebuttal: Thanks for the positive feedback on our work's novelty and the recognition that our method significantly improves the robustness of the model, achieves SOTA performance, and demonstrates generalization and efficiency. Our responses to the concerns are as follows. **Q1: Limitations of theoretical spectrum generation.** We make some clarifications about the theoretical spectrum generation. In our work, the theoretical spectra are computed from ground truth peptide sequences in the training set, where PTMs are explicitly considered. This ensures that peptides with different PTMs yield distinct theoretical spectra, and the mass deviations introduced by modifications are properly accounted for during training. Importantly, theoretical spectra are only used during training. Our method does **not require theoretical spectrum computation during inference**. LIPNovo is trained to map incomplete observed spectra to their corresponding theoretical spectrum representations. This acts as a **signal enhancement mechanism**, helping to mitigate deviations between observed and theoretical spectra and clarify spectral patterns, ultimately improving peptide sequence prediction. In more complex fragmentation scenarios (e.g. Electron Transfer Dissociation producing $c/z$ ions), the theoretical spectra can be extended to include these additional ion types. LIPNovo remains compatible with such scenarios **without requiring changes to its computational paradigm**, further demonstrating its flexibility. **Q2: Computational efficiency analysis.** Thanks for the valuable suggestion. We evaluate the computational overhead compared to the baseline in **Table 1**, which reports the inference time and GPU memory usage. The values are the mean across examples in the Nine-Species test set. The results show that LIPNovo increases the inference time by 14.82 ms (**+1.94%**) per example, and consumes an additional 98MB of GPU memory. 
All experiments were conducted on a server equipped with:
- CPU: Intel(R) Xeon(R) Gold 5418Y @ 2.00GHz (96 cores, AVX2)
- GPU: NVIDIA GeForce RTX 4090 (24GB)
- OS: Ubuntu 20.04.6 LTS

**Table 1:** Comparison of computational overhead against the baseline.

| | Inference Time (ms) | **GPU Memory** (MB) |
| ------------------- | ------------------- | ------------------- |
| Baseline (CasaNovo) | 764.91 | 678 |
| **LIPNovo (Ours)** | 779.73 | 776 |

**Q3: Improvement direction for extreme missing scenarios.**

Thanks for the thoughtful suggestion. In **Lines 425-436**, we have discussed such limitations and introduced possible improvement directions. As indicated, one potential way is utilizing cross-sample information for missing-peak imputation. To investigate this, we conduct a preliminary exploration to incorporate **cross-sample learning** into LIPNovo. Specifically, for each spectrum we search for the single most similar spectrum (i.e., the reference spectrum) in the training set under mass constraints, following [1]. Spectral similarity implies possible peptide similarity, where shared fragments can offer additional information for imputation. The reference spectrum is encoded by the spectrum encoder, and the resulting representation is fed into the imputation module to assist imputation. The results are presented in **Table 2**. As shown, under the high missing-ratio range [0.7, 0.8), while LIPNovo improves the baseline by only 0.86%, the simple extension **LIPNovo+Cross-Sample** achieves a **2.22%** improvement. On average, this extension enhances baseline performance by **8.38%** (*vs.* 6.58% by LIPNovo).

**Table 2:** Experiments on LIPNovo's improvement direction for extreme missing scenarios. Amino-acid precision (%) is reported.
| Missing Ratio Range | [0,0.1) | [0.1,0.2) | [0.2,0.3) | [0.3,0.4) | [0.4,0.5) | [0.5,0.6) | [0.6,0.7) | [0.7,0.8) | [0.8,0.9) | [0.9,1.0] | Mean |
| ------------------------- | ------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | ------ |
| Baseline (CasaNovo) | 75.399 | 57.784 | 40.009 | 28.595 | 24.021 | 19.532 | 13.706 | 10.628 | 10.517 | 4.998 | 28.519 |
| LIPNovo | 88.265 | 73.296 | 53.276 | 39.715 | 30.740 | 24.026 | 16.029 | 11.490 | 9.333 | 4.854 | 35.102 |
| **LIPNovo+Cross-Sample** | 88.946 | 74.296 | 55.983 | 41.622 | 33.831 | 26.047 | 17.712 | 12.849 | 11.069 | 6.624 | 36.898 |
| $\Delta$ *w.r.t* Baseline | 13.547 | 16.512 | 15.974 | 13.027 | 9.810 | 6.515 | 4.006 | 2.221 | 0.552 | 1.626 | 8.379 |
| $\Delta$ *w.r.t* LIPNovo | 0.681 | 1.000 | 2.707 | 1.907 | 3.091 | 2.021 | 1.683 | 1.359 | 1.736 | 1.770 | 1.796 |

[1]. Xia et al. SearchNovo, ICLR 2025.
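The cross-sample retrieval step described in the rebuttal (finding the most similar training spectrum under a precursor-mass constraint) could look roughly like this. The binned cosine-similarity encoding is an illustrative assumption, not necessarily what SearchNovo [1] or LIPNovo uses.

```python
import numpy as np

def bin_spectrum(mzs, intensities, max_mz=2000.0, bin_width=1.0):
    """Encode a spectrum as an L2-normalized vector of fixed-width m/z bins."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, inten in zip(mzs, intensities):
        if mz < max_mz:
            vec[int(mz / bin_width)] += inten
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def top1_reference(query_vec, query_mass, bank_vecs, bank_masses, mass_tol=0.5):
    """Index of the most cosine-similar bank spectrum within mass_tol Da, or -1."""
    best, best_sim = -1, -1.0
    for i, (vec, mass) in enumerate(zip(bank_vecs, bank_masses)):
        if abs(mass - query_mass) <= mass_tol:
            sim = float(query_vec @ vec)  # cosine similarity of unit vectors
            if sim > best_sim:
                best, best_sim = i, sim
    return best
```

The mass constraint filters the candidate bank first, so only spectra of plausibly similar peptides compete on spectral similarity.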
Automated Benchmark Generation for Repository-Level Coding Tasks
Accept (poster)
Summary: This work presents SETUPAGENT - an automated LLM-based approach to generate repository-level coding benchmarks from a list of GitHub repos. It enables the creation of larger and more diverse benchmarks suitable for evaluating software engineering agents, and automates steps such as dependency setup, test execution, result parsing, etc. The authors use SETUPAGENT to create 2 new benchmarks: SWEE-Bench (an extended version of SWE-Bench encompassing more diverse repositories) and SWA-Bench (focusing on applications rather than libraries). The authors highlight the distributional differences of SWEE and SWA relative to SWE-Bench (issue description quality, fix complexity, etc.) and highlight possible contamination of models by showing the difference in SWE agent success rates before and after the models' knowledge cutoff dates.
Claims And Evidence: Yes - The claim of 40% lower agent success rates on SWEE and SWA compared to SWE is unclear. Tables 5 and 6 show very similar success rates for all 3 benchmarks, with only some difference between SWEE and SWA with one of the models.
Methods And Evaluation Criteria: Yes
Theoretical Claims: None
Experimental Designs Or Analyses: Yes
Supplementary Material: Appendix
Relation To Broader Scientific Literature: Overall, the LLM-based approach to generate SWE benchmarks is novel and very valuable. This work contributes 2 new benchmarks, extending SWE-Bench and addressing its limitations (diversity, potential contamination of models).
Essential References Not Discussed: Nil
Other Strengths And Weaknesses: Weaknesses:
- The authors have shown experiments with 2 models (GPT4o and 4o-mini); showing performance differences of SWE agents using open-source models on SWE, SWA, and SWEE would strengthen the value of the work.
- Although the authors point out distributional differences of SWE with SWEE & SWA, a deeper understanding (based on multiple attributes) of the type of new diverse samples present in SWEE and SWA (not present in SWE) would be welcome.
Other Comments Or Suggestions: None Questions For Authors: See claims & weakness sections Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable and positive review, highlighting both the novelty of our work and the value it brings to the community. Below, we address their remaining questions:

**Can you extend the evaluation of the created benchmarks?**

We have added three additional agents/methods (SWE-agent v1, ZeroShot-BM25, ZeroShot-Oracle) and four additional models (Haiku 3.5, Llama 3.3 70B, Qwen 2.5 72B, DeepSeek V3) to our evaluation. See the results below.

**Model comparison with ACR**

| | SWA | SWEE | SWE |
| - | - | - | - |
| GPT4o-mini | 8.4% | 9.0% | 8.2% |
| GPT4o | 10.2% | 15.1% | 16.6% |
| Haiku | 10.8% | 12.9% | 13.6% |
| Llama3.3 70B | 8.8% | 10.8% | 12.5% |
| Qwen2.5 72B * | 3% | 2% | 4% |
| DeepSeek V3 * | 8% | 13% | 26% |

*only evaluated on a random subset of 100 instances

**Agent comparison with GPT4o-mini**

| | SWA | SWEE | SWE |
| - | - | - | - |
| AutoCodeRover v2 | 8.4% | 9.0% | 8.2% |
| Openhands | 3.9% | 4.4% | 4.6% |
| SWE-Agent v1 | 2.6% | 8.9% | 8.2% |
| ZeroShot (Oracle) | 0.9% | 2.2% | 2.8% |
| ZeroShot (BM 25) | 1.3% | 2.8% | 1.5% |

While most of the additional results are in line with what we observed so far, we want to highlight that there are some interesting behaviours. In particular, DeepSeek V3 performs much better (2-3x) on SWE than on both other benchmarks; similarly, SWE-Agent v1 performs much worse on SWA than on both SWEE and SWE. This highlights the value of diverse benchmarks, enabling a holistic evaluation of new models and methods.

**Where do you observe the performance differences mentioned in the Abstract?**

E.g., in Table 6, we show that the performance of GPT4o is 40% lower on SWA compared to SWE. We observe even bigger differences in the results reported above.
Summary: The paper addresses the problem of automatically creating repository-level execution benchmarks for software engineering tasks. The authors first describe SetupAgent, an LLM-powered agentic framework to set up the execution environment for any Python repository. This is then used to create SWA- and SWEE-Bench, increasing the diversity of represented problems and minimizing contamination against models like GPT-4o. Evaluations show a similar level of performance on SWE-Bench as on these benchmarks, with SWA-Bench showing slightly lower performance for problems after the models' knowledge cut-off.
Claims And Evidence:
1. SetupAgent is a very useful contribution to the community, allowing researchers to turn any Python repository into a benchmark instance. However, the system is not adequately evaluated for accuracy or reliability. While the authors run tests to reproduce original issues, I believe that the system needs to be evaluated more deeply to check the presence, correctness, reliability, and reproducibility of fail-to-pass tests. A thorough evaluation of the system with this in mind is missing from the paper. Further, SetupAgent incorporates multiple LLM-guided steps, and as far as I can tell, no attempts were made to vet the LLM outputs. While the system proposal is useful, I think the authors should take more steps towards improving transparency about the robustness and reliability of SetupAgent.
2. The paper claims higher diversity in SWEE- and SWA-Bench compared to SWE-Bench. These claims are backed by statistical studies in Section 4.
3. Finally, the statistical significance of the performance of various models on various benchmarks is also shown. In some cases the p-values are very high, e.g., Table 8 has many p-values > 0.1. Nevertheless, I commend the authors for being transparent about these results.
Methods And Evaluation Criteria: I have given a brief commentary on the creation of SetupAgent above.
Besides that, I think the authors also fail to comment on the merits of and comparisons of their efforts to SWE-Bench Verified [1]. It is now known that the original SWE-Bench has many issues at the systemic and problem-semantic levels, and SWE-Bench Verified was an effective annotation effort to address such concerns. Given that SetupAgent leads to a benchmark similar to SWE-Bench, I think the authors should also check whether the same concerns are present in SWA- and SWEE-Bench.
[1] https://openai.com/index/introducing-swe-bench-verified/
Theoretical Claims: n/a
Experimental Designs Or Analyses: I have answered this in other parts of my review.
Supplementary Material: I did not review the Appendix.
Relation To Broader Scientific Literature: The paper is related to the evaluation of LLMs as agents in software engineering. The most relevant work in this direction is SWE-Bench [1].
[1] Jimenez, Carlos E., et al. "SWE-bench: Can language models resolve real-world GitHub issues?" arXiv preprint arXiv:2310.06770 (2023).
Essential References Not Discussed: I have touched upon this above. The problems raised and addressed in the SWE-Bench Verified work are not mentioned in this work. I think this is essential for the adoption of SetupAgent, SWA- and SWEE-Bench, just as SWE-Bench Verified is now the default benchmark for evaluation.
Other Strengths And Weaknesses:
1. I believe that SetupAgent solves an important problem of automating the creation of execution environments, but this needs to be done in a more rigorous way. I have elaborated on this above.
2. I think that Section 3.1 is completely unnecessary. It obfuscates a pretty simple setup and does not add much value. I recommend the authors remove this section and all the following mathematical notations.
3. I did not understand the difference between SWA- and SWEE-Bench. Specifically, I am not sure what makes SWA-Bench unique.
I do not understand this statement: “Code Agents develop software applications that suffer from different types of bugs compared to libraries due to architectural and structural differences.” Could you elaborate or maybe show an example? 4. I’m not sure if this should be a major or minor concern: I do not see much difference in model performances across SWE- and SWEE-/SWA-Bench (Table 5, 6). Am I missing something from these results? Other Comments Or Suggestions: Line 375 - e → We Questions For Authors: 1. Can you concretely clarify the difference between SWA- and SWEE-Bench? Are both needed? If not, which one is better? This will help researchers understand what’s useful for their use case. 2. How would you judge the accuracy and reliability of SetupAgent? Have you evaluated the various components of SetupAgent? This will help increase the trust in the system. 3. What is the status of issues raised in SWE-Bench Verified in SWEE- and SWA-Bench? Did you take any steps to address these issues? If not, what should researchers using your benchmarks be mindful of? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review, acknowledging the value of our contribution to the community and the quality of our analysis. Below, we address their remaining concerns. **Can you evaluate the presence and correctness of fail-to-pass tests more rigorously?** We first want to highlight that we already filter the dataset to only include instances where we extracted F2P tests by executing the test suite in the original state with test changes applied and in the fixed state with test changes applied. We have additionally conducted a manual review of the extracted tests (see response to Reviewer FoWC) and find them to be of high quality for 90% of instances. Finally, we have rerun the test execution and extraction 10 times for SWA to identify possibly flaky tests and extract 509 shared instances across all 10 runs (compared to 535 from the original run). We traced the remaining discrepancy down to two main causes: randomly parameterized tests and genuinely flaky tests. We will create a version of both the SWA and SWEE datasets with these filtered out but expect this to have minimal impact on our results and note that similar behaviour is present in SWE-Bench (see e.g., https://arxiv.org/pdf/2407.21787). **Have you evaluated SetupAgent both as a whole and with respect to its individual components?** We evaluate SetupAgent in Section 5.2, discussing its overall performance in Table 3 and conducting an ablation study in Table 4, demonstrating the importance of all its components. We have now additionally conducted a human evaluation, discussed in the response to Reviewer FoWC. **Can you analyse the issues discussed in SWE-Bench-Verified in context of your benchmarks?** Given the similarity in the benchmark generation pipeline, we expect similar findings to those for SWE-Bench(-Verified). 
However, the two main review/filtering criteria in the creation of SWE-Verified, namely the completeness of the issue description and the specificity of added tests, only aim to filter out instances that cannot be solved without additional information. This is crucial to assess the absolute performance of code generation systems but will affect all evaluated systems to a similar extent and thus have only marginal impact on a comparative analysis. We note that such a comparative analysis is sufficient to select among different systems for a given use case or to guide their development. Similarly, we focus our analysis on the differences in the relative performance between models and compare our benchmarks to SWE-Full and not SWE-Verified. Finally, we have conducted a human review similar to that used in the creation of SWE-Verified on 30 random samples of SWA, which we discuss in the response to Reviewer FoWC. We are happy to include this discussion in the next revision of our work.

**Can you discuss the differences between SWE, SWA, and SWEE? Are all needed?**

The main difference is the set of considered projects. While SWE(E) focuses on libraries that typically expose functionality to other programs and thus tend to have more stable, well-tested APIs, SWA focuses on applications (e.g., locust or xonsh) that are directly used by humans and often have (graphical) UIs. As we show in Table 6, the performance of some models (GPT4o-mini) is quite consistent across these different tasks, while that of others (GPT4o) varies significantly. We have conducted additional experiments (see response to Reviewer XqFN) where we see significant (3x) differences across benchmarks for SWE-Agent v1 and DeepSeek V3. We believe this demonstrates the value of more diverse benchmarks in this highly active field.

**Comment on Section 3.1**

We introduced the notation in Section 3.1 to formalize our plain-English (and sometimes, for brevity's sake, incomplete) descriptions of different settings.
However, we are happy to move this formalization and its usages to the appendix. **Conclusion** We hope to have addressed the Reviewer's concerns, look forward to their reply, and remain happy to answer follow-up questions.
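The flaky-instance filter described in the rebuttal (rerunning test extraction multiple times and keeping only instances whose fail-to-pass tests agree across every run) amounts to a simple intersection. A hypothetical sketch, not the authors' actual code:

```python
def stable_instances(runs):
    """Keep only instances whose fail-to-pass (F2P) test sets agree across reruns.

    runs: list of dicts, one per rerun, mapping instance_id -> frozenset of
          tests that failed before the fix and passed after it.
    Instances missing from any run, with an empty F2P set, or with differing
    F2P sets across runs (e.g. flaky or randomly parameterized tests) are dropped.
    """
    shared = set(runs[0])
    for run in runs[1:]:
        shared &= set(run)  # instance must appear in every rerun
    return {
        iid for iid in shared
        if runs[0][iid] and all(run[iid] == runs[0][iid] for run in runs[1:])
    }
```

With ten reruns, as in the rebuttal, `runs` would have ten entries, and the surviving set corresponds to the 509 shared instances reported.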
Summary: This paper introduces SETUPAGENT, a system for automatically generating repository-level benchmarks for code agents by setting up historically accurate execution environments. It extracts installation and testing commands from GitHub repositories using LLMs, iteratively refines them based on execution feedback, and validates correctness through test suite execution. Using SETUPAGENT, the authors create two new large-scale benchmarks: SWA-Bench, focused on software applications, and SWEE-Bench, focused on diverse and less-popular Python repositories.
Claims And Evidence: A few areas could benefit from additional validation. For example, regarding the claim that "SETUPAGENT ensures high correctness and historical fidelity": while the 95% pass-rate threshold and test-level granularity are good proxies, there appears to be no human validation or reproducibility check.
Methods And Evaluation Criteria:
- The paper tackles the challenge of limited, manually curated benchmarks like SWE-Bench.
- Integration of LLMs for interpreting semi-structured repo files: test execution provides concrete, verifiable feedback to supervise LLM-generated commands.
- The authors balance automation with correctness through iterative feedback-driven correction and a 95% test-pass threshold in the validation phase.
Theoretical Claims: N.A.
Experimental Designs Or Analyses: Benchmark quality relies on automated test results, which may miss subtle correctness issues (e.g., partial fixes, untested behaviors). No human evaluation or baseline "gold set" is used to further validate the correctness of the generated tasks.
Supplementary Material: N.A.
Relation To Broader Scientific Literature: The paper is well-situated within the broader scientific literature on code generation benchmarks and LLM-based code agents.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: Strengths - The paper goes beyond reporting benchmark creation—it analyzes failure modes, conducts ablation studies, and investigates performance correlations with task characteristics. - Designed to scale to thousands of benchmark instances without human intervention. - The system can be reused for future benchmark updates with new repositories or tasks. Weakness - No qualitative or manual evaluation is done to assess the correctness or realism of generated benchmark instances. The correctness relies entirely on test suite outcomes, which may miss semantic issues or partial correctness. - Evaluation focuses on few agents. - The system may not support projects requiring non-trivial external setup, and the automatically generated test case may be trivial. Other Comments Or Suggestions: Consider randomly sampling a subset (e.g., 50 instances) of generated benchmark tasks for manual review to verify: (1) The issue and patch are meaningfully connected. (2)The extracted test suite is valid and reflects the issue. (3) The task is realistic and unambiguous. This would boost confidence in the correctness and representativeness of the dataset. A direct head-to-head comparison (in terms of runtime, accuracy, coverage, and granularity) with EXECUTIONAGENT would solidify SETUPAGENT’s claimed superiority. Even a small-scale test (e.g., on 10 shared repos) would be informative. Consider including an example of how external researchers can integrate the benchmark into their own code agent evaluation pipelines. Add more baseline models of SWE-bench to test your benchmarks. (https://www.swebench.com/#verified) Questions For Authors: - How do you plan to encourage the research community to adopt and test their models on your benchmarks? - How to integrate SETUPAGENT with existing agent frameworks (e.g., SWE-Agent) to enable seamless evaluation? 
- How do you ensure that the generated benchmarks reflect realistic and meaningful developer tasks beyond just passing/failing the generated test cases(maybe trivial)? - Can SETUPAGENT be extended to support projects with more complex or non-standard setups? Code Of Conduct: Affirmed. Overall Recommendation: 2
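The feedback-driven setup described in this review's summary (extract commands, execute them, repair from error logs, and accept once the test pass rate clears a threshold) can be sketched abstractly. Here `run` and `refine` are injected placeholders, with `refine` standing in for the LLM repair step; this is an illustration of the loop, not SETUPAGENT's actual interface.

```python
def setup_loop(commands, run, refine, pass_threshold=0.95, max_rounds=5):
    """Iteratively repair setup/test commands from execution feedback.

    commands: initial candidate commands (e.g. extracted from CI files/READMEs)
    run:      callable(commands) -> (pass_fraction, error_log)
    refine:   callable(commands, error_log) -> revised commands (an LLM call
              in a SetupAgent-style system)
    Accepts the setup once the test-suite pass rate reaches pass_threshold.
    """
    pass_frac = 0.0
    for _ in range(max_rounds):
        pass_frac, log = run(commands)
        if pass_frac >= pass_threshold:
            return commands, pass_frac
        commands = refine(commands, log)
    return None, pass_frac  # could not produce a valid environment
```

Making `run` and `refine` injectable keeps the control flow testable independently of any LLM or container backend.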
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback, highlighting the depth of our analysis, the scalability of our approach, and the value of our work for the community. Below, we address their remaining questions and concerns:

**Can you conduct a manual review of generated tasks to assess their quality?**

We have conducted a manual review of 30 randomly chosen SWA instances for both task and setup quality. To assess task quality, we follow the protocol used to create SWE-Verified. That is, we scored issue description quality and clarity on a scale of 0 (well-specified issue with clear success criteria) to 3 (almost impossible to solve correctly without further instructions) and test quality from 0 (test perfectly covers valid solutions) to 3 (tests are too narrow or too broad, or require information not provided in the issue description). For both questions, 0 and 1 are considered passing scores. We observe the following: 23 (77%) instances have a meaningful and sufficiently complete issue description, and 22 (73%) of these additionally have suitable tests (27 or 90% across all instances) to check whether the issue was fixed. To assess setup quality, we scored the extracted setup as either functionally equivalent to the described setup or incorrect, and the test setup from 0 (functionally equivalent to the described setup) to 2 (correct tests only partially or not at all executed). We observe the following: all instances run the correct tests, with 77% using exactly the test commands provided in the reference, and 22 (73%) instances additionally have a fully correct installation/setup. We will include an extended version of these results in a revised version of our paper.

**How do you ensure the generated benchmarks are meaningful beyond passing (trivial) generated tests?**

We first want to highlight that all tests are taken from the original repositories and were thus written by human contributors and considered valuable enough to be merged.
We further filter out all instances that do not have at least one such test that fails before the fix and succeeds after it. Finally, our manual evaluation above demonstrates that cases where no meaningful tests are present are rare (only 10%). **Can you extend the evaluation of the created benchmarks?** Yes, please see the response to Reviewer XqFN. **How can your benchmarks/approach be integrated into existing agent evaluation pipelines?** To allow for a seamless integration of our benchmarks, we have made sure to create our benchmark instances such that they are compatible with the popular SWE-Bench evaluation harness (only requiring minor changes to load installation and testing commands from the dataset rather than having them hardcoded) and will release both our dataset and the modified version of the harness. **How do you plan to motivate the community to use your tool and benchmarks?** We believe that the research community can benefit strongly from our benchmarks, as the increased sample diversity will reduce the effect of overfitting and thus level the playing field for new methods that were not tuned to SWE-Bench at significant cost. Further, we believe the ability to efficiently create new domain-specific benchmarks will be especially important for practitioners working on domains not well represented in SWE-Bench. We believe these will be strong motivators to adopt the benchmarks/method proposed here. Additionally, we plan to launch a benchmark website similar to swebench.com to allow for easy result tracking and comparison. **Can you include a direct comparison to ExecutionAgent?** We first want to highlight that ExecutionAgent is not capable of setting up specific (historic) states of repositories. It thus cannot be used directly for benchmark generation and is limited to our repository setting in Table 3. 
Directly compared on 25 random repositories considered for SWA-Bench and using GPT4o-mini for both methods, SetupAgent (ours) succeeds on 13, requiring 29.2 s on average (12 min total), while ExecutionAgent succeeds on only 9, requiring on average 2700 seconds (18h total). We thank the reviewer for suggesting this experiment and will include it in the revised version of our paper. **Can SetupAgent be expanded to handle more complex setups?** While SetupAgent can already extract complex setup steps, a key limitation is that our dockerized execution environment does not support running docker instances inside this docker environment. We believe this is an exciting item for future work but out of scope here given the involved engineering challenges. We note that such a version of the benchmark would also be incompatible with existing SWE-Bench evaluation harnesses. **Conclusion** We hope to have addressed the Reviewer's concerns, look forward to their reply, and remain happy to answer follow-up questions. --- Rebuttal Comment 1.1: Comment: Thank you for the submission. I would appreciate some clarification on a few points to better understand the contribution. Currently, I couldn't see a clear performance difference between your benchmark and existing SWE datasets. This may be partly due to the selection of baselines—some appear to perform significantly below the state-of-the-art. It would be helpful to include stronger baselines from SWE-bench so that the unique strengths of your benchmark can be more effectively demonstrated. Also, while expanding available resources is valuable, introducing another large, automatically-generated dataset without rigorous analysis could place an additional burden on the community. Ensuring quality and clarity around the dataset’s behavior would make it more useful and trustworthy. 
Given that the AI community already has SWE-bench, it might be worth considering submitting this work to the Software Engineering (SE) community, where SE benchmark construction and usage are core concerns. Your work could benefit from more targeted feedback and engagement there. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for engaging in the discussion and address their follow-up questions below. **Do you observe a performance difference between SWA-, SWEE-, and SWE-Bench?** Yes! For example, using AutoCodeRover and GPT4o, we observe a 40% lower performance on SWA than on SWE (see Table 6). We have added more experiments in the reply to Reviewer XqFN, reproduced below for convenience. There, we see significant differences for Haiku 3.5 (20%), Llama3.3 (30%), and DeepSeek V3 (70%). We have also added more agents to our evaluation and see that SWE-Agent v1 also exhibits a significant difference (68%) between SWE and SWA. Model comparison with ACR | |SWA|SWEE|SWE| |-|-|-|-| |GPT4o-mini| 8.4% | 9.0% | 8.2% | |GPT4o| 10.2% | 15.1% | 16.6% | |Haiku| 10.8% | 12.9% | 13.6% | |Llama3.3 70B| 8.8% | 10.8% | 12.5% | |Qwen2.5 72B *| 3% | 2%| 4% | |DeepSeek V3 *| 8% | 13% | 26% | *only evaluated on a random subset of 100 instances Agent comparison with GPT4o-mini | |SWA|SWEE|SWE| |-|-|-|-| |AutoCodeRover v2| 8.4% | 9.0% | 8.2% | |Openhands| 3.9% | 4.4% | 4.6% | |SWE-Agent v1|2.6% | 8.9% | 8.2% | |ZeroShot (Oracle) | 0.9% | 2.2% | 2.8% | |ZeroShot (BM 25) | 1.3% | 2.8% | 1.5% | **Can you add more baselines performing closer to the state-of-the-art?** Yes, we have added more models and agents, now including the top 3 open source agents on the SWE-Bench-Full leaderboard. We had to evaluate them with cheaper models (GPT4o-mini instead of Sonnet 3.5/3.7, which would be 20x more expensive) due to budget constraints, with current experiments costing already ~5k USD. **Is this work a valuable contribution to the AI community beyond SWE-Bench?** We believe so. 
SWE-Bench is limited to a few repositories and cannot be extended automatically, leading to an increasing risk of overfitting and contamination dominating genuine improvements in code agents, potentially misleading the field. For example, DeepSeek V3, in combination with AutoCodeRover, performs much better on SWE than any other evaluated combination but is beaten by multiple other models on SWA-Bench. We have further shown in our manual analysis that SWA is of high quality, supported by some models obtaining very similar performance across datasets. Finally, our benchmark generation framework will allow researchers to create datasets for specific problem domains and thus build and evaluate more specialized software engineering agents.
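The fail-to-pass filtering described earlier in this thread (keeping only instances with at least one test that fails before the fix and succeeds after it) can be sketched as follows; all helper and field names are hypothetical illustrations, not SetupAgent's actual implementation:

```python
def has_fail_to_pass_test(pre_fix, post_fix):
    """True if at least one test fails before the fix and passes after it.

    Both arguments map test identifiers to a boolean pass outcome; tests
    missing from the pre-fix run are conservatively treated as passing.
    """
    return any(passed and not pre_fix.get(test, True)
               for test, passed in post_fix.items())


def filter_instances(instances):
    """Keep only benchmark instances with at least one fail-to-pass test."""
    return [inst for inst in instances
            if has_fail_to_pass_test(inst["pre_fix"], inst["post_fix"])]
```

An instance whose tests all pass (or all fail) both before and after the fix carries no signal about whether the issue was resolved, which is why such instances are dropped.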
Summary: To achieve the automatic generation of challenging and realistic repository-level coding benchmarks, this work proposes an LLM-driven method, SETUPAGENT, to automate the extraction of valid information from complex real-world repositories, ensuring the correct setup of the environment for perfectly reproducing issues encountered in practice. Using SETUPAGENT, this work constructs two coding benchmarks with different characteristics: SWA-Bench and SWEE-Bench, to evaluate the properties of the generated benchmarks. Claims And Evidence: I think the main claims have been supported by enough evidence, especially the characteristics of the generated benchmarks: realism, diversity, efficiency, and generalizability. Methods And Evaluation Criteria: Although I am not an expert in this field (LLM-driven coding analysis), I have carefully studied the method proposed by the authors, which involves detailed and well-reasoned sub-processes. Overall, the method appears convincing to me. Theoretical Claims: NA Experimental Designs Or Analyses: In the experiments, the authors tested different code agents and benchmark generators. However, I believe the number of experiments conducted is somewhat limited, especially for benchmark generators. It would be more beneficial to include generators with different capabilities and varying degrees of relatedness, as this would help us better understand the impact of choosing generators with different attributes. Supplementary Material: I have read some of the appendix. Relation To Broader Scientific Literature: The automated construction of benchmarks is of great significance to the community. Essential References Not Discussed: NA Other Strengths And Weaknesses: I think it would be better to further validate the effectiveness of the benchmark, which may include: 1. Whether running SETUPAGENT multiple times on the same set of repositories produces stable benchmarks and whether these benchmarks lead to consistent evaluation results. 
If not, it implies that the evaluation results of model capabilities may fluctuate due to randomness in the benchmark construction process, making the evaluation unreliable. 2. The automated construction of benchmarks enables good scalability in sample size. Therefore, for this task, it is recommended to determine the optimal number of samples (benchmark size) that balances evaluation efficiency and stability. Other Comments Or Suggestions: The first line of chapter 5.3 misses a 'W' Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and positive feedback, recognizing the relevance and quality of the benchmarks we created. Below, we address their remaining questions: **Can you assess the repeatability of SetupAgent and what it implies for the generated benchmarks?** First, we want to emphasize that benchmarks need only be constructed (and then updated) once to evaluate a large number of code generation methods. Therefore, a lack of repeatability would not have an adverse effect on the benchmark’s value and reliability. Nonetheless, we conducted an additional experiment corresponding to the repository setting in Table 3, rerunning SetupAgent 3 times for 100 random candidate repositories from SWEE. This led to 26, 27, and 27 successes, where the last two runs install exactly the same instances and the first just drops one. This demonstrates that SetupAgent is highly repeatable. We thank the reviewer for suggesting this experiment and will include it in the revised version of our paper. **How was the sample size for the generated benchmarks chosen?** We combined our own experience developing code agents and the community's preference for first the Lite and later the Verified version of SWE-Bench to choose a sample size of ~500 instances. As the reviewer noted, larger versions can easily be created; however, this makes evaluation more costly. **Can you add an experiment analysing the effect of the underlying model on SetupAgent?** We first want to highlight that our work focuses on the framework we propose, which we analyse in the ablation presented in Table 4. Further, we specifically chose GPT4o-mini for its great affordability, as it allows us to demonstrate that benchmark generation is now accessible to the wider community. 
Finally, we note that few cheaper models are available (e.g., even an 8B model hosted by together.ai has the same inference cost), while more capable models are significantly more expensive (e.g., GPT4o has ~15x higher inference costs). We conduct an additional experiment with GPT4o corresponding to 100 SWEE samples in the repository setting in Table 3. We observe GPT4o succeeds for 24 repositories compared to 26 with GPT4o-mini, demonstrating the robustness of our framework to the underlying model. Interestingly, GPT4o uses fewer iterations to improve its output (2.1 instead of 3.4 on average), explaining the slightly lower success rate.
A Two-Stage Learning-to-Defer Approach for Multi-Task Learning
Accept (poster)
Summary: This paper presents a two-stage learning-to-defer (L2D) approach for the multi-task setting involving both classification and regression. The authors provide theoretical justification in the form of consistency bounds, as well as empirical justification in the form of validation on (multi-task) object detection and EHR analysis. ## Update after rebuttal As stated in my rebuttal comment, I thank the authors for the clarifications and will maintain my recommendation of Weak accept. Claims And Evidence: Regarding theoretical claims, I will have to defer to domain experts to verify accuracy. Empirical claims appear sound, though it is somewhat difficult to interpret these results without being intimately familiar with L2D literature and evaluation. Methods And Evaluation Criteria: The two validation datasets were appropriate for this unique setting. Theoretical Claims: I did not verify proofs due to time constraints. Experimental Designs Or Analyses: Experimental design of empirical experiments seems appropriate – performance metrics were appropriate, and results (+ variability) were presented over 4 trials. Source code was provided in order to reproduce experimental results. Supplementary Material: I reviewed Sections A and G, but did not evaluate proofs closely. Relation To Broader Scientific Literature: The authors clearly lay out how, despite recent progress in L2D, previous two-stage L2D approaches do not address multi-task “classifier-regressor models”, which are relatively common in more complex tasks such as object detection. Essential References Not Discussed: If there are key references missing, then I am not aware of them nor qualified to propose them. Other Strengths And Weaknesses: *Strengths* - The paper is very well-written and carefully organized to logically walk the reader through each section. - The related work is thorough, with unique contributions of this study clearly laid out relative to prior work. 
*Weaknesses* - Empirical results may be difficult to interpret for non-experts in L2D. Other Comments Or Suggestions: References to tables/figures are unusual (ex: “Figure 5.1” [L372] and “Table 5.2” [L435]). These should refer to the tables/figures themselves, not the section. Questions For Authors: I want to be transparent that this is outside of my area of expertise, so I may have elementary questions regarding the background of L2D and interpretation of results. 1. When describing a medical use case of L2D, the authors write, “If the model is sufficiently confident, its diagnosis is accepted; otherwise, the decision is deferred to a medical expert who provides the final diagnosis.” I take it this is an illustrative example of the concept of deference rather than a concrete description of L2D as manifested in empirical experiments, correct? For example, in the empirical experiments, it seems that “deference” refers to accepting/rejecting (weighting) the predictions of task experts rather than deferring to, say, a human expert. Perhaps this is an obvious point to researchers like yourselves, but based on this description in the Introduction, I was expecting some sort of human-in-the-loop evaluation where low-confidence cases were deferred to an actual human expert. Is my understanding correct, and could the authors provide some way to make this distinction clearer for non-experts? 2. In the object detection setting, why do the authors not compare performance to other approaches? Also, can the authors explain why this approach is practically useful if it never outperforms the largest model (Expert 2)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and constructive feedback. We are pleased that they found the paper well-written, the empirical setup appropriate, and the contributions clearly positioned in relation to prior work. Below, we provide clarifications on several points raised in the review. > Empirical results may be difficult to interpret for non-experts in L2D. To clarify, we conducted two distinct sets of experiments. In the object detection task, we considered three models of varying complexity (a lightweight Faster R-CNN, a medium-sized Faster R-CNN, and a large Faster R-CNN). To reflect their varying complexity and computational cost, we assigned different consultation scores to each agent: $\beta_0=0$, $\beta_1$, and $\beta_2=R_G\beta_1$, with $R_G$ being a ratio of GFLOPs. This setup explicitly illustrates the trade-off between computational efficiency and prediction correctness, as the larger the model, the better its performance. Figure 1 shows that, for higher consultation costs $\beta_1$, our approach allocates queries mostly to the main lightweight model (cost $\beta_0$) or the first expert (cost $\beta_1$). Conversely, when $\beta_1$ is lower, the rejector strategically allocates queries across both experts (expert 1 and expert 2). In the EHR task, baseline approaches fail to achieve strong performance because they base allocation decisions solely on individual tasks (classification or regression separately). In contrast, our method jointly considers both tasks, resulting in more balanced and effective query allocation. We will clearly emphasize these interpretations in the revised manuscript to improve clarity. > References to tables/figures are unusual (ex: “Figure 5.1” [L372] and “Table 5.2” [L435]). These should refer to the tables/figures themselves, not the section. Thank you for pointing this out. We have taken note and will update this in the revised manuscript. 
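For concreteness, the cost-aware allocation in the object-detection setup above can be sketched as follows; the numeric values and helper names are illustrative placeholders, not the paper's actual implementation:

```python
def consultation_costs(beta1, gflops_ratio):
    """Consultation scores for the main model and two experts:
    beta_0 = 0, beta_1, and beta_2 = R_G * beta_1."""
    return [0.0, beta1, gflops_ratio * beta1]


def allocate(expected_losses, costs):
    """Route a query to the agent minimizing expected loss plus
    consultation cost (index 0 is the lightweight main model)."""
    totals = [loss + cost for loss, cost in zip(expected_losses, costs)]
    return min(range(len(totals)), key=totals.__getitem__)
```

With a higher $\beta_1$ the cheap agents absorb most queries, while lowering $\beta_1$ shifts allocation toward the large expert, which is the trade-off the rebuttal describes.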
> When describing a medical use case of L2D, the authors write, “If the model is sufficiently confident [...] I was expecting some sort of human-in-the-loop evaluation where low-confidence cases were deferred to an actual human expert. Is my understanding correct, and could the authors provide some way to make this distinction clearer for non-experts? Thank you for raising this important point. To clarify, the medical use case mentioned in the Introduction is intended solely as an illustrative example of deference rather than a description of the concrete mechanism used in our empirical experiments. In our experiments, “deference” refers to the process of automatically selecting predictive agents (e.g., models, humans) based solely on their predictions and associated costs. A key strength is its agent-agnostic nature: it does not rely on any specific internal structure, training paradigm, or decision process of the agents. As long as we have access to an agent’s predictions, our framework can learn a rejector $r\in\mathcal{R}$ that, for each input $x\in\mathcal{X}$, computes confidence scores and allocates the query to the agent deemed most reliable. Importantly, during inference, we only query the selected expert. In our empirical implementation, we have used automated black-box models or synthetic distributions as the experts. However, the framework is fully flexible and can incorporate human experts or any other decision-making system, as long as their predictions are available. We will update the manuscript to explicitly clarify this distinction, thereby reducing any potential confusion. > In the object detection setting, why do the authors not compare performance to other approaches? Also, can the authors explain why this approach is practically useful if it never outperforms the largest model (Expert 2)? 
The primary reason we did not include comparisons with other object detection approaches is that our paper focuses explicitly on evaluating the effectiveness of the allocation mechanism within the Learning-to-Defer framework, rather than competing directly with state-of-the-art detection methods. Importantly, our method remains practically valuable even when it does not surpass the performance of the largest expert model (Expert 2). This is because Expert 2, while accurate, is typically too computationally expensive or slow to deploy on every query in realistic, resource-constrained environments. Our L2D approach provides a principled solution to this critical challenge by strategically routing queries to less computationally intensive models whenever appropriate, significantly reducing resource usage without severely impacting accuracy. Furthermore, when expert models have specialized strengths or operate on slightly different distributions—as demonstrated clearly in our EHR experiments—our method can exploit these discrepancies to effectively improve overall system performance beyond what any single agent could achieve alone. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarifications and will likely maintain my recommendation of Weak accept.
Summary: The paper presents a new application of learning to defer to the context of multitask, where the task's target consists of both a regression and a classification task. The authors provide a theoretical analysis of two-stage L2D, showing that the proposed surrogate loss is both Bayes-consistent and $\mathcal{G}, \mathcal{R}$ consistent. Empirical results showcase the effectiveness of the approach. Claims And Evidence: I think the paper is convincing enough in terms of contribution and impact of the approach in real-life scenarios. The theoretical analysis correctly supports the paper's claims. Methods And Evaluation Criteria: The proposed datasets and evaluation criteria make sense. The authors consider two datasets, one that is new in the setting of learning to defer, and one used also in previous works [Mao et al., 2023a, 2024e]. Hence, the empirical evaluation makes sense overall. Theoretical Claims: The proofs seem correct, but I might have missed some details. Experimental Designs Or Analyses: The analyses provided are sufficiently sound, with enough details to reproduce results. Overall, the main concern I have is regarding the choice of subsampling the dataset for MIMIC IV, as detailed in Appendix G.2. Other options could be available, e.g., weighting the loss between classes differently, and the authors could have commented a bit more this aspect. Supplementary Material: I quickly passed through the proofs and looked at the details for the dataset creation. Relation To Broader Scientific Literature: The paper is correctly positioned in the literature, with most works in learning to defer correctly referenced. Moreover, the work extends existing literature, considering cases where regression and classification occur simultaneously. I think this is a valuable contribution with sound proof and analysis. 
Essential References Not Discussed: The essential literature is correctly discussed, the only reference I think must be necessarily added is the work by Okati et al., 2021, which considers a formulation with explicit constraints in terms of coverage. [Okati et al., 2021] - Okati, N., De, A., & Rodriguez, M. (2021). Differentiable learning under triage. NeurIPS 2021 Other Strengths And Weaknesses: The paper is clearly written, with convincing motivations for the proposed approach. Other Comments Or Suggestions: I provide here for completeness a few recent articles, which could enlarge the current related work section (as they expand different aspects of Learning to Defer): - In [Wei et al., 2024], the authors provide a refinement of Bayes consistency, called Bayes-dependent consistency; - In [Palomba et al., 2025], the authors bridge causal inference and learning to defer for improved evaluation of such systems; - In [Strong et al., 2025], the authors present an application of L2D for healthcare using LLMs Finally, according to ICML guidelines, the caption for tables should be placed above. Please consider adjusting the ones in the paper. [Wei et al., 2024] - Wei, Z., Cao, Y., & Feng, L. (2024). Exploiting human-AI dependence for learning to defer. ICML '24 [Palomba et al., 2025] - Palomba, F., Pugnana, A., Alvarez, J., Ruggieri, S. (2025). A Causal Framework for Evaluating Deferring Systems. AISTATS '25 [Strong et al., 2025] - Strong, J., Men, Q., & Noble, A. (2025). Towards Human-AI Collaboration in Healthcare: Guided Deferral Systems with Large Language Models. AAAI '25 Questions For Authors: I think the paper is sound enough, with good motivation and adequate evaluation. Hence, I am prone to suggesting acceptance of the paper. I have a couple of questions for the authors: - Have the authors considered how to directly model the coverage for experts, as considered for instance in [Okati et al.,2021; Mozannar et al., 2023; Palomba et al., 2025]? 
- Could the authors comment on my concern for the choice of subsampling the dataset? I guess adding a weighted loss is a safe option, but I do not want to overlook consistency concerns. ### Update After Rebuttal I am satisfied with the answers from the authors. I keep my positive score. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback. We appreciate their positive assessment of the strength of our theoretical and empirical contributions. Below, we address the potential connection to coverage constraints and provide further details on specific aspects of our experimental design. > The analyses provided are sufficiently sound, with enough details to reproduce results. Overall, the main concern I have is regarding the choice of subsampling the dataset for MIMIC IV, as detailed in Appendix G.2. Other options could be available, e.g., weighting the loss between classes differently, and the authors could have commented a bit more this aspect. > Could the authors comment on my concern for the choice of subsampling the dataset? I guess adding a weighted loss is a safe option, but I do not want to overlook consistency. Thank you for raising this point. Indeed, alternative heuristics such as class-weighted losses could have been employed. However, our primary objective was not necessarily to optimize overall performance metrics, but rather to highlight the relative improvement and effectiveness of our approach. Nevertheless, your suggestion of using weighted losses is valid and aligns with our theoretical framework, as our consistency proofs hold for any positive cost $c_j$. A promising direction for future experiments would involve adapting agent-specific cost functions in a cost-sensitive manner to address class imbalance explicitly. > The essential literature is correctly discussed, the only reference I think must be necessarily added is the work by Okati et al., 2021, which considers a formulation with explicit constraints in terms of coverage. This is correct, we will add this paper in the related work. > Have the authors considered how to directly model the coverage for experts, as considered for instance in [Okati et al.,2021; Mozannar et al., 2023; Palomba et al., 2025]? This is an excellent question. 
In the single-expert setting considered in [5,6,7], modeling coverage explicitly is relatively straightforward. Consider a rejector $r:\mathcal{X}\rightarrow\mathbb{R}$, where $r(x)\geq 0$ indicates deferring to the expert, and $r(x)<0$ indicates no deferral. To directly model coverage, one can introduce a coverage level $k \in \mathbb{R}^+$ corresponding to the $\overline{k}^{\,\text{th}}$ percentile of the rejector scores $r(x)$ computed over a validation set. This yields an adjusted decision rule: $1\_{r(x)\geq k}$, instead of $1\_{r(x)\geq 0}$, resulting in an expected coverage of $\mathbb{E}[1\_{r(x)\geq k}] = 1-\overline{k}$. However, in the multi-expert setting, extending this idea is nontrivial because deferral decisions involve complex allocations across multiple experts. This complexity likely explains why [5,6,7] primarily focus on single-expert scenarios. **One possible direction to handle coverage explicitly in multi-expert settings is to consider margin-based rejectors**. Let $\mathcal{A}= \lbrace 0,\dots,J \rbrace$ denote the set of $J$ experts plus a main model, and define a rejector via a scoring function $r:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}$. We then define the margin-based score as $$\rho\_r(x,j)=r(x,j)-\max\_{j'\neq j} r(x,j'),$$ where a positive margin $\rho\_r(x,j)\geq 0$ leads to deferring to expert $j$. Coverage can be incorporated by setting thresholds $\tilde{k}\_j$ separately for each expert $j$, corresponding to specific percentiles of the margin $r(x,j)-r(x,0)$. The decision rule then becomes $1\_{\rho\_r(x,j)\geq \tilde{k}\_j}$. **Importantly, this approach naturally reduces to the single-expert coverage** definition used in [5,6,7], since deferral occurs when $\rho\_r(x,1)\geq \tilde{k}\_1$, i.e., $r(x,1)\geq r(x,0)+\tilde{k}\_1$. We hope this aligns with your question and would be glad to discuss it further. We will include this clarification and an expanded discussion in the revised manuscript. ### References [1] Mao, et al. (2023). 
Two-Stage Learning to Defer with Multiple Experts. NeurIPS23 [2] Mao, et al. (2024). Regression with multi-expert deferral. NeurIPS24 [3] Narasimhan et al. (2022) Post-hoc estimators for learning to defer to an expert. NeurIPS22 [4] Mao et al. (2024) Theoretically grounded loss functions and algorithms for score-based multi-class abstention. AISTATS24 [5] Okati et al. (2021). Differentiable learning under triage. NeurIPS21 [6] Mozannar et al. (2023, April). Who should predict? exact algorithms for learning to defer to humans. AISTATS23 [7] Palomba et al. (2025). A Causal Framework for Evaluating Deferring Systems. AISTATS25 --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal and for addressing my questions. I am satisfied with their answers. Overall, I think this is a valuable contribution. Thus, I am keeping my acceptance score.
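A minimal sketch of the margin-based rejector with per-expert coverage thresholds proposed in the rebuttal above (illustrative only; index 0 is the main model, and each threshold $\tilde{k}_j$ is taken as a percentile of the validation margins $r(x,j)-r(x,0)$):

```python
import numpy as np


def coverage_thresholds(val_scores, percentiles):
    """Per-expert thresholds: for expert j >= 1, the requested percentile
    of the validation margins r(x, j) - r(x, 0).

    val_scores has shape (n_samples, 1 + n_experts).
    """
    margins = val_scores[:, 1:] - val_scores[:, [0]]
    return np.array([np.percentile(margins[:, j], p)
                     for j, p in enumerate(percentiles)])


def defer_decision(scores, thresholds):
    """Defer to the top-scoring expert j >= 1 if its margin
    rho(x, j) = r(x, j) - max_{j' != j} r(x, j') clears threshold k_j;
    otherwise keep the query with the main model (index 0)."""
    j = max(range(1, len(scores)), key=lambda i: scores[i])
    rho = scores[j] - max(scores[i] for i in range(len(scores)) if i != j)
    return j if rho >= thresholds[j - 1] else 0
```

Raising a threshold $\tilde{k}_j$ shrinks expert $j$'s share of queries, which is exactly the coverage knob the single-expert construction provides.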
Summary: The paper developed a Two-Stage Learning-to-Defer framework for multi-task problems, enabling joint classification and regression. This framework features a novel two-stage surrogate loss family that is both $(\mathcal{G}, \mathcal{R})$-consistent and Bayes-consistent for cross-entropy-based surrogates. The authors derived tight consistency bounds and established minimizability gaps, extending prior work on Learning-to-Defer. Their learning bounds improve with richer hypothesis spaces and more confident experts. The authors validated the approach on object detection and electronic health record analysis, demonstrating its superiority over existing methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: Section 2 provides a sound discussion. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: The paper is well-written, clearly presented, and provides a detailed discussion of prior work, making it accessible to non-experts. Weaknesses: The results, while sound, seem to offer incremental progress rather than significant novelty compared to existing work. Furthermore, I question whether classification can be viewed as a specific instance of regression, with the zero-one loss as the evaluation metric. If so, would the proposed framework be adequately addressed by existing regression-based deferral frameworks? Other Comments Or Suggestions: N/A. Questions For Authors: 1. Is it possible to view classification as a specific instance of regression, where the zero-one loss serves as the evaluation metric? If this perspective is valid, would the current framework be adequately addressed by the regression-based deferral framework presented in previous research? 2. What specific challenges arise when adapting the learning-to-defer framework for multi-task learning scenarios? 
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and thoughtful comments. We are pleased that they found our framework well-presented and the theoretical analysis rigorous. In the following, we clarify the technical challenges that arise in the multi-task setting. First, we emphasize that **our paper provides theoretical novelty**. Our framework is generalized to accommodate any positive cost $c\_j$, enabling a unified treatment of classification, regression, and multi-task problems within Learning-to-Defer. We derived the Bayes-optimal rejector and established consistency guarantees for multi-task L2D. In particular, we prove consistency bounds that hold for any surrogate in the comp-sum family—parameterized by $\nu$ as defined in Equation 1—which encompasses commonly used surrogates such as the logistic, MAE, and exponential losses. These bounds are both tighter and more interpretable than those in [1, 2, 3], accounting for both the parameter $\nu$ and the $L\_1$ norm of the aggregated cost vector $\boldsymbol{\tau}$ (see discussion with reviewer @oatj). Additionally, we present a novel analysis of minimizability gaps in Theorem 4.6, demonstrating that the optimal conditional risk critically depends on both the norm of the aggregated costs $\boldsymbol{\tau}$ and the choice of the multiclass surrogate $\Phi\_{01}^\nu$ (Equation 1). Notably, our results hold without any assumptions on the underlying distribution $\mathcal{D}$—in contrast to [4]—and apply to any surrogate $\Phi\_{01}^\nu$ within the comp-sum family. Furthermore, we provide new generalization bounds specifically tailored to this setting. While classification can conceptually be viewed as a special case of regression under the zero-one loss, such a simplification overlooks important theoretical distinctions that arise when analyzing consistency and optimality in deferral decisions. 
To highlight this, we give some additional analysis: Let $g(x) = (h \circ w(x), f \circ w(x))$ be any multi‐head model where $w \in \mathcal{W}$ is a shared representation and $h \in \mathcal{H}$, $f \in \mathcal{F}$ are heads for classification and regression, respectively. Define $L(w,h,f) = \mathbb{E}\_{y,t \mid x}[c\_0(h(w(x)),f(w(x)),z)]$ as the conditional risk for the triple $(w,h,f)$. Let $L^\ast = \inf\_{(w,h,f)} L(w,h,f)$ be the optimal multi‐head conditional risk in the hypothesis class $\mathcal{G} = \lbrace x \mapsto (h(w(x)), f(w(x))) \mid w \in \mathcal{W}, h \in \mathcal{H}, f \in \mathcal{F} \rbrace$. We then write $L(w,h,f) - L^\ast$ to denote the conditional excess risk of $(w,h,f)$ above the best multi‐head. Observe that for a chosen triple $(w,h,f)$, we can decompose: $$L(w,h,f) - L^\ast = \bigl[L(w,h,f) - \inf\_{h',f'} L(w,h',f')\bigr] + \bigl[\inf\_{h',f'} L(w,h',f') - L^\ast\bigr],$$ with $h'$ and $f'$ acting directly on $x$. Define the heads gap $\Delta\_{\mathrm{heads}}(w,h,f) = L(w,h,f) - \inf\_{h',f'} L(w,h',f')$, which measures how well the specific heads $(h,f)$ perform given a fixed representation $w$. Then define the representation gap $\Delta\_{\mathrm{repr}}(w) = \inf\_{h',f'} L(w,h',f') - L^\ast$, which captures how close $w$ is to the best possible representation for both tasks. From this notation, we rewrite $$L(w,h,f) - L^\ast = \Delta\_{\mathrm{heads}}(w,h,f) + \Delta\_{\mathrm{repr}}(w).$$ We next introduce a multi‐task comparison by defining a separate‐training conditional risk. Let $L\_{\mathrm{sep}}(h',f') = \mathbb{E}\_{y,t \mid x}[c\_0((h'(x),f'(x)),z)]$ be the conditional risk of using completely separate models $h'$ and $f'$ without any shared representation. Let $L\_{\mathrm{sep}}^\ast = \inf\_{h',f'} L\_{\mathrm{sep}}(h',f')$. We measure the quality of forcing a single shared $w$ via $$\Delta\_{\mathrm{MTL}} = L^\ast - L\_{\mathrm{sep}}^\ast.$$ If $\Delta\_{\mathrm{MTL}} < 0$, then joint training (i.e. 
a shared $w$) is strictly better than separate solutions; if $\Delta\_{\mathrm{MTL}} > 0$, it is worse. Finally, we can link $\Delta\_{\mathrm{MTL}}$ to the multi‐head conditional excess risk by noting $$L(w,h,f) - L^\ast = \bigl[L(w,h,f) - L\_{\mathrm{sep}}^\ast\bigr] - \Delta\_{\mathrm{MTL}}.$$ Hence, if $\Delta\_{\mathrm{MTL}} < 0$, it follows that $L(w,h,f) - L^\ast > L(w,h,f) - L\_{\mathrm{sep}}^\ast$: the best multi‐head risk lies strictly below the best separate‐training risk, so joint training with a shared representation offers strictly more headroom. Conversely, if $\Delta\_{\mathrm{MTL}} > 0$, we get $L(w,h,f) - L^\ast < L(w,h,f) - L\_{\mathrm{sep}}^\ast$, so the best multi‐head risk is closer to $L(w,h,f)$ but strictly worse than the best separate‐training solution. Following the discussion with Reviewer @oXmZ, we will also incorporate a discussion on how our approach can be adapted to account for coverage constraints. We hope this clarifies your question. We will add these novel analyses and revise the manuscript to make this point clearer in the final version. Please refer to the discussion with reviewer @oXmZ for references.
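A minimal numeric sanity check of the two identities above (all risk values are made-up illustrative numbers, not outputs of any real model):

```python
# Illustrative conditional risks (our choice of numbers, for checking the algebra only).
L_whf = 2.0        # L(w, h, f): risk of a particular triple
L_heads_opt = 1.6  # inf_{h', f'} L(w, h', f'): best heads for this fixed w
L_star = 1.2       # L*: best multi-head risk over (w, h, f)
L_sep_star = 1.5   # L_sep*: best risk with fully separate models

delta_heads = L_whf - L_heads_opt   # heads gap
delta_repr = L_heads_opt - L_star   # representation gap
delta_mtl = L_star - L_sep_star     # multi-task gap (< 0 here: sharing helps at the optimum)

# Decomposition: L(w,h,f) - L* = heads gap + representation gap
assert abs((L_whf - L_star) - (delta_heads + delta_repr)) < 1e-12

# Link to separate training: L(w,h,f) - L* = (L(w,h,f) - L_sep*) - Delta_MTL
assert abs((L_whf - L_star) - ((L_whf - L_sep_star) - delta_mtl)) < 1e-12
```

Both identities hold term by term, independently of the sign of $\Delta\_{\mathrm{MTL}}$.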
Summary: In this paper, the authors analyze the learning-to-defer (L2D) problem in the two-stage multi-task (classification and regression) setting. The paper introduces the pointwise Bayes rejector for multi-task deferral and proposes a surrogate deferral loss that is Bayes-consistent. The paper further provides generalization guarantees for the learned rejector, which give insights into the conditions of the problem setting that can lead to better generalization. Numerical experiments are provided to validate the proposed method, and a comparison of the proposed multi-task rejector with prior single-task rejectors shows that the proposed method achieves a more balanced trade-off between the tasks than prior methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Did not check all the proofs; minor typos in the proof of Lemma 4.2 (mentioned in Other Comments Or Suggestions). Experimental Designs Or Analyses: * It is unclear how the coefficients $\lambda^\text{cla}, \lambda^\text{reg}$ used in the agents' cost were determined. These coefficients seem crucial in attaining the balanced performance described in the discussion. * It is unclear how the rejection is implemented for the prior methods in the experiments in Section 5.2 in the multi-task setting. For example, when the classification rejector is triggered, does the model consult the expert on both classification and regression tasks, or only the classification task? Supplementary Material: Yes, Sections A, B, and G. Relation To Broader Scientific Literature: This paper extends the literature in the area of L2D by providing theoretical insights into L2D in the two-stage multi-task setting. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths * The paper provides theoretical insights into L2D in the two-stage multi-task setting, which seems novel in the related literature. 
* The paper is mostly self-contained and easy to follow * The problem is well-motivated, and the theoretical insights are sufficiently discussed. Weaknesses * The definition of the deferral function class $L_{def}$ (column 1 line 319) to be a mapping to [0,1] seems unrealistic, given that the $\ell_{def}$ is made up of $c_0$, which in turn is assumed to be a summation of $\ell_{01}$ and $\ell_{reg}$. Since $\ell_{reg}$ is not necessarily bounded between 0 and 1, it is unclear why the definition of $L_{def}$ is correct. * Given the possibly different scales of classification and regression losses, more discussion seems to be needed regarding designing a balanced/meaningful rejector in this setting. Other Comments Or Suggestions: * It seems like in proof of Lemma 4.2, $C^B_{\ell_{\text{def}}}$ should not depend on $g, r$. * $\bar{\tau}$ is not defined in Theorem 4.4, and it is not clear where $T^{-1, \nu}, T^{\nu=1}$ appear in the Theorem. * The clusters $C_{cla}^{M1}, C_{cla}^{M2}$ defined in Experiments in 5.2 seem not distinct as described. Questions For Authors: * Please refer to the questions/concerns raised in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and thoughtful feedback. We are glad that they found our theoretical contributions novel, the paper well-motivated, and the analysis of the two-stage multi-task Learning-to-Defer setting valuable to the literature. Please find some clarification below: > It is unclear how the coefficients $\lambda^{cla}$ and $\lambda^{reg}$ [...] balanced performance described in the discussion. > Given the possibly different scales of classification and regression losses, more discussion seems to be needed regarding designing a balanced/meaningful rejector in this setting. We agree that the choice of the coefficients $\lambda^{cla}$ and $\lambda^{reg}$ is crucial for achieving balanced and meaningful performance. In our experiments, we considered both classification and regression tasks equally important; thus, we set $\lambda^{cla} = \lambda^{reg} = 1$. This choice represents a balanced and task-agnostic baseline, ensuring no implicit bias toward either component. In practical scenarios, these coefficients should indeed be tuned according to task-specific priorities and performance requirements. **For instance, if the classification is more important in this setting, we should set $\lambda^{cla}>\lambda^{reg}$ to prioritize agent with more proficiency on the classification task**. We will explicitly discuss this and highlight the importance of tuning $\lambda^{cla}$ and $\lambda^{reg}$ in the revised manuscript. > It is unclear how the rejection is implemented for the prior methods [...] or only the classification task? In the EHR task, we treat classification and regression as independent but simultaneously allocated tasks. Specifically, when the classification rejector (as described in [1]) is triggered, we allocate both the classification and regression queries jointly to the selected agent. Similarly, when the regression rejector is triggered, it also allocates both tasks together [2]. 
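To make the role of $\lambda^{cla}$, $\lambda^{reg}$ and the joint-allocation rule concrete, here is a minimal Python sketch (function names, costs, and numbers are ours and purely illustrative, not from the paper's implementation):

```python
# Hypothetical sketch of the weighted multi-task agent cost and deferral rule.
def aggregated_cost(l01, lreg, lam_cla=1.0, lam_reg=1.0):
    """tau_j = lam_cla * classification loss + lam_reg * regression loss."""
    return lam_cla * l01 + lam_reg * lreg

def defer(costs):
    """Allocate the query (both tasks jointly) to the agent of minimal cost.
    costs[0] is the model itself; costs[1:] are the experts."""
    return min(range(len(costs)), key=lambda j: costs[j])

# With equal weights (the balanced baseline), a classification-only expert may lose overall:
costs = [aggregated_cost(0.3, 0.4),   # model:    tau_0 = 0.7
         aggregated_cost(0.0, 0.9),   # expert 1: tau_1 = 0.9 (strong classifier, weak regressor)
         aggregated_cost(0.2, 0.3)]   # expert 2: tau_2 = 0.5
assert defer(costs) == 2

# If classification matters more, weight it up: now the classification expert wins.
costs_cla = [aggregated_cost(l, r, lam_cla=5.0)
             for l, r in [(0.3, 0.4), (0.0, 0.9), (0.2, 0.3)]]
assert defer(costs_cla) == 1
```

The second example mirrors the point above: setting $\lambda^{cla} > \lambda^{reg}$ steers deferral toward agents with more classification proficiency.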
We will clarify this joint-allocation implementation explicitly in the revised manuscript. >The definition of the deferral function class (column 1 line 319) to be a mapping to $[0,1]$ seems unrealistic. > It seems like in proof of Lemma 4.2, $\mathcal{C}\_{\ell\_{def}}^B$ should not depend on $g,r$. Thank you for pointing this out. You are correct—these are typos and do not affect the correctness of the proofs. We will fix them in the revised manuscript. > $\overline{\tau}$ is not defined in Theorem 4.4, and it is not clear where $T^{1-\nu}$ appear in the Theorem. Indeed, $\overline{\tau}\_j$ represents the expected aggregated cost of agent $j \in \mathcal{A}$, formally defined as $\overline{\tau}\_j = \mathbb{E}\_{y,t|x}[\tau\_j]$. Consequently, we define the vector $\overline{\boldsymbol{\tau}} = (\overline{\tau}\_0, \overline{\tau}\_1, \dots, \overline{\tau}\_J)$ and denote by $||\overline{\boldsymbol{\tau}}||\_1$ its $L\_1$ norm. The transformation function $\mathcal{T}^{\nu}(u)$ is introduced to determine the function $\Gamma^{\nu}(u)$ corresponding to a given multiclass surrogate loss $\Phi^\nu\_{01}$. **We explicitly show that the consistency bounds of the deferral loss (Lemma 4.3) inherently depend on the consistency bounds of the multiclass surrogate loss** $\Phi^\nu_{01}$ from the comp-sum family (Equation 1). For example, choosing the log-softmax surrogate corresponds to setting $\nu = 1$. In this case, the relevant transformation function is explicitly given by $\mathcal{T}^1(u)=\frac{1+u}{2} \log[1+u] + \frac{1-u}{2} \log[1-u]$ [8]. Then, by taking the inverse of this transformation, we obtain the function $\Gamma^\nu(u)=(\mathcal{T}^\nu)^{-1}(u)$ leading to a bound with the function $\overline{\Gamma}^\nu(u)=||\overline{\boldsymbol{\tau}}||\_1 \Gamma^\nu(\frac{u}{||\overline{\boldsymbol{\tau}}||\_1})$ depending on both the surrogate transformation $\mathcal{T}^{\nu}(u)$ and the $L\_1$ norm $||\overline{\boldsymbol{\tau}}||\_1$. 
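As a concrete illustration of this construction, the following sketch numerically inverts $\mathcal{T}^1$ to obtain $\Gamma^1$ and the scaled bound $\overline{\Gamma}^1$ (function names are ours; note $\mathcal{T}^1$ maps $[0,1)$ increasingly onto $[0,\log 2)$, so the argument of the inverse must lie in that range):

```python
import math

def T1(u):
    """T^1(u) = (1+u)/2 * log(1+u) + (1-u)/2 * log(1-u), for u in [0, 1)."""
    if u == 0.0:
        return 0.0
    return 0.5 * (1 + u) * math.log(1 + u) + 0.5 * (1 - u) * math.log(1 - u)

def gamma1(v, tol=1e-12):
    """Gamma^1(v) = (T^1)^{-1}(v), via bisection (T^1 is increasing on [0, 1))."""
    lo, hi = 0.0, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if T1(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gamma1_bar(v, tau_l1):
    """Scaled bound: ||tau_bar||_1 * Gamma^1(v / ||tau_bar||_1)."""
    return tau_l1 * gamma1(v / tau_l1)

# Round-trip check: inverting then applying T^1 recovers the input.
assert abs(T1(gamma1(0.1)) - 0.1) < 1e-8
```

The bisection is a generic way to realize $(\mathcal{T}^\nu)^{-1}$ for any increasing $\mathcal{T}^\nu$ in the comp-sum family, since closed forms are generally unavailable.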
We will explicitly include these definitions and clarifications in the revised manuscript. > The clusters $C_{cla}^{M_1}, C_{cla}^{M_2}$ defined in Experiments in 5.2 seem not distinct as described. We confirm that clusters $C\_{cla}^{M_1}$ and $C\_{cla}^{M_2}$ were explicitly constructed to represent distinct regions of expertise, although they do indeed exhibit some overlap. Upon review, we acknowledge a minor inconsistency in the original manuscript regarding their definitions (but not on experiments). The correct cluster assignments are $C^{M\_1}\_{cla} = \lbrace C\_1, C\_2, C\_4\rbrace$ and $C^{M\_2}\_{cla} = \lbrace C\_1, C\_5, C\_6 \rbrace$. We will correct this in the revised manuscript. ### References [1] Mao, et al. (2023). Two-Stage Learning to Defer with Multiple Experts. NeurIPS23 [2] Mao, et al. (2024). Regression with multi-expert deferral. NeurIPS24 [8] Mao et al. (2023). Cross-entropy loss functions: theoretical analysis and applications. ICML23
MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
Accept (poster)
Summary: The paper proposes Multiway Dynamic Dense (MUDD) connections to improve cross-layer information flow in Transformers. By dynamically computing per-position weights for query, key, value, and residual streams, MUDD mitigates representation collapse and enhances depth efficiency. Claims And Evidence: Most claims are well-supported, but efficiency trade-offs and broader comparisons need discussion. Methods And Evaluation Criteria: Methods and evaluation are well-aligned with the problem, but efficiency trade-offs and broader comparisons could strengthen the study. Experiments confirm efficiency gains, depth utilization, and representation diversity. However, the inference slowdown (~10–15%) is measured (Table 4) but not addressed. Comparisons focus on DenseFormer and Hyper-Connections, lacking OmniNet or MoE models. Theoretical Claims: The paper does not present formal theoretical claims or proofs. Its contributions focus on empirical improvements through MUDD connections rather than theoretical guarantees. The complexity analysis (Section 2.6) is straightforward and aligns with standard FLOP and parameter counting methods. No issues were found. Experimental Designs Or Analyses: The experimental design is rigorous and supports the claims, though broader architectural comparisons and efficiency optimizations would strengthen the analysis. Supplementary Material: Yes, I have reviewed the supplementary material. Relation To Broader Scientific Literature: The paper builds on residual connections (He et al., 2016) and DenseFormer (Pagliardini et al., 2024), addressing Transformer depth inefficiencies (Merrill et al., 2022) and representation collapse (Liu et al., 2020b) with dynamic multiway dense connections, though it lacks comparisons to alternative cross-layer architectures like OmniNet (Tay et al., 2021a) or MoE-based models (Fedus et al., 2022). 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper presents a novel combination of dense connections and dynamic weight generation, improving Transformer efficiency with minimal overhead. - Strong empirical validation with extensive scaling laws, ablation studies, and interpretability analysis. - Clear and well-organized writing with useful visualizations and pseudo-code for reproducibility. Weaknesses: - Inference slowdown (~10–15%) is measured but not optimized. - Limited comparison to alternative architectures beyond DenseFormer and Hyper-Connections. - No real-world deployment discussion, which would strengthen practical relevance. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks!

> it lacks comparisons to alternative cross-layer architectures like OmniNet (Tay et al., 2021a) or MoE-based models (Fedus et al., 2022).

**vs. OmniNet** We've already compared MUDD connections with OmniNet briefly in Related Work. While both aim at promoting information flow across the whole model, they have major differences which we elaborate below. The additional Attend operation in OmniNet for omni (i.e. all-to-all) attention is computationally expensive (overhead generally larger than the whole original forward pass because of the much longer seq length). The authors propose to mitigate it by efficient attention variants. MUDD connections are much more efficient for two reasons: 1) only depth-wise aggregation (DA) is introduced, because the other omni attn paths can be formed by composition of DA and within-layer MHA; 2) DA is implemented as lightweight query-wise attention (please see the reply to RYaY). The computation overhead (both theoretical and practical) is much lower. There is no official open source implementation of OmniNet and we do not implement and evaluate it due to time constraints. Instead, here we do a rough comparison based on the C4 language modeling results in Table 1 of their paper, because it is most similar to our main experiments in terms of task, data and training compute. They compare models with roughly the same number of parameters, as also done in our experiments. MUDDPythia-1.4B/2.8B's relative reduction in ppl (5.1%/5.1%, see Table 2) is larger than OmniNet_T's (4.2%) with the most expensive Transformer self-attention, which further degrades to OmniNet_P's 2.6% and OmniNet_B's 0.4% when more practical efficient attention is used to mitigate the prohibitive computation cost of OmniNet_T. The distinctive advantage of OmniNet over MUDD is allowing early layers to attend to late layers (i.e. back attention). This recurrent processing can increase the effective depth. 
How to enable back attention while maintaining efficiency is a very valuable research direction which we'd like to explore in the future.

**vs. MoE** MoE and MUDDFormer are both architectures with dynamic weights. However, MoE uses these weights to select experts *within* a layer while MUDDFormer uses the weights to aggregate outputs *across* layers. They are complementary approaches and can be combined. To empirically compare MoE and MUDDFormer, we train two MoE models, MoE-1.8B-A405M and MoE-4B-A834M, with the same activated parameters as the 405M and 834M dense models, using the same training settings as in the scaling law experiments. For MoE-specific settings, we largely follow OLMoE: choose 2 experts out of 16, use a token choice policy with expert capacity 1.5 and load balance loss coefficient 0.01. Although the FLOPs of MoE are larger than those of MUDDFormer because of expert capacity greater than 1, and the number of total parameters is 4.4x - 4.8x larger, MUDDFormer is slightly better than MoE in perplexity (table below), showing its efficient utilization of *both* parameters and computation. In contrast, MoE works by *decoupling* parameters and computation and relies on significantly expanding parameters for good performance.

|total params|activated params|TFM++|TFM++ MoE|MUDDFormer|
|-|-|-|-|-|
|405M / 1.8B / 405M|405M|11.69|10.83|**10.77**|
|834M / 4B / 834M|834M|9.47|8.88|**8.85**|

> Inference slowdown (~10–15%) is measured but not optimized.

As discussed in the paper, the inference slowdown primarily stems from the series of small operations. Currently we implement inference in pure PyTorch and optimize speed mainly relying on torch.compile. We believe it can be further optimized by writing custom kernels, which is left for future work. We'd like to note that even without further optimization, the 10-15% slowdown is entirely worth paying in practice considering the 1.8x-2.4x gain in computation and parameter utilization. 
> Limited comparison to alternative architectures beyond DenseFormer and Hyper-Connections.

As with most recent cross-layer architectures, the authors of DenseFormer and Hyper-Connections (HC) have already compared them with several related approaches and shown their advantage over these approaches, e.g. DenseFormer > Depthwise Attention (ElNokrashy et al., 2022), HC > ResiDual (Xie et al., 2023) and Altup (Baykal et al., 2024). Therefore, having shown MUDD connections > HC > DenseFormer, we do not additionally compare with these cross-layer approaches.

> No real-world deployment discussion, which would strengthen practical relevance.

As a general component that significantly enhances the Transformer architecture, MUDD connections have broad real-world applications, especially for compute- or memory-constrained settings, e.g. running LLMs on mobile, considering MUDDFormer's much improved parameter and computation utilization over Transformer. For example, one can use a ~2B MUDDFormer model to replace a Transformer model twice its size and still get similar performance, so that the resource requirement can be lowered.
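For readers unfamiliar with the token-choice policy used in the MoE comparison above, here is a minimal numpy sketch of top-2-of-16 routing (one common convention; the expert-capacity limit of 1.5 and the load-balance auxiliary loss of the OLMoE-style setup are omitted, and all names are ours):

```python
import numpy as np

def token_choice_top2(logits, k=2):
    """Token-choice routing: each token picks its top-k experts by router logit;
    mixing weights are a softmax over the k selected logits (one common convention)."""
    topk = np.argsort(logits, axis=-1)[:, -k:]            # (tokens, k) expert ids
    sel = np.take_along_axis(logits, topk, axis=-1)       # (tokens, k) selected logits
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                    # normalized mixing weights
    return topk, w

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 16))     # router logits: 8 tokens, 16 experts
experts, weights = token_choice_top2(logits)
assert experts.shape == (8, 2) and np.allclose(weights.sum(axis=-1), 1.0)
```

This makes concrete the contrast drawn in the rebuttal: the dynamic weights here select experts *within* a layer, whereas MUDD's dynamic weights aggregate outputs *across* layers.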
Summary: The paper proposes a new method for aggregating information from previous layers that improves information flow in transformers. Specifically, the authors combine two methods for the aggregation: gated, token-specific weighting and separate streams for inputs to the attention (queries, keys, values, and residuals). The authors demonstrate improvements on the transformer models with fixed FLOPs per training, when compared to models not incorporating the MUDD framework. Claims And Evidence: The authors claim that adding multiway dynamic dense (MUDD) connections improves the performance of the model. They demonstrate this on the example of the transformer architecture by integrating MUDD and showing that it can achieve the performance of models trained with twice the theoretical compute. While the evidence in terms of theoretical compute is very strong, the paper would benefit from reporting additional empirical measurements of performance from the main training, such as memory usage and wall-clock time for the main experiments. Aggregating multiple layers might lead to increased memory consumption, so it is important to consider this while designing a scalable architecture. Moreover, there are discrepancies between theoretical FLOP computations and practical wall-clock time. While the paper reports tokens/s in Table 4, this metric reflects latency for different models than those compared in the main claim with scaling curves. The paper would be improved if the timing was shown for the main experiment or if both were provided. Methods And Evaluation Criteria: The selected benchmarking datasets and evaluation criteria are sensible but should be extended to include memory and wall-clock time measurements for the main experiments. Theoretical Claims: The only theoretical claim is the complexity analysis presented in Appendix B. 
However, the transitions in the analysis (equations 11 and 12) are extremely informal and could be clarified further with a more detailed derivation. Experimental Designs Or Analyses: The experimental comparison in the fixed FLOP scenario, particularly in relation to Pythia, makes sense. There are additional analyses that further give intuition about the underlying mechanisms in MUDD. Supplementary Material: The supplementary material, which comprises the theoretical derivation of the additional FLOPs in the MUDDformer, hyperparameters, and additional visualizations of the attention, is generally appropriate and helpful in context. Relation To Broader Scientific Literature: The paper effectively positions itself as a continuation of research on DenseNet and DenseFormer by introducing dynamic connections to improve selectiveness in aggregation. It covers relevant recent literature, and it may be beneficial to also cite HighwayNetworks [1], which introduced a gated version of the residual connection earlier. This citation could further enrich the paper’s motivation by linking it to foundational ideas that combine aspects of DenseNet with the original HighwayNetworks concepts. [1] Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. "Highway networks." arXiv preprint arXiv:1505.00387 (2015). Essential References Not Discussed: I did not find any essential references not discussed in the paper. Other Strengths And Weaknesses: The paper would benefit from additional clarity and motivation in some sections, and figures could be described more thoroughly. Some claims might be refined or supported with further justification. For instance: - l14-l17: The discussion of how Pre-Norm stabilizes training by preventing representation collapse (Liu et al., 2020b) - the citation does not support this claim and the claim itself is not related to the paper. 
- l105-108: presenting dynamic dense connections as depth-wise single-headed self-attention needs further justification. Additionally, the paper is full of grammatical errors and wording issues: - l078: "benefit from differentiated" -> "benefit from different.", also 'different' as a word here is redundant because of what follows directly after. - l088: "differentially" -> "differently." - l138: "grows" -> "grow.", l640: "calculates" -> "calculate" (this error appears in multiple locations). - l643: "negible" etc. Other Comments Or Suggestions: - Questions For Authors: In summary, while the paper contains several minor issues that, if addressed, could further improve its clarity and impact, the main result is both strong and potentially impactful, particularly from a scalability perspective. I am willing to reconsider my evaluation once these issues have been addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks! (mem = memory, backprop = backpropagating)

> mem usage and wall-clock time for main experiments

Peak activation mem usage for training a transformer in float16 with L layers, N heads, seq length T, batch size B and model dim D occurs at the beginning of backprop and is composed of two parts:
- hidden states for L layers: 2LBTD (grad checkpointing)
- activation mem for a layer: BTD(34+6NT/D) (outlined in [1], recomputation of layer L when backprop it)

Comparison of activation mem usages:

|model|activation mem|mem $\Delta$ ratio|
|-|-|-|
|TFM++|2LBTD+BTD(34+6NT/D)|1|
|DenseFormer|2LBTD+BTD(34+6NT/D)+**2LBTD**|L/(L+17+3NT/D)|
|MUDDFM|2LBTD+BTD(34+6NT/D)+**6BTD**+**2LBTD**|(L+3)/(L+17+3NT/D)|

DenseFormer adds 2LBTD to store gradients for each layer's hidden state when backprop DA after layer L. Based on this, MUDDFormer adds another 6BTD for recomputation of layer L's multi-way hidden states Q,K,V when backprop it (they are not stored for all layers but are recomputed during backprop). The extra memory ratio for MUDD is (L+3)/(L+17+3NT/D) and typical values are less than 30%. During inference, the activation mem usage is dominated by the KV cache, which is not impacted by the extra mem brought by MUDD. We report actual mem usage (measured using jax.profile) and wall-clock time for both the main (Fig 2, Tab 3) and efficiency (Tab 4) experiments below. Besides activation mem, the actual mem usage also includes model params, grads and optimizer states, and is affected by JAX compiler optimizations.

|model|model size|wall-clock time (hour)|rel. speed|mem (GB)|mem $\Delta$ ratio|v5p pod size|tokens|batch size|
|-|-|-|-|-|-|-|-|-|
|TFM++ / MUDDFM|405M|5.7/7|81%|86/111|29%|16|7B|.5M|
||834M|20/25.1|79%|225/273|21%|16|15B|.5M|
||1.4B|29.5/32.5|91%|301/386|28%|32|26B|.5M|
||2.8B|122/145|84%|1352/1648|22%|128|300B|2M|
||6.9B|251/262|96%|1887/2222|17%|128|300B|2M|
|Pythia / MUDDPythia|1.4B|163/183|89%|1296/1655|28%|64|300B|2M|
||2.8B|124/154|81%|1318/1655|25%|128|300B|2M|

For all model architectures and sizes, the relative training speed is ~80%-90%, which could be further improved by a custom kernel implementation. The extra memory ratio of MUDD is ~20%-30%, comparable to that of HyperConnections (Tab 9 in their paper).

[1] Korthikanti et al. (2022). Reducing activation recomputation in large transformer models

> the analysis (eq. 11 and 12) could be clarified further with a more detailed derivation.

Due to space limits, we'll refine the derivation and add explanation in the revised version.

> citation to HighwayNetworks

As the precursor to residual connections, HighwayNetworks (HN) indeed has some fundamental linkage with MUDD connections, e.g. both take inspiration from sequence model architectures and apply them depthwise (HN from LSTM, MUDD from attention). HN is also the first to propose the critical concept of *input dependent* gating when mixing outputs between layers. We'll cite HN in the revised version and discuss the linkage with MUDD in more detail.

> The discussion of how Pre-Norm stabilizes training by preventing representation collapse (Liu et al., 2020b)

There may be a misunderstanding. What Liu et al. propose is that pre-ln *causes*, rather than prevents, representation collapse (RC) (see the paragraph before Section 4.2 in their paper). Some other papers (e.g. Hyper-Connections, ResiDual) also cite Liu et al. 
when discussing RC. Since RC in deep pre-ln Transformers is one of the issues tackled by MUDD, we cite this paper too.

> dynamic dense connections as depth-wise self-attention needs further justification.

Given a sequence $x\in R^{T\times D}$, the output of dot-product self-attention at position i is: $softmax(x_iW^Q(x_{:i}W^K)^T)x_{:i}W^V$ (a), where $x_{:i}W^K\in R^{(i+1)\times d}$ (d is head dim) are i+1 input-dependent keys. Combining (5) and (6) in the paper, the output of dynamic DA at layer i for position t is (operating depthwise across layers): $(GELU(X_i[t]W_1)W_2+a_i)X_{:i}[t]$ (b). Comparing (a) and (b), $W_1$ plays the role of the query projection ($W^Q$) and $W_2^T\in R^{(i+1)\times d}$ (d=i+1 is the inner dim of the MLP, also the head dim) are i+1 keys as parameters independent of input. So dynamic DA can be seen as lightweight self-attention except:
- keys are independent of input;
- a learnable positional bias $a_i$ is used;
- softmax is removed. Instead, GELU activation is applied to the query (more like linear attention);
- the $W^V$ transformation is not used.

While theoretically these simplifications may impact the representation capacity, we empirically found that adding more sophisticated ingredients in DA (e.g. input-dependent keys, softmax) does not bring improvement and slows down training.

> grammatical errors and wording issues

Thank you! We'll fix them in the revised version.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for the comments. Given the authors' comments, I decided to increase my score. The biggest concern left for me is that for now the method offers a trade-off between memory+wall-clock time and perplexity. It would be beneficial to demonstrate that MUDD can create configurations that at the same time limit memory and wall-clock time and have better perplexity than baselines (for example by limiting the number of parameters for MUDDFormers). 
While current perplexity benefits are good, a 30% memory increase is also not negligible.

---

Reply to Comment 1.1.1: Comment:

> It would be beneficial to demonstrate that MUDD can create configurations that at the same time limit memory, wall-clock time and have better perplexity than baselines (for example by limiting the number of parameters for MUDDformers).

We train a MUDDFormer using the same hyperparameters as training the 405M models in the scaling law experiments, except reducing the number of layers from 24 to 19 to limit its number of parameters. As shown in the table below, compared with Transformer++-405M, this model trains 4.4% faster, consumes 5.9% more memory, and still achieves lower perplexity.

|model|n_layers|wall-clock time (hour)|rel. speed|mem (GB)|mem $\Delta$ ratio|ppl|
|-|-|-|-|-|-|-|
|Transformer++-405M|24|5.7|100%|86|-|11.68|
|MUDDFormer-342M|19|5.46|104.4%|91|5.9%|11.12|

We'd like to emphasize two things: 1) **Interpreting this result**: Due to insufficient time, we only experiment with the ~400M sized models (81% relative training speed and 29% $\Delta$mem for MUDDFormer) to show the *existence* of such a configuration. Based on the results in the table in our previous reply, for other, larger model sizes, e.g. 6.9B (96% relative training speed and 17% $\Delta$mem), we may reduce fewer parameters for MUDDFormer, and thus sacrifice less performance, to approximate the wall-clock time and memory of the Transformer++ baseline. 2) **Wall-clock time vs memory usage**: While you put a strong emphasis on memory usage, and we have empirically confirmed that MUDDFormer's actual memory usage is comparable to that of other cross-layer approaches used as baselines (DenseFormer, Hyper-Connections), we still argue that training throughput (or equivalently wall-clock time per step) is generally more important than memory usage for model training. 
In a matched total wall-clock time setting, an architecture with higher training throughput (e.g. the baseline Transformer++) can train more steps and naturally get better results. On the other hand, most of our training runs on TPU are compute-bound (by total FLOPS of tensor cores) rather than memory-bound (by HBM memory capacity), which means that with a given model size, a modest increase in memory usage has little impact on training throughput as measured in tokens/second, though it can affect the max batch size that can be used. As is known, a larger batch size doesn't always lead to better results.
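The extra activation-memory ratios derived earlier in this thread can be packaged as a small calculator (function name and example sizes are ours, purely illustrative):

```python
def extra_mem_ratio(L, N, T, D, arch="muddformer"):
    """Extra training activation-memory ratio over the Transformer++ baseline,
    per the formulas in this thread (float16, gradient checkpointing):
    DenseFormer: L / (L + 17 + 3NT/D);  MUDDFormer: (L + 3) / (L + 17 + 3NT/D)."""
    denom = L + 17 + 3 * N * T / D
    return (L + 3 if arch == "muddformer" else L) / denom

# e.g. a 24-layer model with N=16 heads, T=2048 context, D=2048 model dim
# (illustrative Pythia-1.4B-like sizes, our choice):
ratio = extra_mem_ratio(24, 16, 2048, 2048)   # ~0.30
```

The ratio grows slowly with depth L and shrinks with longer contexts T (which inflate the attention term 3NT/D shared by all variants), consistent with the ~20%-30% values reported in the table above.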
Summary: The paper proposes Multiway Dynamic Dense (MUDD) connections to enhance cross-layer information flow in Transformers by augmenting residual connections with dynamic, multi-headed dense aggregations. Key innovations include decoupling the query, key, value, and residual streams of each Transformer block, enabling dynamic weighting of connections based on position-dependent hidden states. This improves expressiveness over static/dense-only approaches (e.g., DenseFormer) and boosts scalability. MUDDFormer matches performance of larger models (e.g., Pythia-6.9B) with 2.4× less compute, achieving state-of-the-art results in language modeling while adding minimal overhead (~0.2% params, ~0.4% FLOPs). Ablations and analyses (e.g., attention activation, representation collapse) validate the method’s effectiveness. Claims And Evidence: The claims include: MUDD improves Transformer scalability and efficiency by addressing residual stream bottlenecks. Supported by experiments showing better perplexity curves vs. baselines (e.g., 834M MUDD matches 1.8× compute-to-loss of Transformer++). Multiway dense connections enhance in-context learning (ICL). Evidenced by 5-shot performance rivaling 12B models and analysis of attention head activation. Minimal overhead (0.2% params, 0.4% FLOPs). Verified in Section 2.6 and Table 1, though real-world overheads (Table 4) are slightly higher. Potential gaps: Claims about "circuit composability" rely on indirect analyses (e.g., head activation) rather than explicit mechanistic studies. Efficiency claims could vary with hardware/implementation (real-world overheads are ~10-15% for training). Methods And Evaluation Criteria: The dense connection mechanism (dynamic, multiway) is well-motivated for addressing depth-related issues. Evaluation combines scaling laws, downstream tasks, and structural analyses. Benchmarks (Pile, FLAN) and baselines (Pythia, DenseFormer) are appropriate. 
However, results could benefit from more diverse tasks (e.g., structured vs. semi-structured data). Theoretical Claims: No strict theoretical claims; the paper relies on empirical validation. Experimental Designs Or Analyses: Scaling laws span 405M to 1.4B models, with fair comparisons to Transformer++/Pythia. Ablations isolate dynamic/multiway contributions. Downstream evaluations cover both generative and ICL tasks, though tests on vision models are limited. Supplementary Material: I have read the Supplementary Material, which contains the modeling code but not the training code or model weights. Relation To Broader Scientific Literature: Builds on DenseFormer/Hyper-Connections but innovates via dynamic weights and multiway streams. Essential References Not Discussed: Limited discussion of other dynamic weight methods (e.g., Llama 3’s routing) or global memory approaches (OmniNet). Other Strengths And Weaknesses: Strengths: 1. Simple yet effective method with minimal overhead. 2. Strong empirical results across scales. 3. Insightful analyses on representation collapse and head activation. 4. Extended experiments on ImageNet. Weaknesses: 1. Limited theoretical understanding of the dynamic connections’ role. 2. Overheads could be prohibitive for edge/hardware-constrained scenarios. Other Comments Or Suggestions: 1. Consider releasing pretrained models for reproducibility. Questions For Authors: 1. How does MUDD handle long contexts (e.g., 8k tokens)? Does it exacerbate memory issues? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks! >Limited discussion on other dynamic weight methods (e.g., Llama 3’s routing) or global memory approaches (OmniNet). Although we are not sure what you mean by "Llama 3's routing", we do provide a comparison with another representative dynamic weight method, MoE, and a global memory approach, OmniNet (which has in fact already been discussed briefly in Related Work). For details, please see our reply to 5mRp. >Limited theoretical understanding of dynamic connections’ role. To give a better understanding of the dynamic connections’ role, we extend the paper's discussion of the connections between MUDD and self-attention. **dynamic dense connections as single-head self-attention** Given a sequence $x\in R^{T\times D}$, the output of dot-product self-attention at position i is: $softmax(x_iW^Q(x_{:i}W^K)^T)x_{:i}W^V$ (a) where $x_{:i}W^K\in R^{(i+1)\times d}$ (d is the head dim) are i+1 input-dependent keys. Combining (5) and (6) in the paper, the output of dynamic DA at layer i for position t is (operating depthwise across layers): $(GELU(X_i[t]W_1)W_2+a_i)X_{:i}[t]$ (b) Comparing (a) and (b), $W_1$ plays the role of the query projection ($W^Q$) and $W_2^T\in R^{(i+1)\times d}$ (d=i+1 is the inner dim of the MLP and is also the head dim) are i+1 keys that are parameters independent of the input. So dynamic DA can be seen as a lightweight self-attention, except that: - keys are independent of the input; - a learnable positional bias $a_i$ is used; - softmax is removed. Instead, a GELU activation is applied to the query (more like linear attention); - the $W^V$ transformation is not used. While these simplifications may in theory impact representation capacity, we empirically found that adding more sophisticated ingredients to DA (e.g. input-dependent keys, softmax) does not bring improvement and slows down training. **multiway dynamic dense connections as multi-head self-attention** For aggregating each of the four input streams (Q, K, V, R) there is a head.
In our implementation (see Pseudo-code in Appendix A), the four heads share the query projection $W^Q$ while each head has its own set of keys $W_2^T$; the head dim $d$ is expanded four times. While we offer no further theoretical analysis, there is an intuitive understanding of the dynamic connections’ role (just as pointed out in the original Transformer paper (Vaswani et al., 2017) in "Why Self-Attention"): like the horizontal attention over the sequence in MHA, the depth-wise attention over layers in MUDD creates a *direct* path between any pair of layers, besides the *mediated* path through the shared residual stream, which may get crowded as the model gets deeper. >Overheads could be prohibitive for edge/hardware-constrained scenarios. Given its significant improvement over the Transformer, as shown in extensive experiments, one can in principle use a MUDDFormer model to replace a Transformer model twice its size and still get similar performance, so the resource requirement is actually *lower*, making MUDDFormer suitable for edge/hardware-constrained scenarios. Nevertheless, we admit that deploying LLMs on edge devices is beyond our expertise and the overhead on them needs further verification. We'd like to know how you arrived at this speculation; we could discuss further if you have specific concerns. >Consider releasing pretrained models for reproducibility. Sure. We'll release training and inference code in both JAX and PyTorch, along with the pretrained models, once the paper gets accepted. >How does MUDD handle long contexts (e.g., 8k tokens)? Does it exacerbate memory issues? For training a model with L layers, N heads, seq length T, batch size B and model dim D, the extra activation memory ratio of MUDDFormer over the Transformer is (L+3)/(L+17+3NT/D) (please see the reply to RYaY for the derivation), which decreases as context length T increases.
We measure and compare the actual memory usage for training MUDDFormer-6.9B and Transformer++-6.9B on 8K-token contexts, using the same batch size of 2M tokens as in the efficiency experiment in Table 4. The extra memory usage of MUDDFormer is ~10%, which would not become a bottleneck or cause memory issues under normal circumstances. For inference, activation memory usage is dominated by the KV cache, which grows with context length, so the relative impact of MUDD connections on memory consumption diminishes as the context gets longer.
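The ratio formula above is easy to tabulate. A minimal sketch, directly encoding the rebuttal's expression (L+3)/(L+17+3NT/D); the layer/head/dim values below are illustrative assumptions, not the actual model configurations:

```python
# Extra activation-memory ratio of MUDDFormer over Transformer, using the
# formula (L+3)/(L+17+3NT/D) from the rebuttal. The L, N, D values below
# are illustrative, not the 6.9B model's config.

def extra_activation_memory_ratio(L: int, N: int, T: int, D: int) -> float:
    return (L + 3) / (L + 17 + 3 * N * T / D)

# The ratio shrinks as the context length T grows:
for T in (2048, 8192, 32768):
    r = extra_activation_memory_ratio(L=32, N=32, T=T, D=4096)
    print(f"T={T:6d}: +{100 * r:.1f}% activation memory")
```

This matches the qualitative claim: the overhead is a decreasing function of T, so longer contexts dilute MUDD's extra activation memory.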
Neurosymbolic World Models for Sequential Decision Making
Accept (poster)
Summary: The paper presents SWMPO, a framework for learning neurosymbolic Finite State Machines (FSMs) to model environmental structures for policy optimization. The key contributions are an unsupervised learning algorithm for training modular world-model primitives from low-level continuous observations, a state-machine synthesis algorithm for constructing environment-specific FSMs, and evaluations showing these FSMs effectively support model-based Reinforcement Learning (RL). The framework was tested in environments like PointMass, LiDAR-Racing, Salamander, and BipedalWalkerHardcore, demonstrating accurate FSM synthesis and competitive RL performance. Claims And Evidence: The claims in the submission are generally supported by clear and convincing evidence. The authors provide a detailed description of their framework, SWMPO, and its components, including the unsupervised learning algorithm for training modular world-model primitives and the state-machine synthesis algorithm for constructing environment-specific FSMs. They also present evaluations in various simulated environments, such as PointMass, LiDAR-Racing, Salamander, and BipedalWalkerHardcore, demonstrating the effectiveness of their approach in synthesizing accurate FSMs and its competitive performance in model-based RL tasks. However, some claims could be considered problematic due to the limitations of the framework. For instance, the assumption that the latent categorical variable $M_t$ can be characterized by a function $m:(o_{t-1},a_{t-1},o_t)\to m_t$ might be too restrictive for more complex environments where the mode variable is not easily identifiable from a single transition. Additionally, the pruning mechanism, while helpful in simplifying the state machine, might lead to removing transitions that are important for accurate modeling in specific scenarios.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of synthesizing neurosymbolic world models for sequential decision making. The framework’s ability to learn structured world models from low-level observations and use them for efficient policy optimization is demonstrated through appropriate methods and comprehensive evaluations. The choice of benchmark environments and metrics effectively supports the claims made in the paper. Theoretical Claims: There is no theoretical analysis in this paper. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: The concept of neural world models has been explored in previous works, such as the recurrent world models by Ha and Schmidhuber (2018). These models use neural networks to predict future states and facilitate policy evolution. The proposed SWMPO framework extends this by incorporating a structured representation through FSMs, allowing for more interpretable and modular world models. The use of structured models in RL has been explored in various contexts, such as hierarchical RL (Xu & Fekri, 2021; Botvinick, 2012) and modular RL (Simpkins & Isbell, 2019). These approaches aim to improve policy learning by encoding structure into the policy architecture. The SWMPO framework differs by focusing on the synthesis of a structured world model rather than directly encoding structure into the policy. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. The paper proposes a creative combination of neurosymbolic approaches, integrating neural networks with finite-state machines to model complex environments. This hybrid approach is innovative and addresses the limitations of purely neural or symbolic models. 2. The paper is well-structured and clearly presents the proposed methods, experimental designs, and results. **Weaknesses** 1. 
The paper relies on several assumptions, such as the identifiability of the latent categorical variable $M_t$ and the minimality of modes. These assumptions might be too restrictive for more complex environments where the mode variable is not easily identifiable from a single transition. 2. The assumption that all POMDPs share the same set of modes but have different mode-transition dynamics might limit the framework's applicability to environments with highly diverse dynamics. 3. While the benchmark environments are relevant, they might not be sufficiently challenging to fully test the framework's capabilities. More complex, real-world scenarios could better assess the framework's performance. 4. The framework involves multiple components, including neural primitives, state-machine synthesis, and transition predicate synthesis. Its complexity might make it challenging to implement and optimize, especially for less experienced researchers or practitioners. Other Comments Or Suggestions: See questions. Questions For Authors: 1. The framework relies on assumptions about the identifiability and minimality of modes. How would the method perform when these assumptions are violated, e.g., in environments with overlapping mode dynamics or non-minimal modes? Can you provide strategies to mitigate such issues? 2. The evaluations are conducted in simulated environments. What challenges do you foresee when applying SWMPO to real-world tasks (e.g., robotics, autonomous systems)? Have you considered case studies or initial experiments with real data? 3. How does the framework’s computational complexity scale with the number of modes or environment complexity? Are there strategies to optimize efficiency for high-dimensional state spaces? 4. The pruning approach removes spurious transitions. How sensitive is the pruned FSM’s performance to the choice of the error tolerance factor ε? Can you provide an analysis or guidelines for selecting ε? 5. 
How does SWMPO compare to hybrid neuro-symbolic models that explicitly encode domain knowledge (e.g., physics-based constraints)? Could combining such knowledge with SWMPO further improve performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer **pyge** for their thoughtful comments and are committed to incorporating your feedback into the manuscript. > **Note:** Since our submission, we have demonstrated stronger RL performance (see response to **B4Kd**). Please excuse our brevity due to the character limit. ## Mode Identifiability Assumption Please refer to the response to Reviewer **bZQ6** (*"Mode Identifiability Assumption"*). ## Mode Sharing and Diverse Environments In our original submission, we stated a stronger assumption than needed. Assumption 4.5 should instead be: > “The modes in the active POMDP must be a subset of the modes in the offline POMDPs.” E.g., consider two offline POMDPs: one with modes A and B and one with modes C and D; the active POMDP could consist of modes A and C, which we argue is not very restrictive. We will update our manuscript to reflect this correction. We also note that the **Salamander** environment already exhibits high diversity, as it is a 3D simulation of a real robotic platform [5, 6] with computational fluid dynamics and rigid-body contacts. ## Real-World Applicability Please see the response to **bZQ6** (*"Real-World Applicability"*). ## Framework Implementation Challenges To help the community build on our results: - We open-source our source code - The pseudocode typeset in the manuscript closely follows our implementation - As mentioned, we leverage off-the-shelf, well-established implementations of key algorithms ## Computational Complexity Let $n$ be the number of experiences $(o_t, a_t, o_{t+1})$ of dimension $d$, and $m$ the number of modes. The computational complexity of training is driven by these steps: 1. **Solving Eq. 1:** To solve Eq. 1 we use SGD, which is $O(G_1 n d K_1)$, where $G_1$ is the number of SGD steps, and $K_1$ accounts for the architecture of the model (see [9] for discussion).
The variables corresponding to the amount of data, the number of iterations, and the model complexity needed for training are not independent. E.g., a complex system (e.g., one with a high number of modes) might need more data. However, a monolithic model would also require more data because it too must implicitly learn the same complex dynamics. Thus, we do not expect this step to present significantly more overhead than a monolithic approach. 2. **Training the $m$ neural primitives:** We assume datasets will have in expectation $O(\frac{n}{m})$ elements. If models are trained sequentially, this is $O(m G_2 \frac{n}{m} d K_2)$, where $G_2$ and $K_2$ account for the number of SGD steps and the architecture of the primitives. The caveats in 1. apply, but each step is at most as expensive as training a monolithic model. Primitives can be trained in parallel if wall-clock performance is critical. 3. **Transition Predicate Synthesis:** This step trains $2m^2$ decision trees with CART [12]. For the average case, we assume each dataset will be of size $O(\frac{n}{m})$. Therefore, the overall cost in expectation is $O(m^3 d \frac{n}{m} \log^2(\frac{n}{m}))$. Applying dimensionality reduction techniques may help with high-dimensional spaces. In some high-dimensional domains (e.g., visual domains, which are not the focus of our work), we would expect a different class of models (e.g., CNNs) to be more effective. Simplifying, the average complexity of training a SWMPO model is $$O(GnmdK + nmd + nmd\log^2(\frac{n}{m})),$$ where $G = \max(G_1, G_2)$ and $K = \max(K_1, K_2)$. Thus, the expected overhead of training in SWMPO compared to a monolithic model is only a linear factor in $m$. Then, during each model-based RL step, SWMPO evaluates the predicates of the current mode, which are small functions that could be evaluated in parallel, and the active mode's neural network. Thus, the expected run time is only a small constant factor slower than that of a traditional monolithic world model.
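The run-time argument at the end of this answer can be made concrete with a toy FSM step. Everything here (the mode names, the lambda predicates standing in for synthesized decision trees, and the linear "primitives" standing in for mode-specialized networks) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-mode "primitives": linear stand-ins for the mode-specialized networks.
primitives = {m: rng.normal(size=(3, 2)) for m in ("land", "water")}

# Transition predicates for the *current* mode only (cheap threshold checks,
# standing in for the synthesized decision trees).
predicates = {
    "land": lambda o, a: "water" if o[0] < 0.0 else "land",
    "water": lambda o, a: "land" if o[0] >= 0.0 else "water",
}

def fsm_step(mode, obs, action):
    """One model-based RL step: check predicates, then run the active primitive."""
    mode = predicates[mode](obs, action)       # small constant-cost check
    x = np.concatenate([obs, action])          # (obs_dim + act_dim,)
    return mode, x @ primitives[mode]          # only the active mode's model runs

mode, next_obs = fsm_step("land", np.array([0.5, -0.2]), np.array([1.0]))
print(mode, next_obs.shape)  # → land (2,)
```

Only the active mode's predicates and network are evaluated per step, which is why the per-step cost stays within a small constant factor of a single monolithic forward pass.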
## Pruning Guidelines We manually tuned $\epsilon$ by inspecting error plots (e.g., Figure 4), but did not find the state machines overly sensitive to it. As a guideline, we suggest (1) inspecting model error plots to see if there are transitions that should be pruned in the first place, and (2) using error between models as a starting value for $\epsilon$. Additionally, hyperparameter tuning techniques could automate the process. ## Domain Knowledge While out of scope for this manuscript, we consider incorporating domain knowledge explicitly a promising future direction. For instance: 1. Replace models in Eq. 1 with physics-informed neural networks or other models that enforce physical constraints 2. Integrate syntactic/semantic constraints into the predicate synthesis using systems like *Sketch* [12] We hypothesize that each approach would improve the local models and the FSM transition dynamics, respectively. We note that the assumption that systems can be decomposed into modes is a form of domain knowledge in itself. ## References Please refer to the response to Reviewer **bZQ6**.
Summary: In the POMDP setting, the paper introduces SWMPO, a framework based on a Markov decision model in which each transition is characterized by a mode (FSM). That is, at each $t$, the transition occurs by $(o_t, a_t) \mapsto o_{t+1} = f(o_t, a_t | M_t)$, where $M_t$ is the mode at time $t$. The mode $M_t$ changes based on $(o_t, a_t)$ through what they call a "transition predicate". They train this model in two steps: (1) a first step in which the proxy $m_{\theta_2}$ is included as "soft" modes so that a function of the form $$f_{\theta_1}(m_{\theta_2}(o_{t-1}, o_t, a_t), o_t, a_t)$$ can predict the next state well and $m_{\theta_2}$ is not correlated with the future. Based on this $m_{\theta_2}$, clustering is conducted so that any $(o_{t-1}, o_t, a_t)$ can be assigned to a particular mode. Then, for each mode $m$, $f(o_t, a_t | m)$ is trained to predict the observation whenever $(o_{t-1}, o_t, a_t)$ is of mode $m$. The transition predicate is trained so that the model transitions to the best-predicting state at any given $(o, a)$. At each round, SWMPO then uses a small number of new environmental rollouts, and then uses the FSM that is newly synthesized from the dataset. Their method is compared against monolithic neural-model-based RL as well as model-free RL and is shown to outperform them on several benchmark examples. Claims And Evidence: While it is true that their approach extends the FSM-based approach to the continuous domain, it is unfortunately a hard call to approve the claim that they demonstrate an advantage over model-based RL using a single monolithic neural network with no structure. Methods And Evaluation Criteria: The evaluation criteria of the work are mode-label accuracy and sheer policy performance. I believe these criteria are fair, with the first one validating the claim that they can nail down the mode when there is one.
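The clustering stage of the two-step training summarized above can be sketched in miniature. The "embeddings" below are synthetic stand-ins for the output of $m_{\theta_2}$, and the tiny k-means with a deterministic initialization is a minimal substitute for a library implementation, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the learned per-transition embeddings: two
# well-separated "soft mode" clusters of 100 transitions each.
emb = np.concatenate([rng.normal(0.0, 0.1, (100, 2)),
                      rng.normal(3.0, 0.1, (100, 2))])

def kmeans(x, k, iters=20):
    """Naive k-means with a deterministic, evenly spaced init (sketch only)."""
    centers = x[:: max(1, len(x) // k)][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([x[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(emb, k=2)
# Each transition now carries a discrete mode label; one primitive
# f(o_t, a_t | m) would then be trained per label.
print(np.bincount(labels))  # → [100 100]
```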
Theoretical Claims: There seems to be limited theoretical discussion. Experimental Designs Or Analyses: The experimental designs seem conventional, and I believe they are sound. Supplementary Material: There seems to be no major extra experiment or deep theoretical claim in the supplementary material. Relation To Broader Scientific Literature: N.A. Essential References Not Discussed: Nothing in particular comes to my mind, but PlaNet (Hafner et al., 2019), Dreamer (Hafner et al., 2021), and SimPLe (Kaiser et al., 2020) might be worth mentioning as members of the family that uses a future-prediction model in a rather "meta" way. Also, Neural Fourier Transform (Koyama et al.) and Unsupervised Learning of Equivariant Structure from Sequences (Miyato et al.) use a two-stage framework consisting of a first step of building a "prediction model" with a similar loss and a second "block-diagonalizing" step of decomposing the prediction into disentangled features. They might be somewhat related as well. Other Strengths And Weaknesses: ## Strengths - They extended FSMs and their application to RL to tasks in the continuous domain, and brought the approach up to a competitive level - They have shown an inspiring framework for including "mode/gear change" in the prediction, and realized it at a level at which RL can be competitively performed. ## Weaknesses - It was sincerely my hope as the reviewer that this approach would strongly outperform the blackbox monolithic version, but that was not the case. - On a similar note, the paper is not too convincing that the introduction of "modes" and the building of separate models is a beneficial approach. Other Comments Or Suggestions: It is hard to believe that, when the inductive bias of a "neurosymbolic world model" is truly valid at the level of data generation, an approach like this cannot strongly outperform its blackbox counterpart either in terms of training speed or in sheer reward-based performance.
Has there been any effort to build an "extreme case" scenario? Clearly, the efficacy of the approach in terms of labeling accuracy / Levenshtein distance is proven to be solid. I wonder whether there is a reward that more directly depends on these labels. I value that you highlight the fact that SWMPO did not train any neural model online. I wonder if there is a more quantitative way to highlight SWMPO's gray-box approach. Questions For Authors: I think I have asked my questions in the comments/suggestions section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer **B4Kd** for their thoughtful comments and appreciate your recognition of SWMPO’s extension of FSM-based modeling to non-linear continuous domains and of its practical competitiveness. We are committed to incorporating your feedback into the final version of the manuscript. > **Note:** Since the results in our initial submission, we have further demonstrated a stronger performance of our approach over baseline methods in the RL experiment (see below). ## Performance over Monolithic Models We understand your interest in a stronger empirical performance gap between SWMPO and the monolithic model-based RL baseline. We decided to submit our manuscript with early results, but since the submission we have been able to make some small improvements to the algorithm that in turn have resulted in significant improvement in performance. We also spent comparable time improving the performance of the baseline. Specifically, this is what we have done for both SWMPO and the monolithic model-based RL baseline: 1. We ran the RL training 64 times, instead of the 16 times in our original experiment. 2. We further tuned the hyperparameters of all algorithms. 3. We applied a simple modification to the model-based rollout logic for both SWMPO and the baseline: instead of relying on the models for long trajectories (150 timesteps) ---which can lead to compounding errors--- we sampled a random observation from an environment trajectory and performed a shorter (30-step) model rollout. This reduces reliance on the model for long-horizon prediction, improving sample efficiency and stability for both methods. Note that the performance of the low-level neural models for long horizons is fully orthogonal to our claims, and the model-based RL community has mechanisms to scale neural forecasting performance. We will add this information to the final manuscript, along with our updated results. 
These adjustments increase the performance of both SWMPO and the model-based baseline. However, SWMPO sees a greater gain, resulting in a larger performance gap between SWMPO and the monolithic model of approximately **~40%** (~21 mean reward for SWMPO vs ~15 mean reward for vanilla model-based RL by timestep 75,000), a significant improvement over the much smaller gap in our original submission. Furthermore, SWMPO now reaches this performance level within **75,000 timesteps**, compared to **200,000** steps in our original submission. We stopped at 75,000 timesteps due to rebuttal time and compute constraints; however, we will include the experiment up to 200,000 steps in our revised manuscript. We encourage reviewers to check the updated results here (corresponding to an updated Figure 7): [https://anonymous.4open.science/r/anonymousfig-30AA/improvedrewards.png](https://anonymous.4open.science/r/anonymousfig-30AA/improvedrewards.png) We summarize these updated results in the table below:

| Iteration | Baseline RL (Mean Reward) | Model-based RL (Mean Reward) | SWMPO (Mean Reward) |
|-----------|---------------------------|------------------------------|---------------------|
| 20,000    | ~-1                       | ~4.8                         | **~5.2**            |
| 40,000    | ~0.5                      | ~9                           | **~14**             |
| 60,000    | ~0.5                      | ~9                           | **~17.5**           |
| 75,000    | ~4                        | ~15                          | **~21**             |

## Extreme Case Scenario We hope that our new results above, demonstrating improved performance over monolithic models, render an “extreme case” where SWMPO would outperform unnecessary. Our `terrain-mass` environment is already a scenario that closely matches the assumptions of the framework. We did not construct a more extreme synthetic case, as the updated results already show a significant performance gap in our original setting --- without needing to simplify the task. ## Additional References We appreciate the reviewer's suggestion of additional references, and will include a discussion of all of them in the final manuscript.
Summary: The paper presents Structured World Modeling for Policy Optimization (SWMPO), a framework for unsupervised learning of neurosymbolic Finite State Machines (FSM) that capture environmental structure for policy optimization. The method operates in two main stages: (1) learning local “world-model primitives” that specialize in modeling different “modes” of a partially observed system, and (2) assembling those primitives into an FSM that captures transitions among those modes. These local world-model primitives are trained in an unsupervised fashion from offline data. Then, with limited new data from the current task, the system stitches the primitives into a new FSM that is specialized to the particular environment. Finally, the authors leverage this FSM representation for model-based policy optimization. In terms of experiments, the paper reports results on four main environments: (1) PointMass (2) LiDAR-Racing (3) Salamander and (4) BipedalWalkerHardcore. The empirical results suggest that (a) the approach can learn to recover latent modes with reasonable accuracy, sometimes outperforming classical switching system baselines like HMMs and switching linear dynamical systems, and (b) the resulting FSM-based world models can be used effectively for model-based RL, matching or marginally outperforming a comparable monolithic neural model in some test settings. Claims And Evidence: (1) The paper claims that modeling the environment with a finite set of local dynamics (modes) can lead to better structural world models. Through experiments (e.g., Figure 5 and related discussion), the authors show that the learned finite-state structure captures mode switching more accurately than certain baselines like HMM or switching linear models, especially in some of the environments (PointMass, LiDAR-Racing, Salamander). 
(2) The paper argues that offline training of local dynamics primitives, followed by environment-specific stitching, can improve efficiency in policy optimization. The authors’ experimental results (especially in PointMass) demonstrate that their approach, SWMPO, can use a short amount of new data in the active environment to synthesize an FSM that is then used for policy optimization. They compare a purely online-learned forward model vs. offline-learned primitives combined with minimal environment interactions, showing they achieve roughly similar or slightly better performance with fewer interactions. Methods And Evaluation Criteria: Methods (1) Neural Primitives: A neural network is trained to embed each observed transition into a continuous latent vector that captures the local mode. Then, an additional network predicts the next observation from the latent mode. (2) Clustering and Pruning: Those embeddings are then clustered (k-means), and the resulting assignment to clusters is used to label each transition with a symbolic “mode.” A pruning step removes spurious transitions among modes. (3) FSM Predicate Synthesis: Decision trees are used to learn conditions under which the FSM transitions from one mode to another, given the environment’s (observation, action) pairs. (4) Model-Based RL: The final FSM is used in a model-based policy optimization loop (Soft-Actor Critic). Evaluation Criteria (1) Mode Accuracy: Measured via Levenshtein distance between the predicted mode sequence and ground-truth mode labels for new episodes. (2) Policy Performance: Cumulative rewards achieved by the learned policy. Overall, the method and evaluation are comprehensive. Theoretical Claims: No detailed proofs (e.g., in an appendix) of correctness or identifiability are provided beyond references to known standard assumptions (like the existence of a unique minimal partition). 
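The FSM predicate-synthesis step described in (3) can be illustrated with a toy stand-in: the depth-1 decision "stump" below substitutes for a full CART decision tree, and the data and ground-truth transition condition are synthetic assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic (observation, action) features, labelled with whether the FSM
# should leave the current mode (ground truth: first coordinate drops below 0).
X = rng.uniform(-1.0, 1.0, size=(500, 3))
y = (X[:, 0] < 0.0).astype(int)

def fit_stump(X, y):
    """Depth-1 tree: pick the (feature, threshold) minimizing 0/1 error."""
    best = (None, None, 1.0)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            err = float(np.mean((X[:, f] < t).astype(int) != y))
            err = min(err, 1.0 - err)  # allow the inverted predicate too
            if err < best[2]:
                best = (f, t, err)
    return best

feature, threshold, err = fit_stump(X, y)
print(feature, err)  # the stump recovers feature 0 with zero error
```

A full CART tree generalizes this by recursing on each side of the chosen split, which is what yields the human-readable transition conditions the paper describes.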
Experimental Designs Or Analyses: The experimental design is centered around four environments of increasing complexity, each with a known “ground-truth” notion of mode (e.g., land vs. water, track type, or different obstacle types). This design is appropriate. Supplementary Material: The paper only provides the notation and partition pruning. Relation To Broader Scientific Literature: The references provided do give an overall sense that the authors are aware of classical baselines in this area. Essential References Not Discussed: The references included do not appear incomplete for an initial demonstration. Other Strengths And Weaknesses: Strengths: 1. Structured Representation: The approach explicitly partitions the environment’s dynamics into interpretable “modes,” making it easier to reason about or visualize transitions. 2. Modular Reuse: Potentially very useful if the same local dynamics (e.g., “land locomotion,” “water locomotion,” “curved track,” etc.) reappear across multiple tasks. Weaknesses: 1. Scalability: If there are many different modes or if the environment is very complex and high-dimensional, the overhead of separate networks plus a large transition graph may become expensive and potentially complicated. 2. Partial Observability: The method’s success hinges on the assumption that a single time step (previous observation, action, new observation) is enough to identify the latent mode. This assumption may fail in more complex partially observed tasks. 3. More analysis is needed on extending to complex environments and real-world data. The modes presented in the paper are too simple and might not be practical for more complex settings. 4. Writing and Figures. (1) The authors should give examples of primitives at a very early stage for better readability. (2) Modes should be presented in the teaser figure, as they are key in this paper.
Other Comments Or Suggestions: Analyzing or providing examples of failure cases could help future readers understand the method’s limits. Questions For Authors: Thank you for the paper submission. Overall, it presents a promising neurosymbolic approach that merges mode discovery and world modeling. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer **bZQ6** for their thoughtful comments, and we are committed to incorporating your feedback into the final manuscript. > **Note:** Since our initial submission, we have demonstrated stronger RL performance of SWMPO over baselines (see response to **B4Kd**). Please excuse our brevity due to the character limit. ## Scalability We understand your concern about the scalability of SWMPO in environments with a high number of modes or high dimensionality. The computational complexity of SWMPO is driven by three steps: (1) solving Eq. 1, (2) training of neural primitives, (3) synthesizing predicates. It can be shown that, in expectation, the overhead of training a SWMPO model is a constant factor dependent on the number of modes (training the end-to-end model while solving Eq. 1 is asymptotically equivalent to training a standard monolithic model). Additionally, this can be amortized by training the neural primitives in parallel if wall-clock performance is critical. We argue that this overhead is not prohibitive. For more details, please refer to our response to **pyge** (*"Computational Complexity"*). Additionally, the “forward pass” of the SWMPO model consists only of evaluating the transition predicates of the current mode (small functions that can be evaluated in parallel if needed) and the active mode's neural network. Thus, runtime overhead is only a small constant factor slower than a standard monolithic world model. ## Mode Identifiability Assumption As part of our follow-up work, we are working on generalizing the framework to cases where mode inference may require longer temporal context or probabilistic reasoning. However, we believe that for many useful applications this assumption holds. Indeed, many state-of-the-art systems continue to assume fully Markovian dynamics (e.g., Bhatt et al., ICLR 2024 [7]; Kuznetsov et al., ICML 2020 [8]).
We argue that this assumption does not preclude the application of our method to useful problems. ## Real-World Applicability Our evaluation focuses on challenging but controlled simulated environments, which allow for systematic study of the components of SWMPO. These environments are aligned with recent work in the field, which is often benchmarked on systems of similar or lower complexity---e.g., simulated Cartpole in [0] (ICML 2020), simulated low-dimensional three-mode systems in [3] (NeurIPS 2021), and simulated grid-world environments in [4] (NeurIPS 2024). We thus argue that simulation is a valuable setting for benchmarking novel structure learning frameworks. Moreover, we note that the **Salamander** environment already features high complexity, including 3D locomotion of a simulated real robotic platform [5, 6], rigid-body dynamics, and computational fluid dynamics to simulate water. We believe this makes it a strong intermediate testbed bridging controlled and real-world scenarios. Nonetheless, we acknowledge the limitations of not including physical robotic platforms. While real-world deployment introduces important challenges, we believe it is important to first validate the core elements of the framework in simulation. Furthermore, there is precedent for transferring small learned automata to hardware: Liu et al. (preprint, 2025) [1] demonstrate a three-mode FSM controlling a quadruped robot. We do not claim that our current results are directly applicable to real-world systems, but we strongly believe that SWMPO is a promising foundation for future real-world deployment. ## Writing and Figure We will incorporate your feedback into the final manuscript by modifying the teaser figure to highlight environment modes. Primitives are presented early in the introduction (e.g., paragraphs 2–5), but we will add more detail to their description.
## References [0] Zhang et al., *Invariant Causal Prediction for Block MDPs*, ICML 2020 [1] Liu et al., *Discrete-Time Hybrid Automata Learning: Legged Locomotion Meets Skateboarding*, arXiv preprint, 2025 [3] Poli et al., *Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions*, NeurIPS 2021 [4] *WorldCoder: a Model-Based LLM Agent*, NeurIPS 2024 [5] https://www.epfl.ch/labs/biorob/research/amphibious/salamandra/ [6] https://www.cyberbotics.com/doc/guide/salamander?version=R2021a#salamander-wbt [7] Bhatt et al., *CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity*, ICLR 2024 [8] Kuznetsov et al., *Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics*, ICML 2020 [9] Goodfellow, I., Bengio, Y., & Courville, A. (2016). *Deep Learning*. MIT Press [10] David Arthur, Sergei Vassilvitskii, *How slow is the k-means method?*, SCG 2006 [11] Klusowski & Tian, *Large Scale Prediction with Decision Trees*, Journal of the American Statistical Association, 2023 [12] Solar-Lezama, *The Sketching Approach to Program Synthesis*, APLAS 2009
EPIC: Efficient Position-Independent Caching for Serving Large Language Models
Accept (poster)
Summary: Existing work accelerates LLM workloads by reusing the KV cache of a text chunk when it is the prefix of the request. Some existing work breaks the "prefix" limitation by dynamically finding and computing the attention for a subset of tokens. This work further accelerates existing work by only performing recomputation on a subset of tokens. Experimental results show that this work can reduce TTFT by up to 8x. Claims And Evidence: The key assumption of this paper is attention sink, meaning that the first token in each text chunk will "absorb" the most attention. This assumption is not well-supported with empirical evidence (since this is the key assumption and it is not very well-known, I would expect some empirical evidence to prove that it is correct) and rationale. Methods And Evaluation Criteria: The datasets (5 datasets from LongBench) and evaluation metrics (TTFT, Accuracy) are pretty standard and make sense. Theoretical Claims: There is no formal proof involved in the paper. Experimental Designs Or Analyses: The evaluation involves 3 models (Mistral, Llama 3.1 and Yi). I am not familiar with Yi, but Mistral and Llama 3.1 both contain sliding window layers. It is not clear how the authors handle sliding window layers. It is also not clear whether the proposed method works well when the chunk size is large, but since 512 is a typical chunk size in RAG applications, I would say evaluating that is just a nice-to-have. Supplementary Material: Yes. The runtime breakdown makes sense because CacheBlend does full recomputation on the first layer. Relation To Broader Scientific Literature: This paper can be hugely beneficial to RAG applications, which are key to many modern LLM applications (e.g. search result summarization, code generation, etc.) and will benefit a broad audience. Essential References Not Discussed: Necessary references are included.
Other Strengths And Weaknesses: Strengths: * Super fast and easy to implement in real production Weaknesses: * Cannot smoothly trade off between full recomputation and no recomputation given a fixed chunk size. * The key assumption (attention sink) needs more empirical backup and more rationale. Other Comments Or Suggestions: No. Questions For Authors: Thanks for submitting to ICML. The problem that this paper tackles is definitely important and the proposed approach is pretty effective. My major confusion is about the attention sink assumption. It would be really nice if there were more empirical evidence or rationale. This assumption sounds to me like the cross-attention is mainly contributed by the first token in the text chunk, which seems to imply that other tokens don't really matter in terms of understanding the meaning of the text chunk. Several questions regarding the technical details: how do you handle positional encoding, and how do you handle sliding window layers? Regarding the evaluation: I would like to see an ablation study --- does your improvement mainly come from better token selection, or from the static token selection scheme? A side question: will your solution remain effective when the text chunk size is large (say tens of thousands of tokens, e.g. a research paper)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address the questions and confusion below. > Key assumptions — attention sink. The attention sink phenomenon is well-studied in attention sparsity research. For example, StreamingLLM [1] and DuoAttention [3] discover that only a few useful tokens receive some attention score, while the remaining useless attention score (summing to 1.0) is absorbed by the first few tokens (called sink tokens). If the sink tokens are numerous, they overshadow meaningful ones. In addition, Minference [4] describes this phenomenon as a ^-shape pattern, while more recently LightTransformer identifies lazy layers dominated by attention sink heads. Besides, a key clarification is that too many sink tokens are actually bad rather than representing important information for each chunk. Relevant related work is provided below to clarify the phenomenon of attention sink further. We hope this addresses your concern. [1] G. Xiao, Y. Tian, B. Chen, S. Han, and M. Lewis, “Efficient streaming language models with attention sinks,” in The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. [2] J. Tang, Y. Zhao, K. Zhu, G. Xiao, B. Kasikci, and S. Han, “QUEST: query-aware sparsity for efficient long-context LLM inference,” in Forty- first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. [3] G. Xiao, J. Tang, J. Zuo, J. Guo, S. Yang, H. Tang, Y. Fu, and S. Han, “Duoattention: Efficient long-context LLM inference with retrieval and streaming heads,” in The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. Open- Review.net, 2025. [4] H. Jiang, Y. Li, C. Zhang, Q. Wu, X. Luo, S. Ahn, Z. Han, A. Abdi, D. Li, C. Lin, Y. Yang, and L. 
Qiu, “Minference 1.0: Accelerating pre- filling for long-context llms via dynamic sparse attention,” in Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024, A. Globersons, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. M. Tomczak, and C. Zhang, Eds., 2024. [5] R. Chen, Z. Wang, B. Cao, T. Wu, S. Zheng, X. Li, X. Wei, S. Yan, M. Li, and Y. Liang, “Arkvale: Efficient generative LLM inference with recallable key-value eviction,” in Advances in Neural Information Pro- cessing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024, A. Globersons, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. M. Tomczak, and C. Zhang, Eds., 2024. > Positional embedding (PE) PICI prefills a doc, whose first token is given position 0, to generate their KV cache. Then PICI uses the cache directly without modifying PE. On the contrary, new queries receive position IDs based on their real positions. Again, too many tokens with PE 0 at the beginning of each chunk absorb excess attention without contributing useful information. We will add this detail in future versions. > Sliding window We disabled the optional sliding window for the Mistral model. > Large chunk size Currently, we chunk large documents into mid-sized segments (e.g., 512 tokens). We will add an analysis of the trade-off between chunking and processing long documents as a whole in future versions. > Source of improvement: from better token selection, or static token selection? LegoLink’s improvement comes from static token selection, specifically, the first few tokens of each chunk (except for the first chunk). These tokens are preselected for recomputation, offline before execution. 
The recomputation gives these "first few tokens" actual position IDs, makes them realize they are not the "first few" tokens anymore (they are in the middle now), and stops them from being sink tokens and absorbing too much attention.
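The static selection the rebuttal describes (re-encode only the first few tokens of every non-first chunk, at their actual global positions) can be sketched as follows; the function name and chunk layout are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch of LegoLink-style static token selection.
def tokens_to_recompute(chunk_lengths, k=2):
    """Return global indices of tokens to recompute when linking
    independently pre-filled chunks: the first k tokens of every
    chunk except the first, at their true global positions."""
    indices = []
    offset = 0
    for i, length in enumerate(chunk_lengths):
        if i > 0:  # the first chunk's cache is reused untouched
            indices.extend(range(offset, offset + min(k, length)))
        offset += length
    return indices

# Three retrieved chunks of 5, 4 and 6 tokens with k = 2:
print(tokens_to_recompute([5, 4, 6], k=2))  # -> [5, 6, 9, 10]
```

The key design point is that this set is fixed per chunk and known offline, in contrast to CacheBlend's per-request dynamic selection of roughly 15% of all tokens.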
Summary: The paper introduces a context caching method with limited recomputation called LegoLink. Prior work either fully recomputes KV caches for new inference calls (e.g. full re-encoding) or uses dynamic recomputation to recompute a fraction of total cached values (i.e. CacheBlend). To further reduce cost, LegoLink has a static set of values to recompute when reusing cache (primarily the first few tokens in each block, i.e. the attention sink). The authors demonstrate that this decreases time to first token and maintains strong performance. Claims And Evidence: I think the claims are generally supported by the evidence. A nitpick: I think the framing in the very first paragraph-- that LLMs have advanced the progress of AGI-- is not generally agreed and doesn't add much to the paper. This did not factor into my review scores, but personally I don't think it adds anything to use the term AGI here when it is still a contested idea. Methods And Evaluation Criteria: Generally yes, I think measuring time-to-first-token is reasonable for a method focused on improving efficiency in the prefill stage. The model sizes and choice of models seems reasonable. The authors substitute Needle in a Haystack for multi-doc retrieval (in line 260), saying that it is a more suitable benchmark for retrieval. I disagree-- I think this is actually generally a worse test of capabilities, as it is a much more synthetic task where the needle will be very distinct from the context. I don't think this really has an impact on the paper's claims, but I would consider revising the comment about suitability to be a comment about ease of evaluation or setup, or adding further justification for why you truly believe this is more suitable for assessing retrieval. Theoretical Claims: No-- no theoretical claims. Experimental Designs Or Analyses: I looked at the main experimental setup, comparing LegoLink to CacheBlend variants and full recomputation on performance and efficiency. 
I think the setup seems reasonable and well-documented. Supplementary Material: I read the appendices. Relation To Broader Scientific Literature: The paper brings attention to an interesting area of active study-- position-independent caching for efficient model serving-- and proposes a new method in this space. LegoLink has the advantage of being very simple conceptually and much faster than the prior state of the art (CacheBlend); the LegoLink-0 setting is also an interesting alternative to recomputation. I think these methods have the potential for adoption because of their low impact on performance, improvement on efficiency, and ease of integration into existing setups (strengthened by the implementation of the method to integrate with vllm workflows). Essential References Not Discussed: I think more careful discussion is warranted around the idea of caching context. Line 293 says that context caching was introduced in late June 2024, but this is only the time that Gemini started supporting this feature. The related work on page 8 actually does a much better job of tracing the line of work here from caching KV values for a single inference example, to prefix caching, to modular/position-independent caching. I think this is still missing some prior works that talked about PIC in terms of individual applications, including for [RAG](https://arxiv.org/abs/2410.07590) and [long-context ICL](https://arxiv.org/abs/2405.00200); both feature encoding in local-position-only/global-position-agnostic blocks and then potentially retrieving blocks to reuse. Other Strengths And Weaknesses: I've addressed strengths elsewhere in the review; I think my main concern not mentioned elsewhere is with the clarity of the framing. The paper starts as if it is fully introducing a new task of PIC and introduces a (to my knowledge) new view of PIC as analogous to the way code is pre-compiled. 
However, the paper then acknowledges that this problem has been studied before and instead begins framing itself as an improvement on an existing method (using LegoLink). Then, near the end, LegoLink-0 is introduced as a way to sidestep the need to do linking altogether. I think it's a good paper with a reasonable contribution, but it seems that the paper is not quite sure what story it is telling, and a revision of the narrative of the work would greatly enhance its readability and impact. Other Comments Or Suggestions: I would not describe tokens as "text-based words" (in line 26)-- I think something like "small units of text" is more accurate. The notation introduced in line 69 for the computational complexity of CacheBlend is strange and makes it a bit hard to follow. Generally we exclude constant factors in big-O notation, but here I understand you are trying to distinguish complexity between full recomputation, your method, and CacheBlend; I think writing CacheBlend as $O([0.15N]N)$ and yours as $O(kN)$ would make it clearer that your contribution is reducing the "15% of $N$" recomputation to a static $k$. In general I think that section could use an additional pass for clarity of text. Describing the models in 4.1.3 as "base" because they are not finetuned for these tasks is a bit confusing-- generally, I think of "base" as a distinction between instruct-tuned and non-instruct models, and you are using instruction-tuned models here. Figures are not always placed on the page where they are primarily referred to. Questions For Authors: Q1. In Figure 3, the full recomputation setting still shows stronger attention at the "chunk boundaries"-- is this solely because the chunks are separate documents? If you use the regular softmax instead of your min-max scaling, is this a really noticable difference? (In general a fan of using softmax instead of min-max for this figure, or at least including the softmax version in the appendix.) Q2. 
Do you consider your main contribution to be the framing of the PICI problem (using the metaphor of software linkage), the presentation of LegoLink as an improvement on CacheBlend, or the proposal of LegoLink-0 as a 0-linking alternative? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address the review questions below. > Missing related work. Thank you for pointing this out. We will include a subsection discussing RAG applications and their connection to context caching. This addition will clarify PICI's positioning without modifying its intended scope. > AGI. We acknowledge that "AGI" is controversial and requires a precise definition when discussing it, especially in academic contexts. We will revise the first paragraph. > Needle in a Haystack Initially, we evaluated PICI on the same four datasets as CacheBlend (WikimQA, SAMSum, MultiNews, and Musique). To further demonstrate LegoLink’s generalizability, we selected the Needle in a Haystack dataset based on its popularity rather than ease of evaluation or improved results. We acknowledge its limitations as a synthetic task where the "needle" is highly distinct from the context. In the revised version, we will clarify our rationale for this dataset selection (popularity instead of methodological suitability) and include passage retrieval results from LongBench, where we expect similar findings. > Storyline and contribution. The revised story (or the original story we plan to tell) should go as follows: 1) We are the first to formally define the PIC problem and decompose its solution into a four-step framework: document chunking, cache generation, retrieval, and linking. This structured approach establishes a diverse solution space (contribution 1). 2) CacheBlend, the first PIC work, fits in our framework and mainly designs the linking step, while LegoLink enhances its efficiency (contribution 2). 3) LegoLink-0 further improves performance by mainly designing the cache generation step, thereby simplifying the linking step (contribution 3). This framework establishes a diverse solution space for future research. The key consideration is the distribution of computational costs across different steps. 
Allocating cost to the chunking or cache generation step resembles a compile-time expense—incurred once but benefiting all subsequent queries. Conversely, placing cost in the retrieval or linking step resembles link-time expense—introducing runtime overhead but allowing for more adaptive and potentially more accurate solutions. In the appendix, we call on future research to find more efficient and accurate solutions utilizing our framework. The above details are currently in the appendix and will be integrated into the main text. > Text clarity. We appreciate the feedback and will improve clarity on terms like "token," O-notation, "base" model, and figure positions. > Still strong attention on chunk boundaries (Figure 3) 1) Experiments show that the first token retains relatively strong attention absorption due to being a begin-of-sentence token ("<s>" in LLaMA). This insight motivates LegoLink-0, where we discard "<s>" tokens (and dummy tokens which are also "<s>" tokens), except for the first chunk. 2) In the softmax map, we could still observe that recomputation reduces boundary attention, but attention-score differences on mid-content are harder to observe. We will add a softmax map for cross-comparison. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I believe the proposed changes will improve the clarity and I think the work is a good contribution, so I have raised my score 3 -> 4. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's recognition of the value of our paper and the increased score. Your constructive feedback has greatly helped us refine and clarify key aspects of our work. We remain committed to further improving the paper and addressing any additional suggestions or concerns. Thank you for your thoughtful evaluation and support.
Summary: The authors propose a system of context caching which can store pre-computed key value stores for multiple documents. The proper caches are then retrieved given a user query. The key insight is that only a fixed number of initial tokens needs to be recomputed in order to avoid shifting the attention score distribution due to the sink token phenomena. Claims And Evidence: The claims are supported by evidence. Methods And Evaluation Criteria: The evaluation is sufficient for the problem setting. Theoretical Claims: N/A Experimental Designs Or Analyses: The design of the experiments is sound. Supplementary Material: I did not view the supplementary material, as there was no reference to it nor perceived need to view it. Relation To Broader Scientific Literature: The contributions of this paper build on recent works which highlighted the appearance of sink tokens in transformers. Essential References Not Discussed: The cited references are sufficient. Other Strengths And Weaknesses: # Weaknesses - The pitfalls of prefix based caching are mentioned multiple times in the work, such as L138C1. But doesn't the presented algorithm suffer from the same limitation in that it is "limited to cases with exact prefix matches across requests?" - From what I can see, there must be a query to a database which has precomputed KV values, and your method proposes minimally recomputing the retrieved tokens, but it does nothing to address the above limitation, correct? Other Comments Or Suggestions: L227C2: Rough-L score --> Rouge-L I would suggest adding more details of the overall algorithm for cache retrieval to the appendix, as I am left with many questions regarding how this is done after fully reading the work. Questions For Authors: - In the current setup, what is the algorithm which decides a cache hit? - How does the current setup decide what to evict from the HBM cache in the GPU? 
- Are the caches only stored on GPU, or are they offloaded to CPU memory or hard disk space as well? - I assume that position encodings for the model are re-used by the chunks of the encoded prefix documents such that the whole process is permutation invariant? ## Post rebuttal Thanks for your responses to my review. I have decided to maintain my current score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address the review questions below. > More details (overall workflow, cache hit def, indexing, cache swap in/out, too-large cache) of PICI system in the main text instead of the appendix. Due to page limits, we prioritized the attention sink effect in PIC and our simple yet effective algorithm, placing system details in the appendix. However, we acknowledge this omission and will move system details back to the main text. Below is a summary (from the appendix) related to review questions. PICI workflow: 1) Users cache frequently used documents via generate_cache API, receiving a cache_id. 2) Users pass queries and cache_ids to PICI in any order (position-independent), and PICI retrieves the cache with minimal overhead (simple cache_id → cache address mapping) and finishes linking as described in the paper. PICI design philosophy (similar to Gemini): users (or a RAG system) explicitly control caching. 1) Users decide what doc (big or small, frequently used or not) is worth caching and for how long at known costs (xx $/token/hour). PICI never actively deletes users’ PIC cache (more in the next paragraph). 2) Users explicitly pass cache_id to define cache hits. 3) PIC cache resides in GPU memory (no swap in/out). In contrast, systems like vLLM and SGLang use implicit caching, where cache hits rely on token prefix-matching and swap in/out for memory management. Future work will explore integrating explicit PIC caching with implicit prefix-based caching and optimizing their placement in the memory hierarchy. > Rough-L score --> Rouge-L Yes, thank you for pointing this out. We will fix it. > I assume that position encodings (PE) for the model are re-used by the chunks of the encoded prefix documents such that the whole process is permutation invariant? Yes. PICI prefills a doc, whose first token is given position 0, to generate their KV cache. Then PICI uses the cache directly without modifying PE. 
On the contrary, new queries receive position IDs based on their real positions.
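The explicit-caching workflow described in the rebuttal (a generate_cache API returning a cache_id, then an O(1) cache_id → cache-address lookup at serving time) might be sketched as below; the class and method names are assumptions for illustration, not the authors' actual API:

```python
# Minimal sketch of an explicit PIC caching interface (hypothetical names).
class PICICache:
    def __init__(self, prefill_fn):
        self._prefill = prefill_fn  # model prefill: document -> KV cache
        self._caches = {}           # cache_id -> KV cache (GPU-resident)
        self._next_id = 0

    def generate_cache(self, document):
        """Prefill `document` with its first token at position 0,
        store the resulting KV cache, and return an explicit cache_id."""
        cache_id = self._next_id
        self._next_id += 1
        self._caches[cache_id] = self._prefill(document)
        return cache_id

    def retrieve(self, cache_ids):
        """Users pass cache_ids in any order (position-independent);
        lookup is a simple id -> cache mapping, never a prefix match."""
        return [self._caches[cid] for cid in cache_ids]

# Toy prefill that just tokenizes; a real system would run the model.
cache = PICICache(prefill_fn=lambda doc: doc.split())
a = cache.generate_cache("doc one")
b = cache.generate_cache("doc two text")
result = cache.retrieve([b, a])  # caches returned in the requested order
```

Unlike implicit prefix caching in vLLM or SGLang, a cache hit here is defined entirely by the user-supplied cache_id, which matches the rebuttal's "explicit control" design philosophy.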
Summary: This paper introduces PICI, an efficient position-independent context caching system for serving large language models. The system pre-computes the KV caches of unchanged contents and splits them into blocks. The incorporated method, LegoLink, leverages the static attention sparsity of each block, eliminating the influence of the attention sink phenomenon in each chunk to minimize recomputation for accuracy recovery when using position-independent context caching. Comprehensive experiments validate the effectiveness and efficiency of the proposed method, providing additional insight into efficient LLM inference. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense. More details of the proposed PICI system should be introduced in the main paper instead of in the appendix. Theoretical Claims: n/a Experimental Designs Or Analyses: I have checked the soundness/validity of any experimental designs or analyses. Supplementary Material: no Relation To Broader Scientific Literature: This paper's motivation is related to the KVLink algorithm, e.g., CacheBlend, and its idea is related to KV cache eviction methods, e.g., StreamingLLM, H2O. Essential References Not Discussed: See Other Strengths And Weaknesses. Other Strengths And Weaknesses: Strengths: 1. This paper is well-written and easy to follow. 2. The motivation of this paper is clear. Position-Independent Context Caching is practical in real world applications. 3. Using a static attention sparsity that determines the tokens to recompute is both effective and efficient. 4. Comprehensive experiments show the effectiveness of LegoLink, and give some interesting insight (e.g., Section 4.4) into the PIC area. Weaknesses: 1. In the proposed PICI, how to retrieve the KV cache from a KV cache set remains unexplored.
The authors treat the implementation of the PICI system as their contribution, but the details of PICI are in the appendix. 2. Pre-computing and saving the KV caches for long documents requires larger storage than the raw text. 3. Missing related work in the Position-Independent Caching (PIC) of Section 2.2; only a Wikipedia website is cited. 4. Line 220: O=AVW_O, should be O=AV_{exp}W_O ? Other Comments Or Suggestions: See Other Strengths And Weaknesses Questions For Authors: See Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address the review questions below. > More details (overall workflow, cache hit def, too-large cache) of PICI system in the main text instead of the appendix. Due to page limits, we prioritized the attention sink effect in PIC and our simple yet effective algorithm, placing system details in the appendix. However, we acknowledge this omission and will move system details back to the main text. Below is a summary (from the appendix) related to review questions. PICI workflow: 1) Users cache frequently used documents via generate_cache API, receiving a cache_id. 2) Users pass queries and cache_ids to PICI in any order (position-independent), and PICI retrieves the cache with minimal overhead (simple cache_id → cache address mapping) and finishes linking as described in the paper. PICI design philosophy (similar to Gemini): users (or a RAG system) explicitly control caching. 1) Users decide what doc (big or small, frequently used or not) is worth caching and for how long at known costs (xx $/token/hour). For now, PICI never actively deletes users’ PIC cache (more in the next paragraph). 2) Users explicitly pass cache_id to define cache hits. 3) For now, the PIC cache resides in GPU memory (no swap in/out). In contrast, systems like vLLM and SGLang use implicit caching, where cache hits rely on token prefix-matching and swap in/out mechanisms are used for memory management. Future work will explore integrating explicit PIC caching with implicit prefix-based caching and optimizing their placement in the memory hierarchy. > Pre-computing and saving the KV caches for long documents requires larger storage than string. Yes, it is always a hard trade-off between memory efficiency and computational savings. However, in the PICI design philosophy, users retain the flexibility to enable or disable caching based on their specific needs. 
Given their deeper understanding of application semantics and intent, users are often better positioned than the system itself to make optimal PIC caching decisions. This flexibility helps prevent unnecessary memory consumption by avoiding the storage of large, unused caches. > Missing citations in Section 2.2. We will correct the citation problems in Section 2.2, making them as complete as in the Related Work section. > Line 220: O=AVW_O, should be O=AV_{exp}W_O? Yes.
Combinatorial Reinforcement Learning with Preference Feedback
Accept (poster)
Summary: This paper studies combinatorial reinforcement learning (Combinatorial RL) under Multinomial Logit (MNL) preference feedback, where the agent selects a subset of items (an "assortment") at each step, and user choices follow an MNL model. Unlike traditional MNL bandits, which optimize single-step rewards, this work extends the problem to reinforcement learning (RL) settings, where decisions impact long-term rewards through state transitions. Claims And Evidence: Most of the paper’s key claims are well-supported by theoretical analysis and synthetic experiments. Some claims, particularly those regarding computational efficiency and variance-weighting, require additional empirical validation. Methods And Evaluation Criteria: The proposed methods are well-motivated and aligned with the problem setting. While Online Sensitivity Sub-sampling is proposed to reduce computation, there is no formal complexity analysis comparing its efficiency with baselines like LSVI-UCB. Theoretical Claims: The theoretical claims and proofs are mostly correct, with well-supported regret bounds and sound reasoning behind optimism-pessimism switching. Some extensions (e.g., generalized Eluder dimension) require further justification or comparison, e.g., compare regret bounds using standard vs. generalized Eluder dimension since it is unclear if this extension significantly improves regret bounds compared to existing Eluder dimension results. Experimental Designs Or Analyses: 1. No empirical validation for the optimistic-pessimistic strategy; perhaps add an ablation study comparing optimistic-only, pessimistic-only, and alternating strategies. 2. The experiments only use synthetic data; should also run experiments on real-world datasets to test robustness under realistic conditions. 3. The action space in the experiments seems relatively small compared to real-world combinatorial RL settings. 4. 
The paper introduces multiple novel techniques (e.g., variance-weighted Q-learning, Online Sensitivity Sub-sampling, alternating optimism-pessimism) but does not test their individual contributions. Does each component improve performance, or is the gain from just one or two? 5. The results lack confidence intervals or statistical significance tests. Supplementary Material: The supplementary material consists of codes, but not thoroughly checked. Relation To Broader Scientific Literature: The work extends MNL bandits (Multinomial Logit Bandits), a well-studied framework for recommendation systems and assortment optimization. Instead of focusing on single-step rewards (as in MNL bandits), the paper extends the model to long-term reinforcement learning settings. The paper builds on literature in function approximation for RL, particularly when Q-values are learned in complex, structured action spaces. Essential References Not Discussed: The paper generalizes Eluder dimension but does not compare its new regret bounds with prior function approximation methods in RL. The paper proposes Online Sensitivity Sub-sampling but does not compare it with deep RL’s structured exploration techniques. Other Strengths And Weaknesses: Other Strengths: 1. The use of preference feedback in long-term decision-making is a meaningful contribution, as most prior MNL studies focus on single-step decision-making (bandits) rather than RL settings. 2. The paper establishes tight upper and lower regret bounds. Other Weaknesses: 1. The experiments are conducted only in a synthetic online shopping environment, without real-world datasets. 2. The results largely build on existing work in combinatorial RL/bandits under choice models, and RL with function approximation. How does this approach go beyond simply combining prior methods? 3. The paper lacks intuitive explanations for certain formulas, such as: a. 
Why is the generalized Eluder dimension more suitable for this problem compared to the standard Eluder dimension? b. How do the key mathematical principles of variance-weighted Q-learning influence its convergence? Other Comments Or Suggestions: 1. The proof outlines failure cases where a purely optimistic approach can lead to biased Q-value estimation due to state transitions. The theoretical argument is strong, but an empirical ablation study would reinforce this claim. 2. The model assumes fixed user preferences, which is unrealistic in many real-world settings. Many RL-based recommenders address evolving user interests, but this paper does not discuss or experiment with preference drift. 3. The condition for switching between optimistic and pessimistic utilities is not clearly motivated. Add intuition behind this condition. 4. Algorithm 1 Pseudocode: Step descriptions are too compact to follow easily. Questions For Authors: 1. The action space in combinatorial reinforcement learning grows exponentially, even with Online Sensitivity Sub-sampling. Can the proposed method efficiently compute the optimal policy? While the paper claims to avoid exponential computation, does it truly achieve polynomial-time optimization? 2. Your method is designed for recommendation systems, yet all experiments use synthetic data. Why did you not test MNL-VQL on real-world datasets like MovieLens or Amazon Reviews? 3. The paper claims Online Sensitivity Sub-sampling improves efficiency. What is the formal computational complexity of MNL-VQL? 4. How well does MNL-VQL scale to large action spaces? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your positive review! Below, we provide our responses to your comments and concerns. --- ### **Experiment** - **Additional real-world data experiments:** In response to the reviewer’s suggestion, we conducted experiments on the real-world MovieLens dataset (Harper & Konstan, 2015). Due to space limitations, we refer the reader to our response to reviewer **ofJq** for details on the experimental setup. - A link to the results is provided here: [[Link]](https://rebrand.ly/pyss7cc) - **Request for additional ablations:** While we appreciate the reviewer’s suggestion regarding ablation studies, we believe they may NOT be essential for the following reasons: - **Optimistic- (or pessimistic-) only variants**: Given the theoretical nature of our work, we believe it is sufficient to empirically evaluate algorithms that are either provably guaranteed or previously proposed, such as LSVI-UCB and Myopic (OFU-MNL+). However, if the reviewer strongly believes that ablation studies on "optimistic-only" or "pessimistic-only" variants are necessary to support our theoretical findings, we are open to conducting these experiments and will include the results in our final response. - **Variance-weighted Q-learning / Online sensitivity sub-sampling**: We believe there may be a misunderstanding—these techniques are NOT original contributions of our work. As stated in Section 4 and Appendix B, we adopted them from prior literature (e.g., Agarwal et al., 2023; Kong et al., 2021), adapting them to our problem setting. Therefore, we see no strong motivation to perform ablation studies specifically on these components. - **Confidence interval:** In Figure 1, the shaded area represents $\pm1$ standard deviation. --- ### **Computational efficiency** Our algorithm enjoys polynomial computational complexity with respect to the number of items $N$, making it highly efficient both computationally and statistically. 
The computational complexity per episode is $\mathcal{O}(\text{poly}(N) H + H \log \frac{T \mathcal{N}}{\delta} \max_h \text{dim}_{\nu,K}(\mathcal{F}_h) )$. The first term arises from the assortment selection (see Remark 4.2), and the second term comes from the online sensitivity sub-sampling procedure (see Proposition B.2). For further details regarding the computational cost of the sub-sampling, please refer to Theorem 1 in Kong et al. (2021), as space is limited here. Moreover, we'd like to clarify that the computational cost of the online sensitivity sub-sampling does **NOT scale with the action space size $A$** nor with the number of items $N$. (The generalized Eluder dimension may depend on $N$, though not on $A$.) Therefore, even if the action space grows exponentially, the method remains highly effective. Additionally, we reported the runtime per round in Table G.1 (and provided results for larger item sets at [[Link]](https://rebrand.ly/pyss7cc)), which shows that the runtime of our algorithm indeed scales polynomially with $N$, and is significantly more efficient than LSVI-UCB, whose computational cost is $\mathcal{O}(HA)\simeq\mathcal{O}(HN^M)$. --- ### **Technical novelties** Our key technical contributions are as follows: 1. A **novel assortment selection** strategy that alternates between optimistic and pessimistic utility estimates. 2. The **first** to incorporate both **unknown item values (rewards)** and **nonzero outside-option item values** in the MNL choice model. 3. A $\sqrt{H}$ **improvement** over naïve summation of $H$ bandit regrets. 4. A fine-grained regret analysis for general function approximation. For further details, please refer to Section 5.2. --- ### **Generalized Eluder dimension** The generalized Eluder dimension is applicable to weighted regression, while the standard Eluder dimension applies only to non-weighted regression. Thus, the generalized Eluder dimension is the appropriate choice for our setting. 
If we were to use unweighted regression with the standard Eluder dimension, it is straightforward to see that this would yield a looser regret bound for general function approximation—similar to what arises in $\mathcal{F}$-LSVI (Wang et al., 2020). For a detailed comparison between the two Eluder dimensions, we refer the reader to Theorem 4.6 in Zhao et al. (2023), as mentioned in Line 184. --- ### **Others** - **Explanation of variance-weighted Q-learning**: This method adaptively adjusts update weights using variance estimates, giving more importance to low-variance (hard) regions. This also allows for tighter confidence intervals and stronger theoretical guarantees (see Agarwal et al., 2023). - **Fixed user preference**: We do NOT assume fixed user preferences; instead, the state can capture dynamic factors like satisfaction and interaction history, allowing preferences to evolve over time. - **Intuition behind switching utilities**: We design an optimistic strategy where the estimated value is likely greater than the true value. --- Rebuttal Comment 1.1: Comment: The authors claim that they propose "a novel assortment selection strategy that alternates between optimistic and pessimistic utility estimates." However, the explanation provided in the "Intuition behind switching utilities" section feels odd. At the very least, it fails to convince me of why switching between the two is necessary, as opposed to consistently adhering to either an optimistic or a pessimistic strategy. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response to our rebuttal. In our initial reply, we focused primarily on addressing your main concerns—such as the additional experiments and computational efficiency—which we hope were satisfactorily resolved, as no further questions were raised on those points. Unfortunately, due to space limitations, we were unable to provide a detailed explanation of the "intuition behind the switching utilities." 
We're happy to address that now as an additional comment! --- We believe the reviewer may be unclear about which specific quantity the adjectives "optimistic" or "pessimistic" refer to. In our paper, we consider **three** types of quantities: (1) *MNL utilities*, (2) *item-level Q-values*, and (3) *Q-values*. Here, the item-level Q-values, denoted by $\bar{Q}_h(s,a)$, represent the expected cumulative reward given a state $s$ and a **base action $a$** at horizon $h$. In contrast, the Q-values, denoted by $Q_h(s,A)$, represent the expected cumulative reward given a state $s$ and an **assortment $A$** at horizon $h$. When we say that the strategy "alternates between optimistic and pessimistic utility estimates," the relevant quantity is the (1) *MNL utilities*. In contrast, in our initial response regarding the "intuition behind switching utilities," the relevant quantity was the (3) *Q-values*. The key idea is that by **alternating between optimistic and pessimistic** *MNL utility estimates*, we can induce **optimism** in the *Q-value estimates*. Specifically, the Q-value estimate (as defined right below Equation (8)) is greater than the true Q-value with high probability. In other words, our *switching MNL utilities* strategy serves as a **sufficient** condition for ensuring **optimism** in the *Q-value estimates*. This is the key intuition. The switching technique is **essential** for guaranteeing optimism in the *Q-value estimates*, as we do **not** have access to the true *item-level Q-values*. Moreover, the item-level Q-value for the outside option (i.e., not choosing any item) can be non-zero—and potentially even the largest among all options—which contrasts with the standard MNL bandit setting. In MNL bandits, it is typically assumed that the item-level values (often referred to as *revenue parameters*) are known, and that the value for the outside option is zero. 
Therefore, our setting is **strictly more general** than the standard MNL bandit setting and introduces **additional challenges**. To address these challenges, we first **optimistically** estimate the *item-level Q-values*, denoted as $f^k_{h,1}$ (or $f^k_{h,2}$), which incorporate uncertainty. Based on these estimates, we then choose to use either **optimistic** or **pessimistic** *MNL utilities* according to the rule in Equation (7) to ensure **optimism** in the *Q-value estimates*. - **Case 1**: When the estimate $f^k_{h,1}$ (or $f^k_{h,2}$) for the outside option is **not the largest**—that is, there exists some $a \neq a_0$ such that $f^k_{h,1}(s^k_h, a) > f^k_{h,1}(s^k_h, a_0)$ for a given state $s^k_h$—then using **optimistic** *MNL utilities* is **sufficient to ensure optimism** in the *Q-value estimates*. The proof is quite technical and difficult to present in this limited space; for formal details and a complete justification, please refer to Lemma D.15 (Optimism) and the supporting results in Lemmas D.5 and D.14. - **Case 2**: When the outside option's item-level Q-value is the **largest**, i.e., $f^k_{h,1}(s^k_h, a) \leq f^k_{h,1}(s^k_h, a_0)$ for all $a \neq a_0$, using optimistic *MNL utilities* alone is **not sufficient**. In this case, we find that using **pessimistic** *MNL utilities* can actually **induce optimism** in the *Q-value estimates*. This case is more intuitive: since the outside option is always included in the assortment and has the highest estimated item-level Q-value, increasing its MNL choice probability—while reducing the choice probabilities of the other items—can result in a higher *Q-value estimate* for the assortment. This can be achieved by using **pessimistic** *MNL utilities*. The pessimistic *MNL utility* is smaller than its true value (with high probability), which increases the choice probability of the outside option (which has the highest item-level Q-value). This leads to a higher Q-value estimate for the assortment. 
For a detailed analysis, please refer again to Lemma D.15. --- We sincerely hope that these additional clarifications help convey the novelty of our MNL utility switching technique. To the best of our knowledge, this is the first work to consider *unknown* *item-level Q-values* (or *revenue parameters* in the MNL bandit setting) within the existing literature on bandits (or RL) with MNL preference models. We believe that both the **algorithmic design** and the **proof techniques** are **novel** in guaranteeing optimism of the *Q-value estimates*, and we hope this work contributes meaningfully to the community.
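To make the two cases concrete, here is a minimal illustrative sketch (our own, not the authors' implementation) of the switching rule: use optimistic MNL utilities when some item's optimistic item-level Q-estimate exceeds the outside option's, and pessimistic utilities otherwise. Function names and array conventions are assumptions.

```python
import numpy as np

def select_utilities(f, u_opt, u_pess, outside=0):
    """Choose optimistic vs. pessimistic MNL utilities so that the
    resulting assortment Q-value estimate is optimistic.
    f      : optimistic item-level Q-value estimates (index `outside` = outside option)
    u_opt  : optimistic MNL utility estimates
    u_pess : pessimistic MNL utility estimates
    """
    others = np.delete(f, outside)
    if np.any(others > f[outside]):
        # Case 1: outside option is not the largest -> optimistic utilities suffice
        return u_opt
    # Case 2: outside option dominates -> pessimistic utilities, which raise the
    # outside option's MNL choice probability and hence the assortment Q-estimate
    return u_pess

def mnl_choice_probs(utilities, assortment, outside_utility=0.0):
    """MNL choice probabilities over an assortment plus the outside option (last entry)."""
    v = np.exp(np.append(utilities[assortment], outside_utility))
    return v / v.sum()
```

This is only meant to mirror the verbal description of Cases 1 and 2 above; the paper's actual rule (Equation (7)) and estimates $f^k_{h,1}$, $f^k_{h,2}$ are defined formally in the text.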
Summary: The paper addresses the problem of combinatorial reinforcement learning with preference feedback, where a learning agent offers an assortment of multiple items (an action) to a user, whose preferences follow a multinomial logistic model. This framework is particularly relevant for applications like recommender systems and online advertising, where long-term user engagement is key. The paper identifies two main challenges: the unknown value of each item and the difficulty of ensuring optimism while maintaining tractable assortment selection in the combinatorial action space. To tackle these challenges, the authors propose an algorithm called MNL-VQL, which is both computationally and statistically efficient. The algorithm estimates optimistic item values using point-wise optimism under general function approximation and selects assortments that ensure sufficient exploration and optimism. The paper also establishes the first regret lower bound for linear MDPs with MNL preference feedback and shows that MNL-VQL achieves nearly minimax-optimal regret. Main contributions from the paper are: - Introduction of MNL-VQL, an algorithm that addresses the challenges of combinatorial RL with preference feedback, ensuring computational and statistical efficiency. - A regret upper bound for MNL-VQL and nearly minimax-optimal regret in linear MDPs (the bound depends on the feature dimension of the linear MDP), together with a matching regret lower bound establishing the minimax-optimality of the algorithm. - Development of analytical techniques to prove optimism and related results, avoiding naive combinatorial enumeration by reformulating the optimization problem as a linear program. Claims And Evidence: Most of the claims reported in the paper are supported by clear and convincing evidence. Notwithstanding, there are a few areas where the evidence could be strengthened. **"Fully" Supported Claims** 1. 
Introduction of MNL-VQL algorithm: - The paper provides a detailed description of the MNL-VQL algorithm, including its steps and theoretical foundations. The algorithm is well-supported by the mathematical formulations and the step-by-step explanation provided in the paper. 2. Regret Upper Bound: - The paper claims that MNL-VQL achieves a regret upper bound. This claim is supported by Theorem 5.1 and the accompanying proof in Appendix D. The theoretical analysis appears rigorous and thorough. 3. Minimax-Optimal regret for linear MDPs: - The claim that MNL-VQL achieves nearly minimax-optimal regret for linear MDPs is supported by Theorem 5.2 and the corresponding proof. The paper provides a detailed comparison with existing work to highlight the novelty and effectiveness of their approach. ** "Weaker" Claims ** 1. First theoretical guarantee in combinatorial RL with preference feedback: - The paper claims to be the first to provide statistical guarantees in combinatorial RL with preference feedback. While the paper does provide strong theoretical results, it would benefit from a more comprehensive review of existing literature to ensure that no prior work has addressed this problem. The claim could be seen as overstated without a thorough literature review. 2. Empirical validation: - The paper focuses heavily on theoretical contributions and provides limited empirical validation. While the theoretical results are strong, more extensive empirical experiments demonstrating the practical effectiveness of the MNL-VQL algorithm would strengthen the claim. The lack of more extensive empirical results may obscure the practical benefits of the proposed approach. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in the paper make sense for the problem at hand. Theoretical Claims: No Experimental Designs Or Analyses: The paper primarily focuses on theoretical contributions and provides only limited empirical analyses. Supplementary Material: Yes. 
The appendix Relation To Broader Scientific Literature: The key contributions of the paper are closely related to several areas within the broader scientific literature, particularly in the fields of reinforcement learning (RL), multi-armed bandits, and preference modelling. Combinatorial RL involves selecting combinations or subsets of actions from a set of possible actions. Concerning prior works on the topics, previous studies have addressed problems within this setting, particularly in deep RL (e.g., Sunehag et al., 2015; He et al., 2016; Swaminathan et al., 2017; Metz et al., 2017; Ryu et al., 2019; Ie et al., 2019; Delarue et al., 2020; McInerney et al., 2020; Vlassis et al., 2021; Chaudhari et al., 2024). However, the authors of the paper formalize the concept of combinatorial RL with preference feedback, which had not been theoretically defined in prior work. Essential References Not Discussed: Yes Other Strengths And Weaknesses: Overall, the paper makes significant and original contributions to the field of combinatorial RL with preference feedback. The theoretical foundations are strong, and the proposed algorithm addresses important challenges. **Strengths** - Novel framework: the paper introduces a novel framework for combinatorial reinforcement learning (RL) with preference feedback, which had not been theoretically defined before. The development of the MNL-VQL algorithm is a notable contribution, addressing specific challenges in combinatorial RL with preference feedback that were not previously tackled. Such a framework is especially relevant for applications such as recommender systems and online advertising. - Theoretical contributions: the paper provides strong theoretical contributions, including the first regret lower bound for linear MDPs with MNL preference feedback and nearly minimax-optimal regret bounds. - Detailed explanations: the paper provides detailed explanations of the algorithm steps, theoretical foundations, and proofs. 
This clarity helps readers understand the complex concepts and methods used. ** Weaknesses ** 1. Lack of experiments: the main issue of the paper is the lack of an extensive empirical validation of the proposed algorithm. Including experiments on benchmark datasets and comparisons with more baseline methods would strengthen the claims and demonstrate the practical effectiveness of the algorithm. Other Comments Or Suggestions: see Other Strengths And Weaknesses Questions For Authors: see Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your time to review our paper and your valuable feedback. It seems that your main concern is the lack of empirical validation for the proposed algorithm. However, we would like to emphasize that this is a theoretical paper, submitted to the theory category. Our work introduces a new framework—combinatorial RL with preference feedback—in a principled way, supports non-linear function approximation, and achieves near-optimal regret guarantees in linear MDPs. We believe these contributions represent a substantial advancement in the foundations of RL and, on their own, are sufficient to merit acceptance. Nevertheless, in response to the reviewer’s request, we have also conducted experiments on the large-scale, **real-world** MovieLens dataset (Harper & Konstan, 2015) to provide additional empirical validation. --- ### **Additional real-world data experiments** The MovieLens dataset contains 25 million ratings on a 5-star scale for 62,000 movies (base items $a$) provided by 162,000 users ($u$). We define the state $s$ as the number of movies a user $u$ has watched after entering the system, denoted by $s = (u, n)$, where $n \in \\{0, \ldots, H-1\\}$ is the number of movies watched during the session. We interpret the ratings as representing MNL utilities. In each episode $k$, a user ($u_k$) is randomly sampled and arrives at the recommender system, initiating the state $s^k_1 = (u_k, 0)$. The agent offers a set of items with a maximum size of $M$. If the user clicks on an item, they receive a reward of $1$ and transition to the next state $s^k_2 = (u_k, 1)$. If no item is clicked, the user receives no reward and remains in the current state ($s^k_2 = s^k_1$). In addition, certain *junk* items—such as those with a provocative title and poster but poor content—can cause users to leave the system immediately. 
This is modeled as a transition to an *absorbing state*, where no further rewards are received and the state remains unchanged regardless of future actions. We believe the presence of such junk items is quite natural and reflective of real-world recommendation environments. For our experiments, we use a subset of the dataset containing $1.1 \times 10^3$ users and a varying number of movies, $N \in \\{50, 100, 200\\}$. To construct MNL features, we follow a similar experimental setup as in [1], employing low-rank matrix factorization. For linear MDP features, we apply the same approach as used in our synthetic data experiments. We set the parameters as follows: $K = 10000, H=3, M=4, |\mathcal{S}|=100*(H+1)=400$ (including the absorbing state), $d=26$ (MNL feature dimension), $d^{lin}=204$ (Linear MDP feature dimension), $N\in\\{50, 100, 200\\}$ (number of base items) and $|\mathcal{A}|= \sum_{m=1}^{M-1}\binom{N}{m} \in \\{20875, 166750, 1333500\\}$. The proportion of junk items is set to $30\\%$. - We provide an anonymous link to the experimental results here: [[Link]](https://rebrand.ly/pyss7cc) The results demonstrate the superior performance of our algorithm, highlighting its practicality for real-world scenarios. [1] Shuai Li, Tor Lattimore, and Csaba Szepesvari. Online learning to rank with features. In International Conference on Machine Learning, pages 3856–3865, 2019. --- ### **Comparison to other baselines in the experiments** The reviewer also suggested including comparisons with more baselines. However, to the best of our knowledge, our framework is the first of its kind, and there are **NO** directly comparable baselines available. We believe it is sufficient to compare against the state-of-the-art myopic algorithm (OFU-MNL+, Lee & Oh, 2024) and a representative linear MDP algorithm (LSVI-UCB, Jin et al., 2020), and to demonstrate the limitations of these existing approaches within our more practical and general setting. 
--- ### **First theoretical guarantee in combinatorial RL with preference feedback** We strongly believe that our claim of providing the "first theoretical regret guarantee" is **NOT overstated** and therefore should **not be considered a weakness** of the paper. This is because, *to the best of our knowledge*, no prior theoretical work has addressed the framework we propose—combinatorial RL with preference feedback—which underscores the novelty of our contribution. Although there have been recent significant advances in RL theory, we are not aware of any existing study that considers this framework. The most closely related line of research is cascading RL (Du et al., 2024); however, that setting involves presenting items sequentially, one at a time, rather than simultaneously as sets, which represents the key distinction of our framework. --- > We sincerely believe that we have addressed all of your concerns. If you agree, we would greatly appreciate it if you could kindly reconsider your initial evaluation. Additionally, please feel free to reach out if you have any further questions or require clarification on any point.
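As a rough illustration of the simulated session dynamics described in the rebuttal above (a click yields reward 1 and advances the state; clicking a junk item sends the user to the absorbing state), here is a minimal sketch. It is our own, not the authors' code; all names and the utility scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mnl_sample(utilities):
    """Sample a choice from an MNL model; returns -1 for the outside option."""
    v = np.exp(np.append(utilities, 0.0))   # outside option utility fixed at 0
    probs = v / v.sum()
    idx = rng.choice(len(v), p=probs)
    return -1 if idx == len(v) - 1 else idx

def run_episode(offer_fn, utilities, junk, H=3):
    """One simulated session: the agent offers an assortment at each step.
    A click gives reward 1 and advances the watch count; a junk click ends
    the session (absorbing state, no further reward)."""
    n_watched, total_reward = 0, 0.0
    for _ in range(H):
        assortment = offer_fn(n_watched)        # indices of offered items
        choice = mnl_sample(utilities[assortment])
        if choice >= 0:
            item = assortment[choice]
            if junk[item]:
                break                           # user leaves the system
            total_reward += 1.0
            n_watched += 1
    return total_reward
```

The actual experiment additionally samples users, learns the MNL utilities online, and compares assortment-selection policies; this sketch only reproduces the environment's reward and transition logic.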
Summary: This paper considers combinatorial RL with an MNL preference distribution, where given a combinatorial action (assortment), the final action is sampled from a linear MNL model. In this setting, the learner needs to estimate both the underlying MNL model parameter and the transition dynamics, as in the standard RL task. Authors provide an algorithm that combines the weighted regression routine for value estimation and online mirror descent for learning MNL parameters. This algorithm works for general non-linear value function classes with finite generalized Eluder dimension. Corresponding lower bound results are also provided in the linear value approximation setting to illustrate the optimality. Claims And Evidence: Yes. Most statements made in this paper are clear and well-supported. Methods And Evaluation Criteria: The proposed algorithm is well-founded; the authors also provide an LP formulation for assortment optimization to address computational challenges, making the algorithm applicable to real-world problems. Theoretical Claims: I didn't go through the proof in detail since it is too long to check. However, the theoretical guarantee provided for the algorithm makes sense to me, as both its value estimation block and the MNL parameter estimation block are well supported by existing literature. And the final regret bound can be decomposed into the sum of these two parts. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: The topic studied in this paper is novel to both combinatorial RL and MNL bandits; I believe it contributes to both areas. Essential References Not Discussed: No. Other Strengths And Weaknesses: Other Strengths: It is somewhat surprising that the authors directly establish a relatively complete set of results in a newly proposed setting, which allows for non-linear function approximation and achieves nearly optimal regret. This makes the contributions of this paper solid. 
Other Comments Or Suggestions: No. Questions For Authors: I don't have additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive support and recognition of the value of our work! We truly hope this research helps to shed light on a new direction for the RL community, particularly in the area of combinatorial RL. Please don’t hesitate to reach out if you have any further questions.
Summary: This paper studies a combinatorial reinforcement learning setting in which an agent repeatedly offers a subset (assortment) of items and observes the user’s choice according to a multinomial logistic (MNL) model. Key challenges include (1) learning long-term (multi-step) item values rather than merely single-step rewards, and (2) preserving computational efficiency in selecting an assortment optimistically from an exponentially large set. To address these, the authors propose MNL-VQL, an algorithm that uses a linear (contextual) MNL model for user preferences and general function approximation for item values. They provide theoretical guarantees on regret, including a nearly minimax-optimal bound for the linear MDP case. Claims And Evidence: Authors claim "This is the first work to provide statistical guarantees in combinatorial RL with preference feedback." From the literature review, this claim appears to hold, although some important related works are missing. Methods And Evaluation Criteria: This paper proposes the algorithm, MNL-VQL, that learns both the MNL preference parameters (how likely each item is to be chosen) and the long-term value of each item using value-based RL techniques (with function approximation). It ensures computational feasibility by carefully separating the MNL-utility estimation from the state-value estimation. The key novelty is how it maintains “optimistic” and “pessimistic” estimates of item Q-values under unknown item values, and then constructs optimistic MNL utilities in a way that still admits polynomial-time optimization over the combinatorial action set. Theoretical Claims: I didn't carefully check the proof. Experimental Designs Or Analyses: The paper includes an experiment in a synthetic “online shopping with budget” environment. Each state represents a user’s budget level; the agent recommends an assortment, observes which item is selected (if any), and the user’s budget transitions based on the purchase. 
However, the experiments do not include real-world datasets or bandit-based baselines. Supplementary Material: No. Relation To Broader Scientific Literature: Provides new theoretical insights to the combinatorial RL/bandit literature. Essential References Not Discussed: The following two works also study combinatorial action spaces in bandits and provide theoretical guarantees. Authors should include them in the related work. [1] Multi-facet contextual bandits: A neural network perspective. KDD'21 [2] Combinatorial neural bandits. ICML'23 Other Strengths And Weaknesses: The algorithm’s regret bounds are clearly laid out, with a special focus on minimax-optimal results in linear settings; the paper avoids intractable enumeration of subsets by formulating the assortment selection via an LP-based approach, making it feasible for a moderate number of items. However, the empirical evaluation is relatively weak, given the synthetic data and the small number of baselines. Other Comments Or Suggestions: No Questions For Authors: (1) Is the exploration method (Line 20) effective compared to other classic exploration methods like UCB and TS? (2) The following works are also related to combinatorial bandits; can the authors add them to the related work as well? [1] Multi-facet contextual bandits: A neural network perspective. KDD'21 [2] Combinatorial neural bandits. ICML'23 Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for acknowledging the value of our work and providing a positive evaluation! We will address your questions below. --- ### **Additional real-world data experiments** As per the reviewer’s request for real-world experiments, we have additionally conducted experiments on the large-scale, real-world MovieLens dataset (Harper & Konstan, 2015) to provide further empirical validation. The MovieLens dataset contains 25 million ratings on a 5-star scale for 62,000 movies (base items $a$) provided by 162,000 users ($u$). We define the state $s$ as the number of movies a user $u$ has watched after entering the system, denoted by $s = (u, n)$, where $n \in \\{0, \ldots, H-1\\}$ is the number of movies watched during the session. We interpret the ratings as representing MNL utilities. In each episode $k$, a user ($u_k$) is randomly sampled and arrives at the recommender system, initiating the state $s^k_1 = (u_k, 0)$. The agent offers a set of items with a maximum size of $M$. If the user clicks on an item, they receive a reward of $1$ and transition to the next state $s^k_2 = (u_k, 1)$. If no item is clicked, the user receives no reward and remains in the current state ($s^k_2 = s^k_1$). In addition, certain *junk* items—such as those with a provocative title and poster but poor content—can cause users to leave the system immediately. This is modeled as a transition to an *absorbing state*, where no further rewards are received and the state remains unchanged regardless of future actions. We believe the presence of such junk items is quite natural and reflective of real-world recommendation environments. For our experiments, we use a subset of the dataset containing $1.1 \times 10^3$ users and a varying number of movies, $N \in \\{50, 100, 200\\}$. To construct MNL features, we follow a similar experimental setup as in [1], employing low-rank matrix factorization. 
Specifically, we randomly split the user set into two subsets, $U_1$ and $U_2$, where $|U_1|=100$ and $|U_2|=1000$. The rating matrix from users in $U_1$ is used to derive user and movie embeddings via singular value decomposition (SVD), with both user and movie feature dimensions set to $d_U = 5$. We define the user-movie feature as the outer product of the corresponding user and movie embeddings, resulting in a $25$-dimensional vector. The final MNL feature vector is then given by $\phi(s,a)= [\text{user-movie feature}, \text{number of movies watched}]^\top \in \mathbb{R}^{26}$. For linear MDP features, we apply the same approach as used in our synthetic data experiments. We set the experimental parameters as follows: $K = 10000, H=3, M=4, |\mathcal{S}|=100*(H+1)=400$ (including the absorbing state), $d=26$ (MNL feature dimension), $d^{lin}=204$ (Linear MDP feature dimension), $N\in\\{50, 100, 200\\}$ (number of base items) and $|\mathcal{A}|= \sum_{m=1}^{M-1}\binom{N}{m} \in \\{20875, 166750, 1333500\\}$. The proportion of junk items is set to $30\\%$. - We provide a link to the experimental results here: [[Link]](https://rebrand.ly/pyss7cc) The results demonstrate the superior performance of our algorithm, highlighting its practicality for real-world scenarios. [1] Shuai Li, Tor Lattimore, and Csaba Szepesvari. Online learning to rank with features. In International Conference on Machine Learning, pages 3856–3865, 2019. --- ### **Additional related works** We will ensure that combinatorial bandits are discussed in the *Related Work* section and that the papers suggested by the reviewer are included in the final version of the paper. Specifically, we plan to add the following discussion: *"Our work is related to prior studies on combinatorial bandits with semi-bandit or cascading feedback [appropriate references]. However, our framework differs significantly from these approaches. 
In the semi-bandit and cascading settings, user feedback on a specific item depends only on that item’s context, independent of other items offered simultaneously. Thus, these approaches do not account for substitution effects among items. In contrast, our work utilizes an MNL choice model where user feedback depends explicitly on the entire assortment presented, leading to additional analytical challenges."* --- ### **Question** > *Is the exploration method (Line 20) effective? compared to other classic exploration methods like UCB and TS?* We can evaluate the effectiveness of the exploration method (Eqn.(9)) from two perspectives: statistical and computational efficiency. - Statistical efficiency: The proposed exploration strategy ensures that overly optimistic item $Q$-estimates (which have larger bonuses) are used infrequently, resulting in tighter regret bounds compared to classical methods. - Computational efficiency: Since our exploration method employs a non-Markovian policy, it requires computations involving the entire history up to horizon $h$. Consequently, its computational cost is roughly $H$ times greater than that of UCB or TS.
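The MNL feature construction described in the rebuttal above (SVD embeddings from a held-out rating matrix, an outer-product user-movie feature, plus the number of movies watched) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic rating matrix `R1` and all shapes are assumptions chosen to match the stated dimensions ($|U_1|=100$, $d_U=5$, $26$-dimensional $\phi$).

```python
import numpy as np

# Minimal sketch of the MNL feature construction (illustrative assumptions).
rng = np.random.default_rng(0)
n_users_U1, n_movies, d_U = 100, 50, 5          # |U_1| = 100, N = 50, d_U = 5
R1 = rng.integers(0, 6, size=(n_users_U1, n_movies)).astype(float)

# Low-rank factorization via truncated SVD: R1 ~ U S V^T.
U, S, Vt = np.linalg.svd(R1, full_matrices=False)
user_emb = U[:, :d_U] * np.sqrt(S[:d_U])        # (100, 5) user embeddings
movie_emb = Vt[:d_U].T * np.sqrt(S[:d_U])       # (50, 5) movie embeddings

def mnl_feature(user_vec, movie_vec, n_watched):
    """26-dim MNL feature: 25-dim outer product + number of movies watched."""
    return np.concatenate([np.outer(user_vec, movie_vec).ravel(), [n_watched]])

phi = mnl_feature(user_emb[0], movie_emb[0], n_watched=1)
```

The outer product lets a single linear utility parameter capture user-by-movie interactions, which is why the MNL feature dimension is $d = d_U^2 + 1 = 26$.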
Stream-level Flow Matching with Gaussian Processes
Accept (poster)
Summary: This paper extends Conditional Flow Matching by introducing Gaussian processes to model latent "streams" connecting source and target distributions. Key contributions include: (1) a generalized CFM framework using GP streams while maintaining simulation-free training; (2) demonstrating reduced variance in vector field estimation and improved sample quality; (3) enabling flexible incorporation of correlated observations; and (4) empirical validation on synthetic, image, and time series datasets. ## update after rebuttal The author resolved my confusion, so my current high rating remains unchanged. Claims And Evidence: The main claims are generally well-supported by both theoretical analysis and empirical evidence. The authors show (1) GP streams produce smoother vector fields with reduced variance; (2) GP-CFM improves sample quality across datasets; (3) the approach effectively incorporates correlated observations. Methods And Evaluation Criteria: The methods are sound and well-motivated. The authors evaluate their approach using standard metrics (FID, KID, Wasserstein distance) on appropriate datasets. Their comparison against baseline methods (I-CFM, OT-CFM) is reasonable, though they could potentially compare against more recent state-of-the-art generative models for broader context. Theoretical Claims: The theoretical foundations appear solid. I verified (1) the proof that the marginal vector field generates the probability path (Section J.1), (2) the gradient equivalence proofs (Sections J.2 and J.3), (3) the Bayesian decision-theoretic perspective (Appendix A). Experimental Designs Or Analyses: The experimental design is generally sound: 1. Synthetic examples effectively demonstrate the conceptual advantages. 2. Image generation experiments follow standard practices. 3. Time series experiments illustrate the unique capabilities of GP-CFM. However, some experimental choices could be better justified: 1. 
The choice of hyperparameters for the GP kernels isn't thoroughly explained. 2. The comparison on CIFAR-10 could include more recent generative models beyond just CFM variants. Supplementary Material: I reviewed the supplementary material, including mathematical derivations and proofs, details of GP kernel construction, and additional experimental results on variance reduction. The supplementary material provides valuable additional context and validates the paper's claims. Relation To Broader Scientific Literature: The paper builds appropriately on previous work in flow matching (Lipman et al., 2023; Tong et al., 2024) and stochastic interpolants (Albergo et al., 2023, 2024). It extends CFM by incorporating Gaussian processes. The authors also make connections to Bayesian perspectives on generative modeling and optimally regularized estimation. Essential References Not Discussed: The paper covers most relevant literature. Other Strengths And Weaknesses: **Strengths**: 1. The paper presents a theoretical framework for extending Conditional Flow Matching with Gaussian process streams. 2. The authors provide a novel perspective on variance reduction in flow matching through appropriate GP regularization. 3. The method shows valuable practical applications for time series and correlated data that standard CFM approaches cannot handle effectively. 4. The empirical results demonstrate clear improvements across multiple datasets as measured by standard quality metrics. **Weaknesses**: 1. Computational complexity analysis could be more thorough. 2. Some hyperparameter choices need better justification, such as what principles should guide kernel selection and parameter tuning for different data modalities (images vs. time series). 3. Potential limitations of the GP approach in very high dimensions, e.g., ImageNet64/256, aren't fully addressed. Other Comments Or Suggestions: 1. 
It would be better to include a more detailed comparison of computational requirements in the main text. 2. In the section of time series modeling, incorporating a discussion of Trajectory Flow Matching (Zhang et al., NeurIPS 2024) would provide useful context and highlight how your GP-based approach extends or differs from their method. Questions For Authors: 1. How does the computational complexity of GP-CFM scale with data dimensionality compared to standard CFM approaches? 2. What guidelines would you suggest for choosing appropriate GP kernels and their hyperparameters for different types of data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks a lot to the reviewer for their positive comments, and thanks for suggestions on writing. We will update our manuscript accordingly, and add more details in the experiments (e.g. compare to more methods besides CFM variants) whenever possible. Here, we clarified some specific points… 1. > Potential limitations of the GP approach in very high dimensions, e.g., ImageNet64/256, aren't fully addressed. For computational convenience, we use an independent GP for each dimension. Therefore, the proposed method should be scalable to high-dimensional data. Even for highly correlated data (e.g., long time series), since the GP covariance is independent of the observed values, we can compute the covariance matrix inversion once before training. The current running time is reported with re-calculating the matrix inversion at each training iteration. We will update the running time later. 2. > In the section of time series modeling, incorporating a discussion of Trajectory Flow Matching (Zhang et al., NeurIPS 2024) would provide useful context and highlight how your GP-based approach extends or differs from their method. Thanks for mentioning this paper. The time series application is one extension of our GP-CFM framework. Trajectory Flow Matching (Zhang et al., NeurIPS 2024) is based on an autoregressive (AR) model (using FM to fit the AR functions), while we model the time series by a GP conditional stream. We will add this reference and the discussion in our camera-ready version if the paper is accepted. 3. > How does the computational complexity of GP-CFM scale with data dimensionality compared to standard CFM approaches? Since we use an independent GP for each dimension, the computational bottleneck comes from the GP covariance matrix inversion over (artificial) time. However, the GP covariance and its inversion are independent of the observed values, and hence we can compute them once before training. 
The current running time is reported with re-calculating the matrix inversion at each training iteration. We will update the running time later. 4. > What guidelines would you suggest for choosing appropriate GP kernels and their hyperparameters for different types of data? The choice of GP kernels may depend on prior information and the constraints of the problem. For example, if there’s a periodic pattern, we should choose the periodic kernel. For hyperparameter selection, it's a trade-off between systematic variance (from the extrapolation error of the neural net) and Monte Carlo error. The GP-stream reduces the systematic variance, but will introduce more Monte Carlo error. When implementing GP-CFM, we currently manually choose the GP parameter so that the GP conditional path covers a slightly wider region than linear interpolation (by checking several paired samples from the target to the source). To reduce the Monte Carlo error while preserving the reduction in systematic variance, instead of sampling one GP path, we can sample multiple GP paths or resort to importance sampling over $t$. We will add a discussion on tuning parameters in our camera-ready version if the paper is accepted. --- Rebuttal Comment 1.1: Comment: The author has addressed my concern and I'll keep the score. --- Reply to Comment 1.1.1: Comment: Thank you very much again for your thoughtful comments and suggestions.
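The precomputation point made in the rebuttal above (the GP covariance over artificial time depends only on the time grid, not on observed values, so the expensive linear algebra can be done once before training) can be illustrated with a small sketch. The RBF kernel, lengthscale, and grids below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def rbf(t1, t2, ls=0.3):
    """Squared-exponential kernel over (artificial) time points."""
    return np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / ls) ** 2)

t_b = np.array([0.0, 1.0])             # boundary times (stream endpoints)
t_i = np.linspace(0.05, 0.95, 19)      # interior time grid

# --- done ONCE, before the training loop (value-independent) ---
K_bb_inv = np.linalg.inv(rbf(t_b, t_b) + 1e-6 * np.eye(2))
K_ib = rbf(t_i, t_b)
A = K_ib @ K_bb_inv                                   # conditional-mean weights
C = rbf(t_i, t_i) - K_ib @ K_bb_inv @ K_ib.T          # conditional covariance
L = np.linalg.cholesky(C + 1e-6 * np.eye(len(t_i)))

# --- inside the training loop: only cheap matrix-vector work remains ---
def sample_stream(x0, x1, rng=None):
    """One GP stream per dimension, conditioned on the endpoints x0 and x1."""
    if rng is None:
        rng = np.random.default_rng(0)
    mean = A @ np.stack([x0, x1])                     # (n_interior, d)
    return mean + L @ rng.standard_normal((len(t_i), x0.shape[0]))
```

Because `A` and `L` depend only on the time grid, each training iteration reduces to a matrix-vector product per stream, which is the amortization the rebuttal refers to.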
Summary: The paper proposes a novel flow matching method that incorporates **stochastic** bridges instead of **deterministic** bridges, which are typically used in the flow matching framework. In the context of generative modeling, flow matching (FM) is used to train neural ODEs with an initial distribution so that the distribution of their solutions matches a target (or terminal) distribution, commonly a data distribution. Specifically, FM enables the ODEs to learn and mimic the path measure (in a weak sense) defined by a collection of deterministic bridges, each connecting two points—one from the initial distribution and the other from the terminal distribution. A deterministic bridge is often chosen as a linear interpolation between two points over time. However, it is also possible to use a nonlinear deterministic bridge as long as it is path-wise continuously (time-)differentiable. Unlike the typical FM, this paper proposes using stochastic bridges instead of deterministic ones. Here, stochastic bridges mean that for any joint sample pair from the initial and terminal distributions, there can be multiple time-dependent functions—i.e., sample paths (or streams, as termed in the paper)—that connect the two points. In order to generate such stochastic bridges, the paper proposes using Gaussian measures, as their sample paths are path-wise continuously (time-)differentiable, and the time derivative of the sample path has a closed-form solution. In particular, this property of Gaussian measures naturally facilitates their use within the flow matching framework. It is important to note that the proposed method differs from Bridge matching, which relies on Brownian bridges and other Itô diffusion-based bridges while still employing a Markovian projection-style training approach similar to FM. In particular, sample paths generated by Itô diffusion-based bridges may not be time-differentiable in the conventional sense, even though they remain continuous. 
Nevertheless, the paper theoretically demonstrates that incorporating such stochastic bridges into flow matching is effective. The authors demonstrate the efficacy of the proposed method on several benchmark datasets. Claims And Evidence: Overall, I find the paper novel and original. However, the motivation for using nonlinear stochastic bridges in the flow matching context is not entirely convincing, both theoretically and empirically. For example, the use of a linear map (often referred to as the condOT path) is straightforward, and the discussion relating its linearity to reducing numerical errors provides a reasonable justification—particularly in how it accelerates the solving of ODEs with fewer evaluation steps. However, in comparison to this widely adopted approach, the motivation for introducing nonlinear bridges remains unconvincing despite their crucial role in the proposed method. Similarly, making the bridges stochastic does not seem to offer a clear advantage, and its justification is not well-supported. In this regard, the experimental results also appear somewhat limited. While the authors provide several comparative results on popular image generation datasets such as MNIST and CIFAR-10, the performance differences between the proposed method and existing approaches do not appear substantial. The reported improvements, if any, seem marginal, making it difficult to assess the practical advantages of using nonlinear and stochastic bridges. Furthermore, it remains unclear whether the observed gains, if they exist, are due to the proposed modifications or other confounding factors. It would be helpful if the authors could provide stronger empirical evidence or further theoretical insights to clarify the benefits of using nonlinear/stochastic bridges in this framework. 
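For context, the standard conditional flow matching objective that the review contrasts with the stochastic-bridge variant can be sketched (roughly in the notation of Lipman et al., 2023, with the linear condOT bridge as the reference path; this is a sketch, not the paper's exact notation) as:

```latex
\mathcal{L}_{\mathrm{CFM}}(\theta)
  = \mathbb{E}_{\,t \sim \mathcal{U}[0,1],\; (x_0, x_1),\; x_t \sim p_t(\cdot \mid x_0, x_1)}
    \bigl\| v_\theta(t, x_t) - u_t(x_t \mid x_0, x_1) \bigr\|^2,
\qquad
u_t(x_t \mid x_0, x_1) = x_1 - x_0 \quad \text{(linear condOT bridge)}.
```

Replacing the deterministic linear bridge with a stochastic (GP) stream changes the conditional path $p_t(\cdot \mid x_0, x_1)$ and the target velocity $u_t$, while the regression form of the loss is unchanged.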
Methods And Evaluation Criteria: N/A Theoretical Claims: See “Claims And Evidence” Experimental Designs Or Analyses: See “Claims And Evidence” Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: See "Other Comments Or Suggestions" Other Comments Or Suggestions: Once again, I find the paper novel and original. As mentioned earlier, it is also distinctive even when compared to Bridge matching. In addition, I appreciate how this work broadens my perspective on flow matching, and I found it an enjoyable read. However, aside from its novelty and originality, the experimental results did not fully convince me. I believe the chosen tasks may not be well-suited to the proposed method. There could be more relevant applications where the sample paths of the generation process need to be controlled in specific ways, which might better showcase the advantages of the approach and strengthen its motivation. Additionally, while it may not be strictly necessary, it could be helpful to explicitly clarify how this method differs from other Bridge matching approaches. At first glance, I initially found the two somewhat confusing due to their use of stochastic interpolation paths (or streams), and I believe other readers might have a similar impression. Addressing this distinction more clearly could enhance the paper’s accessibility and impact. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks a lot to the reviewer for their positive comments, and thanks for suggestions on writing. We will update our manuscript accordingly, and add more details in the experiments whenever possible. Here, we clarified some specific points… 1. > However, in comparison to this widely adopted approach, the motivation for introducing nonlinear bridges remains unconvincing despite its crucial role in the proposed method. Similarly, making the bridges stochastic does not seem to offer a clear advantage, and its justification is not well-supported. The stochastic interpolant was previously studied theoretically in [1], where the benefits of stochastic interpolants were discussed in Section 4.3 of their paper. In their paper, they demonstrated that the stochastic interpolant suppresses spurious intermediates, which smooths the path and vector field (Figure 7 in our paper). Therefore, stochastic interpolants can simplify estimation and accelerate the ODE integration. In our paper, we further extend the “stochastic interpolant” in [1], since the Brownian bridge used in [1] is a special case of a Gaussian process (GP), and therefore, conditioning on the entire stream and modeling it using a GP offers greater modeling flexibility. Furthermore, by resorting to properties of GPs (the conditional distribution and derivative of a GP are still GPs), we preserve the simulation-free property of the FM algorithm. In addition, we provide a bias-variance trade-off perspective on the advantages of stochastic interpolation. 2. > It could be helpful to explicitly clarify how this method differs from other Bridge matching approaches. The connections to bridge models (e.g., the Schrödinger bridge) and other related models are discussed in [1]. Specifically, in Section 3.4, they showed that the stochastic interpolants can recover the Schrödinger bridge (from source to target densities), if we explicitly optimize over the interpolant. We will add more discussions on this in the updated manuscript. 
[1] Stochastic Interpolants: A Unifying Framework for Flows and Diffusions, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden, 2023 --- Rebuttal Comment 1.1: Comment: The authors have partially addressed my concerns, but I remain somewhat unconvinced on a few points, so I would prefer to keep my original score. For example, the authors provide a one-dimensional experiment to support the claim that stochastic interpolants are beneficial. Unfortunately, I do not believe this level of experimentation sufficiently supports the general statement that *“all stochastic interpolants help smooth the generation path.”* If the authors aim to make such a broad claim, experiments on large-scale datasets, where this property would be of real interest, would have been more appropriate. In addition, the authors cite the phrase *“stochastic interpolant suppresses spurious intermediates ...”* from a relevant work to support the claim as well. If the authors believe that their proposed stochastic interpolant is effectively the same as prior stochastic interpolants—and that the smoother integration path is therefore a general property—this would actually diminish the novelty of the paper. Conversely, if the analysis is truly specific to the differentiable Gaussian measure used in this work, then the conclusions should not be presented as broadly applicable to stochastic interpolants in general. --- Reply to Comment 1.1.1: Comment: Thank you for your thought-provoking comments and suggestions. We acknowledge that the one-dimensional experiment is indeed limited, and we would love to carry out more extensive, multi-dimensional numerical experiments in the future to substantiate our discussion. Regarding the novelty of our proposed method, while our GP strategy inherits some general properties of stochastic interpolants, it also enjoys some unique properties, arising from the structure of GPs, that general stochastic interpolants do not. 
We believe the novelty in our approach lies in exploiting these unique properties of GPs to construct a computationally efficient and robust extension to the CFM algorithm. We hope we will be able to make our argument more convincing in future revisions. We very much appreciate your time and consideration. Thank you again.
Summary: This paper proposes a generalization of conditional flow matching (CFM) models using Gaussian process (GP) streams. While CFM uses two endpoints as the condition, a GP stream defines a GP over time that connects two or more points from $t=0$ to $t=1$, providing more control over the mean and variance of the path (thus providing stronger regularization) and enabling time-series modeling. Claims And Evidence: Yes. The claimed contributions appear to be supported by the theoretical analysis and experiments. Methods And Evaluation Criteria: The method is derived from a theoretical perspective and is a reasonable extension to flow matching. The scale of the experiments is small but reasonable for a theory-focused paper. Theoretical Claims: The theoretical claims appear correct. The proofs in the Appendix are adapted from established prior work. Experimental Designs Or Analyses: Some aspects of the experimental setup are unclear. In Section 4.1, how is the GP specifically constructed? Please provide an equation or reference the appendix. Is a GP stream connecting two endpoints the same as or different from a stochastic interpolant ($\sigma>0$)? Why would noise-free GP streams still provide stronger regularization (my understanding is that only the mean is altered)? In Fig. 7, the comparison does not seem fair because I-CFM and OT-CFM appear noise-free, whereas their GP counterparts appear stochastic. Supplementary Material: I have reviewed the Appendix except for the proofs. Relation To Broader Scientific Literature: The proposed method is a theoretical extension to CFM, with finer control over the flow paths. The finding that adding noise to the path improves sample quality due to stronger regularization and less over-fitting is already observed in prior work. Essential References Not Discussed: Essential references have been discussed. Other Strengths And Weaknesses: The paper is challenging to read as the writing is also unclear in some parts. 
For example, in Section 3.2, there is no formal definition of the Gaussian process in equation form in the main text. Although it is presented in Appendix C, it would be better to include a simplified equation in the main text (e.g., assuming a diagonal covariance) or at least provide a direct reference to the appendix. Additionally, several notations are not explained, such as $m(t)$ and $I_d$. Other Comments Or Suggestions: The y-axis label in Fig. 2 is missing, which should be $x$ I think. Questions For Authors: Although the GP stream enables time-series modeling, an alternative approach is to treat it not as a time-series flow but as a joint distribution over multiple frames (e.g., as in video diffusion models). Is it possible to model videos as a time-series flow? Would this offer any actual advantages over the current paradigm? It seems to me that the additional conditioning (covariate) would introduce extra complexity, making it more similar to auto-regressive modeling. Speaking of auto-regressive modeling, would time-series modeling using a GP suffer from drifting (error accumulation)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks a lot to the reviewer for their positive comments, and thanks for suggestions on writing. We will update our manuscript accordingly. Here, we clarified some specific points… 1. > Some aspects of the experimental setup are unclear. The details of the GP construction can be found in Appendices C and E. The GP interpolant is different from the stochastic interpolant referred to here (linear interpolant with $\sigma > 0$). To visualize the difference, please refer to Figures 2 and 8. The $\sigma$ adds random jitters to the interpolation, but the GP stream allows the interpolant to oscillate smoothly. We can design $\sigma(t)$ to further help the GP interpolant, as shown in the Figures. In Figure 7, we set $\sigma=10^{-3}$ for all four algorithms. 2. > Although the GP stream enables time-series modeling, an alternative approach is to treat it not as a time-series flow but as a joint distribution over multiple frames (e.g., as in video diffusion models). Is it possible to model videos as a time-series flow? Would this offer any actual advantages over the current paradigm? Yes, we can model the time series via a joint distribution over multiple frames, but we may need to model in a latent space and factorize the spatial and temporal components of the model architecture. Otherwise, the dimension is too high to be feasible for applications such as video generation. Using GP-CFM for time series modeling explicitly captures the correlation across multiple correlated samples in one unifying model, without increasing the dimension linearly with the number of time points. 3. > It seems to me that the additional conditioning (covariate) would introduce extra complexity, making it more similar to auto-regressive modeling. Speaking of auto-regressive modeling, would time-series modeling using a GP suffer from drifting (error accumulation)? In our paper, adding additional covariates (labels) is related to AR modeling, but not exactly the same. 
Here, we use the starting point at $t = 0$ as the covariate, which is static over time. Therefore, there's no drifting issue. It's possible to use past lag-$p$ observations as covariates to obtain an AR model, but this would make the problem more complicated, and as mentioned by the reviewer, we would need to be concerned about the drifting issue. --- Rebuttal Comment 1.1: Comment: Thank you for providing additional clarifications. If my understanding is correct, the primary distinction between GP-CFM and the baseline methods lies in their covariance kernels: the baselines employ white noise kernels, resulting in time series composed of independent samples, whereas GP-CFM generalizes to kernels beyond white noise, thus capturing temporal dependencies. However, I'm still not convinced of how this generalization inherently leads to stronger regularization. It seems that by simply adjusting the time-varying variance within the stochastic interpolant baseline, one could achieve marginal distributions similar to GP-CFM at any time slice. For instance, taking Figure 7 as an example, by increasing the variance ($\sigma$) around $t=0.5$, the histogram produced by I-CFM could resemble that of GP-I-CFM. Thus, I feel that the comparison is still not entirely fair. --- Reply to Comment 1.1.1: Comment: If we simply replaced the kernel in GP-CFM with white noise (referred to as a nugget in the GP literature), what we would get is a non-smooth oscillating stochastic path that connects the end points in the CFM algorithm. Such a path is generally not differentiable and hence would not enjoy the closed-form GP derivative property, and therefore would not lead to a simple algorithm such as our GP-CFM. If, in addition, we reduced the nugget variance down to zero, which removes the stochasticity entirely from GP-CFM, then yes, GP-CFM would reduce to a baseline model such as I-CFM or OT-I-CFM, depending on how the endpoints are coupled. 
An important property of GP-based time series, in comparison to many AR-based models, is that they do not require the observations to lie on shared, equally spaced time points. In fact, observations can even lie on irregular intervals that are unique to each path, and the GP provides an effective modeling of the correlation structure over the shared observations. This property is important in the context of CFM training, in that we don’t want to restrict ourselves to applications that always share the time slices. Some time slices may have more or fewer points than others, and some subjects may have more or fewer observations over time than others. Additionally, while the GP approach does not accumulate errors over time as AR-based models do, since there is no natural ordering of time in GP modeling, there is indeed a limitation of the GP approach compared to AR-based time-series modeling that may limit its effectiveness in modeling videos as time series. It is that GP-based time-series modeling generally attempts to use all observations in the entire time domain to model the transition of observations over time (in contrast, AR-based models use only one or a small number of previous time points to model the next), and thus can be ineffective in capturing drastic changes in adjacent time frames. This can create visually blurry transitions in some frames. Because videos are represented as frames of images over an evenly spaced time grid, AR-based models can in fact be very effective. At the same time, the referee has pointed to an excellent direction to explore GP modeling in this regard as well. We believe it is possible to create a hybrid of AR and GP modeling that enjoys the unique benefits of each. This may lead to a very effective model for videos. We would love to explore this direction in the future. Many thanks again for your thoughtful comments and excellent suggestions!
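The nugget-kernel point in the reply above (a white-noise component destroys path differentiability, while a smooth kernel yields differentiable streams) can be checked numerically. The kernel, lengthscale, and grid below are assumptions chosen for illustration; the diagnostic is that increments of the smooth draw stay small relative to a draw with an added nugget.

```python
import numpy as np

# Illustrative check (assumed kernel/grid, not from the paper): a draw from a
# smooth-kernel GP has small finite-difference increments on a fine grid,
# while adding a white-noise "nugget" keeps increments O(1), i.e. the path is
# no longer differentiable in any useful sense.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2)   # RBF kernel
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(t)))
smooth = L @ rng.standard_normal(len(t))             # smooth GP path
nugget = smooth + 0.5 * rng.standard_normal(len(t))  # add white-noise nugget

max_inc_smooth = np.abs(np.diff(smooth)).max()
max_inc_nugget = np.abs(np.diff(nugget)).max()
```

Refining the grid shrinks `max_inc_smooth` roughly in proportion to the spacing, while `max_inc_nugget` stays at the nugget scale, which is the qualitative distinction the reply draws.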
Summary: The paper introduces stream-level flow matching with Gaussian processes (GP-CFM), which extends conditional flow matching to matching streams, i.e., latent stochastic paths that connect the source and target end points using Gaussian processes. The proposed framework naturally allows the inclusion of correlated observations (e.g., time series data) while remaining simulation-free, as the position and velocity can be readily sampled from the GP. Claims And Evidence: The claims and evidence, experiments, and arguments provided to validate the claims are generally sound (more details below). The authors demonstrate the workings of their method on synthetic data, which adds to the overall exposition, and evaluate its utility on multiple standard datasets. While none of the considered datasets are inherently high dimensional or used to compare state-of-the-art image generation with flows, it sufficiently demonstrates the utility of the proposed framework in multiple settings. **Variance reduction in the estimation of the marginal vector field leads to improved sample quality.** The authors show that using the GP-stream variants of CFM results in improved sample quality in terms of lower average Wasserstein-2 distance (synthetic example), FID (MNIST, CIFAR10), and KID (MNIST). However, the authors do not directly assess estimator variance, and I don't see how the improvement in average Wasserstein-2 distance (or FID) can be directly attributed to a lower (per stream) estimator variance. Without further assumptions, it could also be attributed to lower bias. If the assumption is that the estimator of the marginal vector field is unbiased, I can see how an improvement must be the result of a reduction in estimator variance. However, in practice we don't have access to the true posterior probability path required to estimate the marginal vector field (Equation 2 in the manuscript), but have to resort to the learned approximation, which leads to a biased estimator. 
Can the authors please clarify this point (see questions)? **GP stream variants can naturally accommodate correlated observations.** The authors first demonstrate how GP-stream variants can leverage multiple training observations using 2D synthetic examples. They further show the utility of their framework in the time series setting on data from the LFP dataset and synthetic data, corresponding to modifying digits from HWD+. The latter shows significantly better FID scores, w.r.t. the correct digit distributions, across time. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have not checked any proofs. Experimental Designs Or Analyses: I have read the experiment section and have validated the soundness of the experiments. Supplementary Material: I have not read the supplementary material. Relation To Broader Scientific Literature: The authors extend prior work from Lipman et al. [1] and Tong et al. [2], which specify conditional probability paths by defining a reference vector field given one or both endpoints respectively, by instead specifying a stochastic process (specifically a GP) that connects these endpoints. The authors appropriately cite relevant work on conditional flow matching and cite Rasmussen & Williams [3] for their fundamental work on GPs. While not essential, I think the authors should make an effort to also point out existing work that aims to directly model ODEs using GPs, and to discuss the conceptual differences to these approaches. [1] Lipman et al. Flow Matching for Generative Modeling. ICLR, 2023. [2] Tong et al. Improving and generalizing flow-based generative models with mini-batch optimal transport. TMLR, 2024. [3] Rasmussen and Williams. Gaussian Processes for Machine Learning. MIT Press, 2005. Essential References Not Discussed: All essential references are included. 
Other Strengths And Weaknesses: ### Strengths - The paper presents a novel extension to Conditional Flow Matching, which is mathematically elegant and offers several advantages: - The approach provides a principled way to model the uncertainty in flow paths. - The GP formulation remains "simulation-free" - The ability to incorporate multiple correlated observations into a unified model - The framework is complementary to existing methods like OT-CFM. ### Weaknesses The computational overhead of the GP calculations and their implications should be discussed in more detail. While the authors mention "moderate computational cost," a more detailed discussion of the computational complexity, especially in the case of high-dimensional data (without the independence assumption) and multiple correlated observations, would be valuable. Other Comments Or Suggestions: None. Questions For Authors: 1. Regarding the reduction in estimator variance - Can you please clarify how the improvement in Wasserstein-2 distance (or FID, KID) relates to a reduction in estimator variance as opposed to a possible reduction in bias? - Did you derive the true posterior probability to report the results for the synthetic 2D 2-Gaussian mixture? 2. While broadening the coverage region reduces problems associated with extrapolation, it intuitively also seems to increase the data demands to robustly learn a model covering the broader coverage region. Is this a potential concern for real applications? Are there practical guidelines on how practitioners should choose/tune their covariance kernels to achieve a good amount of coverage without extending it too much? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks a lot to the reviewer for their positive comments. Here, we clarify some specific points: 1. > However, the authors do not directly assess estimator variance and I don't see how the improvement in average Wasserstein-2 distance (or FID) can be directly attributed to a lower (per stream) estimator variance. Can you please clarify how the improvement in Wasserstein-2 distance (or FID, KID) relates to a reduction in estimator variance as opposed to a possible reduction in bias? We thank the reviewer for pointing this out, which has helped us clarify the sources of bias and variance in our algorithms. There are two sources of variance: 1) systematic variance from extrapolation of the neural network, 2) Monte Carlo variance of vector field estimation. The GP-stream reduces the systematic variance by expanding the search region, but the stochastic interpolation introduces more Monte Carlo error. To reduce the Monte Carlo error, instead of sampling one GP path, we can sample multiple GP paths at the same time. To validate the argument above, we did a quick experiment. We considered the target to be a 1D 2-Gaussian mixture, and drew 200 samples for training. We tried I-CFM, and GP-I-CFM with 1 GP path (GP(1)-I-CFM) and 5 GP paths (GP(5)-I-CFM). We repeated the training 30 times and generated 10,000 samples for each. We smoothed the generated samples by Gaussian kernel density estimation (KDE). The means of the KDE match the true density well for all three algorithms. However, compared to I-CFM, the standard deviation of the KDE lines is lower in GP(1)-I-CFM (oscillates less around the true density), and GP(5)-I-CFM further reduces the standard deviation. To summarize the results, we calculate the mean of $|\overline{kde}(x) - f(x)|$ and $s_{kde}(x)$, where $x$ are 1000 evenly spaced points over $(-6, 6)$. 
Denote $\overline{\lvert \overline{kde}(x_i) - f(x_i) \rvert} = \frac{1}{1000}\sum_{i=1}^{1000} \lvert \overline{kde}(x_i) - f(x_i) \rvert$ and $\overline{s_{kde}(x_i)} = \frac{1}{1000}\sum_{i=1}^{1000} s_{kde}(x_i)$ | | $\overline{\lvert \overline{kde}(x_i) - f(x_i) \rvert}$ | $\overline{s_{kde}(x_i)}$ | |-------------|-----|-----| | I-CFM | 0.0035 | 0.0154 | | GP(1)-I-CFM | 0.0036 | 0.0146 | | GP(5)-I-CFM| 0.0036 | 0.0136 | Since we are not allowed to update the manuscript or upload figures in the rebuttal, we will include more detailed experiments and discussions in the updated manuscript. 2. > Did you derive the true posterior probability to report the results for the synthetic 2D 2-Gaussian mixture? No, we didn't. For GP-I-CFM of the 2D 2-Gaussian mixture, we can derive the marginal probability path. However, here we are considering the quality of generated samples, and checking the bias and variance of generated samples at $t=1$ plays the same role as checking the whole sample path from $t=0$ to $t=1$. 3. > While broadening the coverage region reduces problems associated with extrapolation, it intuitively also seems to increase the data demands to robustly learn a model covering the broader coverage region. Is this a potential concern for real applications? Are there practical guidelines on how practitioners should choose/tune their covariance kernels to achieve a good amount of coverage without extending it too much? If we sample 1 GP path for each iteration, it's a trade-off between systematic variance and Monte Carlo error (as mentioned above). When implementing GP-CFM, we currently manually choose the GP parameters so that the GP conditional path covers a slightly wider region than linear interpolation (by checking several paired samples from the target to the source). To reduce the Monte Carlo error while preserving the reduction in systematic variance, instead of sampling one GP path, we can sample multiple GP paths or resort to importance sampling over $t$. 
We will add a discussion on tuning parameters in our camera-ready version if the paper is accepted.
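The KDE-based bias/variance summary in point 1 above can be sketched in a few lines. The script below is our own stand-in: it samples the 1D 2-Gaussian mixture target directly instead of running a trained flow model, and the mixture parameters, grid, and variable names are assumptions, so it only illustrates the evaluation protocol, not the reported numbers.

```python
# Sketch of the rebuttal's evaluation protocol: repeat generation, smooth
# each run with a Gaussian KDE, then report (i) the mean absolute bias of
# the run-averaged KDE against the true density and (ii) the mean pointwise
# std of the KDE curves across runs.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

def true_density(x):
    # assumed 1D 2-Gaussian mixture target f(x)
    return 0.5 * norm.pdf(x, loc=-2, scale=0.5) + 0.5 * norm.pdf(x, loc=2, scale=0.5)

def generate_samples(n):
    # stand-in for a trained generative model; here we sample the target directly
    comp = rng.integers(0, 2, size=n)
    return np.where(comp == 0, rng.normal(-2, 0.5, n), rng.normal(2, 0.5, n))

n_runs, n_samples = 30, 10_000
grid = np.linspace(-6, 6, 1000)
kdes = np.stack([gaussian_kde(generate_samples(n_samples))(grid) for _ in range(n_runs)])

mean_abs_bias = np.mean(np.abs(kdes.mean(axis=0) - true_density(grid)))   # |kde_bar - f|
mean_pointwise_std = np.mean(kdes.std(axis=0))                            # s_kde
print(f"mean |kde_bar - f| = {mean_abs_bias:.4f}, mean s_kde = {mean_pointwise_std:.4f}")
```

With a real GP(k)-I-CFM generator plugged into `generate_samples`, the second number is the quantity the rebuttal argues should shrink as more GP paths are sampled per iteration.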
Exactly Tight Information-theoretic Generalization Bounds via Binary Jensen-Shannon Divergence
Accept (poster)
Summary: This paper studies the information-theoretic generalization bounds within the conditional mutual information (CMI) framework by introducing a new information measure called binary Jensen-Shannon (JS) divergence. Specifically, the paper begins with a cleverly designed lemma that builds a relationship between binary JS divergence and mutual information. This key result allows the authors to derive novel, tighter CMI bounds in which the CMI term conditions only on a single random variable. The paper further extends these results by presenting evaluated CMI bounds. More importantly, under an invariance assumption, the authors demonstrate that the generalization error can be exactly characterized by their binary JS divergence measure. This argument applies not only to the zero-one loss but also to general bounded losses through a novel loss binarization technique. Furthermore, the paper generalizes its findings by extending mutual information and KL-based results to the broader class of $f$-divergence-based results. Finally, the authors also provide an empirical study of their theoretical results, showing that their novel binary JS divergence bounds can exactly characterize generalization, making them tighter than all previous CMI bounds. Claims And Evidence: All claims are clearly supported. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense to me. Theoretical Claims: I checked all the proofs and they seem correct to me. Experimental Designs Or Analyses: The experiment settings follow some previous studies and are reasonable to me. Supplementary Material: I reviewed the entire appendix. Relation To Broader Scientific Literature: This paper is within the literature on learning theory and generalization theory, with a particular focus on the information-theoretic generalization analysis framework. Notably relevant works include Steinke \& Zakynthinou (2020), Hellström \& Durisi (2022b), and Wang \& Mao (2023a). 
Essential References Not Discussed: The following paper may need discussion in this work: [R1] Hellström, Fredrik, and Benjamin Guedj. "Comparing comparators in generalization bounds." International Conference on Artificial Intelligence and Statistics. PMLR, 2024. The current paper uses binary JS divergence as the comparator between empirical and population loss, whereas [R1] explores a general convex comparator for this purpose and further investigates the optimal convex comparator. Given these conceptual connections, discussing [R1] may provide additional understanding for the choice of binary JS divergence in this work. Other Strengths And Weaknesses: Strengths: 1. This paper makes an important technical contribution to the field. While Hellström \& Durisi (2022b) shows that the binary KL term $d(L_n||\frac{L_\mu+L_n}{2})$ is upper bounded by the CMI term, the authors cleverly identify that this binary KL term is embedded within their proposed binary JS measure. More importantly, they utilize the fact that the mutual information between an arbitrary R.V. $X$ and a Bernoulli R.V. $Y$ is equivalent to the JS divergence between $P\_{X|Y=0}$ and $P\_{X|Y=1}$. They further demonstrate that the binary JS divergence between $\mathbb{E}\_{X|Y=0}[X]$ and $\mathbb{E}\_{X|Y=1}[X]$ is exactly equal to this mutual information when $X$ is also binary. This enables an exact characterization of the generalization error using their binary JS measure, and the technique itself may be of independent interest beyond generalization analysis. 2. The binarization and truncation techniques introduced in this paper are also novel and contribute valuable methodological advancements to the field. 3. 
The exact characterization results (e.g., Theorem 3.9, Corollary 3.10) require the algorithm to be invariant to sample permutations. While this is a notable restriction, it is a much more relaxed assumption compared to the interpolating algorithm assumption used in previous works for obtaining exact characterization results. 2. The assumption in Corollary 3.12 may not be easily satisfied in practice. The authors have already acknowledged this limitation in the paper. Other Comments Or Suggestions: 1. In the right column, Lines 110–114, the authors state that "JS divergence serves as a proper metric of distance". This statement is incorrect because **JS divergence does not satisfy the triangle inequality** and therefore is not a proper metric. Please remove this incorrect statement. 2. Please explicitly include Assumption 3.8 in the statements of Theorem 3.9, Corollary 3.10, and Corollary 3.12, as it is crucial for the exact characterization of generalization error in your framework. 3. In the right column, Line 139, $L=f(W,Z_u)$ should be $L=\ell(W,Z_u)$. Questions For Authors: 1. In [R1] (see "Essential References Not Discussed"), it has been shown that when the convex comparator is the Cramér function, one can obtain the tightest possible bound. How do the findings in your paper relate to their results? Could you provide further discussion on the connection between the Cramér function and your binary JS divergence? 2. I thoroughly enjoyed reading this paper, but I have a question regarding the motivation for obtaining the tightest possible CMI bound for generalization error. Specifically, why is it meaningful to derive a generalization bound that is exactly equal to the generalization error? In the CMI setting, where both a training sample and a ghost sample are available, estimating the generalization error directly is feasible. Given this, why should one use your binary JS bound as a predictor of generalization error? 
Additionally, what unique insights does your exactly tight bound provide, beyond simply computing the generalization error directly in the CMI framework? Code Of Conduct: Affirmed. Overall Recommendation: 5
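The mutual-information/JS identity highlighted in this review's first strength can be checked numerically. The sketch below is our own: the function names and probability values are assumptions, and the selector variable is taken to be a uniform Bernoulli (as in the supersample setting); it verifies that for binary $X$, $I(X;Y)$ equals the binary JS divergence between the two conditional parameters, with the binary KL terms against the midpoint visibly embedded.

```python
import math

def binary_kl(p, q):
    # d(p||q) = KL(Bern(p) || Bern(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def binary_js(p, q):
    # d_JS(p||q) = D_JS(Bern(p) || Bern(q)); note the embedded binary KL terms
    # against the midpoint (p+q)/2, i.e. d(p || (p+q)/2) and d(q || (p+q)/2)
    m = (p + q) / 2
    return 0.5 * binary_kl(p, m) + 0.5 * binary_kl(q, m)

def mi_with_uniform_bernoulli(p0, p1):
    # I(X;Y) for binary X and uniform Bernoulli Y, with P(X=1|Y=y) = p_y
    mi = 0.0
    for px1_given_y in (p0, p1):  # Y = 0, 1 each with probability 1/2
        for x in (0, 1):
            p_x_y = px1_given_y if x == 1 else 1 - px1_given_y          # P(X=x|Y=y)
            p_x = 0.5 * ((p0 if x == 1 else 1 - p0)
                         + (p1 if x == 1 else 1 - p1))                  # marginal P(X=x)
            mi += 0.5 * p_x_y * math.log(p_x_y / p_x)
    return mi

p0, p1 = 0.1, 0.4  # illustrative conditional parameters (assumptions)
print(mi_with_uniform_bernoulli(p0, p1), binary_js(p0, p1))  # the two values agree
```

The agreement holds exactly because the marginal of $X$ under a uniform selector is the equal-weight Bernoulli mixture, which is itself Bernoulli with the midpoint parameter.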
Rebuttal 1: Rebuttal: Dear Reviewer PSQE, thank you for your kind words and insightful comments! We address your questions below: --- **On the Optimal Convex Comparator** We appreciate you highlighting this work. As stated in Theorem 4 of that paper, the Cramér function is defined as the convex conjugate of the CGF of a distribution $P_p$ from a set $\mathcal{P}$, where for any $r$ in the loss space $\mathbb{L}$, there exists a $P_r \in \mathcal{P}$ such that $\mathbb{E}[P_r] = r$. This means the Cramér function is not fixed but depends on the loss space. In their analysis, when $\mathbb{L} = [0,1]$ (as in our setting), $\mathcal{P}$ is the set of all Bernoulli distributions (Eq. (19)), making the Cramér function the binary KL divergence (Eq. (29)). In contrast, our work demonstrates that binary JS divergence outperforms binary KL in the supersample setting, suggesting that their optimality result does not directly extend to our framework. A key reason may lie in the choice of mutual information measure. Their formulation is based on $D_{\text{KL}}(Q_0 D^n \\| Q_n D^n)$, a generalization of $I(W;\mathbf{Z})$, which reflects the standard hypothesis-based mutual information. In contrast, our analysis focuses on the supersample setting, where the key quantities are $I(W;U|\widetilde{\mathbf{Z}})$ or $I(L;U)$. This may explain from another perspective why our binary JS method (and also the previous fast-rate one) does not apply to this original generalization analysis setting but only the supersample one. Extending their analysis to the supersample setting would be a promising future direction, and we will include these discussions in the revised manuscript. --- **On the Value of Tight CMI Bounds** This is indeed a crucial question for the CMI-based generalization literature. We believe our contribution goes beyond providing tighter estimates to the generalization error. 
In particular, we address an open question raised by [1]: For which learning problems and learning algorithms is the CMI framework expressive enough to accurately estimate the optimal worst-case generalization error? Our results show that the framework can achieve exact tightness in a broad range of scenarios, offering a conclusive resolution to this line of inquiry. While our setting is quite general, the practical significance of these bounds becomes more apparent when applied to specific downstream tasks. Due to space limitations, please refer to our response to ```reviewer o7Wh``` for how our results connect to the analysis of out-of-distribution generalization and noisy iterative learning algorithms. --- **Other Minor Points** Thank you for pointing these out. We will make these necessary corrections in the revised version. --- [1] Information complexity of stochastic convex optimization: Applications to generalization, memorization, and tracing. ICML, 2024. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. Please incorporate the discussion on Hellström and Guedj (2024) into the revised manuscript. Regarding the value of a tight CMI bound, I appreciate the authors' insights. I encourage you to continue reflecting on this in your future work, after all, the tightest possible generalization bound is the generalization error itself. This raises the question: how tight do we truly need a generalization measure to be? Additionally, I would like to point out that an exactly tight IT bound was first given in [1] (rather than Wang and Mao (2023)). I recommend reading Remark 5.5 in the arXiv version (not the ISIT version) of [1], where the authors discuss a similar question. [1] Haghifam, Mahdi, et al. “Understanding Generalization via Leave-One-Out Conditional Mutual Information.” arXiv preprint arXiv:2206.14800 (2022). In light of your responses, I will increase my score.
Summary: This paper investigates the question of tightness in mutual information generalisation bounds. The authors propose exactly tight generalisation bounds based on the binary Jensen-Shannon divergence. They show that their results are also tighter than various existing bounds and successfully involve the impact of a statistical property of optimisation algorithms. They then extend the notion of Jensen-Shannon divergence beyond the KL case and propose associated generalisation bounds. The paper concludes with a numerical assessment of the tightness of their bounds. Claims And Evidence: The paper is well-written and pedagogical. The tightness of their results is provably shown, and limitations of the proposed bounds are clearly highlighted. Methods And Evaluation Criteria: The experimental part consists of simple computations of various generalisation bounds for different (loss, learning algorithm) pairs. The benchmark here consists of the binary KL bound, which is a natural comparison point. Theoretical Claims: As I am not familiar with the literature, I cannot assess the veracity of the proofs. However, the proposed contributions look coherent with existing results, and Figure 2 makes it clear why their results are tighter bounds than existing ones. Experimental Designs Or Analyses: The experimental part looks sound and coherent with the theoretical claims, although I did not check the details. Supplementary Material: As I know little about the MI/CMI literature, I did not carefully check the appendices. Relation To Broader Scientific Literature: I am not familiar enough with this literature to know whether all relevant references have been discussed. Essential References Not Discussed: I am not familiar enough with this literature to know whether any crucial reference is missing. Other Strengths And Weaknesses: See Questions. Other Comments Or Suggestions: See Questions. 
Questions For Authors: - In Secs 2.1 and 2.2 you defined $L_n$ and $L_\mu$ twice; I assume that it is the definitions of Sec. 2.2 that hold in this work. - l.134-143 right column: Does this mean that in this work, you always consider $\tilde{Z}_i^0$ to be your training data and $\tilde{Z}_i^1$ the test one? - More generally, I am not sure I understand the notion of conditioning on $\tilde{Z}_i^0$ in Table 1 when defining your SICIMI framework. If this means that you always assume $\tilde{Z}_i^0$ to be your training data, then is it still relevant to invoke a supersample instead of directly mentioning training and test sets? In this case it would be relevant to discuss the difference with transductive learning, and more particularly transductive PAC-Bayes learning (see e.g. Begin et al. 2014), which also directly considers a train and test set and proposes generalisation bounds involving KL divergences (thus mutual information). - l.223 right column: Would it be possible to briefly describe the proof of convexity of $d_{JS}$? References: Begin et al. 2014 PAC-Bayesian Theory for Transductive Learning Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer S7W7, Thanks for your valuable comments! We are addressing your questions as follows: --- **Redefinition of $L_n$ and $L_\mu$** You are correct that $L_n$ and $L_\mu$ are defined in both Sections 2.1 and 2.2. These two definitions are actually equivalent but expressed using different notations: one for the standard empirical and population risks, and the other adapted to the supersample setting. We will clarify this equivalence in the revised version. --- **Training and Testing Separation in $(\widetilde{Z}_i^0, \widetilde{Z}_i^1)$** The interpretation where $Z_0$ is the training sample and $Z_1$ is the test sample only applies to the illustrative example preceding Section 3.1. In the main analysis, we adopt the supersample setting, where the binary variable $U_i$ determines the training sample $\widetilde{Z}_i^{U_i}$ and the test sample $\widetilde{Z}_i^{\overline{U}_i}$. Hence, $\widetilde{Z}_i^0$ is equally likely to serve as either a training or test sample. The symmetry in the supersample setting implies that the distributions of $(L_i^0, L_i^1)$ are actually identical under these two procedures: - **Original supersample formulation:** Assign training and test samples as $\widetilde{Z}_i^{U_i}$ and $\widetilde{Z}_i^{\overline{U}_i}$ respectively, and define $L_i^0 = \ell(W, \widetilde{Z}_i^0)$, $L_i^1 = \ell(W, \widetilde{Z}_i^1)$. - **Illustrative example formulation:** Always fix $\widetilde{Z}_i^0$ for training and $\widetilde{Z}_i^1$ for testing, and define $L_i^0 = \ell(W, \widetilde{Z}_i^{U_i})$, $L_i^1 = \ell(W, \widetilde{Z}_i^{\overline{U}_i})$. We will revise the paper to unify the example and main analysis settings to avoid confusion. It is true that our SICIMI term $I(W;U_i|\widetilde{Z}_i^0)$ conditions on $\widetilde{Z}_i^0$, whose distribution reflects a mixture of training and test samples. However, our focus remains on inductive learning algorithms, not transductive ones. 
Unlike transductive methods, which may leverage unlabeled test data during training, inductive algorithms do not access any information about the test set in the learning phase. The test samples are only used in the analysis stage to evaluate generalization bounds. Therefore, the two setups are fundamentally different and not directly comparable. --- **Convexity of $d_{\text{JS}}$** Here is a proof sketch for this result: The joint convexity of $d_{\text{JS}}$ follows directly from that of the Jensen-Shannon divergence. Specifically, when considering Bernoulli random variables, we have $d_{\text{JS}}(p\\|q) = D_{\text{JS}}(\text{Bern}(p) \\| \text{Bern}(q))$ and $\frac{1}{2} \text{Bern}(p) + \frac{1}{2} \text{Bern}(q) = \text{Bern}\left(\frac{p+q}{2}\right)$. Moreover, the joint convexity of $f$-divergences follows from the convexity of the mapping $(p, q) \mapsto q f(p/q)$, which is inherited from the definition of convex $f$-functions.
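Written out, the convexity sketch above amounts to the following chain; the expansion below is our own, using only the two facts stated in the rebuttal (the Bernoulli identification of $d_{\text{JS}}$ and the mixture property of Bernoullis).

```latex
% For \lambda \in [0,1], \bar{\lambda} = 1 - \lambda, and pairs (p_1, q_1), (p_2, q_2) in [0,1]^2:
\begin{align*}
d_{\mathrm{JS}}\!\big(\lambda p_1 + \bar{\lambda} p_2 \,\big\|\, \lambda q_1 + \bar{\lambda} q_2\big)
&= D_{\mathrm{JS}}\!\big(\mathrm{Bern}(\lambda p_1 + \bar{\lambda} p_2) \,\big\|\, \mathrm{Bern}(\lambda q_1 + \bar{\lambda} q_2)\big) \\
&= D_{\mathrm{JS}}\!\big(\lambda\,\mathrm{Bern}(p_1) + \bar{\lambda}\,\mathrm{Bern}(p_2) \,\big\|\, \lambda\,\mathrm{Bern}(q_1) + \bar{\lambda}\,\mathrm{Bern}(q_2)\big) \\
&\le \lambda\, D_{\mathrm{JS}}\!\big(\mathrm{Bern}(p_1) \,\big\|\, \mathrm{Bern}(q_1)\big)
   + \bar{\lambda}\, D_{\mathrm{JS}}\!\big(\mathrm{Bern}(p_2) \,\big\|\, \mathrm{Bern}(q_2)\big) \\
&= \lambda\, d_{\mathrm{JS}}(p_1 \,\|\, q_1) + \bar{\lambda}\, d_{\mathrm{JS}}(p_2 \,\|\, q_2).
\end{align*}
```

The second equality uses that a mixture of Bernoulli distributions is Bernoulli with the mixed parameter, and the inequality is the joint convexity of $D_{\mathrm{JS}}$ as an $f$-divergence.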
Summary: This paper introduces a novel framework for deriving *exactly tight* information-theoretic generalization bounds in machine learning using the binary Jensen-Shannon (JS) divergence. By leveraging a binarization technique for loss variables and supersample frameworks, the authors propose hypothesis-based and prediction-based bounds that address key limitations of prior work, including slow convergence rates and overestimation in deep neural networks. Experiments validate the bounds on synthetic and real-world datasets, demonstrating superiority over baselines like binary KL divergence and fast-rate bounds. Claims And Evidence: The claims are all supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods make sense for the problem. Theoretical Claims: The paper appears to be technically sound, but I have not carefully checked the details. Experimental Designs Or Analyses: The experiments cover three different classification tasks: generated Gaussian datasets, a 4-layer CNN on binarized MNIST, and a pretrained ResNet-50 model on CIFAR-10. The experiments are generally reasonable, but they lack evaluations on high-dimensional data, such as ImageNet. Supplementary Material: The supplementary material is code; I have not read it. Relation To Broader Scientific Literature: This paper is focused on the theoretical side and does not have a significant connection with the broader scientific literature. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths**: 1. This paper proposes a new approach to characterizing the relationship between expected empirical and population risks through a binarized variant of the Jensen-Shannon divergence, which achieves faster convergence compared to existing fast-rate and binary KL-based methods. 2. Results can be applied to stochastic convex optimization and extend to f-divergence/Wasserstein metrics. 3. 
Lemma 3.1 may hold significance beyond the context of generalization analysis, offering potential applications in broader settings. **Weaknesses**: 1. Corollary 3.12 requires $\delta_j L_n \leq \delta_j L_{\mu}$, which seems to be a stronger condition than Assumption 3.7. 2. Lack of experiments on larger datasets. Other Comments Or Suggestions: The related work lacks a discussion and comparison with the literature on PAC-Bayesian generalization bounds, such as Dupuis, Benjamin, et al. "Uniform Generalization Bounds on Data-Dependent Hypothesis Sets via PAC-Bayesian Theory on Random Sets." Journal of Machine Learning Research 25.409 (2024): 1-55. https://www.jmlr.org/papers/volume25/24-0605/24-0605.pdf Questions For Authors: 1. This paper introduces two reasons for producing tight generalization bounds. One is eliminating redundant random variables from the key mutual information terms (Line 179). Another is using the binary KL divergence (Line 141). Is it necessary to ignore redundant information in conjunction with binary KL divergence to improve the upper bound? 2. Your results rely on Assumption 3.7 ($L_n \leq L_{\mu}$). How restrictive is this assumption in practice? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 537o, Thank you for your thoughtful comments and questions! We address them below: --- **Assumption in Corollary 3.12** We agree that the assumption in Corollary 3.12 is stronger than Assumption 3.7. This limitation is acknowledged in Section 5, and we leave the task of relaxing this assumption to future work. Importantly, the core contributions of our paper do not depend on this assumption and are already applicable to a wide range of practical learning scenarios. --- **Experiments on Larger Datasets** While MNIST and CIFAR-10 are relatively simple by today’s standards, they remain common benchmarks in generalization studies (e.g., Harutyunyan et al., 2021; Hellström \& Durisi, 2022b; Wang \& Mao, 2024). Our bounds are designed to scale with arbitrary $n$ and sample distributions, so there is no indication they would deteriorate on larger datasets. ```Reviewer o7Wh``` also found the current experiments sufficient, and we believe they effectively demonstrate our contributions. --- **Relation to PAC-Bayesian Bounds** We agree that PAC-Bayesian bounds are closely connected to information theory, especially through KL divergence terms. However, our work focuses on bounding the *expected* generalization error, whereas PAC-Bayesian approaches typically emphasize *high-probability* bounds. Therefore, the two are not directly comparable. This distinction is also acknowledged by ```Reviewer o7Wh```. --- **On Tighter Bounds** Yes, recent improvements in information-theoretic generalization bounds typically fall into two categories: (1) refining the dependence between mutual information and generalization error—where we propose the binary JS divergence, and (2) improving the information measure itself—where we propose SICIMI for hypothesis-based bounds and bl-MI for prediction-based bounds. These two approaches complement each other to achieve the tightest bounds. 
--- **On the Restrictiveness of Assumption 3.7** Assumption 3.7 effectively assumes non-negative generalization error, which is typically satisfied by well-trained models. In practice, test performance usually lags behind training performance, and this is exactly the reason why we need generalization analysis. Similar assumptions are also adopted in prior works [1], and it has been shown to always hold for certain algorithms such as Gibbs sampling [2]. --- [1] Estimation of generalization error: random and fixed inputs. Statistica Sinica, 2006. [2] An exact characterization of the generalization error for the Gibbs algorithm. NeurIPS, 2021.
Summary: The paper discusses tight information-theoretic bounds for the generalization error. The bound is general and can be applied to any machine learning model. This line of work is based on the seminal works of Xu and Raginsky (2017) and follow-up works that use the mutual information between training data and the output of the learning algorithm as a measure of generalization. The research program is motivated by the goal of obtaining a tight computable bound. Here are the highlights: * The paper uses the supersample idea (Section 2.2) and a data-processing-like inequality for the Jensen-Shannon divergence (Lemma 3.1) to obtain its results. * Generalization bounds are obtained in Theorem 3.2 (where $I(W;U_i|\tilde{Z}^0_i)$ is used) and Theorem 3.6 (where $I(L^0_i;U_i)$ is used). * Proposition 3.4 shows that the bound based on $I(W;U_i|\tilde{Z}^0_i)$ is strictly tighter. * In section 3.3, the bound in Theorem 3.6 is shown to be tight for various cases. Finally, some extensions are presented in Section 4 based on f-divergence, and the experiments are presented in Section 6. Claims And Evidence: The main claim is tighter information-theoretic bounds backed by proofs, which seem to be sound. The tightness of the bound is shown in various experimental results as well as theoretically proven. Methods And Evaluation Criteria: The paper is mainly a theoretical one. The main idea is the use of an inequality based on the Jensen-Shannon divergence and relating it to a mutual information term. The bound is shown to be provably tighter and, in some cases, exactly tight. The key is using the supersample framework and a data-processing-like inequality for the Jensen-Shannon divergence (Lemma 3.1). The key improvement with respect to previous bounds is that the bound is a sum over single samples, with the selector random variable in the MI term conditioned on a single sample. I will comment on the utility of these bounds later. 
Theoretical Claims: The theoretical claims are properly presented, and the proofs are well-readable and correct. I checked the proofs of Lemma 3.1, Theorem 3.2, and Proposition 3.4 in-depth and looked rapidly at other proofs, which mostly utilize a generally similar proof strategy. Experimental Designs Or Analyses: Experiments are conducted for MNIST and CIFAR-10, which are considered simple datasets by today's standards. Nonetheless, for generalization error analysis, it is sufficient. Many generalization-error bounds are already vacuous or do not apply to ResNet-50 on CIFAR-10. The main issue is the low number of training samples used in the MNIST and Gaussian experiments. Supplementary Material: The supplementary materials consist mainly of the proofs and a few more experiments. I checked the proofs as explained above. Relation To Broader Scientific Literature: The paper is about the generalization error analysis of learning algorithms. The approach is quite specific and therefore not directly connected to other bounds like PAC-Bayesian or Rademacher-complexity-based bounds, which is fine. Essential References Not Discussed: Xu and Raginsky themselves cite the original work, where the information-theoretic generalization bound is presented: Russo and Zou, "How much does your data exploration overfit? Controlling bias via information usage". It is fair to say that this is the seminal work on the information-theoretic generalization bound, and should be cited. Other Strengths And Weaknesses: **Strengths:** The paper is well written, and the exact tightness is a merit. **Weakness:** I would like to clarify a dilemma I have with these information-theoretic results. To put it simply, it is not clear what insights these bounds give us about learning. Naively, it seems that these results do not provide any additional insight beyond the fact that the learning algorithm should not memorize the training data, or in this case, it should not memorize the procedure of training data selection. 
Besides, the prediction-based generalization bound already involves losses that directly contribute to the precise generalization error, and I wonder whether we are just verifying an algebraic equality in a self-fulfilling manner. The paper in particular lacks more extensive insights about the results. It presents theorems and plots the numerical results. It is crucial that the authors clarify what these bounds tell us about learning, how they can be employed in practice, and why the machine learning community should care about it. Note that the seminal work of Russo and Zou had interesting insights. Other Comments Or Suggestions: I wonder if the dependence of $W$ on the training data can be made more explicit for better readability. Questions For Authors: As I alluded to in my comment above, I wonder if Lemma 3.1 can be obtained using a data-processing inequality for f-divergences, knowing that the JS divergence is one (maybe something can be found in "f-Divergence Inequalities" by Igal Sason and Sergio Verdú?). The authors mention the convergence rate of $O(1/n)$ "frequently observed in practical learning scenarios". Could the authors provide additional references for this claim? Regarding the bounds in Table 1, including what was presented in the paper: I am not sure if one can read much from the appearance of the term $O(1/n)$ or $O(1/\sqrt{n})$, because the way the other terms scale with $n$ impacts the final dependence. There are many norm-based bounds for deep nets that scale poorly with $n$ despite the explicit $O(1/\sqrt{n})$ dependence. It is difficult to guess the trend from the plots. It might be worthwhile to study the dependence via some curve fitting. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer o7Wh, thanks for your thorough reading and constructive questions! We address your questions as follows: --- **On the Nature and Significance of Information-Theoretic Results** This is an insightful and important question involving many works studying information-theoretic bounds. We will clarify the significance of our work from two key perspectives: **1. Understanding the Limits of the Information-Theoretic Approach** Recent efforts to tighten bounds have proceeded along two main directions: - **Improving dependencies between mutual information and generalization error:** progressing from square-root bounds → binary KL → fast-rate → and now, our binary JS. - **Refining the mutual information term itself:** evolving from MI → CMI → $f$-CMI → e-CMI → ld-MI → and finally, our bl-MI. These efforts have brought increasingly tighter (though still suboptimal) bounds. This naturally raises the question [1]: *For which learning problems is the CMI framework expressive enough to accurately estimate the optimal worst-case generalization error?* Or, are there learning settings where the CMI framework must fail? Some prior works (e.g., Haghifam et al., 2023) explore this on SCO problems. Our work provides a definitive answer: the information-theoretic approach is capable of achieving exactly tight bounds across a broad range of learning scenarios. In doing so, we have adequately addressed this open question and marked a meaningful milestone in this direction. **2. Strengthening Theoretical Guarantees for Downstream Applications** It should be noted that our results are developed in a very general setting. They can become especially valuable when applied to more specific contexts. Two particularly promising directions are: - **Out-of-distribution (OOD) generalization:** Prior works [2,3] use information-theoretic bounds to identify key components for OOD generalization and propose loss-level optimization objectives (e.g., Eq. 
(5) in [2], Sec. VII.D in [3]). Our results can be adopted to provide a more robust theoretical foundation for such methods.
- **Understanding noisy, iterative learning algorithms:** For algorithms like SGD and SGLD, information-theoretic bounds (e.g., MI [4], CMI [5]) have been used to analyze the trajectory of the hypothesis and link algorithm behavior with interesting factors like gradient variance or landscape flatness. We believe our loss-based bounds will further advance this line of work to analyze the loss trajectory (e.g., [6]).

These applications highlight the practical value of our results, though a detailed exploration lies beyond this paper's scope. --- **Alternative Proof for Lemma 3.1** Thank you for this perspective. While intriguing, we currently do not see a direct derivation of Lemma 3.1 from the $f$-divergence-based data processing inequality. This route may be able to characterize relationships between $d_{\text{JS}}$ and $I_{\text{JS}}$, but not Shannon's mutual information. We consider this a promising direction for future work. --- **On the $O(1/n)$ Convergence Rate** (Strongly) convex optimization is a well-known case exhibiting $O(1/n)$ convergence (e.g., [7]). We agree that information-theoretic bounds with $1/n$ or $\sqrt{1/n}$ terms do not necessarily reflect their real rates. Nevertheless, our aim is not to claim a universal $1/n$ bound, but rather to point out that earlier bounds with explicit $\sqrt{1/n}$ scaling may be inherently suboptimal when faster rates are achievable. As suggested, we fit the generalization error curves in Figure 3 using $y = ax^b$, and report:

| Dataset | Gaussian | MNIST | CIFAR-10 |
|---------|----------|-------|----------|
| $b$ | -1.084 | -0.585 | -0.326 |

This indicates a convergence rate near $O(1/n)$ for synthetic data and closer to $O(\sqrt{1/n})$ on real datasets.
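As an aside, the $y = ax^b$ fit reported in the table can be reproduced in a few lines via linear regression in log-log space. The sketch below uses synthetic stand-in data, since the actual generalization-error curves from Figure 3 are not reproduced here:

```python
import numpy as np

# Synthetic stand-in for generalization-error measurements at sample sizes n,
# fabricated to follow y = a * n^b (b = -1, mimicking the O(1/n) trend the
# rebuttal reports for the Gaussian dataset) with mild multiplicative noise.
rng = np.random.default_rng(0)
n = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
a_true, b_true = 2.0, -1.0
y = a_true * n**b_true * np.exp(rng.normal(0.0, 0.02, size=n.shape))

# Fit y = a * n^b by linear regression in log-log space:
# log y = log a + b * log n, so the slope recovers the exponent b.
b_hat, log_a_hat = np.polyfit(np.log(n), np.log(y), 1)
a_hat = float(np.exp(log_a_hat))
print(f"estimated exponent b = {b_hat:.3f}, prefactor a = {a_hat:.3f}")
```

For real curves, `n` and `y` would be replaced with the measured sample sizes and generalization errors; `scipy.optimize.curve_fit` gives an equivalent direct fit.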
--- **Other Points** We now cite Russo and Zou’s seminal work, and explicitly denote $W = \mathcal{A}(\mathbf{Z})$ to clarify the dependence on training data. Regarding the number of training samples, we note that the curves for different bounds already converge closely at large $n$, and thus increasing $n$ may not further enhance this comparison. --- [1] Information complexity of stochastic convex optimization: Applications to generalization, memorization, and tracing. ICML, 2024. [2] On $f$-Divergence Principled Domain Adaptation: An Improved Framework. NeurIPS, 2024. [3] How Does Distribution Matching Help Domain Generalization: An Information-theoretic Analysis. TIT, 2025. [4] On the generalization of models trained with SGD: Information-theoretic bounds and implications. ICLR, 2022. [5] Sharpened generalization bounds based on conditional mutual information and an application to noisy, iterative algorithms. NeurIPS, 2020. [6] Analyzing generalization of neural networks through loss path kernels. NeurIPS, 2023. [7] Train faster, generalize better: Stability of stochastic gradient descent. ICML, 2016. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their answers. Particularly, thanks for the comments on the convergence rate. I suggest including this discussion in the final version and clarifying these subtleties. Regarding your answer to \textit{Significance of Information-Theoretic Results}, the authors have tried to further clarify their contribution in the first point. I am not questioning this, and I acknowledge the progress made in this paper. However, my question was more general: \textit{what are we learning from these bounds? how can they impact machine learning research and practice?} The examples provided in the second bullet point provide promising directions to address this question.
However, I cannot think of similar examples with concrete outcomes in the previous literature on IT generalization bounds, so I tend to think that the lack of applications is a weakness of this framework. Overall, I think the paper clearly passes the bar for acceptance, so I am changing my score to reflect that.
Telling Peer Direct Effects from Indirect Effects in Observational Network Data
Accept (poster)
Summary: Estimating causal effects in observational network data is challenging due to peer interactions. Existing methods struggle to distinguish different types of peer effects. To address this, the proposed approach defines a general setting that considers peer direct effects, peer indirect effects, and individual treatment effects, along with their identification conditions. Using causal mediation analysis tailored for network data, the method differentiates these effects. It incorporates attention mechanisms to capture varying neighbor influences and employs multi-layer graph neural networks (GNNs) to explore high-order neighbor effects. Additionally, the Hilbert-Schmidt Independence Criterion (HSIC) enhances model robustness. Extensive experiments on semi-synthetic and real-world recommendation datasets validate the approach, with potential applications in social networks and public health interventions. Claims And Evidence: This paper is well-structured and supported by evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The identifiability of PDE, PIE, and STE is a natural extension of previous works. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: This contribution enhances our ability to optimize intervention strategies in public health, marketing, and social influence analysis. Peer effects are an emerging topic in causal inference. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strength. They introduced the concepts of self-treatment effects, peer effects, and both direct and indirect peer effects within a many-to-one framework. Additionally, they proposed a novel gDIS algorithm, which demonstrates strong performance. Weakness. See my questions. Other Comments Or Suggestions: In the simulations, more details regarding the baseline models are required. Specifically, what types of peer and self-treatment effects do they evaluate?
Typos: Line 041 (right) Questions For Authors: 1. Do you plan to move the motivating example of the product promotion campaign to the main body of the paper? 2. The notation W_{t_i} and W_{y_i} appears to be ambiguous. Do they depend on the realized values of T and Y for individual i? 3. You have defined peer effects through Definitions 3.5 and 3.6. However, I believe that Definition 3.6 is not a definition but rather a proposition. 4. Does your definition of peer effects align with previous works in the case of one-to-one relationships? 5. What does the total of PIE, PDE, and STE represent? Does it correspond to the total effect of changing w_{t_i}' to w_{t_i} and 0 to 1 for individual i on the outcome? 6. In simulations, why do the estimates for baselines and gDIS differ significantly? Do baselines estimate different peer and self-treatment effects? Is it possible to compute the ground-truth values of PDE, PIE, and STE? 7. How did you compute the error for each estimator? The values of counterfactuals in each effect are unobserved. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful comments and the high recognition of the value and importance of our work. **Comment:** *Baseline models details needed: Which peer and self-treatment effects do they evaluate?* In our simulation, the baseline models provide estimates of the overall peer effects (PE) and self-treatment effects (STE). Specifically, the baseline methods estimate:
- **Peer Effects (PE):** This represents the aggregated influence of peers on an individual's outcome. These models capture the total effect that the treatments and outcomes of an individual's neighbors have on that individual but do not distinguish between the direct influence (Peer Direct Effects, PDE) and the indirect influence (Peer Indirect Effects, PIE).
- **Self-Treatment Effects (STE):** This measures the effect of an individual's own treatment on their outcome.

Thus, while the baseline models evaluate the combined peer effect and the self-treatment effect, they do not explicitly decompose the peer effects into PDE and PIE. Our proposed gDIS framework, on the other hand, is designed to disentangle these two components, offering a more detailed analysis of the peer influences within network data. **Q1:** *Product promotion example to main paper?* We plan to include the product promotion campaign example in the introduction, alongside the existing example from epidemiology. We will move the proof to the appendix to make space for the example. **Q2:** *$W_{t_i}$, $W_{y_i}$ notation appears to be ambiguous. Do they depend on the realized values of $T$ and $Y$ for individual $i$?* $W_{t_i}$ and $W_{y_i}$ do not depend on the realized values of $T_i$ or $Y_i$. They represent summaries of the treatments and outcomes of unit $i$'s neighbors. $W_{t_i}$ and $W_{y_i}$ are illustrated in Figure 2b and explained in Line 313 (left column), and Appendix B.
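To make the neighbor summaries concrete, here is a minimal sketch assuming a simple mean aggregation over neighbors; the aggregation choice and all values are illustrative assumptions, not necessarily the paper's exact summary function:

```python
import numpy as np

# Hypothetical 4-node undirected network. W_t[i] / W_y[i] summarize the
# treatments / outcomes of node i's neighbors via a simple mean.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # symmetric adjacency matrix
t = np.array([1.0, 0.0, 1.0, 1.0])         # individual treatments T_i
y = np.array([0.8, 0.2, 0.6, 0.9])         # individual outcomes Y_i

deg = A.sum(axis=1)                        # neighbor counts
W_t = A @ t / deg                          # mean neighbor treatment
W_y = A @ y / deg                          # mean neighbor outcome
print(W_t)  # node 0's neighbors are 1 and 2, so W_t[0] = (0.0 + 1.0) / 2
```

Note that `W_t` and `W_y` are computed from the neighbors' values only, never from node $i$'s own $T_i$ or $Y_i$, matching the answer to Q2.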
**Q3:** *Definition 3.6 as proposition?* We agree that Definition 3.6 may be more appropriately stated as a proposition, as it presents a reformulation and decomposition of the peer effects introduced in Definitions 3.4 and 3.5. Our intention was to highlight this decomposition as a central conceptual component of our framework, which is why we initially presented it as a "definition." However, we appreciate the reviewer's point and will revise the manuscript to present it as a proposition.

**Proposition** (Peer Effect Decomposition). The peer effect (PE) can be decomposed into the sum of the peer direct effect (PDE) and the peer indirect effect (PIE). That is,

$$\text{PE}(w'_{t_i}) = \text{PDE}(w'_{t_i}) + \text{PIE}(w'_{t_i}), \tag{3a}$$

$$\text{PE}(w_{t_i}) = \text{PDE}(w_{t_i}) + \text{PIE}(w_{t_i}). \tag{3b}$$

This proposition shows that the total peer effect integrates both the direct and indirect pathways of peer influence. Equation (3a) and Equation (3b) correspond to the vaccination and product promotion examples, respectively.

**Q4:** *Alignment with previous work?* Our definition of peer effects aligns with prior works in the one-to-one setting, except that our peer treatment and outcomes are summaries of the treatments and outcomes of the neighbors of $i$, respectively.

**Q5:** *Total effect representation?* Yes, you are right when there is no interaction between peer treatment $W_{t_i}$ and individual treatment $T_i$. We have assumed this in our work. This is a very reasonable assumption, as individual $i$'s treatment and peer treatment do not affect each other directly, and the reason for an individual to take the same treatment as their peers is that the individual and their peers have similar features (characteristics).

**Q6:** *Baseline vs.
gDIS differences?* a) In Table 2 of the paper, we made a mistake in summarizing the results for gDIS. We have updated the table and present the revised version at the following link (due to space limitations): https://anonymous.4open.science/r/icmlSupp-4556/table3.png b) The baseline methods' estimates of PE (peer effects) and STE (self-treatment effects) are formally consistent with our model; however, they do not explicitly decompose the PE into PDE (peer direct effects) and PIE (peer indirect effects). c) Using the given structural causal model (Equation (19) in Appendix F), we can obtain the potential outcomes under different treatment conditions. **Q7:** *Error computation for counterfactuals?* The potential outcomes with respect to $T_i$ for each $i$ are known since the structural causal model is assumed to be known for the synthetic dataset. Hence, the ground-truth values of STE, PE, PDE, and PIE can be derived. Subsequently, we compute the estimation error using the PEHE (Precision in Estimation of Heterogeneous Effect) metric by comparing the estimated effect with the ground-truth causal effect, i.e., the difference in potential outcomes.
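The PEHE computation described in the Q7 answer can be sketched as follows; the function and variable names are illustrative, not taken from the paper's released code, and the sketch uses the root form of the metric:

```python
import numpy as np

def pehe(y1_true, y0_true, y1_hat, y0_hat):
    """Root-mean-squared difference between true and estimated
    individual treatment effects (the PEHE metric)."""
    tau_true = np.asarray(y1_true) - np.asarray(y0_true)
    tau_hat = np.asarray(y1_hat) - np.asarray(y0_hat)
    return float(np.sqrt(np.mean((tau_true - tau_hat) ** 2)))

# Toy example: the true effect is 2.0 for every unit, and the
# estimator overshoots each individual effect by a constant 0.5.
y0 = np.zeros(4)
y1 = y0 + 2.0
err = pehe(y1, y0, y1_hat=y0 + 2.5, y0_hat=y0)
print(err)  # 0.5
```

The same function applies to PDE, PIE, and STE estimates once the corresponding pairs of ground-truth and estimated potential outcomes are available.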
Summary: This paper studies the causal effect estimation problem without the SUTVA assumption. Specifically, the authors identify the overlooked problem that existing methods cannot distinguish between peer (in)direct effects and self-treatment effects. The authors propose a method called gDIS to estimate these estimands in network scenarios. The proposed method is based on the back-door criteria, containing three stages to estimate the required density functions. The experimental results show its effectiveness. ## update after rebuttal All my concerns are well addressed. I will keep my positive score. Claims And Evidence: The claims in this paper are supported by clear proofs and experimental results. However, I am a bit confused by the DAG presented in Figure 2b. Q1: Since it is 'Many-to-One' interference, units will affect each other. For the summary causal graph of __all units__, is it reasonable to use the DAG tool to represent the causal relationship for their interference? Methods And Evaluation Criteria: The proposed gDIS method makes sense for the network effect estimation problem. Theoretical Claims: I checked all proofs roughly and did not find any problems. Experimental Designs Or Analyses: The experimental designs and analyses are sound overall. However, I still have the following questions: Q2: There is still a lack of comparison with existing peer direct/indirect effect methods cited in the introduction. Is it because they are of the 'One-to-One' type and cannot be applied to this scenario? Q3: The potential outcome simulation is through an iteration process. How do you obtain the ground-truth direct/indirect effects? Supplementary Material: I have reviewed all appendixes, but I did not check the code. Relation To Broader Scientific Literature: The studied problem is an overlooked problem, and this paper provides a practical and reasonable solution to estimating PDE/PIE and STE.
Essential References Not Discussed: The key related works are cited in the paper. Other Strengths And Weaknesses: Strengths 1. The setting is interesting. 2. The proposed estimator is reasonable and effective. Weaknesses 1. Lack of comparison with PIE/PDE estimators. Other Comments Or Suggestions: typos: 1. Line 042: ana -> and 2. Line 087: method which do not. 3. Line 109: methods not -> methods do not 4. Line 198: frrom -> from Questions For Authors: Please see Q1-Q3 in the above discussion. Q4: Could you provide more experimental results with a wider range of $\lambda$? The paper claims $\lambda=0.3$ is optimal, but it seems only suboptimal as the curves in Figure 5 are still descending as $\lambda$ increases. Q5: Could you clarify how to choose the other hyperparameters shown in Table 6 and provide the hyperparameter space of the compared baselines? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. Detailed responses to each specific comment are provided below. **Weakness 1:** *Lack of comparison with PIE/PDE estimators.* The PIE/PDE estimators cited in our introduction (e.g., VanderWeele et al., Shpitser et al.) are designed for One-to-One interference settings, assuming dyadic interactions (e.g., household or pairwise influence). Our work focuses on the more general and realistic Many-to-One network setting. As such, these estimators cannot be directly or fairly compared with our method. Instead, we compare with recent network-level causal inference methods (e.g., NetEst, TNet, 1-GNN), which are designed to handle complex interference and represent the state-of-the-art in our setting. --- **Other Comments Or Suggestions: Typos** We thank the reviewer for the careful reading. We have corrected all. --- **Question 1:** *Since it is "Many-to-One" interference, units will affect each other. For the summary causal graph of all units, is it reasonable to use the DAG tool to represent the causal relationship for their interference?* The DAG shown in Figure 2b illustrates the unit-level causal relationships from the perspective of a single unit $i$, depicting how $i$'s outcome is affected by their own treatment and neighbors’ treatments and outcomes. This type of localized DAG is commonly used in the literature on causal inference with interference [1]. [1] Jiang, Song, and Yizhou Sun. "Estimating causal effects on networked observational data via representation learning." *Proc. ACM Int. Conf. Inf. & Knowl. Manag.* 2022. --- **Question 2:** *There is still a lack of comparison with existing peer direct/indirect effect methods cited in the introduction. Is it because they are of the "One-to-One" type and cannot be applied to this scenario?* Yes. The methods cited in the introduction (e.g., VanderWeele et al., Shpitser et al.) are designed for One-to-One settings and assume dyadic interactions. 
Our setup considers Many-to-One interference, which is more general and realistic in network data. These earlier methods cannot be directly or fairly compared in our setting. Hence, we have compared with recent methods (e.g., NetEst, TNet, 1-GNN) that support many-to-one relationships. --- **Question 3:** *The potential outcome simulation is through an iteration process. How do you obtain the ground-truth direct/indirect effects?* As stated in [2-3], Gibbs sampling provides a practical method for approximating the causal effects of interest in the presence of complex interdependencies among individuals in a network. This is achieved by iteratively sampling from the conditional densities of each individual. Our iteration continues until the difference between the values of $Y$ in successive steps is less than $1 \times 10^{-5}$, thereby ensuring that the generated data has reached a stable state. Then, using a given structural causal model (Equation 19 in Appendix F), we can obtain the potential outcomes under different treatment conditions $T$. The ground-truth direct/indirect effects can be obtained from the potential outcomes. [2] Zhao, Ziyu, et al. "Learning individual treatment effects under heterogeneous interference in networks." *ACM Trans. Knowl. Discov. Data* 18.8 (2024): 1-21. [3] Tchetgen Tchetgen, Eric J., Isabel R. Fulcher, and Ilya Shpitser. "Auto-g-computation of causal effects on a network." *J. Am. Stat. Assoc.* 116.534 (2021): 833-844. --- **Question 4:** *Could you provide more experimental results with a wider range of $\lambda$? The paper claims $\lambda = 0.3$ is optimal, but it seems only suboptimal as the curves in Figure 5 are still descending as $\lambda$ increases.* In our experiments, we evaluated $\lambda$ values ranging from 0 to 0.5 (results available at the [anonymous link](https://anonymous.4open.science/r/icmlSupp-4556/2.jpg)). We found that $\lambda = 0.3$ produced the best overall performance.
When $\lambda$ exceeds 0.3, it starts to over-penalize the feature representations, leading to a decline in performance. --- **Question 5:** *Could you clarify how to choose the other hyperparameters shown in Table 6 and provide the hyperparameter space of the compared baselines?* For our method, we performed a grid search on the validation set to select the key parameters (learning rate, hidden dimensions, and regularization strength) that yielded the best performance (shown in Table 6, Appendix H). Following this comment, we have updated the search space for these key parameters in Appendix H, as shown in [anonymous link](https://anonymous.4open.science/r/icmlSupp-4556/table1.png). For benchmark methods, we followed the recommended settings (as shown in [anonymous link](https://anonymous.4open.science/r/icmlSupp-4556/table2.png)) from their original papers and publicly available implementations.
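The iterative outcome-generation procedure from the Question 3 answer (iterating until successive $Y$ values differ by less than $1 \times 10^{-5}$) can be sketched generically; the update rule below is a hypothetical contraction stand-in, not the structural causal model of Equation (19):

```python
import numpy as np

def simulate_outcomes(update, y0, tol=1e-5, max_iters=10_000):
    """Iterate a network outcome update until successive Y vectors
    differ by less than `tol` in max-norm (the stability criterion
    described in the Question 3 answer)."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iters):
        y_next = update(y)
        if np.max(np.abs(y_next - y)) < tol:
            return y_next
        y = y_next
    raise RuntimeError("outcome simulation did not stabilize")

# Hypothetical contraction update: each node's outcome moves toward the
# mean of its neighbors' outcomes plus a treatment term; the 0.5 damping
# factor guarantees convergence to a unique fixed point.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A /= A.sum(axis=1, keepdims=True)   # row-normalized adjacency
treat = np.array([1.0, 0.0, 0.0])
y_stable = simulate_outcomes(lambda y: 0.5 * A @ y + treat, np.zeros(3))
```

Running the same stabilization under different fixed treatment vectors yields the potential outcomes from which ground-truth effects can then be read off.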
Summary: This paper focuses on differentiating between various types of causal effects in network data: peer-direct effects (PDE), peer-indirect effects (PIE), and self-treatment effects (STE). The authors propose a general setting to identify and estimate these effects, with theoretical identification conditions and proofs. They developed a method called gDIS (group-level Direct and Indirect effects estimator), which leverages graph neural networks (GNNs) with attention mechanisms and Hilbert-Schmidt Independence Criterion (HSIC) regularization to estimate these effects. Claims And Evidence: The claims regarding the identification of causal effects appear well-supported by the theoretical results. The empirical claims about the superior performance of gDIS are backed by comprehensive experimental results comparing against several baselines. Methods And Evaluation Criteria: The proposed methods appear appropriate for the problem. The evaluation of semi-synthetic datasets with ground truth is a standard approach in causal inference research, and the metrics used (MSE and PEHE) are appropriate for evaluating causal effect estimates. Theoretical Claims: I reviewed the theoretical claims related to identification conditions in Section 4.1, including Lemma 4.1 and Theorem 4.2. The proofs appear sound, with appropriate application of do-calculus and backdoor adjustment principles. Experimental Designs Or Analyses: The experimental design comparing against multiple baselines and analyzing performance under various conditions (e.g., treatment flip rates, hyperparameter sensitivity) is thorough. The simulation of treatments and outcomes follows established practices in the causal inference literature. Supplementary Material: I reviewed the supplementary material, particularly focusing on the simulation procedure, experimental setup, and time complexity analysis. 
Relation To Broader Scientific Literature: The work clearly relates to the literature on causal inference in networks and mediation analysis, properly contextualizing its contributions in relation to existing approaches. Essential References Not Discussed: No major omissions were found, though recent work on continuous treatment effects in networks could be relevant. Other Strengths And Weaknesses: Strengths: 1. The paper addresses an important gap in the literature by differentiating between direct and indirect peer effects in network settings, which is crucial for many real-world applications such as public health interventions and marketing campaigns. 2. The theoretical foundation is solid, with clear identification conditions and proofs that provide guarantees for the proposed methods. 3. The use of causal mediation analysis principles to differentiate between different types of peer effects is elegant and well-executed. Weaknesses: 1. The assumption of network unconfoundedness (Assumption 3.8) is quite strong and may be violated in many real-world settings. While the authors acknowledge this limitation in the conclusion, more discussion on potential approaches to relax this assumption would strengthen the paper. 2. The complexity of the model with multiple components (GNN, attention, HSIC) makes it potentially difficult to implement and tune for practitioners. A more detailed analysis of the relative contributions of each component could help guide practitioners on which aspects are most crucial. 3. The simulated outcomes using Gibbs sampling may not fully capture the complex interdependencies in real network data. A more detailed sensitivity analysis of different data generation processes would strengthen the validity of the results. 4. The paper focuses primarily on binary treatments, while in many real-world scenarios, treatments might be continuous or multi-valued. Extending the approach to handle such cases would increase its practical utility. 5. 
While the paper mentions the time complexity analysis in the appendix, a more thorough discussion of scalability to very large networks would be beneficial given the computational demands of GNNs with attention mechanisms. Other Comments Or Suggestions: None Questions For Authors: 1. How robust is the approach to violations of the network unconfoundedness assumption? Have you conducted any sensitivity analyses to assess this? 2. Could the gDIS framework be extended to continuous treatments or more complex outcome structures (e.g., multivariate outcomes)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and recognition of our work's importance. **W1:** Network unconfoundedness assumption... more discussion needed We plan to explore: a) Instrumental Variable methods [1] to introduce variables influencing treatment but not outcomes; b) Hidden confounder modeling [2] to leverage graph-based representation learning for capturing unobserved confounding; c) Causal discovery [3] to employ data-driven methods for identifying adjustment sets and uncovering hidden confounding pathways. 1. Angrist, J.D., et al. Identification of causal effects using instrumental variables. 2. Louizos, C., et al. Causal effect inference with deep latent-variable models. 3. Colombo, D., et al. Learning high-dimensional directed acyclic graphs with latent and selection variables. --- **W2:** Multiple components (GNN, attention, HSIC) - each component contributions We'll add to Section 4.2: >**GNN Layers:** GNNs capture network interactions. Without them, our model couldn't leverage network structure for modeling peer effects, leading to oversimplified estimations that ignore relational dependencies. >**Attention Mechanism:** Enables assigning different weights to neighbors based on feature similarity, useful where peer influence isn't uniform. > **HSIC Regularization:** Mitigates overfitting by encouraging independence between node features and embeddings, reducing spurious correlations. --- **W3:** More detailed sensitivity analysis of data generation processes needed Gibbs sampling provides a practical method for approximating causal effects with complex interdependencies in networks by iteratively sampling from conditional densities [4]. In our data generation process, we varied noise levels and found that—since our iteration continues until the difference between values of $Y$ in successive steps is less than $1 \times 10^{-5}$—the data exhibits high stability across different noise levels. 
This indicates our process effectively captures complex network interactions. We plan to explore additional data generation methods and conduct further sensitivity analyses. 4. Zhao, Z., et al. Learning individual treatment effects under heterogeneous interference in networks. --- **W4:** Extension to non-binary treatments would increase practical utility Our framework is flexible and does not require the treatment to be binary. When computing PDE and PIE, the neighbor treatment exposure variable $W_{t_i}$ supports continuous, multi-valued, or binary treatments. For STE, we've followed standard binary treatment approaches, but as a CATE (conditional average treatment effect) problem, it's extendable to continuous/multi-valued treatments [5-6]. 5. Hirano, K., et al. The propensity score with continuous treatments. 6. Feng, P., et al. Generalized propensity score for estimating the average treatment effect of multiple treatments. --- **W5:** Scalability discussion needed on large-scale datasets We'll add the following to Appendix I: > Although attention mechanisms introduce additional overhead compared to standard GNNs, we optimize sparse graph processing using PyTorch Geometric [7], enabling efficient computation even on large-scale datasets. Additional strategies can enhance scalability: sampling-based techniques like those in GraphSAGE [8] can limit the number of neighbors sampled for each node, reducing computational demands while preserving structural information. Approximation or sparsification techniques for the attention layer can also alleviate computational burdens [9]. 7. Fey, M., et al. Fast graph representation learning with PyTorch Geometric. 8. Hamilton, W., et al. Inductive representation learning on large graphs. 9. Child, R., et al. Generating long sequences with sparse transformers. --- **Q1:** Robustness to violations of network unconfoundedness We're currently analyzing violations of the network unconfoundedness assumption.
Our preliminary expectation is that robustness depends on factors like the strength of association between hidden confounders and treatment/outcome. Future work will include comprehensive sensitivity analyses to quantify how these factors affect performance. --- **Q2:** Extension to continuous treatments or complex outcomes Yes, gDIS can handle continuous treatments and complex outcome structures. Our framework is flexible with no binary treatment requirement. When computing PDE and PIE, the neighbor treatment exposure variable $W_{t_i}$ accommodates any treatment type. For STE, our implementation follows standard causal inference with binary treatment effects, but as STE estimation is a typical CATE problem, our approach can extend to continuous or multi-valued treatments using existing techniques [6]. For multivariate outcomes, the framework can be adapted using joint modeling approaches (e.g., multi-output regression [10]) to capture dependencies among multiple outcomes. 10. Sener, O., et al. Multi-task learning as multi-objective optimization.
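As an aside, the HSIC regularizer mentioned under W2 (encouraging independence between node features and embeddings) has a standard biased empirical estimator with Gaussian kernels; the sketch below is a generic illustration of that estimator, not the gDIS implementation:

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels:
    HSIC = trace(Kx @ H @ Ky @ H) / n^2, where H centers the kernels.
    Near zero when X and Y are independent; grows with dependence."""
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    n = X.shape[0]

    def gram(Z):
        sq = np.sum(Z**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-d2 / (2.0 * sigma**2))

    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return float(np.trace(gram(X) @ H @ gram(Y) @ H)) / n**2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
dep = hsic(x, x + 0.1 * rng.normal(size=x.shape))  # strongly dependent pair
ind = hsic(x, rng.normal(size=(200, 2)))           # independent draws
print(dep > ind)
```

In training, such a term would typically be added to the loss with a weight $\lambda$ to penalize dependence between inputs and learned representations.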
Summary: The paper addresses the challenge of estimating treatment effects in observational network data with network interference. The authors propose a framework to decompose peer effects into direct and indirect peer effects and provide theoretical analyses of the identification conditions. Additionally, the paper introduces gDIS, a novel algorithm that leverages graph neural networks and attention mechanisms to estimate effects in network data. Claims And Evidence: Yes, the paper provides the theoretical proofs and empirical evidence to support the claims. Methods And Evaluation Criteria: The authors validated the proposed approach on simulated and semi-synthetic data and compared the model performance with six baselines. Theoretical Claims: Yes Experimental Designs Or Analyses: I did not find particular issues with the experimental design. Supplementary Material: Appendix A - I. Relation To Broader Scientific Literature: This paper fits well into the classic literature on estimating peer effects on networks for observational studies. Essential References Not Discussed: Essential references were discussed in this work. Other Strengths And Weaknesses: **Strengths:** 1. The paper tackles a significant challenge in estimating peer effects under network interference. The paper employs GNN with attention mechanisms to capture network structure. 2. Theoretical justification and empirical results of comparison with other classic estimators are provided. **Weaknesses:** 1. The effectiveness of the proposed approach depends on strong assumptions, which may be challenging to satisfy in real-world observational data. Other Comments Or Suggestions: No Questions For Authors: 1. Are there any specific requirements for the network structure in order to ensure the validity of the proposed approach? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. Detailed responses to each specific comment are provided below. **W1:** *The effectiveness of the proposed approach depends on strong assumptions, which may be challenging to satisfy in real-world observational data.* Our method is based on three key assumptions: Network Unconfoundedness, Network Consistency, and Network Overlap. These assumptions are commonly made in causal inference on networked data [1]. In this paper, our focus is on the challenge of distinguishing and estimating different types of causal effects in network data (i.e., the *Peer Direct Effect (PDE)*, *Peer Indirect Effect (PIE)*, and *Self-Treatment Effect (STE)*), and we adopt the assumptions commonly used in the network causal inference literature. We have included discussions regarding the limitations of these assumptions in the "Limitations & Future Work" section. - [1] Jiang, Song, and Yizhou Sun. "Estimating causal effects on networked observational data via representation learning." *Proceedings of the 31st ACM International Conference on Information & Knowledge Management*, 2022. **Q1:** *Are there any specific requirements for the network structure in order to ensure the validity of the proposed approach?* Our proposed approach does not impose structural requirements on the network (e.g., specific topology or connectivity patterns). However, we do assume the causal relationships are as in Figure 2(b), which we believe are realistic.
MemFreezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge
Accept (poster)
Summary: The authors propose MemFreezing, an adversarial attack on temporal graph neural networks (TGNNs) that poisons TGNNs' recurrent neural network memory without knowledge about the future. For this, MemFreezing injects fake nodes into the graph that put their connected nodes into a "frozen" state, meaning that their state becomes unresponsive and meaningless. The authors evaluate their attack on four TGNNs and compare it to three baseline attacks. Claims And Evidence: The authors' claims are well supported by evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The experimental design is sound. The experiments are comprehensive and also cover auxiliary aspects (defenses, stealthiness, …). Supplementary Material: I skimmed the supplementary material. Relation To Broader Scientific Literature: The authors are the first to study adversarial attacks on TGNNs without knowledge about the future, where they aim to put the memory in a frozen state. The authors reveal an important failure mode of TGNNs, and this failure mode is of interest to everyone who deploys TGNNs in practice. Essential References Not Discussed: The authors cover the related literature well. Two topics come to mind: selecting the subset of nodes to attack, and adversarial training. Selecting the nodes that shall be attacked is a common problem in (static) graph robustness due to the simultaneous predictions on many nodes/edges. In other words, an attacker with a limited budget needs to decide which nodes they are going to attack. While MemFreezing heuristically targets high-degree nodes, other works, e.g., [1,2], discuss countermeasures/loss choices such that the attack optimization decides which nodes are to be attacked. Adversarial training and the important considerations were discussed previously for static graphs (e.g., [3,4]). 
Incorporating brief discussions could provide valuable pointers for future work. [1] Ma et al. "Towards More Practical Adversarial Attacks on Graph Neural Networks" 2020 [2] Geisler et al. "Robustness of Graph Neural Networks at Scale" 2021 [3] Xu et al. "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" 2019 [4] Gosch et al. "Adversarial Training for Graph Neural Networks" 2023 Other Strengths And Weaknesses: 1. I am not particularly convinced that injecting all perturbations at a single timestamp is "realistic" or desirable. I am much in favor of the setting where the attack is spread out over the duration of the test set. Hence, I would encourage the authors to place the results from section B.6 in the main body such that they are not overlooked. 1. It is not impossible that an adversary can have (limited) knowledge about the future. In some parts, it reads like this was not possible. 1. It might not be clear why vulnerabilities could be overlooked in idealized settings (e.g., as mentioned in the abstract). After all, the idealized setting is strictly more powerful. However, I agree that an attack in an idealized setting might not focus on vulnerabilities that are easy to utilize with limited knowledge. It seems that the attack is not generally revealing vulnerabilities in a more limited setting; rather, it is utilizing one vulnerability that exists in TGNNs. Nevertheless, demonstrating that this limitation exists in TGNNs is a good contribution. 1. The authors tampered with the template to remove, e.g., "Anonymous Authors1" below the title, and line numbers are missing. Other Comments Or Suggestions: 1. The last sentence before section 4.1 seems broken. 1. $msg$ reads like $m \cdot s \cdot g$. I would recommend using "operatorname" or similar: $\operatorname{msg}$. The same is true for UPDT, AGGR, GNN in Eq. 2-4. 1. White space missing in Section B.3 after "GNNGuard" 1. 
Table 13 in Section B.6 should probably be Figure 13 Questions For Authors: 1. Why can adversaries reconstruct graphs of Meta or X reasonably well? Don't they have strict rate limits, effectively hindering crawling their data? 1. What perturbations are used for applying adversarial training? Is adversarial training applied to the training graph only? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for the positive feedback and valuable comments. In response, we added more discussion on data crawling, adversarial training, and victim node selection, and we will also revise our paper accordingly.** --- ## **Q1. Data Crawling** Thank you for the valuable question. While social media platforms like X or Meta indeed impose rate limits on their APIs, adversaries can bypass these restrictions using multiple accounts, rotating IP addresses (proxies), or publicly accessible endpoints such as profile pages. Furthermore, partial data from archives or aggregated third-party services can fill in missing pieces. These combined methods enable attackers (or researchers) to assemble sufficient public information on connections and interactions, yielding a reasonably accurate reconstruction of Meta or X’s dynamic graph, despite official rate limits. --- ## **Q2. Adversarial training** We adopt a minimax scheme to conduct the adversarial training: the objective function can be formulated as: $$\min_\theta \max_\epsilon \mathcal{L}(f_\theta(s+\epsilon), y)$$ We first perturb the memory of every training node by adding noise and use the perturbed memories to train the model. The training is conducted via a 10-step Projected Gradient Descent (PGD), and the noise is set to $\pm 0.2$. Although adversarial attacks and importance weighting have been extensively studied on static graphs [9, 10], these methods prove less effective on dynamic graphs for two main reasons: 1. **Dependence on a Complete Adjacency Matrix:** Methods such as the min-max topology attack [9] and robust diffusion [10] rely on the full adjacency matrix $A$ of the graph. However, the complete $A$ is unknown in dynamic graphs because future edges and nodes are not present in the training graph. 2. **Different Attack Objectives:** Traditional adversarial training schemes target attacks that maximize classification or link prediction loss. 
In contrast, MemFreezing aims to freeze node memories into stable states, indirectly disrupting predictions. Since this mechanism does not directly alter the output loss, defenses designed for loss-based attacks are insufficient to counter MemFreezing. [9] Xu et al. "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" 2019 [10] Gosch et al. "Adversarial Training for Graph Neural Networks" 2023 --- ## **Q3. Attack node selection** We agree that selecting victim nodes under a limited attack budget is crucial and that more sophisticated methods beyond our high-degree heuristic may yield improvements. Our choice was motivated by the challenge of identifying optimal targets without precise future knowledge, as approaches in [12, 13] typically rely on stable graph structures or additional information about upcoming changes. In a dynamic setting, a node's importance (or receptive field) can shift drastically once it gains new neighbors, making future-oriented selection inherently difficult. Nonetheless, we acknowledge the potential for more specialized node selection methods, even under partial knowledge. To explore this, we integrated the importance-score-based node selection strategies from [11] into MemFreezing—applying their selection logic based solely on data available at the attack time—and tested both one-time and multiple-time attacks. As summarized in **Figure R6 (anonymous link: https://ibb.co/8LW5rtRc)**, the scheme rates each node via an importance score, and the attack targets the node with the highest score. It achieves similar overall attack effectiveness compared to our simpler high-degree heuristic. This outcome suggests that, in highly dynamic scenarios, high-degree targeting remains a practical fallback, though we believe further research on adaptive selection in dynamic graphs is warranted. [11] Ma et al. "Towards More Practical Adversarial Attacks on Graph Neural Networks" 2020 [12] Geisler et al. 
"Robustness of Graph Neural Networks at Scale" 2021
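The minimax objective and 10-step PGD described in Q2 above can be sketched as follows. This is a minimal, self-contained illustration, not the authors' implementation: the linear readout, toy data, and learning rates are hypothetical stand-ins for the TGNN and its node memories; only the $\pm 0.2$ budget and the 10 PGD steps come from the rebuttal.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(w, s, y):
    # Logistic loss of a linear readout f_w(s) = sigmoid(s @ w), a
    # hypothetical stand-in for the TGNN prediction head.
    z = s @ w
    p = 1.0 / (1.0 + np.exp(-z))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    dz = (p - y) / len(y)
    return loss, s.T @ dz, np.outer(dz, w)   # loss, dL/dw, dL/ds

def pgd_noise(w, s, y, eps=0.2, steps=10, step_size=0.05):
    # Inner max: 10-step PGD finds memory noise within [-eps, eps]
    # that maximizes the loss (the +/-0.2 budget from the rebuttal).
    delta = np.zeros_like(s)
    for _ in range(steps):
        _, _, g_s = loss_and_grads(w, s + delta, y)
        delta = np.clip(delta + step_size * np.sign(g_s), -eps, eps)
    return delta

# Outer min: gradient descent on the adversarially perturbed memories.
s = rng.normal(size=(64, 8))        # toy node "memories"
y = (s[:, 0] > 0).astype(float)     # toy labels
w = np.zeros(8)
for _ in range(50):
    delta = pgd_noise(w, s, y)
    _, g_w, _ = loss_and_grads(w, s + delta, y)
    w -= 0.5 * g_w
```

Training on `s + delta` rather than `s` is what makes the scheme adversarial: the model only ever sees memories perturbed by the worst-case noise the inner loop could find within the budget.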
Summary: The paper studies adversarial attacks on temporal graph neural networks and proposes an effective approach to generate attacks that can persist over future timesteps. They consider an online adversarial attack setting and add fake nodes with carefully crafted memory representations at each timestep such that the victim nodes' memories reach an expected "frozen" state. Results on different temporal GNNs and tasks show the effectiveness of the MemFreezing attack as compared to baselines. Claims And Evidence: Yes Methods And Evaluation Criteria: - The paper claims greater practicality via limited knowledge of future graphs, yet grants full access to the model, which is also not well motivated. - It is also not practical to inject edges with high-degree nodes, as they can be easily detected as malicious through simple anomaly detection methods. They should follow unnoticeability constraints like: - Lee, Dongjin, Juho Lee, and Kijung Shin. "Spear and shield: adversarial attacks and defense methods for model-based link prediction on continuous-time dynamic graphs." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 12. 2024. - The memory terminology is specific to certain kinds of models, and it is not clear how it can be extended to other models that do not explicitly formulate past interactions of the nodes in memory representations, such as: - Luo, Yuhong, and Pan Li. "Neighborhood-aware scalable temporal network representation learning." Learning on Graphs Conference. PMLR, 2022. - Besta, Maciej, et al. "Hot: Higher-order dynamic graph representation learning with efficient transformers." Learning on Graphs Conference. PMLR, 2024. - Wang, Yanbang, et al. "Inductive representation learning in temporal networks via causal anonymous walks." arXiv preprint arXiv:2101.05974 (2021). 
- While the premise motivates that we do not know the timestep for the adversarial attack, the authors measure their task using a metric of accumulated accuracy, which measures the accuracy at the current timestep combined with the previous timesteps' accuracy. Thus, it essentially measures the accuracy as if the attack had happened at the current timestep. More detail and motivation about how the metric reflects the premise should be provided. Theoretical Claims: N/A Experimental Designs Or Analyses: - The effectiveness of MemFreezing stagnates at a high perturbation rate with respect to accumulated accuracy, while the premise of the work is to formulate attacks that can work regardless of future interactions. - The proposed method does not work as well in the black-box setting, indicating that the MSE between memories requires the models to be precise. Here, one should also study a simpler transferability setting where the attacks from one model are used on another. - A running-time analysis should also be conducted, as this is supposed to happen in a practical online setting. - It is not clear why the paper omits Chen et al., 2021 and Sharma et al., 2023 for discrete graph models (ROLAND). In addition, they should also include a temporal-dynamics-aware perturbation constraint for the unnoticeability of these perturbations. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper studies the vulnerability of memory-based TGNNs to online evasion-based attacks. Essential References Not Discussed: - Newer benchmark datasets should be considered for more comprehensive results [1]. - A discussion of other online adversarial attacks, as studied in the current work, is also missing [2, Sharma et al., 2023]. - The paper studies node injection attacks in temporal graphs but does not discuss a similar setting studied in static graphs [3]. 
- The paper does not discuss a more recent dynamic graph attack that studies poisoning attacks but establishes various unnoticeability constraints [4]. [1] Huang, Shenyang, et al. "Temporal graph benchmark for machine learning on temporal graphs." Advances in Neural Information Processing Systems 36 (2023): 2056-2073. [2] Mladenovic, Andjela, et al. "Online adversarial attacks." arXiv preprint arXiv:2103.02014 (2021). [3] Chen, Yongqiang, et al. "Understanding and improving graph injection attack by promoting unnoticeability." arXiv preprint arXiv:2202.08057 (2022). [4] Lee, Dongjin, Juho Lee, and Kijung Shin. "Spear and shield: adversarial attacks and defense methods for model-based link prediction on continuous-time dynamic graphs." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 12. 2024. Other Strengths And Weaknesses: - Motivating examples and insights are appreciated and supplement the writing well. - A lot of the main results are deferred to the Appendix, which makes it very hard to understand the key results. Other Comments Or Suggestions: - The submission draft is not in the correct format. - An impact statement is not included. Questions For Authors: - How can the method be generalized to non-memory-based TGNNs? - Why does the accumulated accuracy stagnate? - How is it practical to form edges with high-degree nodes? Is it possible to make these attacks more unnoticeable given the Spear and Shield and TDAP constraints? - How do the perturbations transfer across victim models in order to test the attacks under the more practical black-box setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely appreciate the valuable comments and insights from the reviewer. In response, we carefully respond to the reviewer’s questions and will also revise the paper accordingly.** --- ## **Q1. Generalizability to non-memory-based TGNNs** We acknowledge that MemFreezing primarily targets TGNNs that continuously track temporal information through evolving node features. Its memory-freezing objective is specifically designed to disrupt the dynamics inherent in such memory-based systems. For models that do not maintain temporal node features, the objective function needs to be modified. However, as recent studies [4] have shown, this TGNN family consistently achieves state-of-the-art performance on dynamic graph tasks, underscoring the practical relevance of our focus. We recognize the importance of extending our approach to non-memory-based architectures and plan to explore such adaptations in future work. --- ## **Q2. Discussion on accumulated accuracy** We use the accumulated accuracy to capture the impact of an attack over time—answering the question, “What is the accuracy of predictions up to a specific timestamp?” Specifically, accumulated accuracy is not equivalent to accuracy at a specific timestamp. Instead, it represents the accuracy up to a particular timestamp, and **the attack can happen anytime before/after the measurement**. A **stagnant accumulated accuracy** indicates that new predictions continue to be misclassified, demonstrating that the adversarial effect persists over time. In contrast, if the model were recovering, the metric would gradually improve. --- ## **Q3. Unnoticeability in targeting high-degree nodes** Thank you for the insightful question. As noted in [6] (Constraint C4), repeatedly injecting edges into a single high-degree node could trigger anomaly detection. 
However, MemFreezing introduces **only one fake edge per victim node**—even if that node is high-degree—thereby not violating C4 or exceeding typical per-node perturbation limits. Moreover, we explore whether MemFreezing can integrate the TDAP constraints [6]. For C1 and C4, we follow our original setup to limit the number of changes (C1) and per-node changes (C4). In addition, we enable C2 by restricting the number of changes per batch and enable C3 by only selecting victim nodes that have an event within the most recent five batches. The results in **Figure R4 (anonymous link: https://ibb.co/p6Mnw2hq)** indicate that MemFreezing can outperform the prior approach under such constraints, suggesting that it can be integrated with [6] to provide more unnoticeable attacks. [6] Lee, Dongjin, Juho Lee, and Kijung Shin. "Spear and shield: adversarial attacks and defense methods for model-based link prediction on continuous-time dynamic graphs." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 12. 2024. --- ## **Q4. Evaluation under transfer attacks** We agree that further exploring more black-box constraints is valuable. Please refer to our results in **Q2 to reviewer EZNr**. --- ## **Q5. Runtime** We evaluate the average latency of MemFreezing and other attacks. The results in **Figure R5 (anonymous link: https://ibb.co/JwmqR2LD)** show each attack's average latency per node. The results show that MemFreezing attacks nodes within seconds, indicating its potential in an online attack setup. --- ## **Q6. More related works** Thank you for suggesting additional literature. We note that the online adversarial attack in [7] supports our threat model by demonstrating attacks on streaming data rather than on fully known inputs. 
However, [7] differs from our work: even though it faces streaming data, predictions are made at the attack time with complete input information (e.g., an image), whereas in our setup, the input for prediction is unknown at the attack time, making the attack more challenging. Similarly, while the Homophily Unnoticeability approach in [8] effectively improves attack stealth, it relies on having the full adjacency matrix—a resource that is unavailable in our scenario. We analyze our attack's stealthiness in Appendix B10 and will expand our discussion on unnoticeability constraints from [8] and [6] in the revised paper. Lastly, we did not evaluate the methods in [Chen et al., 2021 and Sharma et al., 2023] because they assume a divergent threat model where attackers have full knowledge of the dynamic graph, allowing them to optimally select when and where to inject perturbations over the entire graph evolution. In our limited-knowledge scenario, determining the optimal allocation of the attack budget across timestamps is significantly more challenging, making direct comparisons less fair. [7] Mladenovic, Andjela, et al. "Online adversarial attacks." arXiv preprint arXiv:2103.02014 (2021). [8] Chen, Yongqiang, et al. "Understanding and improving graph injection attack by promoting unnoticeability." arXiv preprint arXiv:2202.08057 (2022). --- Rebuttal Comment 1.1: Comment: I thank the authors for additional experiments and discussion. My comments have been partially addressed and I will increase my scores accordingly, acknowledging the limitations of the work. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score and for your time in reviewing and refining our paper. This is a great affirmation of our work. Your comments are very constructive (e.g., considering the unnoticeability, runtime, generalization of our method, etc.), making our paper stronger. We will address your comments in our revision. Thank you again for your valuable time. Best, Author
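The accumulated-accuracy metric discussed in Q2 of this rebuttal ("accuracy of all predictions up to a specific timestamp") can be made concrete with a short sketch; the per-timestamp prediction records below are hypothetical, chosen only to show why a stagnant curve signals a persistent attack.

```python
def accumulated_accuracy(correct_by_t):
    """Accuracy of all predictions made up to and including each timestamp.

    `correct_by_t` holds one list of booleans per timestamp
    (True = correct prediction). A curve that stagnates after an
    attack means new predictions keep being misclassified; a
    recovering model would make the curve climb again.
    """
    curve, n_correct, n_total = [], 0, 0
    for flags in correct_by_t:
        n_correct += sum(flags)
        n_total += len(flags)
        curve.append(n_correct / n_total)
    return curve

# Perfect predictions before an attack at t=2, all wrong afterwards:
curve = accumulated_accuracy([[True, True], [True], [False], [False, False]])
# -> [1.0, 1.0, 0.75, 0.5]: the curve keeps falling, so the attack persists
```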
Summary: The paper introduces MemFreezing, a novel adversarial attack framework designed to disrupt temporal graph neural networks (TGNNs) under realistic constraints where attackers have limited knowledge of future graph changes. The core idea is to strategically freeze node memories in TGNNs, rendering them unresponsive to subsequent updates and propagating these frozen states through neighboring nodes. Experimental results indicate that MemFreezing consistently undermines the performance of TGNNs in diverse tasks, providing a more long-lasting adversarial approach when future knowledge is restricted. Claims And Evidence: The main claims, such as persistent degradation of TGNN performance with cross-freezing and future-neighbor simulation mechanisms, are well supported. However, one contribution claims that MemFreezing effectively misleads TGNN predictions across diverse datasets and models even in the presence of defenses. I am afraid that the paper does not test specialized defenses designed to counter memory-freezing attacks, such as dynamic memory reset mechanisms. The defensive strategies mentioned in 5.1 are insufficient and incomplete. Other potential defenses regarding the memory-oriented aspects should be summarized more comprehensively with references in appendix C.2. Methods And Evaluation Criteria: Same concerns as above: specialized defenses against memory-freezing attacks are not tested. Theoretical Claims: Part 4.1 claims that node memories in TGNN can remain stable when surrounded by neighbors with similar memories. Furthermore, the ideal frozen states from different nodes are similar. However, it is not comprehensive enough to draw this conclusion merely from a single experiment with only 100 victim nodes sampled. Is there any support from previous literature or more detailed derivations of theoretical formulas? Experimental Designs Or Analyses: 1. The paper mentions that the effectiveness is less significant on JODIE, which uses the difference between a node's current time and its last update time to decay the memory. This may imply that the effectiveness of the method depends on the model architecture. Therefore, has the evaluation covered a sufficient variety of model types for dynamic graphs? In part 5.1, only four models are compared, and there is no discussion on the differences in model structures. Thus, it is difficult for readers to determine whether the selection of dynamic graph models is comprehensive. 2. In the "Attack Setup" section of 5.1, the attack budgets only include the percentage of attacked nodes, but the magnitude of the Gaussian noise injected into "fake future neighbors" in MemFreezing is not specified. Are there similar perturbation constants in other attack methods? Supplementary Material: The supplementary material provides critical details (e.g., algorithm, hyperparameters) and extends the main paper's analysis (e.g., black-box attacks, LSTM models). While it supports the core claims, it also reveals areas for improvement (e.g., closed-source validation, defense innovation). Relation To Broader Scientific Literature: Dynamic graphs are prevalent in real-world scenarios, while Temporal Graph Neural Networks (TGNNs) have become leading solutions for dynamic graph tasks. 
There is a pressing need to study their robustness towards adversarial attacks. In real-world cases, an adversary may attack a TGNN without knowing future changes to the graph. To address these challenges, this paper introduces MemFreezing, a novel adversarial attack framework that delivers long-lasting and spreading disruptions in TGNNs without requiring post-attack knowledge of the graph. Essential References Not Discussed: More works in the area of memory-based graph networks should be cited. [Memory-Based Graph Networks’20] Other Strengths And Weaknesses: 1. The paper introduces cross-freezing to stabilize node memories but does not provide a formal proof of stability. Can you mathematically demonstrate that the system of mutually reinforcing nodes will remain in a frozen state indefinitely under dynamic graph updates? 2. In the appendix, the black-box evaluation uses surrogate models trained on partial data. How would MemFreezing perform against a truly unknown, closed-source TGNN? The credibility of the paper would be enhanced if test results for closed-source models were presented within the article. Other Comments Or Suggestions: The white-box attacks employed in the article yield highly remarkable results. Nevertheless, their usability in real-world contexts may be constrained to a certain extent. It would be advisable to consider incorporating additional black-box attack methods, such as zero-shot transfer attacks. Questions For Authors: When simulating the future neighbors of victim nodes, could you please provide the reasons for choosing the two initialization methods (noise nodes and all-zero nodes)? Is there any relevant previous literature or theoretical basis to support this choice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for the positive feedback and valuable comments. In response, we clarify the rationale behind the future simulation choices, discuss more black-box attack setups, and clarify the observation on stable states and attack budget. We will also carefully revise the paper following the reviewer’s suggestions.** --- ## **Q1. Why simulate a neighbor with noisy and all-zero nodes?** The reasons for using noisy and all-zero nodes for future simulation are: - **Noise Nodes:** As indicated by the network homophily principle [1,2] (**as cited in Q3 for reviewer qaAY**) and Figure 3(d) and Appendix B.16, real-world neighbors often have high memory similarity. Hence, we create neighbors with similar features to the victim node’s current neighbors to simulate potential neighbors that already exist in the graph. - **All-Zero Nodes:** In many TGNNs (e.g., TGN, JODIE, DyRep, etc.), new nodes start with an all-zero memory. Thus, we initialize some future neighbors with zeros to mimic newly added nodes to the graph. Together, these strategies capture both the homophilous nature of existing neighbors and the default state of new nodes. --- ## **Q2. Zero-shot black-box attack** We agree that further exploring more black-box constraints is valuable. Hence, we evaluate MemFreezing under the zero-shot transfer attack setup following the ensemble-based approach first proposed in [3]. Specifically, on the Wikipedia dataset, we generate the fake message by jointly optimizing the adversarial message for three models and then evaluate the effectiveness of this unified adversarial message on diverse models. As shown in **Figure R1 (anonymous link: https://ibb.co/6pkw68K)**, although it performs worse than the white-box attack, MemFreezing can still effectively perturb model predictions. We also acknowledge that MemFreezing is less effective than in cases with more accurate model information. [3] Liu, Yanpei, et al. 
"Delving into transferable adversarial examples and black-box attacks." arXiv preprint arXiv:1611.02770 (2016). --- ## **Q3. Theoretical proof of cross-freezing** Please refer to **Q3 for reviewer qaAY** for more quantitative demonstrations of similar stable states. We provide the mathematical proof of cross-freezing under similar stable states, as detailed in **Figure R2 (anonymous link: https://ibb.co/XfVksqdR)**. --- ## **Q4. The defensive strategies** Thank you for your valuable feedback. To our knowledge, no published methods currently detect or mitigate the specific mechanism of node memory freezing in TGNNs. However, we agree that one could devise a “memory reset” module that resets victim nodes’ memories. To explore the potential of randomly and periodically resetting node memories, we conduct experiments in which we (a) randomly reset node memories upon each update or (b) reset memories every 25 timestamps. The results are shown in **Figure R3 (anonymous link: https://ibb.co/NMrZsjq)**. As one can observe, doing so may jeopardize the models’ clean accuracy while offering limited effectiveness in defending against MemFreezing. As detailed in **Q2 to reviewer qaAY**, the key challenge is that doing so may wrongly reset naturally stable nodes, and we also follow up with a potential detection scheme detailed in that response. --- ## **Q5. Attack budget in terms of noise magnitude.** We specify the magnitude of our Gaussian noise in Appendix A.2, where each “fake future neighbor” has a mean of 0 and a standard deviation set to 0.2 times the standard deviation of its corresponding real neighbor’s memory. Note that we do **not** inject simulated fake neighbors into the graph; instead, we use their noisy states to generate a single adversarial message per victim node, which matches the theoretical min/max of the clean messages (see Appendix B.10 for a stealthiness analysis). --- ## **Q6. 
Diversity of the models** We appreciate the reviewer’s suggestion to evaluate MemFreezing on a broader range of TGNN architectures. The four TGNNs represent diverse memory-update mechanisms across different temporal paradigms. Specifically, TGN and DyRep perform event-driven updates, while ROLAND processes discrete snapshots. Moreover, these models employ varied memory schemes: TGN and DyRep use event-based updates, JODIE incorporates a time-decay component, and ROLAND leverages repeated GNN layers or attention over snapshots. We acknowledge that MemFreezing mainly targets memory-based TGNNs. As recent studies have demonstrated [4], this TGNN family achieves SOTA performance on dynamic graph benchmarks. We will also discuss the memory-based graph network [5] in our revised paper, as suggested by the reviewer. [4] Huang, Shenyang, et al. "Temporal graph benchmark for machine learning on temporal graphs." Advances in Neural Information Processing Systems 36 (2023): 2056-2073. [5] Ahmadi, Amir Hosein Khas. Memory-based graph networks. University of Toronto (Canada), 2020.
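The noise budget quantified in Q5 of this rebuttal (zero-mean Gaussian whose standard deviation is 0.2 times that of the corresponding real neighbor's memory) can be sketched as follows. The function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def fake_neighbor_memory(real_memory, scale=0.2, rng=None):
    # Draw a simulated "fake future neighbor" state: zero-mean Gaussian
    # noise whose std is `scale` times the std of the real neighbor's
    # memory vector (the budget stated in Q5; names are illustrative).
    if rng is None:
        rng = np.random.default_rng()
    sigma = scale * real_memory.std()
    return rng.normal(loc=0.0, scale=sigma, size=real_memory.shape)

rng = np.random.default_rng(7)
real = rng.normal(size=172)          # a real neighbor's memory vector
fake = fake_neighbor_memory(real, rng=rng)
```

Scaling the noise to each neighbor's own memory statistics keeps the simulated states in a plausible range per victim, rather than using one global noise level for the whole graph.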
Summary: The method makes use of a design component (node memory states) of recent temporal graph neural networks to disturb model predictions in future unseen time steps. This is done by selecting high-degree nodes (referred to as root nodes) and their 2 neighbours with the highest node degrees (referred to as support nodes) and adding noise to their node features, so as to achieve "cross freezing" of node memories. This is done mainly at time 0 (but at other times as well in the side experiments). The flow of the paper is clear; I appreciate the clear writing. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, they totally support the idea. Theoretical Claims: The claims are supported empirically. Experimental Designs Or Analyses: Yes. Supplementary Material: - Relation To Broader Scientific Literature: - Essential References Not Discussed: This is one of my concerns. In the paper it is mentioned that "... While several studies have explored the effectiveness of adversarial attacks on dynamic graphs (Lee et al., 2024; Sharma et al., 2022; 2023; Chen et al., 2021), **they often** assume that attackers have complete knowledge of the input graphs at the time of the attack ..." It seems there have been previous adversarial attack methods for dynamic graphs, as mentioned later in the paper: "... Recently, there have also been a few studies that explored the effectiveness of adversarial attacks on dynamic graphs and TGNNs (Lee et al., 2024; Sharma et al., 2023; 2022; Chen et al., 2021). ...". The distinctions of the proposed method should be clearly stated. Other Strengths And Weaknesses: - Other Comments Or Suggestions: Not included in score: typo in Sec. 2. "... Recent TGNNs focus on CTDGs since they can retain more information than DTDGs’ fixed intervals and more complex (Kazemi et al., 2020). ..." 'and more complex' should be corrected to 'and are more complex'. Questions For Authors: - In Sec. 
3 under "Attacker’s Capability", in the provided social media example, could you describe how the cross freezing of the proposed method would look in that specific example? - Since the proposed attack method targets a specific design choice of TGNNs, isn't it easy to detect by, e.g., observing the node memory states? - It is mentioned that "... in Figure 3(d), the ideal frozen states from different nodes are similar; therefore, it ...". Does this property hold for any real-world graph in general? And if it is violated, how does it affect the performance of the method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for the positive feedback and valuable comments. In response, we exemplify the cross-freezing in a social media case, discuss its potential defenses, and clarify the assumption of similar ideal stable states among nodes. We will also carefully revise the paper following the reviewer’s suggestions.** --- ## **Q1. Cross-freezing in social media** In a social media setting (e.g., Reddit, Facebook), an attacker may want to consistently deliver the same content to a specific user (the victim). First, they collect the victim’s profile and its neighbors from public data. Next, for each victim node and its two support neighbors, the attacker creates a fake account and injects an adversarial message, such as a malicious comment. These fake accounts and comments are later removed, but the TGNN has already recorded the noisy messages. As a result, the updates related to these messages trigger a “cross-freezing” effect that locks the victim’s (and its neighbors’) memory into a stable, noisy state, making them less responsive to real content. Consequently, the victim may consistently receive the same spam content even though their future actions show no interest in that content. --- ## **Q2. Detecting MemFreezing** Detecting a MemFreezing attack by observing node memory is challenging because nodes can naturally exhibit stable updates. For example, using TGN on the Wikipedia dataset, over 70% of node updates show high similarity (although they may not be consistently stable), which can also occur in real-world cases (e.g., an Amazon user with consistent shopping preferences). Thus, it is hard to differentiate an attacked node from naturally stable ones. However, your insight also hints at a potential detection strategy: deliberately stimulate node changes and then observe how the node’s memory reacts. Concretely, we may check whether a node is attacked by MemFreezing in the following two steps: 1.
**Introduce Divergent Neighbors**: Temporarily connect the node to new neighbors with significantly different memory states (e.g., low cosine similarity). 2. **Monitor Update Response**: If the node remains unusually stable after interacting with these distinct neighbors—showing little or no memory shift—it suggests that MemFreezing may be in effect. By testing for unnaturally persistent memory under a deliberately introduced variation, one can check whether the node is compromised by the MemFreezing attack. --- ## **Q3. Observation of similar ideal frozen states** Thank you for the insightful question. Similar ideal frozen states exist widely in diverse real-world graphs. In addition to the experiments in Figure 3d, we further characterize the distributions of similarities between neighboring nodes’ stable states across diverse benchmarks and present the results in **Figure 32 of Appendix B16** in our original paper. The results indicate that the property widely exists in other real-world graphs (i.e., Reddit and Reddit-body). This also aligns with the network homophily theory [1,2], which states that nodes with similar attributes are more likely to connect to each other than dissimilar ones. We also investigate the performance in cases where connected nodes have divergent features, which could yield divergent ideal states, in Appendix B16. Specifically, we have victim nodes in the graph connected to nodes with random memories after the attack timestamps. In such cases, the nodes tend to diverge and exhibit distinct ideal stable states. The results in **Figure 33 in Appendix B16** indicate that MemFreezing effectively freezes these random neighbors despite resulting in lower similarities. [1] McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). "Birds of a Feather: Homophily in Social Networks". Annual Review of Sociology. 27:415–444. [2] Himelboim, I., Sweetser, K. D., Tinkham, S. F., Cameron, K., Danelo, M., & West, K.
(2014). Valence-based homophily on Twitter: Network Analysis of Emotions and Political Talk in the 2012 Presidential Election. New Media & Society.
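The two-step probe described in Q2 of the rebuttal above can be sketched as a minimal check. The function name `probe_frozen`, the shift threshold, and the toy memory vectors are all illustrative assumptions, not part of the paper:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two node-memory vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def probe_frozen(memory_before, memory_after, shift_threshold=0.05):
    """Flag a node as possibly frozen if its memory barely moves after
    interacting with deliberately divergent neighbors (step 2 of the probe).

    memory_before / memory_after: memory vectors taken around the stimulus.
    shift_threshold: assumed cutoff on 1 - cosine similarity (hypothetical).
    """
    shift = 1.0 - cosine(memory_before, memory_after)
    return shift < shift_threshold

# Toy check: a near-identical memory pair looks "unnaturally stable",
# while a clearly shifted one does not.
m = np.array([1.0, 0.0, 0.5])
assert probe_frozen(m, m + 1e-4)                       # suspiciously stable
assert not probe_frozen(m, np.array([0.0, 1.0, -0.5]))  # healthy update
```

In practice the threshold would have to be calibrated against the natural stability observed on clean nodes (e.g., the 70% of highly similar updates the rebuttal mentions for TGN on Wikipedia).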
Retraining with Predicted Hard Labels Provably Increases Model Accuracy
Accept (poster)
Summary: This paper investigates the benefits of retraining a model using its own predicted hard labels in scenarios where training data contains noisy labels. There are two strategies for retraining the model: - *Full Retraining:* The model is retrained on the entire dataset using its own predicted hard labels. - *Consensus-Based Retraining:* Only samples for which the model's predicted label matches the original noisy label are used for retraining. The paper provides a rigorous theoretical analysis showing that *full retraining* with predicted hard labels can improve a model's population accuracy. In a linearly separable binary classification setting with randomly flipped labels, the authors derive error bounds and sufficient conditions when retraining is beneficial. The authors also conduct extensive experiments on datasets such as CIFAR-10, CIFAR-100, and AG News Subset (a language dataset). The results show that both full retraining and consensus-based retraining enhance model performance, with consensus-based retraining providing the most significant improvements. ## update after rebuttal The detailed response has resolved my concerns. Thus, I raise my score after the rebuttal. Claims And Evidence: The paper supports its main claims with a combination of rigorous theoretical analysis and extensive empirical validation. However, there are some aspects where the evidence is less complete: - The theoretical analysis focuses on full retraining under a uniform label noise model, while the consensus-based retraining, which empirically shows superior performance, lacks a corresponding theoretical analysis. - The experiments are conducted on moderate-scale datasets, so the scalability and generalizability of the approach to larger or more complex settings (e.g., experiments on the ImageNet dataset) remain to be further explored. 
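The two retraining strategies summarized above can be sketched in a few lines; the helper name `consensus_indices` and the toy label arrays are illustrative, not from the paper:

```python
import numpy as np

def consensus_indices(y_noisy, y_pred):
    """Consensus set: samples where the model's predicted hard label
    agrees with the given (noisy) label."""
    return np.flatnonzero(y_noisy == y_pred)

# Toy example with binary +/-1 labels.
y_noisy = np.array([ 1, -1,  1,  1, -1])   # labels used for initial training
y_pred  = np.array([ 1,  1,  1, -1, -1])   # initial model's hard predictions

# Full retraining: retrain on ALL samples, relabeled with y_pred.
full_rt_labels = y_pred

# Consensus-based retraining: keep only the agreeing samples.
idx = consensus_indices(y_noisy, y_pred)
print(idx)  # -> [0 2 4]
```

The consensus subset is typically both smaller and cleaner than the full set, which matches the empirical finding that consensus-based retraining gives the larger gains.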
Methods And Evaluation Criteria: The proposed methods, namely full retraining and consensus-based retraining, make sense for tackling the challenges of learning with noisy labels and label differential privacy. Additionally, benchmark datasets like CIFAR-10, CIFAR-100, CIFAR-100N, and AG News Subset are widely recognized. Theoretical Claims: Main theorems (Theorem 4.1, Theorem 4.2, Theorem 4.8, Theorem 4.9) are checked. Experimental Designs Or Analyses: The benchmark datasets are widely recognized in the community, and the experimental designs are reasonable. But some experimental settings remain unclear (see questions). Supplementary Material: I have reviewed the problem setting part, proof part, experimental details part, and the experiment on the real-world dataset (CIFAR-100N) part. Relation To Broader Scientific Literature: The key contributions of the paper relate to two broader scientific literatures: - **Learning with Noisy Labels:** There is a lot of work on training models in the presence of noisy labels, which often involves robust loss functions or noise-correction techniques. The paper contributes to this literature by offering the first theoretical guarantees showing that full retraining with predicted hard labels can provably improve model accuracy under uniform label noise. - **Label Differential Privacy (DP):** In the context of privacy-preserving machine learning, label DP has emerged as an important concept. Prior works have proposed various noise-injection mechanisms (such as randomized response) to ensure privacy for sensitive label information. This paper shows that retraining methods (full retraining and consensus-based retraining) can enhance the model's performance without additional privacy costs. Essential References Not Discussed: All essential related works are cited or discussed in the paper.
Other Strengths And Weaknesses: This paper investigates the benefits of retraining a model using its own predicted hard labels for label differential privacy (DP) and provides theoretical analysis. However, it has several limitations. First, the theoretical analysis is confined to binary classification using linear models. Consequently, the derived results and error bounds are limited in scope and may not extend to practical scenarios where many tasks involve multiclass classification and complex nonlinear models. In real-world applications, sufficiently powerful nonlinear models can potentially memorize all the noisy labels. As a result, the model's predicted hard labels would simply replicate the noisy labels, rendering full retraining ineffective. This limitation suggests that while the theoretical contributions are valuable for understanding retraining in controlled settings, their applicability to more complex, realistic models remains questionable. Second, although consensus-based retraining shows superior performance empirically, the paper does not provide a corresponding theoretical framework to analyze its behavior or guarantees. Third, the effectiveness of retraining is heavily dependent on the accuracy of the initial model's predictions. In scenarios where the initial model performs poorly, the retraining process might not yield significant improvements. Other Comments Or Suggestions: The meaning of $\epsilon$ is not explained in the introduction, yet it appears in both the abstract and the conclusion of the introduction. Readers who are not familiar with Label Differential Privacy (DP) may be confused. It would be beneficial to provide an intuitive explanation of $\epsilon$ in the introduction to enhance clarity and accessibility. Questions For Authors: 1. In scenarios where the initial model has low accuracy, how does the retraining process behave?
Addressing this question could clarify the robustness of your method and whether it remains effective when the initial model is weak. 2. Regarding the training details (Lines 1439–1440), why must the number of gradient steps and the initial learning rate be chosen based on the performance of the baseline method? Are the retraining methods particularly sensitive to these hyperparameters? 3. What are the noise rates corresponding to different values of $\epsilon$? 4. The authors explicitly state that the forward correction algorithm is applied in the initial training stage for the experiments in Table 5. However, what loss function is used in the initial training stage for the experiments in Tables 1, 2, 4, and 6? Is it the standard cross-entropy loss? Clarifying this would improve the reproducibility of the reported results. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and questions! We address your concerns below. **Other Strengths And Weaknesses:** **1. "First, the theoretical analysis is…remains questionable."**: * We agree that our analysis on linear models for binary classification will not fully explain what happens in the case of non-linear models for multi-class classification, and we don’t intend to oversell the scope of our theoretical results. But we believe it is valuable as a first step; after all, *ours is the first work to analyze retraining with hard labels in any setting*. Moreover, we believe that some of our proof ideas could be useful even in the analysis of non-linear models. For instance, the proof technique of constructing dummy predicted labels that match the actual predicted labels with high probability (see lines 307-319 left column) should be useful in general, because the issue of dependence of each predicted label on the entire training set is universal regardless of the model type. * Regarding your point about complex models perfectly fitting noisy labels, we completely agree. And that is why, for such expressive models, it is important to apply (both in theory and practice) some kind of regularization when training them with noisy labels; for e.g., $\ell_2$ regularization, early stopping, etc. Applying regularization is reasonable in scenarios such as label DP, where we already know that the labels will be noisy. **2. "Second, although consensus-based retraining…its behavior or guarantees." / first bullet point under Claims And Evidence**: Agreed. We have admitted this limitation in Section 6, and plan to analyze consensus-based retraining in the future. Please note that the analysis of full retraining is itself pretty non-trivial (main technical challenges have been discussed after Thm. 4.8) and interesting in our opinion. We do acknowledge the above two weaknesses. 
However, it is usually very difficult to perfectly align theoretical analysis with practical settings, and it is common to analyze simplified settings. So we believe these weaknesses **do not fundamentally undermine the significance of our work**. **3. "Third, the effectiveness of retraining…yield significant improvements."** Indeed, retraining should intuitively only be beneficial when *the initial model’s predictions are more accurate than the given (noisy) labels* used to train the initial model. We have discussed/demonstrated this in several parts of the paper – Fig. 1 (see its caption), Tables 3 and 7 (these are on real data), and the comment on the range of $n$ after Remark 4.10 (specifically, regarding the lower bound on $n$). Moreover, in Appendix J & Table 10, we did an *ablation study* with and without a validation set. The initial model is naturally weaker w/o a validation set (due to overfitting); despite this, *retraining is still beneficial* but the gains are less than those with a val set. This observation is not surprising. **Second bullet point under Claims And Evidence:** As mentioned in Section 6, performing larger experiments is left for future work. While we didn’t have the time to train ImageNet from scratch now, we ran experiments on *DomainNet* dataset (available in Tensorflow) which has 345 classes & is much larger than CIFAR. We did linear probing (due to lack of time) with features extracted from a ResNet-50 pretrained on ImageNet. The setup is similar to our full fine-tuning experiments. DomainNet Results: |$\epsilon$|Baseline|Full RT|Consensus-based RT| |---|---|---|---| |$3$|$23.60\pm0.92$|$29.23\pm1.03$|$\mathbf{36.30}\pm0.75$| |$4$|$48.25\pm0.05$|$52.10\pm0.10$|$\mathbf{57.40}\pm0.20$| *So even here RT (especially, consensus RT) yields large gains*. **Questions For Authors:** **1.** Please see the response to weakness **3** above (especially the last two sentences about the ablation). 
**2.** They *need not* be chosen based on the baseline’s performance. We did this to avoid any further hyper-parameter tuning based on retraining – to demonstrate that retraining is *not* very sensitive to hyper-parameters. If one were to optimize the hyper-parameters based on retraining’s performance as well, the gains would only increase. **3.** If randomized response (RR) is used as the baseline, then with $C$ classes and for $\epsilon$-labelDP, each sample receives its true label $y$ w.p. $\frac{e^{\epsilon}}{e^{\epsilon} + C-1}$ and some other label $y'$ w.p. $\frac{1}{e^{\epsilon} + C-1}$ for all $y' \neq y$ (this has been explained in lines 177-181 left column). If the method of Ghazi et al. (2021) is used, then their first stage is RR (so the same as before), but the noise level of subsequent stages depends on the performance of the previous stage's model. **4.** Standard cross-entropy loss was used; we’ll mention this in the next version. Thanks for pointing this out! We hope to have resolved your concerns and we're happy to discuss further. If you’re satisfied, *we sincerely hope you will raise your score*! --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal and extra experiments. My concerns have been resolved. Then, I decide to raise my recommendation score. --- Reply to Comment 1.1.1: Comment: Thanks for raising your score! We’ll add the extra experiments (and important clarifications from the rebuttal) in the next version.
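The randomized response mechanism described in answer 3 of the rebuttal above can be sketched as follows; the function name and the Monte-Carlo check are illustrative:

```python
import numpy as np

def randomized_response(y, num_classes, epsilon, rng):
    """epsilon-labelDP randomized response: keep the true label with
    probability e^eps / (e^eps + C - 1); otherwise output one of the
    other C - 1 labels uniformly at random."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    if rng.random() < p_keep:
        return y
    others = [c for c in range(num_classes) if c != y]
    return int(rng.choice(others))

rng = np.random.default_rng(0)
noisy = [randomized_response(3, num_classes=10, epsilon=3.0, rng=rng)
         for _ in range(10_000)]
keep_rate = np.mean(np.array(noisy) == 3)
# With C = 10 and eps = 3, the keep probability is e^3 / (e^3 + 9) ~ 0.69,
# so keep_rate should land near that value.
```

This is the first-stage noise model the paper's retraining methods are applied on top of; lower $\epsilon$ means a smaller keep probability and hence noisier labels.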
Summary: The authors theoretically analyze retraining in a linearly separable binary classification problem and show that it can improve model accuracy relative to the initial training in the presence of label noise. They show that retraining is particularly helpful with high levels of label noise. Then, the paper empirically shows that the proposed consensus-based retraining works better than normal retraining. ## Update after rebuttal After reading all the reviews carefully and considering the additional effort made by the authors, I decided to raise my score from 3 to 4. I think this is an excellent paper. Claims And Evidence: The claims are almost all clear and convincing. - The main claim for which the clarity could be improved is the specification (especially in the abstract) that they theoretically analyze a **binary** classification problem. - In line 123 you claim that your work is on the fully supervised setting. Isn't the label noise scenario considered weakly-supervised? Methods And Evaluation Criteria: The proposed evaluation criteria make sense for the problem considered. Theoretical Claims: I checked the theoretical claims superficially and they seem correct and well written. Experimental Designs Or Analyses: I would have preferred to see a comparison with other algorithms that perform classification with label noise, but I only see a minor result on the combination of retraining and forward correction. I think that a wider comparison would help in understanding whether the contribution of this paper is mainly theoretical or if there is also a possible advancement for state-of-the-art techniques. I don't understand why the authors did not share the code. This raises concerns about the reproducibility of their results. Supplementary Material: The code is not provided. The appendix is well written. Relation To Broader Scientific Literature: The paper contributions are incremental, as the retraining technique is well known.
However, the theoretical analysis is interesting and novel in my opinion. Essential References Not Discussed: I am not aware of important related work that is not cited in the paper. Other Strengths And Weaknesses: Strengths - The paper is well written and, even though it is theoretically heavy, it can be easily read by non-experts - The related work section is very useful - The experimental results reinforce the theoretical claims Weaknesses - No code - No comparison with other techniques for classification with label noise (apart from forward correction) - No theoretical analysis or comments for the multi-class classification problem Other Comments Or Suggestions: - in line 165 at the beginning of p. 4 I would prefer the authors to use $\cdot$ instead of $.$ Questions For Authors: - How do you relate the retraining technique to the problem of memorization of noisy samples? That is a well-known problem in the noisy labels literature and I am afraid that retraining could worsen the memorization effect. Can you provide an empirical analysis of the memorization effect when using retraining? [a,b,c] - What happens when we increase the number of gradient steps? Does the gap between the accuracy achieved with and without retraining decrease? Is there a point at which, if we train the model for X steps, retraining lowers the accuracy? Maybe this would be an interesting ablation study. - You use the baseline in Ghazi et al. 2021. Which objective function do you use to train your neural networks? I assume you use the cross-entropy. However, you did not study how the performance of your algorithm would change by changing the baseline or objective function. This could raise questions about the general validity of retraining. Could you study the performance applying these changes? [a] Arpit, D., Jastrzębski, S., Ballas, N., Krueger, D., Bengio, E., Kanwal, M. S., ... & Lacoste-Julien, S. (2017, July). A closer look at memorization in deep networks.
In International conference on machine learning (pp. 233-242). PMLR. [b] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. ICLR 2017. [c] Liu, S., Niles-Weed, J., Razavian, N., & Fernandez-Granda, C. (2020). Early-learning regularization prevents memorization of noisy labels. Advances in neural information processing systems, 33, 20331-20342. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your review and great questions! We address your questions/concerns below. **Claims And Evidence:** 1. We will clarify "binary" in the abstract. 2. Here we simply meant a setting where we have labels for all samples - to distinguish it from the setting of self-training where we are *not* given labels for all the samples. We’ll clarify this. **Experimental Designs Or Analyses / first two weaknesses**: * Regarding comparisons with other noise-robust methods, please note that we are *not* claiming retraining is a SOTA *general-purpose* noise-robust method (see lines 100-103 left column). We are just advocating it as **straightforward post-processing step** that can be applied **on top of vanilla training or a noise-robust training method**. In case it wasn’t clear, Table 5 shows results wherein initial training (baseline) was done with forward correction applied to the method of Ghazi et al. 2021, and retraining was done on top of this. Please also see our response to your third question (under Questions For Authors) below, where we show that retraining is beneficial as a post-processing step even *when using a noise-robust loss function* instead of the usual cross-entropy loss. Moreover, it’s not straightforward to apply many existing noise-robust methods to sophisticated label DP mechanisms (such as Ghazi et al.); retraining is very easy to apply in contrast. * We didn’t release the code because at the time of submission, we didn’t obtain our organization's approval to release it. We weren't sure if code can be shared in the rebuttal because the email on rebuttal instructions didn't mention anything about code. We will release the code upon paper acceptance. **No theoretical analysis or comments for the multi-class case (Weakness 3):** Extending our analysis to the multi-class case is left for future work. 
Here is a starting point: in the case of $C$ classes, the labels $y_i$’s will be $C$-dimensional one-hot vectors, the ground truth $\Theta^{\ast}$ will be a $C \times d$ matrix (features $x_i$’s are still $d$-dimensional vectors, but $y_i$’s need to be defined appropriately in terms of $\Theta^{\ast}$ and the $x_i$’s) and our predictor $\hat{\Theta} = \frac{1}{n} \sum_i y_i x_i^T$ will also be in $\mathbb{R}^{C \times d}$. **Questions For Authors:** **1.** Yes, memorization of noisy labels with powerful models is an issue. And if initial training is done naively, retraining may exacerbate this issue. That is why in almost all our experiments (except in Appendix J), we assume access to a clean validation set; please also see footnote 5 for the *practical version of this assumption*. This prevents the model from heavily memorizing. Moreover, as we show in Tables 3 & 7, the accuracy of the predicted (= given) labels on the consensus set is much more than the accuracy of both the predicted and given labels on the full set. This shows that regulated initial training is effective at avoiding memorization. Further, *as shown in Appendix J, even in the absence of a validation set, retraining is still beneficial but the gains are less – this is expected because the initial model’s performance is degraded due to more memorization/overfitting here*. **2.** Indeed, the benefit of retraining decreases when initial training is done for a larger number of steps. We studied this in Appendix J – here we don’t have a validation set and trained blindly for 100 epochs. Due to more overfitting here, the gains of retraining are lower than the corresponding gains with a validation set where we stopped at 40 epochs. If we train for even longer, the initial model will heavily memorize the noisy labels and this will probably render retraining ineffective. **3.** Yes, we used the cross-entropy (CE) loss; we’ll state this in the next version. 
Our baseline for AG News is actually randomized response (see lines 425-426 left column) to demonstrate the generality of retraining w.r.t. the baseline. Further, in Table 5, our baseline is forward correction applied to the method of Ghazi et al. 2021. So we do have results with other baselines. And based on your suggestion, we performed experiments with the *noise-robust symmetric CE loss function* proposed in [1] (1k+ citations) instead of the vanilla CE loss. In their loss (eq. 7 of [1]), we set $\alpha=0.8$ and $\beta=0.2$. Here are the results for CIFAR-100 w/ ResNet-34. |$\epsilon$|Baseline|Full RT|Consensus-based RT| |---|---|---|---| |$4$|$37.07\pm2.03$|$38.17\pm2.03$|$\mathbf{43.20}\pm1.77$| |$5$|$53.10\pm0.54$|$53.40\pm0.33$|$\mathbf{56.13}\pm0.25$| Thus, consensus RT yields meaningful gains even with the loss function of [1]. We hope to have resolved your concerns and we are happy to discuss further. If you’re satisfied with our answers, *we sincerely hope you will raise your score*! ---- [1]: Wang, Yisen, et al. "Symmetric cross entropy for robust learning with noisy labels." ICCV 2019. --- Rebuttal Comment 1.1: Comment: Thank you for the answers. I will keep my score as it is. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We are adding some new results on a *bigger dataset to show that retraining is effective when applied on top of label noise correcting methods*. Specifically, we show results when the baseline is *forward correction* and *backward correction* (from Patrini et al. 2017 cited in the paper) applied to the first stage of Ghazi et al. 2021 (similar to Table 5 in the paper); these results are in (A) and (B) below, respectively. For comparison in (C) below, we also show results when the baseline is just Ghazi et al. 2021 (i.e., no correction is applied). These results are on the *DomainNet* dataset (available on Tensorflow) *which has 345 classes and is much larger than CIFAR*. 
We did linear probing (using cross-entropy loss) with features extracted from a ResNet-50 pretrained on ImageNet. The setup is similar to our full fine-tuning experiments. **(A) Baseline = Forward Correction (Patrini et al. 2017) + Ghazi et al. 2021:** |$\epsilon$|Baseline|Full RT|Consensus-based RT| |---|---|---|---| |$3$|$31.23\pm0.56$|$33.30\pm0.65$|$\mathbf{36.07}\pm0.78$| |$4$|$58.50\pm0.08$|$58.63\pm0.12$|$\mathbf{61.80}\pm0.08$| **(B) Baseline = Backward Correction (Patrini et al. 2017) + Ghazi et al. 2021:** |$\epsilon$|Baseline|Full RT|Consensus-based RT| |---|---|---|---| |$3$|$30.17\pm0.61$|$31.47\pm0.74$|$\mathbf{35.03}\pm0.78$| |$4$|$56.63\pm0.37$|$56.80\pm0.37$|$\mathbf{60.47}\pm0.46$| **(C) Baseline = Ghazi et al. 2021 (no correction):** |$\epsilon$|Baseline|Full RT|Consensus-based RT| |---|---|---|---| |$3$|$23.60\pm0.92$|$29.23\pm1.03$|$\mathbf{36.30}\pm0.75$| |$4$|$48.25\pm0.05$|$52.10\pm0.10$|$\mathbf{57.40}\pm0.20$| As expected, forward and backward correction lead to better initial model performance (compared to no correction). The main thing to note however is that **consensus-based RT yields significant gains even with forward and backward correction**, consistent with our earlier results. Thus, consensus-based RT is a very effective post-processing step for improving learning with noisy labels. (It is worth noting that for $\epsilon=3$, consensus-based RT leads to similar accuracy with and without noise correction.) We hope you will take these extra results into consideration.
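The noise-robust symmetric cross-entropy loss used in the rebuttal's extra experiment can be sketched as below, following the common implementation of eq. 7 of Wang et al. 2019; the log(0) clamp constant A = -4 and the function name are assumptions here, not taken from this paper:

```python
import numpy as np

def symmetric_ce(probs, y_onehot, alpha=0.8, beta=0.2, A=-4.0):
    """Symmetric cross-entropy (Wang et al. 2019, eq. 7):
    alpha * CE(y, p) + beta * reverse-CE(p, y), where log(0) in the
    reverse term is clamped to the constant A (commonly A = -4).
    probs is assumed strictly positive (e.g., a softmax output)."""
    ce = -np.sum(y_onehot * np.log(probs))
    log_y = np.where(y_onehot > 0, 0.0, A)   # log(1) = 0, log(0) -> A
    rce = -np.sum(probs * log_y)
    return alpha * ce + beta * rce

p = np.array([0.7, 0.2, 0.1])   # toy model probabilities
y = np.array([1.0, 0.0, 0.0])   # one-hot label
loss = symmetric_ce(p, y)       # alpha, beta as in the rebuttal's setting
```

The reverse term penalizes probability mass placed on non-label classes, which is what makes the loss more robust to noisy labels than plain cross-entropy.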
Summary: The paper gives a theoretical treatment of when learning with predicted hard labels is more beneficial than learning with the original noisy labels. Claims And Evidence: Yes, the claims were proved. Methods And Evaluation Criteria: Overall makes sense to me. Though I am not quite sure why the "label DP" setup is considered; it seems to me a standard label noise setup suffices. Theoretical Claims: I've skimmed through the proofs, but have not checked the details. Experimental Designs Or Analyses: Yes. Supplementary Material: I've skimmed through the proofs and read the experimental setups. Relation To Broader Scientific Literature: The benefit of using predicted hard labels has been studied empirically; the theoretical treatment is new. Essential References Not Discussed: Essential references are included. (Optional) there can be some supplementary references that are related, see below. Other Strengths And Weaknesses: Overall I enjoyed reading the paper. The theoretical treatment of hard labels is new to me, and I think it is a good contribution to the literature. I have some concerns at this point; I will be happy to read the authors' comments and re-assess my review. My biggest theoretical concern is: 1) The form of the classifier considered: $$ \hat{\theta} = \frac{1}{n} \sum_{i=1}^n y_i x_i, $$ it does not correspond to any standard classifier. (At first glance, I would expect an ERM or logistic regression type.) I look forward to seeing an experimental setup that is more aligned with the theoretical setting: 1) a data simulation that corresponds exactly to the theoretical setting, e.g., a 2-dimensional mixture of two Gaussians (and use the exact form of the classifier in theory). 2) I think it is also possible to align the CIFAR experiments with the theoretical setting, e.g., use a pretrained NN to extract the features, then apply a linear classifier on top of them (aka "linear probing" in self-supervised learning). These should provide stronger evidence for the theory.
Other Comments Or Suggestions: 1) Notation: in the label noise literature, people usually use $\tilde{y}$ to denote the noisy label, rather than $\hat{y}$. $\hat{y}$ is usually used to denote the label predicted by the classifier. This is a bit confusing. 2) line 197: "perfect separable setting", I don't think it's separable, because a Gaussian has infinite support, therefore the positive and negative classes are overlapping. Do you mean "the Bayes decision boundary is linear"? 3) Eqn. 4.4: the notation $ P(sign(<x, \theta>) \neq y ) $ is overloaded; here it integrates over $x$, while in Theorem 4.1, it is conditioned on $x$. Questions For Authors: 1) line 072, Figure 1: what's the classifier used to get the result? (linear, MLP, or the one in Eqn. 4.3/4.8?) 2) line 217: why is "u" needed? It seems a bit redundant and does not seem to play a key role. 3) Eqn. 4.3 & 4.8: my biggest concern, why does the classifier take the form $$ \hat{\theta} = \frac{1}{n} \sum_{i=1}^n y_i x_i ? $$ It does not correspond to any standard classifier, and what's the multi-class version of it? 4) Theorem 4.6 (minimax lower bound): I am aware of three (non-parametric) lower bounds in [1-3], so I would like to know the position of this lower bound in the literature. 5) Theorem 4.6 also applies to the "retraining classifier" $\theta_1$ in Eqn. 4.8; therefore, predicted hard labels do not provide a gain in terms of rate/sample complexity. Then is the benefit of using the hard label only in terms of the constants? [1] T Tony Cai and Hongji Wei. Transfer learning for nonparametric classification: Minimax rate and adaptive classifier. The Annals of Statistics, 49(1):100–128, 2021. [2] Hyungki Im and Paul Grigas. Binary classification with instance and label dependent label noise. arXiv preprint arXiv:2306.03402, 2023. [3] Yilun Zhu, Jianxin Zhang, Aditya Gangrade, and Clayton Scott. Label Noise: Ignorance Is Bliss. In Advances in Neural Information Processing Systems, 2024 Code Of Conduct: Affirmed.
Overall Recommendation: 4
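A toy simulation of the kind the reviewer requests above, pairing a Gaussian mixture with the averaging classifier of eq. 4.3 and a full-retraining step in the spirit of eq. 4.8; all constants and the simplified noise model here are illustrative, not the paper's exact eq. 4.1 setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 2000, 10, 0.3          # samples, dimension, label-flip probability
mu = np.ones(d) / np.sqrt(d)     # ground-truth direction (unit norm)

# Gaussian mixture: clean label y* = +/-1, feature x = y* * mu + noise.
y_true = rng.choice([-1, 1], size=n)
x = y_true[:, None] * mu + 0.5 * rng.standard_normal((n, d))

# Noisy labels: each flipped independently with probability p.
flips = rng.random(n) < p
y_noisy = np.where(flips, -y_true, y_true)

# Averaging classifier (eq. 4.3): theta = (1/n) * sum_i y_i x_i.
theta0 = (y_noisy[:, None] * x).mean(axis=0)

# Full retraining (eq. 4.8): relabel with the initial model's hard
# predictions, then recompute the averaging classifier.
y_pred = np.sign(x @ theta0)
theta1 = (y_pred[:, None] * x).mean(axis=0)

acc0 = (np.sign(x @ theta0) == y_true).mean()
acc1 = (np.sign(x @ theta1) == y_true).mean()
print(acc0, acc1)  # per the paper, retraining tends to help at high noise
```

Note this drops the $(1+u)$ margin scaling of the paper's eq. 4.1 for simplicity, so it is a sketch of the setup rather than an exact reproduction.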
Rebuttal 1: Rebuttal: Thanks for the review and great questions! **(A) Label DP setting.** We focused on this because it’s not clear how to apply existing noise-robust techniques on top of existing label DP mechanisms, while retraining is a simple post-processing step. For e.g., as mentioned in lines 365-366 right column, it’s not obvious how to apply forward correction to the second stage of Ghazi et al 2021. **(B) Form of classifier $\hat{\theta} = \sum_i y_i x_i$.** This is a simplification to the least squares’ solution (LSS) obtained by removing the empirical covariance matrix’s inverse. The way to analyze the LSS would be to bound the deviation of the empirical covariance matrix from the population covariance matrix (which shrinks as $n \to \infty$), then analyze with the features pre-multiplied by covariance matrix’s inverse. This would just make the math more tedious w/o adding any meaningful insights. Also, as we wrote around eq. 4.3, our classifier is similar to kernel methods with the inner product kernel & it has been used in the highly-cited work of Carmon et al 2019. **(C) Experimental setup more aligned with theory setting** 1. Setting of Fig. 1 corresponds to our theory setting; see Appendix A. 2. We did linear probing (LP) for CIFAR-100 with features extracted from a ResNet-50 pretrained on ImageNet. The setup is similar to our full fine-tuning (FT) experiments; we omit details here due to lack of space. Results: |$\epsilon$|Baseline|Full RT|Consensus-based RT| |---|---|---|---| |$3$|$55.26\pm0.19$|$60.97\pm0.21$|$\mathbf{63.37}\pm0.26$| |$4$|$64.83\pm0.39$|$66.67\pm0.33$|$\mathbf{67.83}\pm0.37$| *So even here RT (especially, consensus RT) yields good gains*. Note that LP performs much better than full FT; this is often the case when training with noise due to less overfitting with LP. **(D) Line 197: perfect separable setting.** *Our modified GMM setting* (eq 4.1) is separable. 
As discussed in lines 172-176 right column, $\theta^{*} = \mu$ separates the data perfectly. We’ll fix/clarify notational ambiguities. **(E) Questions for Authors** **1.** The ones in eqs. 4.3 & 4.8. Also see Appendix A. **2.** We introduced $u$ so that the margin of a data point $x$ along $\mu$ is not the same. As explained in lines 172-175 right column, $|<x, \mu>| = (1+u)||\mu||^2 \geq ||\mu||^2$. If there is no $u$, all the points would have the same margin. We agree that from the analysis perspective, $u$ is not very important. **3.** For the binary case, see (B) above. Even in the multi-class case, something like this has been studied in reference [A] (see Section 2.2). Specifically, for $C$ classes, the labels $y_i$’s will be $C$-dimensional one-hot vectors, the ground truth $\Theta^{\ast}$ will be a $C \times d$ matrix (features $x_i$’s are still $d$-dimensional, but $y_i$’s need to be defined appropriately in terms of $\Theta^{\ast}$ & $x_i$’s) and our predictor $\hat{\Theta} = \frac{1}{n} \sum_i y_i x_i^T$ will also be in $\mathbb{R}^{C \times d}$. **4.** *These lower bounds are in much more general settings than ours and so they are weaker than ours*. In [1], our setting corresponds to $n_p = 0, n_q = n$. Per Definition 2 of [1], $\beta \leq 1$ and as per the paragraph after Remark 3, $\alpha \beta \leq d$. Now as per Thm 3.2, the lower bound on the error is effectively $n^{-O(\frac{1+\alpha}{d})}$. So when $\alpha \ll d$, this lower bound yields a much worse sample complexity than our result in Thm 4.6. In [2], the lower bound on the error (Thm 2) doesn’t reduce with $n$, so even if there are infinite samples, we can’t get 0 error in the worst case. As for [3], the lower bound on the error (Thm 1) also has a non-diminishing term depending on $\epsilon$. In the special case of $\epsilon = 0$ (or $\epsilon$ being small), there is a $n^{-1/2}$ dependence but no dependence on the dimension (or a related quantity). 
But their upper bound in Thm 2 does have a dependence on a VC dimension-like quantity as expected, so their lower bound is probably loose w.r.t. dimension. **5.** Yes, Thm. 4.6 also applies to the retraining (RT) classifier. If you see Remark 4.10, the min. # of samples $n$ needed for RT to be better is **more than** $d/(1-2p)^2$, i.e., the lower bound of Thm. 4.6. And as discussed after Remark 4.10, this requirement on $n$ is probably tight (modulo log factors) because we can only hope the RT classifier to be better if the accuracy of the labels with which it is trained – namely, the initial model’s predicted labels – is more than $(1-p)$ (= accuracy of the noisy labels with which the initial model is trained); this requires at least $d/(1-2p)^2$ samples (per Thm. 4.6). So yes, RT can’t improve the sample complexity beyond $d/(1-2p)^2$. We hope to have resolved your concerns and are happy to discuss further. If you’re satisfied, *we hope you will raise your score*! --- [A]: "Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View", Thrampoulidis et al., 2020 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. It would be nice to see a more comprehensive discussion of the lower bound in the next version of the paper (can be a formalized version of the response). Given the limited exploration of lower bounds in the existing label noise literature, I think it is a good addition. Regarding the debate on "linear separability," my understanding is that distributions are separable if and only if their supports do not overlap. However, this difference in perspective is minor, and I am comfortable moving forward despite differing views. Overall, the authors have adequately addressed my concerns, and I anticipate the next version of the manuscript will provide further clarity. I recommend acceptance (and have raised the score from 3 to 4). --- Reply to Comment 1.1.1: Comment: Thank you for raising your score! 
Yes, we will add a discussion on the lower bounds in the next version and we agree, it’ll be a good addition. Thanks for pointing out these papers! We’ll also clarify what we mean by separability and add the extra experiments.
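For concreteness, here is a minimal numpy sketch of the averaging classifier and the retraining step discussed in this thread (point (B) and question 5). The Gaussian-mixture setup, the unit-norm mean, and all constants are illustrative assumptions, not the paper's exact eq. 4.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 5000, 50, 0.3           # samples, dimension, flip probability (illustrative)

mu = np.ones(d) / np.sqrt(d)      # unit-norm mean direction (an assumption)
y = rng.choice([-1, 1], size=n)
x = y[:, None] * mu + rng.standard_normal((n, d))

flip = rng.random(n) < p          # symmetric label noise at rate p
y_noisy = np.where(flip, -y, y)

theta0 = (y_noisy[:, None] * x).mean(axis=0)   # averaging classifier on noisy labels
y_pred = np.sign(x @ theta0)                   # initial model's predicted hard labels
theta1 = (y_pred[:, None] * x).mean(axis=0)    # retrained classifier

def error_rate(theta):
    return float(np.mean(np.sign(x @ theta) != y))

print("label noise rate:", float(np.mean(y_noisy != y)))
print("initial classifier error:", error_rate(theta0))
print("retrained classifier error:", error_rate(theta1))
```

With $n$ well above $d/(1-2p)^2$, the initial classifier's error falls below the raw label noise rate, which is the regime in which the rebuttal argues retraining on predicted labels can help.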
Linear Mode Connectivity between Multiple Models modulo Permutation Symmetries
Accept (poster)
Summary: The authors observe that the linear mode connectivity hypothesis, as proposed in prior work, has only been confirmed between two independent models, and propose an algorithm to merge multiple models such that the test loss does not meaningfully change while the merged model lies in an approximately flat global minimum. Claims And Evidence: The claims seem to be well-supported empirically: the merging of two models doesn't scale, and this new method does. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Mainly the theory section Relation To Broader Scientific Literature: The contribution seems important for the larger literature on linear mode connectivity, but this isn't particularly well motivated in the paper itself. The STE-MM algorithm seems novel, but because the methods either lower test accuracy or provide a very small increase, my understanding is that these algorithms are not intended to improve metrics for their own sake but to give more evidence that linear mode connectivity of independently trained networks is empirically true. I think this requires a bit more discussion in the paper. In particular, the fact that the convex basin of merging multiple models becomes less sharp is intuitive, but the consequences of this fact aren’t elaborated on much. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
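For readers unfamiliar with the property at issue: linear mode connectivity between two solutions means the loss stays low along the straight line between them in parameter space, i.e., the loss barrier along the interpolation path is near zero. A minimal sketch with a toy convex logistic model (an illustrative stand-in, since for convex losses the barrier is provably non-positive, which is what permutation search tries to recover for neural networks):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))
y = (X @ rng.standard_normal(20) > 0).astype(float)

def loss(w):
    # logistic negative log-likelihood (convex in w)
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return float(-np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))

def train(seed, steps=500, lr=0.1):
    # plain gradient descent from an independent random init
    w = np.random.default_rng(seed).standard_normal(20)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

w1, w2 = train(1), train(2)
ts = np.linspace(0.0, 1.0, 11)
path = [loss((1 - t) * w1 + t * w2) for t in ts]
barrier = max(path) - max(loss(w1), loss(w2))
print("loss barrier along the interpolation path:", barrier)
```

For deep networks the barrier is generally positive unless the hidden units of one model are first permuted to match the other, which is the problem the paper's permutation search addresses.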
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful comments and for taking the time to carefully read our paper. > The contribution seems important for the larger literature on linear mode connectivity, but this isn't particularly well motivated in the paper itself. The STE-MM algorithm seems novel, but because the methods either lower test accuracy or provide a very small increase, my understanding is that these algorithms are not intended to improve metrics for their own sake but to give more evidence that linear mode connectivity of independently trained networks is empirically true. I think this requires a bit more discussion in the paper. In particular, the fact that the convex basin of merging multiple models becomes less sharp is intuitive, but the consequences of this fact aren’t elaborated on much. Thank you for your thoughtful feedback. As you pointed out, we believe that our work contributes to the broader understanding of linear mode connectivity (LMC), particularly in the context of merging multiple independently trained models. While we touch on this connection in Section 1 of the paper, it may not have been sufficiently emphasized. To better reflect this perspective, we are considering revising the introduction to explicitly highlight the relevance to LMC and potentially modifying the title to more directly reflect this connection, for example, “Linear Mode Connectivity for Multiple Models Modulo Permutation Symmetries.” We use average-case sharpness as a metric because, as discussed in Andriushchenko et al. (2023), it quantifies how much the loss increases under Gaussian perturbations of model parameters. If multiple models can be transferred via permutations into the same convex basin, we expect the loss around their midpoint to remain relatively flat—reflected in low average-case sharpness. Indeed, our experiments show that STE-MM consistently reduces average-case sharpness around merged models. 
While this does not provide direct evidence of a single convex basin, it supports the possibility that such a basin exists. A more fine-grained analysis of this phenomenon remains an important direction for future research, which we intend to pursue. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response. I will keep my score.
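The average-case sharpness discussed above (loss increase under Gaussian parameter perturbation, in the spirit of Andriushchenko et al. 2023) can be estimated by simple Monte Carlo. A sketch on a tiny logistic model; the model and all constants are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_star = rng.standard_normal(10)
y = (X @ w_star > 0).astype(float)

def loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return float(-np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))

def avg_case_sharpness(w, sigma=0.1, n_draws=200):
    """Monte-Carlo estimate of E[L(w + delta)] - L(w), delta ~ N(0, sigma^2 I)."""
    base = loss(w)
    perturbed = [loss(w + sigma * rng.standard_normal(w.shape)) for _ in range(n_draws)]
    return float(np.mean(perturbed)) - base

print("estimated average-case sharpness:", avg_case_sharpness(w_star))
```

A flatter region around the merged parameters yields a smaller value of this estimate, which is the trend STE-MM exhibits as more models are merged.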
Summary: Prior work showed that linear mode connectivity can be achieved between two independently trained neural networks by applying an appropriate parameter permutation, suggesting that SGD-trained models converge to a shared low-loss basin under permutation symmetries. This paper extends their analysis to multiple models, proposing STE-MM, a runtime-optimized permutation search method that maintains low test loss after merging and reduces loss sharpness as the number of models increases, indicating that linear connectivity generalizes to multiple models. ## Update after rebuttal I thank the authors for the rebuttal. I maintain my score. Claims And Evidence: Yes, I find the results in the paper highly interesting and the work exceptionally well executed. I especially appreciate the approach of optimizing the permutation matrix search by reusing dual variables from previous iterations to dynamically adjust the cost matrix, significantly improving the efficiency of the LAP solver. Methods And Evaluation Criteria: Yes, the proposed approach and insights demonstrate a strong understanding of classical algorithms. The choice of datasets and baselines is appropriate. I recommend that the authors extend their approach to more challenging datasets, such as CIFAR-100 and ImageNet, to further validate its effectiveness. Theoretical Claims: The theoretical and algorithmic aspects of the paper are well-explained and appear solid. Experimental Designs Or Analyses: The presented results are compelling and should be well-received by the conference community. Supplementary Material: I reviewed the experimental section but did not carefully checked the proof. Relation To Broader Scientific Literature: Both the proposed optimization method (STE-MM) and the findings presented in the paper make a significant contribution to the scientific literature on validating the permutation hypothesis. Essential References Not Discussed: The references are appropriate. 
Other Strengths And Weaknesses: The paper presents a valuable result that advances the theoretical understanding of deep learning and introduces a method that is highly relevant for applications such as multi-task and distributed learning. I don't see major weaknesses. Other Comments Or Suggestions: - Typo "mdoel" on line 060, second column - Please revise lines 115-134 in the second column to ensure that the referenced conjecture is clearly presented as a conjecture rather than a theorem. - Line 350: "While the loss of the merged model also tends to in crease with the number of models in STE-MM, the amount of increase in test loss value becomes small as the number of models grows." - I think it's hard to see this in the provided figures, but could be shown in a dedicated plot. - I suggest to move Fig. 7 and Fig. 8 to the main paper - these are quite essential. Questions For Authors: - Since STE-MM is the first method to empirically place multiple models into a single basin, could it be used to analyze potential biases or inefficiencies in other, more lightweight, permutation search techniques? - Could you comment on how your paper relates to the concurrent work https://arxiv.org/abs/2403.07968? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your valuable comments and for your close reading of the paper. > I recommend that the authors extend their approach to more challenging datasets, such as CIFAR-100 and ImageNet, to further validate its effectiveness. Thank you for your suggestion. To test our method on more complex tasks, we performed an experiment on model merging using ResNet-50 models trained on the ImageNet dataset. The following table presents the test losses of merged models using various permutation search methods, including STE-MM. Results are shown for up to six models due to GPU memory limitations. The ResNet-50 models were trained using a training script published on GitHub by the FFCV library (i.e., https://github.com/libffcv/ffcv-imagenet). The table reports the average and standard deviation of the test loss across three model merging trials. As seen in the table, the test loss of the merged model using STE-MM decreases monotonically as the number of merged models increases, while that of other permutation search methods increases, demonstrating the effectiveness of our method. Note that the test loss of the merged model could potentially be improved by increasing the model width, even when multiple models are merged. Ainsworth et al. (2023), for instance, have shown that the test accuracy of the merged model improves with increased model width when two ResNet-50 models are merged. 
**Test loss of the merged model when ResNet-50 models trained on ImageNet are combined** | #models | MergeMany | CCMM | STE-MM | |---:|:------------------|:------------------|:------------------| | 2 | $5.835 \pm 0.134$ | $5.822 \pm 0.066$ | $5.306 \pm 0.022$ | | 3 | $6.341 \pm 0.023$ | $6.399 \pm 0.062$ | $5.179 \pm 0.044$ | | 4 | $6.555 \pm 0.034$ | $6.576 \pm 0.047$ | $4.887 \pm 0.039$ | | 5 | $6.661 \pm 0.084$ | $6.657 \pm 0.040$ | $4.684 \pm 0.010$ | | 6 | $6.689 \pm 0.047$ | $6.677 \pm 0.037$ | $4.559 \pm 0.027$ | > Typo "mdoel" on line 060, second column > Please revise lines 115-134 > could be shown in a dedicated plot. We apologize for any typos, unclear figures, or ambiguous expressions in the text. We will carefully revise the manuscript to address all of these issues in the camera-ready version. > I suggest to move Fig. 7 and Fig. 8 to the main paper. Thank you for your suggestion. As you mentioned, these are important results, so we will include Figures 7 and 8 in the main body of the paper. > could it be used to analyze potential biases or inefficiencies in other, more lightweight, permutation search techniques? This is a valuable perspective. Since methods based on $L^2$ distance, such as MergeMany and CCMM, do not require training, they can search for permutation matrices with low computational cost. However, our experimental results show that the performance of the merged model deteriorates as the number of models being merged increases when using these methods. This suggests a fundamental difference between the permutation matrices discovered by STE-MM and those identified by lightweight approaches. A more in-depth comparison of these permutation matrices may help uncover the causes of inefficiency in the latter and guide improvements in their design. > Could you comment on how your paper relates to the concurrent work https://arxiv.org/abs/2403.07968? Thank you for the reference. We were not previously aware of this paper. 
It proposes the star domain conjecture and introduces the Starlight algorithm to verify it by identifying a parameter $\theta^\ast$ from multiple SGD solutions. The authors show empirically that such a $\theta^\ast$ exists even for models with smaller widths. Entezari et al.'s conjecture (i.e., multiple SGD solutions can be transferred into a single approximately convex basin via permutations) makes a stronger claim than the star domain conjecture, as shown in the figure on page 2 of their paper. This result is particularly interesting, as the method performs well in the small-width regime. In contrast, our STE-MM approach requires sufficiently wide models to transfer them into a shared approximately convex basin; otherwise, a significant barrier remains. It is unclear whether this limitation arises from the difficulty of searching the large permutation space—due to the discrete nature of the optimization—or from a more fundamental property of the loss landscape. If the latter is the case, it may suggest that wider (i.e., overparameterized) models tend to induce simpler loss landscapes. In fact, a recent theoretical study (https://openreview.net/forum?id=4xWQS2z77v) on two-layer neural networks has shown that increasing the width leads to a simpler structure in the loss landscape. It is plausible that a similar phenomenon occurs in deeper networks as well. We believe that further investigation into this aspect is a promising direction for future research.
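For reference, the permutation symmetry exploited throughout this thread can be verified in a few lines: relabeling the hidden units of an MLP layer, with the adjacent weight matrices permuted consistently, leaves the network function unchanged. A minimal sketch with an illustrative two-layer ReLU network (shapes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 8, 16, 4
W1 = rng.standard_normal((d_h, d_in))
b1 = rng.standard_normal(d_h)
W2 = rng.standard_normal((d_out, d_h))

def forward(W1, b1, W2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

perm = rng.permutation(d_h)                 # relabel the hidden units
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.standard_normal(d_in)
out_a = forward(W1, b1, W2, x)
out_b = forward(W1p, b1p, W2p, x)
print("max abs difference:", float(np.max(np.abs(out_a - out_b))))
```

Because each hidden layer of width $m$ admits $m!$ such relabelings, independently trained models can implement the same function with very different raw weights, which is why a permutation search is needed before merging.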
Summary: This paper investigates permutation-based methods for merging multiple models. The literature mainly focused on merging pairs of models, and this paper shows that these methods fail to transfer multiple models into the same basin. Then they introduce a method for merging multiple models that finds multiple permutations by extending the Straight-Through Estimator method, and propose an accelerated version of weight matching for faster permutation search. Claims And Evidence: Claims are supported by the experiments. Methods And Evaluation Criteria: The paper is clear and introduces experiments to motivate the necessity of a new method to tackle the problem of merging multiple models. Then the proposed method is benchmarked against the correct set of baselines and it scales much better with the number of merged models in terms of loss. Theoretical Claims: There is a theoretical section (proof in appendix) claiming that finding n-1 permutations is sufficient. This result serves as a base for their algorithm 1. The proof seems correct. Experimental Designs Or Analyses: The experimental design is sound, all claims are supported by experiments. Supplementary Material: yes, the proofs. Relation To Broader Scientific Literature: yes. It connects to seminal papers on linear mode connectivity, permutation-based methods and also recent works on merging multiple models (Crisostomi et al 2024) Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - important problem - scalable method over the number of models - experiments are sound and show improvement over the baselines - interesting analysis on the flatness of the final solution Weaknesses: - The algorithm explanation could be improved. I suggest the authors expand and clarify the reason for dummy variables and how the algorithm works in general by following the explanation of the seminal paper (Ainsworth et al 2023), section 3.3. 
- Experiments are only performed on academic benchmarks such as CIFAR10 and MNIST. I’m sorry about this comment, I hope it does not trivialize your analysis, but I recommend testing the scalability of the method on more complex tasks such as ImageNet. Other Comments Or Suggestions: typos: 190: Permuta*ion Search for Multiple Models 403 right - experime*tnal results Questions For Authors: please see weaknesses and strengths, and: - can the authors provide an analysis of the permutation statistics, for example, what is the percentage of the weights that are being permuted? - another thing is, can the authors provide a baseline where no permutation is applied? I think it is important to establish that there are actual permutations in the considered setting. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and the effort you put into reviewing our work. > The algorithm explanation could be improved. I suggest the authors expand and clarify the reason for dummy variables and how the algorithm works in general by following the explanation of the seminal paper (Ainsworth et al 2023), section 3.3. Thank you for your suggestion on improving the explanation. We will revise the description of STE-MM to make it clearer, drawing on the explanation provided in Ainsworth et al. (2023), Section 3.3. > Experiments are only performed on academic benchmarks such as CIFAR10 and MNIST. I’m sorry about this comment, I hope it does not trivialize your analysis, but I recommend testing the scalability of the method on more complex tasks such as ImageNet. Thank you for your suggestion. To evaluate the scalability of our method, we conducted additional experiments of model merging with ResNet-50 models trained on the ImageNet dataset. Due to space limitations in the rebuttal, the detailed experimental results are included in our response to Reviewer SJMK’s first comment. The results demonstrate that our method scales effectively to more complex tasks such as ImageNet. > typos: 190: Permutaion Search for Multiple Models 403 right - experimetnal results We are sorry for the typos. We will carefully review the entire paper and make the necessary corrections. > can the authors provide an analysis of the permutation statistics, for example, what is the percentage of the weights that are being permuted? Yes, certainly. The following table provides an analysis of how closely the permutation matrices discovered by STE-MM resemble the identity matrix. Specifically, we measured this by counting the number of entries equal to 1 in the diagonal of each permutation matrix and dividing this count by the size of the matrix. 
This matching rate was calculated for each layer, then averaged across all layers in a model, and finally averaged across all models being merged. The table reports the mean and standard deviation over three model merging trials. As shown, the matching ratio remains close to zero regardless of the number of models, indicating that the found permutation matrices are significantly different from the identity matrix. **Percentage of matches between the permutation matrix and the identity matrix [\%]** | #models | MLP, MNIST | MLP, FMNIST | VGG-11, CIFAR10 | ResNet-20, CIFAR10 | |---:|:------------------|:------------------|:------------------|:---------------------| | 2 | $0.174 \pm 0.099$ | $0.217 \pm 0.136$ | $0.336 \pm 0.122$ | $0.217 \pm 0.033$ | | 3 | $0.141 \pm 0.082$ | $0.228 \pm 0.117$ | $0.298 \pm 0.258$ | $0.191 \pm 0.051$ | | 4 | $0.152 \pm 0.075$ | $0.217 \pm 0.075$ | $0.254 \pm 0.126$ | $0.242 \pm 0.023$ | | 5 | $0.157 \pm 0.052$ | $0.212 \pm 0.033$ | $0.171 \pm 0.012$ | $0.203 \pm 0.002$ | | 6 | $0.208 \pm 0.069$ | $0.152 \pm 0.015$ | $0.174 \pm 0.036$ | $0.227 \pm 0.012$ | | 7 | $0.177 \pm 0.023$ | $0.181 \pm 0.070$ | $0.138 \pm 0.043$ | $0.221 \pm 0.027$ | | 8 | $0.158 \pm 0.052$ | $0.180 \pm 0.019$ | $0.160 \pm 0.034$ | $0.193 \pm 0.006$ | | 9 | $0.146 \pm 0.008$ | $0.217 \pm 0.012$ | $0.141 \pm 0.032$ | $0.227 \pm 0.046$ | | 10 | $0.178 \pm 0.044$ | $0.219 \pm 0.018$ | $0.176 \pm 0.068$ | $0.232 \pm 0.010$ | > another thing is, can the authors provide a baseline where no permutation is applied? I think it is important to establish that there are actual permutations in the considered setting. The following table shows the test accuracy of the merged model when the models are combined without applying any permutation matrices. As shown, the accuracy drops significantly as the number of models being merged increases. This result highlights the importance of searching for appropriate permutation matrices when merging multiple models. 
**Test accuracy of the merged model without permutation** | #models | MLP, MNIST | MLP, FMNIST | VGG-11, CIFAR10 | ResNet-20, CIFAR10 | |---:|:-------------------|:-------------------|:-------------------|:---------------------| | 2 | $82.223 \pm 5.318$ | $46.930 \pm 7.659$ | $80.353 \pm 0.365$ | $86.653 \pm 1.317$ | | 3 | $23.020 \pm 9.077$ | $14.340 \pm 2.318$ | $32.777 \pm 4.380$ | $45.780 \pm 8.533$ | | 4 | $9.760 \pm 0.017$ | $10.043 \pm 0.075$ | $10.000 \pm 0.000$ | $10.267 \pm 0.281$ | | 5 | $9.740 \pm 0.000$ | $10.003 \pm 0.006$ | $10.000 \pm 0.000$ | $10.013 \pm 0.023$ | | 6 | $9.740 \pm 0.000$ | $10.010 \pm 0.017$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | | 7 | $9.740 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | | 8 | $9.740 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | | 9 | $9.740 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | | 10 | $9.740 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ | $10.000 \pm 0.000$ |
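The matching-rate statistic described above (the fraction of diagonal ones in each layer's permutation matrix, averaged over layers) can be computed in a few lines. A sketch in which random permutations stand in for the ones STE-MM actually finds; note that a uniformly random permutation of size $m$ has about $1/m$ fixed points on average, consistent with the near-zero rates in the table:

```python
import numpy as np

rng = np.random.default_rng(0)

def identity_match_rate(perms):
    """perms: list of permutation vectors, perms[l][i] = new index of unit i.
    Returns the fraction of fixed points per layer, averaged over layers."""
    return float(np.mean([np.mean(p == np.arange(len(p))) for p in perms]))

layers = [rng.permutation(n) for n in (512, 512, 256)]   # stand-in permutations
print(f"identity match rate: {100 * identity_match_rate(layers):.3f} %")
```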
Summary: This paper focuses on the linear mode connectivity between neural networks (NNs) trained using stochastic gradient descent (SGD). First, it shows that existing permutation search methods perform poorly when more than two models are involved. To address this issue, the authors propose a novel search method, the Straight-Through Estimator for Multiple Models (STE-MM). Empirical results show that this method is both effective and efficient in improving linear mode connectivity among multiple models. Claims And Evidence: The authors claim that previous methods fail to achieve linear mode connectivity among more than three models. This is demonstrated in the experiments in Section 3, where, even when the remaining models are transformed into one specific model—thus theoretically entering the loss basin of that model—merging these models still leads to an increase in loss and a decrease in accuracy. The authors propose a new search method, STE-MM, to efficiently transfer multiple models into a single loss basin using a permutation matrix. They conduct experiments on MLP, VGG-11, and ResNet-20 trained on MNIST, FMNIST, and CIFAR-10, comparing their method with MergeMany and CCMM. The results indicate that: (1) the acceleration method introduced in Section 4.2 speeds up the search process; (2) their method generally outperforms others in terms of loss evaluation and accuracy on benchmark datasets; (3) they define the sharpness of the loss at the center point and demonstrate that it decreases. Methods And Evaluation Criteria: The models and evaluation criteria are well-suited for the field of computer vision. Although they do not involve NLP, this is not a limitation in the given context. Theoretical Claims: For the theoretical claims, see (4) in the Supplementary Material. Experimental Designs Or Analyses: See Claims and Evidence. 
Supplementary Material: It includes: (1) additional related work about linear mode connectivity and model merging; (2) detailed descriptions of the experimental setup; (3) additional experiments about the efficiency of the proposed method and the sharpness of the merged model; (4) theoretical supplements explaining the application of their method and the definition of sharpness in the experiments. Relation To Broader Scientific Literature: The paper proposes a merging method grounded in theoretical principles, which has been empirically demonstrated to be effective for model merging involving multiple models. However, other merging methods, such as TIES merging and DARE, are also widely used and have been shown to be effective. Additionally, due to computational cost considerations, simple averaging remains a common approach in multiple-model merging. Essential References Not Discussed: Most essential related works, as I know, are mentioned. Other Strengths And Weaknesses: S1. The paper writing is quite well and clear. S2. See **Claims and Evidence.** W1. See **Relation To Broader Scientific Literature.** Other Comments Or Suggestions: - The clarity of Figure 5 could be improved. It appears to illustrate that the distance ratio of STE-MM is more concentrated; however, the overlapping histograms make it difficult to discern specific details. Enhancing the visualization, such as by adjusting transparency or using distinct color schemes, may help improve readability. - Typos: 059 (right column) model; 061 (right column) denotes. Questions For Authors: - Could STE-MM also assist in determining the optimal hyperparameters for different models? For instance, in model merging, where different models have distinct weights, can STE-MM help in assigning appropriate values? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reading our paper carefully and for your constructive comments. > However, other merging methods, such as TIES merging and DARE, are also widely used and have been shown to be effective. Additionally, due to computational cost considerations, simple averaging remains a common approach in multiple-model merging. Thank you for your comment. While those methods are indeed important prior works, their assumptions differ significantly from ours. Specifically, TIES and DARE propose methods for merging multiple fine-tuned models based on a shared pre-trained model. In contrast, we focus on merging models that have been trained from scratch using different random seeds. Previous studies, such as model soups (Wortsman et al. 2022), have shown that simply averaging the weights of fine-tuned models originating from a common base model can improve performance. However, when merging models trained from scratch with different seeds, it has been observed that naive weight averaging can substantially degrade performance. Our method addresses the more challenging scenario of merging such models effectively, which we consider one of our key contributions. > The clarity of Figure 5 could be improved. We apologize for Figure 5's poor visibility. As you pointed out, we will revise it in the camera-ready version to improve its readability. > Typos: 059 (right column) model; 061 (right column) denotes. Thank you for pointing out the typos. We will thoroughly proofread the entire manuscript to ensure that all typos are corrected. > Could STE-MM also assist in determining the optimal hyperparameters for different models? For instance, in model merging, where different models have distinct weights, can STE-MM help in assigning appropriate values? Thank you for your question. If we understand correctly, you may be referring to whether STE-MM could be extended to assist with hyperparameter tuning when merging models with distinct weights. 
In its current form, STE-MM does not explicitly address hyperparameter optimization—its objective is to discover effective permutations for merging models. The algorithm itself only includes standard training hyperparameters such as learning rate and batch size. For example, in Equation 5, the values of $\lambda_1, \ldots, \lambda_n$ are drawn from a uniform distribution and are not considered hyperparameters. Therefore, applying STE-MM to hyperparameter selection would require substantial modification, as this lies outside its intended scope. --- Rebuttal Comment 1.1: Comment: Thank you for the helpful clarifications. The focus of merging models that have been trained from scratch addresses a more challenging issue. This will not affect my score for the paper, but i would like to clarify: is the aim of merging to achieve better generalization? Additionally, if possible, I would still appreciate it if the authors could improve the presentation of Figure 5 to enhance the overall appearance of the paper. --- Reply to Comment 1.1.1: Comment: Thank you for your additional questions. > This will not affect my score for the paper, but i would like to clarify: is the aim of merging to achieve better generalization? Obtaining a more generalized model is not the primary objective of this paper; rather, we consider it one of the potential outcomes resulting from our main goal. The objective of our study is to investigate whether Linear Mode Connectivity (LMC) holds across multiple models. In other words, we aim to determine whether it is possible to find suitable permutations such that multiple models can be transferred into the same approximately convex basin. As described in the introduction of the paper, this inquiry is driven by a scientific interest in understanding why Stochastic Gradient Descent (SGD) is so effective in training neural networks, as discussed in works such as Entezari et al. and Ainsworth et al. 
Therefore, the goal of our work is rooted more in scientific curiosity than in practical utility. That said, if we are able to find permutations that enable LMC to hold among multiple models, the resulting merged model could potentially outperform the original models. In fact, as shown in Figure 2(a), the test accuracy of the merged model improves as the number of models being merged increases. Furthermore, as shown in Figure 6, the loss landscape around the merged model becomes flatter. These observations suggest that model merging may lead to improved generalization. In this sense, although enhancing generalization is not the direct objective of our work, we believe it is a possible byproduct of the proposed model merging approach. > if possible, I would still appreciate it if the authors could improve the presentation of Figure 5 to enhance the overall appearance of the paper. We have uploaded the revised figures at the following URL—please have a look: https://anonymous.4open.science/r/ICML_rebuttal_figs-3029/figures.pdf In response to the concern that the differences between methods were hard to distinguish, we have now plotted the results separately for STE-MM, MergeMany, and CCMM. The top three figures in the PDF show histograms of the $L^2$ distances for all model pairs, while the bottom three show histograms of the angles formed by all model triplets. We apologize for not having conveyed this clearly in our previous response, but our intention with Figure 5 was to demonstrate that, regardless of the permutation search method used, the distances for model pairs after permutation tend to be roughly equivalent. The point was not necessarily to highlight that STE-MM results are more tightly concentrated around 1.0. As seen in the updated figures, similar trends are observed across all methods. Therefore, we plan to replace Figure 5 in the main text with the top figure from the PDF (i.e., the histogram of model pair distances after permutation using STE-MM). 
The remaining figures will be added to the appendix. We will also revise the main text to note that results from other methods are available in the appendix and that they exhibit similar behavior.
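The merging step discussed in this thread can be illustrated with a small sketch: sample the convex-combination coefficients $\lambda_1, \ldots, \lambda_n$ and average the permutation-aligned weights. This is our illustrative reconstruction, not STE-MM itself: the single-layer setup is assumed, the permutation search (the actual contribution of STE-MM) is not shown, and reading "drawn from a uniform distribution" as a flat Dirichlet over the simplex is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def merge_permuted_models(weights, perms, lam):
    """Convex combination of permutation-aligned weight matrices.

    perms[i] is a row permutation aligning model i's hidden units to a
    common ordering (the quantity STE-MM searches for, not shown here).
    """
    aligned = [W[p] for W, p in zip(weights, perms)]
    return sum(l * W for l, W in zip(lam, aligned))

n_models, hidden, d_in = 3, 4, 5
weights = [rng.standard_normal((hidden, d_in)) for _ in range(n_models)]
perms = [rng.permutation(hidden) for _ in range(n_models)]

# Coefficients drawn uniformly from the probability simplex (flat
# Dirichlet), so the merge is a convex combination summing to 1.
lam = rng.dirichlet(np.ones(n_models))

merged = merge_permuted_models(weights, perms, lam)
```

If LMC holds among the permuted models, any such convex combination should remain in the shared low-loss basin, which is what makes the merged model competitive with its constituents.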
R*: Efficient Reward Design via Reward Structure Evolution and Parameter Alignment Optimization with Large Language Models
Accept (poster)
Summary: The paper introduces R*, an efficient framework for automatic reward function generation in reinforcement learning. R* addresses the challenge of designing high-quality reward functions by leveraging LLMs through two key components: reward structure evolution and parameter alignment optimization. The framework uses LLMs to generate modular reward functions, refines them with module-level crossover, and optimizes parameters via a critic-based voting mechanism for step-wise trajectory labeling. Experiments across eight robotic control tasks show that R* significantly outperforms existing methods, including Eureka and human-designed rewards, in both final policy performance and convergence speed. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: There is no theoretical claim in this submission. Experimental Designs Or Analyses: This work conducts experiments in Isaac Gym and the Bidextrous Manipulation (Dexterity) benchmark, which are commonly used benchmarks in prior work. Supplementary Material: Yes, the supplementary material involves the detailed prompts. Relation To Broader Scientific Literature: This work builds on existing LLM-based reward design methods and introduces technical enhancements to improve their performance. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths -- 1. The paper is well-structured and the idea of parameter alignment optimization makes sense. 2. The implementation details and prompt designs are clearly and thoroughly described. 3. The experimental results are impressive. Weaknesses -- 1. The proposed method heavily relies on several assumptions about the environments, including the need for environment code for LLM-based coding and detailed state information for state ranking. These requirements make it difficult to extend the method to more complex and real-world scenarios, such as image-based tasks. 
I understand that Eureka, the foundation of this method, has also been criticized for these limitations. I am curious whether the authors have considered strategies to address this issue, or at the very least, whether these limitations should be explicitly discussed in the manuscript. 2. The reported Eureka performance seems significantly lower than that reported in the original Eureka paper. Other Comments Or Suggestions: typos: In line 6 of the pseudocode, are $F_{p1}$ and $F_{p2}$ sampled from $D_T$ or $P_{reward}$? Questions For Authors: 1. I am uncertain about the crossover operation: How are the parents selected, and how is it determined which reward module to insert into another parent? 2. Which reward functions are used for training in Figure 2? 3. Are the trajectories for parameter alignment optimization sampled using the initially randomized policy? 4. Are the generated reward functions only suitable for PPO? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. For the reviewer's questions, we will respond to them one by one as follows. 1. **[How are the parents selected, and how is it determined which reward module to insert into another parent?]** We maintain a buffer of reward functions with a fixed size of 5 (only the top-5 best-performing ones are retained), where each reward function is associated with the success rate of the RL policy it guides. We apply a softmax over these success rates and sample two reward functions based on the resulting probabilities. We then randomly select reward modules for crossover. The main advantage of this approach is that it does not require any API calls. A potential improvement would be to leverage an LLM to guide the module selection process; however, this typically incurs additional LLM API usage. 2. **[Which reward functions are used for training in Figure 2?]** The reward function that achieves the highest RL success rate across all generations is selected for final evaluation, which is consistent with the setting used in Eureka. 3. **[Are the trajectories for parameter alignment optimization sampled using the initially randomized policy?]** The trajectories are collected using the trained policies guided by different reward functions within the population. 4. **[Are the generated reward functions only suitable for PPO?]** The generated reward functions are applicable to various RL algorithms. To demonstrate this, we conducted experiments on Metaworld, using the generated rewards to guide SAC for policy learning (reporting the maximum success rate and the number of environment steps required). 
The results are as follows: | | **drawer-open** | **button-press** | **window-close** | | --- | --- | --- | --- | | Expert Reward | 100% \| 33600 | 100% \| 399000 | 100% \| 220500 | | R* | 100% \| 36720 | 100% \| 421200 | 100% \| 772050 | We observe that **R*** achieves a 100% success rate on these tasks, matching the performance of the expert-designed rewards. 5. **[I am curious whether the authors have considered strategies to address this issue, or at the very least, whether these limitations should be explicitly discussed in the manuscript.]** For complex visual tasks, we believe a feasible direction is to obtain the relative positions of targets through object detection and prediction, and associate them with variables—this typically requires additional training. However, our current method does share the limitation mentioned by the reviewer: it cannot be applied to tasks where low-level information is inaccessible. 6. **[The reported Eureka performance seems significantly lower than that reported in the original Eureka paper.]** The main reason lies in our environment configuration, which limits the number of parallel environments. In the original Eureka paper, some tasks use a large number of parallel environments, requiring at least 4 RTX 4090 GPUs to run. We reduce the number of parallel environments to ensure that the program can run with only 40GB of GPU memory. However, this reduction significantly increases the learning difficulty for the algorithm. Besides, we use GPT-4o for our experiments, which may also introduce some differences. However, to ensure a fair comparison, all experiments are conducted under the same configuration. The specific configuration of the number of parallel environments is shown in the table below. 
| | Franka-Cabinet | Swing-Cup | Hand-Over | Hand-Scissors | Allegro-Hand | Door-Open-Outward | Kettle | Pen | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Env number | 4096 | 256 | 512 | 128 | 1024 | 2048 | 128 | 256 | 7. **[typos: In line 6 of the pseudocode]** Thank you to the reviewer for pointing out this typo. We will correct it in the revised version. --- **We would appreciate it if the reviewer could confirm that the concerns have been addressed and, if so, reconsider the assessment. We’d be happy to engage in further discussions.** --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. While some of my concerns have been addressed, I still have a few remaining questions: - What is the experimental setup for the MetaWorld experiments with SAC? Specifically, is SAC used as the training algorithm during iterative evolution? - I understand the discrepancy between the reported Eureka performance and that in the original Eureka paper. However, my concern about the comparability of the results remains, as the difference is quite substantial. I believe it is necessary to reproduce both Eureka and the proposed method under a setup similar to that used in Eureka. This is also important for evaluating the scalability of the proposed method across different LLMs and computational resources. - As acknowledged by the authors, the proposed method cannot be applied to tasks where low-level information is inaccessible. I consider this a significant limitation that restricts the method’s applicability, even though this constraint is shared by many prior works. I hope the authors will provide a serious discussion of this issue in the paper, along with a convincing realistic justification for the current experimental setup. If these concerns are addressed, I would be happy to raise my score to acceptance. **Update: Thank you for the authors' response. I have updated my score from 2 to 3. 
I believe the newly added experiments and discussions have improved the quality of the paper.** --- Reply to Comment 1.1.1: Comment: 1. **[What is the experimental setup for the MetaWorld experiments with SAC? Specifically, is SAC used as the training algorithm during iterative evolution?]** We replace PPO with SAC. The population size is set to 5. Each evaluation is performed after 200,000 environment steps. Our experiments primarily demonstrate that our method is also capable of generating rewards to effectively guide the learning and optimization of other RL algorithms, achieving performance that is competitive with expert-designed rewards. 2. **[Reproduce both Eureka and the proposed method under a similar setup used in Eureka]** Thank you for the reviewer’s suggestion. One of the key factors in performing comparisons under the original setting is hardware resources. Due to limitations of our existing servers, we were unable to successfully run most tasks, which typically require over 60 GB of GPU memory, and in some cases up to 100 GB. To address the reviewer’s concern, we rented additional servers to carry out the experiments. **We strictly follow the original settings**, using GPT-4-0314 as the LLM model. The average success rates are as follows: | | **Franka** | **Swing-Cup** | **Hand-Over** | **Kettle** | **Scissor** | **Door-Open -outward** | | --- | --- | --- | --- | --- | --- | --- | | Eureka | 33% | 53% | 83% | 70% | 100% | 98% | | R* | 73% | 96% | 93% | 95% | 100% | 100% | From the results, we observe that R* also outperforms Eureka under the original settings. 3. **[A serious discussion of this issue in the paper, along with a convincing realistic justification for the current experimental setup. ]** Thank you for the reviewer’s valuable suggestion. Our experiments primarily focus on manipulation and dexterous hand control tasks. 
In real-world robotic applications, it is typically feasible to directly access various low-level states from the robot itself. As for the information regarding external objects, it can usually be obtained using sensors such as LiDAR and depth cameras. For robot control, a real2sim2real paradigm is commonly adopted. Since real-world information from the robot and its environment is available, training can proceed by aligning the information from the real robot with that in simulation. This enables direct deployment of the trained policies in real-world scenarios. For tasks where low-level information is inaccessible—such as non-invasive control tasks involving software operation or game play—only visual observations (i.e., images) are typically available. We believe that the key to applying R* in such cases lies in effective information extraction. In these scenarios, a detection model (e.g., YOLO) can be used to identify and extract relevant features from visual input. Once these key features are obtained, the subsequent reward generation and policy training processes are consistent with those used in tasks where low-level information is accessible. We appreciate the reviewer’s insightful comment and will include a detailed discussion of this issue in the revised version. We welcome any further suggestions from the reviewer and are happy to incorporate further discussion as needed. --- We hope that the above response and experiments can address the concern raised by the reviewer. We sincerely appreciate the valuable time and suggestions provided by the reviewer throughout the entire review process. --- **Author Response:** **We are delighted to have addressed the reviewer’s concerns and sincerely appreciate the recognition and support for our work!** **Thank you again for your constructive comments and the in-depth discussions, which have helped us strengthen our work.**
Summary: This paper proposes a new method for designing reward functions with LLMs, R*. R* uses LLMs to generate modular reward function components, and maintains a population of reward functions. These population of reward functions are evaluated based on how well they guide the agent to the sparse reward. Based on their fitness, these reward functions undergo mutation using the LLM. Furthermore, they do parameter alignment of the parameters in the reward function using a pairwise preference loss. Claims And Evidence: The authors' claims about prior works in the introduction were not fully supported. For example, in line 53 the authors say that the works they build off face instability and inaccuracies in parameter configuration. However, this is not explained in depth and it seems to be a crucial part of their motivation. The authors' claims about superior performance compared to Eureka are well supported by many experiments across different settings. Methods And Evaluation Criteria: Yes the proposed methods and evaluation criteria make a lot of sense. Their work tries to automatically design reward functions, and conducts experiments in many popular RL environments to evaluate their approach. Their evaluation based on success rate makes sense as well. In general they try to improve upon Eureka, and use a very similar setting to that work. Theoretical Claims: na Experimental Designs Or Analyses: The experimental design seems quite sound to me. In all experiments they compare with SOTA baselines (Eureka). They also conduct experiments over 5 random seeds and report the mean and standard deviation in their learning curves. Finally, they keep the hyperparameter setting to be very similar to Eureka, which makes it likely to be a fair comparison. Supplementary Material: I looked at the prompts and implementation details in the appendix. Relation To Broader Scientific Literature: This paper does not do a good job discussing their place in the broader literature. 
To me it seems like they make novel improvements to Eureka, which is a popular framework. However, they do not discuss Eureka in depth, so it is really hard to estimate the novelty and contribution of this work. Essential References Not Discussed: This paper does not really discuss in depth their differences with Eureka (Ma 2024). I think this is necessary, as they seem to directly build upon this work. This makes the contribution of this paper difficult to ascertain. Other Strengths And Weaknesses: Strengths: - The methodology is seemingly novel, although that is hard to judge. - The experimental results are comprehensive, and their method shows clear improvements. Weaknesses: - The writing is not that well structured. For example paragraphs 2 and 3 are very long and hard to read. - This paper mainly seems like a follow-up work on Eureka. However, the authors do not clearly state their contribution compared to Eureka, or state the benefits of their method. They only use vague language such as prior works have “instability and inaccuracy” (line 54) but do not say concretely how they improve upon it. Other Comments Or Suggestions: Typo in line 337. Questions For Authors: Can you provide a more in-depth comparison to Eureka? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and for a positive assessment of our work. For the reviewer's questions, we will respond to them one by one as follows. 1. **[they do not discuss Eureka in depth, so it is really hard to estimate the novelty and contribution of this work. ]** Eureka maintains a population of reward functions and iteratively improves them using an LLM. The improvement process involves feeding the LLM with the best-performing reward function discovered in the current population, along with feedback from its RL training process. Based on this information, the LLM adjusts both the reward function logic and its parameters to enhance performance. The limitations of Eureka are mainly reflected in two aspects: 1. **Insufficient utilization of existing knowledge**: Eureka relies on the best-performing individual during each reward function update. However, the individual may represent a suboptimal solution, and continuously optimizing based on it can lead to poor results. 2. **Inefficient parameter optimization**: Reward functions often involve numerous parameters, such as weights within and between reward components. Iterative optimization of these parameters using an LLM is inefficient. To address the two issues mentioned above, we propose **reward structure evolution** and **parameter alignment optimization**. The former fully leverages high-quality reward functions without introducing any API calls, enabling thorough exploration through crossover among components within superior reward functions. The latter adjusts parameters via preference learning, where a key challenge lies in constructing a reliable preference dataset. To tackle this, we utilize the LLM to generate critic functions—a process that does not involve human participation. Moreover, to ensure the accuracy of preference labels, we build a population of critics and annotate preferences between states through a voting mechanism. 
Finally, we optimize the reward function parameters based on the learned preferences, providing more accurate reward signals. 2. **[The writing is not that well structured. For example paragraphs 2 and 3 are very long and hard to read.]** Thank you for the reviewer’s suggestion. We will improve the presentation in the revised version. --- **We would appreciate it if the reviewer could confirm that the concerns have been addressed and, if so, reconsider the assessment. We’d be happy to engage in further discussions.**
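The parameter-alignment step described in this rebuttal, differentiable reward weights fitted to critic-voted preference labels with a pairwise (Bradley-Terry-style) loss, can be sketched as follows. This is a minimal illustration under our own assumptions: the reward is taken to be linear in the outputs of the reward modules, and the preference labels are simulated from a hidden weighting rather than produced by an actual critic population.

```python
import numpy as np

def bt_loss_and_grad(w, feats_a, feats_b, prefs):
    """Pairwise preference loss over reward-module weights w.

    Reward of a state = weighted sum of its reward-module outputs.
    prefs[k] = 1 if state a_k was voted preferable to state b_k.
    """
    # Bradley-Terry probability that a is preferred over b.
    p = 1.0 / (1.0 + np.exp(-((feats_a - feats_b) @ w)))
    p = np.clip(p, 1e-8, 1 - 1e-8)
    loss = -np.mean(prefs * np.log(p) + (1 - prefs) * np.log(1 - p))
    grad = (feats_a - feats_b).T @ (p - prefs) / len(prefs)
    return loss, grad

rng = np.random.default_rng(0)
n_pairs, n_modules = 256, 4
feats_a = rng.standard_normal((n_pairs, n_modules))
feats_b = rng.standard_normal((n_pairs, n_modules))

# Stand-in for critic-population majority votes: labels simulated
# from a hidden "true" weighting of the reward modules.
true_w = np.array([1.0, 0.5, -0.5, 2.0])
prefs = (feats_a @ true_w > feats_b @ true_w).astype(float)

# Gradient descent on the preference loss aligns the reward weights.
w = np.zeros(n_modules)
for _ in range(500):
    loss, grad = bt_loss_and_grad(w, feats_a, feats_b, prefs)
    w -= 0.5 * grad

acc = np.mean((((feats_a - feats_b) @ w) > 0) == (prefs > 0.5))
```

The key point is that once preferences between states are available, the numerical parameters inside a reward function become optimizable by ordinary gradient descent, with no further LLM calls.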
Summary: This paper introduces R* which designs reward function by utilizing LLMs to construct a set of reward functions, 'evaluating' these rewards by training PPO agents to maximize the rewards, and then improving the reward functions based on voting mechanism followed by preference-based learning. Ablation study shows that the proposed idea of using crossover operator (for better exploration of reward design space) and parameter alignment optimization (based on voting and preference learning) are indeed effective. Experiments are conducted on IsaacGym and Dexterity benchmarks and the main baseline is Eureka that is based on the same high-level idea of generating rewards with LLM, evaluating rewards, and improving rewards based on reflection. ## update after rebuttal My score has been updated from 2 to 3 during the rebuttal process. My acceptance recommendation is conditional on the authors' promise to largely update the manuscript (especially Introduction) for improving the positioning of the paper. Claims And Evidence: In terms of evaluating the efficacy of the newly proposed techniques, the paper provides an ablation study that supports it. But this paper is not making that much of a claim with regard to 'hypothesis that explains why this method should work better than previous algorithm'. Methods And Evaluation Criteria: Eureka also used IsaacGym tasks so it makes sense. Some qualitative/quantitative analysis on the quality of learned rewards is missing. Theoretical Claims: N/A Experimental Designs Or Analyses: The performance of PPO with oracle reward seems very low to me -- it's much lower than the performance reported in Eureka tasks. Maybe I'm missing something here but I'm not sure why this happens. Supplementary Material: No Relation To Broader Scientific Literature: Key contributions of this paper will be very technical contributions around improving many parts of Eureka. 
But motivations for introducing these components are very weak so it is not clear if this is of interest to the broader literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: One weakness of this paper is that the writing is very verbose and some paragraphs are really long. In particular the ones in the introduction. Other Comments Or Suggestions: N/A Questions For Authors: - The biggest weakness of this paper is that it's not properly positioning the contributions of this paper with regard to existing works, and just says that "reward design is challenging and we did a bunch of things to do that!". But the very high-level idea of this work is very similar to Eureka's. Using LLMs to generate reward functions, training policies, and improvement based on reflection. So what's the fundamental limitation of Eureka and how is the proposed idea addressing the limitations? Why should the proposed algorithm work better than Eureka? - Would it be possible to provide quantitative/qualitative analysis that compares the quality of learned rewards, instead of only showing the downstream performance? - Why is performance with Oracle reward function worse than the ones reported in Eureka paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. For the reviewer's questions, we will respond to them one by one as follows. --- 1. **[The performance of PPO with oracle reward seems very low to me -- it's much lower than the performance reported in Eureka tasks. Maybe I'm missing something here but I'm not sure why this happens.]** The main reason lies in our environment configuration, which limits the number of parallel environments. In the original Eureka paper, some tasks use a large number of parallel environments, requiring at least 4 RTX 4090 GPUs to run. We reduce the number of parallel environments to ensure that the program can run with only 40GB of GPU memory. However, this reduction significantly increases the learning difficulty for the algorithm. Nevertheless, to ensure a fair comparison, all experiments are conducted under the same configuration. The specific configuration of the number of parallel environments is shown in the table below. | | Franka-Cabinet | Swing-Cup | Hand-Over | Hand-Scissors | Allegro-Hand | Door-Open-Outward | Kettle | Pen | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Env number | 4096 | 256 | 512 | 128 | 1024 | 2048 | 128 | 256 | 2. **[One weakness of this paper is that writing is very verbose and some paragraphs are really long. In particular the ones in the introduction.]** Thank you for the valuable suggestions. We will work on improving the overall clarity and expression throughout the manuscript in the revised version. 3. **[What's the fundamental limitation of Eureka and how is the proposed idea addressing the limitations? Why should the proposed algorithm work better than Eureka?]** The limitations of Eureka are mainly reflected in two aspects: 1. **Insufficient utilization of existing knowledge**: Eureka relies on the best-performing individual during each reward function update. 
However, the individual may represent a suboptimal solution, and continuously optimizing based on it can lead to poor results. The introduction of a crossover mechanism aims to fully leverage existing knowledge without introducing any API calls, enabling integrated exploration across different modules of the superior reward functions and ensuring more efficient exploitation. 2. **Inefficient parameter optimization**: Reward functions often involve numerous parameters, such as weights within and between reward components. Iterative optimization of these parameters using an LLM is inefficient. To address this, we propose a preference learning-based approach, which makes the gradients of the reward function parameters differentiable. By collecting preference data from the critic population, we optimize these parameters more effectively. To address the above problems, we propose the **reward structure evolution and parameter alignment optimization**. The former focuses on efficient structure search to avoid getting trapped in suboptimal designs, while the latter performs effective parameter optimization. By combining both components, our method enables more efficient reward design compared to Eureka. --- We would appreciate it if the reviewer could confirm that the concerns have been addressed and, if so, reconsider the assessment. We’d be happy to engage in further discussions. --- Rebuttal Comment 1.1: Comment: I have updated the score 2 to 3. Please make sure to re-write introduction to properly discuss prior works and the main contributions of the paper, and clarify the different experimental setup. --- Reply to Comment 1.1.1: Comment: We are delighted to have addressed the reviewer’s concerns and sincerely appreciate the recognition and support for our work! As requested, we will provide a comprehensive discussion of the related work and main contributions in the revised manuscript, and offer a clearer explanation of the experimental setup. 
Thank you again for your constructive comments and the in-depth discussions, which have helped us strengthen our work.
Summary: This paper proposes LLM-based reward function generation to train models for tasks such as robotic hand manipulation. The presented method can be decomposed into: 1. generation of modular reward functions using an LLM 2. augmentation with new reward functions based on modular mixing of the functions from step 1 3. Generation of trajectories and ratings from an LLM-generated critic population 4. Reward function parameter optimization based on the labeled trajectories 5. PPO-based training of agents for the end task 6. LLM-based reflection based refinement of the reward functions. Claims And Evidence: Claims made: the proposed method outperforms existing SOTA methods for reward function design Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are standard for the premise of the problem considered. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design follows standard methodology and is very similar to existing methods that address reward function design. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper improved upon previous SOTA methods (https://arxiv.org/pdf/2310.12931) that also uses LLMs for designing reward functions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper's results are very strong and are significantly better than the existing SOTA methods. Weaknesses: - The novelty of the proposed ideas is low, and they seem to be a collection of multiple ad-hoc steps - The writing can be improved. For example, the paper is focused mainly on robotic hand manipulation tasks, and there is no mention of this until the experiments section. The paper should do a better job of providing context and motivation. Other Comments Or Suggestions: N/A Questions For Authors: 1. It seems to me that the critic population annotation is more important to the success of the method than crossover-based reward function generation. Is this correct? 
2. What is the intuition behind the crossover-based design? I am unable to get a feel for why it leads to such a big improvement in performance, and why the LLMs cannot generate such functions in the first place. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and for a positive assessment of our work. For the reviewer's questions, we will respond to them one by one as follows. 1. **[The novelty of the proposed ideas is low, and seem to be a collection of multiple ad-hoc steps]** Our work focuses on reward function construction and enhancement from two key perspectives: **reward structure evolution** and **parameter alignment optimization**. The former aims to design meaningful reward components using LLMs and evolutionary principles, while the latter focuses on optimizing the parameters within those reward components. In **reward structure evolution**, we introduce code-level crossover operations to fully exploit existing reward function implementations and address potential suboptimality issues. In **parameter alignment optimization**, we first use LLMs to construct state-level critic functions. To mitigate potential errors from relying on a single critic, we build a critic population and use it to generate a preference dataset. We then make the numerical values in the reward function code differentiable and apply preference learning to efficiently optimize the reward function. Our method is primarily designed and optimized based on the challenges associated with reward generation, aiming to construct more efficient reward functions. 2. **[the paper is focused mainly on robotic hand manipulation tasks, and there is no mention of this until the experiments section. The paper should do a better job of providing context and motivation.]** Thank you for the reviewer’s suggestion. We will improve the presentation in the revised version. 3. **[It seems to me that the critic population annotation is more important to the success of the method than crossover-based reward function generation. 
Is this correct?]** The goal of crossover-based reward function generation is to ensure the effective utilization of existing reward functions, which can facilitate the discovery of well-structured designs. Since parameter optimization does not modify the underlying structure, it is generally most efficient to apply parameter tuning only after a reasonable structure has been identified. In many tasks, the initial parameter settings provided by the LLM are already fairly reasonable. In such cases, parameter alignment may have limited impact, whereas crossover can continuously explore and recombine high-quality existing reward functions to uncover improved designs. However, when the LLM provides suboptimal parameter settings, parameter optimization becomes crucial for constructing more effective reward guidance. Based on our experimental results, parameter alignment is generally more efficient in most cases. 4. **[What is the intuition behind the crossover-based design? I am unable to get a feel for why it leads to such a big improvement in performance, and why the LLMs cannot generate such functions in the first place.]** Eureka continuously improves the reward function by providing the best-performing individual to the LLM, while other reward functions are directly discarded. When the best individual is actually suboptimal, reflecting and improving based on it can lead the entire population to converge toward a suboptimal solution, ultimately resulting in low-quality outcomes. In contrast, the crossover-based design aims to fully leverage existing superior reward functions without introducing any additional LLM overhead. By performing crossovers between different reward modules in superior reward functions, it enables thorough exploration and helps avoid being trapped in suboptimal solutions. 
As shown in the results of Figure 4, the reward functions discovered by EA have a significantly higher probability of being the best in the population (the probability of best policies originating from crossover exceeds 50% in most tasks, with some surpassing 80%), which further demonstrates EA's ability to effectively explore existing rewards and discover better reward functions. --- **We would appreciate it if the reviewer could confirm that the concerns have been addressed and, if so, reconsider the assessment. We’d be happy to engage in further discussions.**
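The softmax parent selection and module-level crossover described across these rebuttals can be sketched as below. The module names and string "implementations" are placeholders we invented for illustration; the real system crosses over pieces of LLM-generated reward-function code.

```python
import numpy as np

def select_parents(success_rates, rng):
    """Sample two distinct parents, softmax-weighted by RL success rate."""
    z = np.asarray(success_rates, dtype=float)
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return rng.choice(len(z), size=2, replace=False, p=probs)

def crossover(parent_a, parent_b, rng):
    """Copy parent_a, then swap in one randomly chosen module from parent_b."""
    child = dict(parent_a)
    module = rng.choice(sorted(parent_b))
    child[module] = parent_b[module]
    return child

rng = np.random.default_rng(0)
# Top-5 buffer of reward functions, each a dict of named reward modules
# (placeholder strings stand in for generated code snippets).
buffer = [{"distance": f"d{i}", "grasp": f"g{i}", "bonus": f"b{i}"}
          for i in range(5)]
success_rates = [0.2, 0.9, 0.7, 0.4, 0.1]

i, j = select_parents(success_rates, rng)
child = crossover(buffer[i], buffer[j], rng)
```

Because selection is softmax-weighted rather than greedy, weaker reward functions still occasionally contribute modules, which is how the mechanism avoids collapsing onto a single suboptimal individual the way pure best-of-population refinement can.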
Diverging Preferences: When do Annotators Disagree and do Models Know?
Accept (poster)
Summary: This paper investigates when and why human annotators disagree, identifying 4 broad sources of preference divergence (task underspecification, response style, refusals, errors) covering >30% of responses in RLHF datasets. They then explore how divergent preferences impact LLM training (with reward modelling) and evaluation (with LLM-as-Judge), showing that BT and LLM-as-Judge paradigms both fail to distinguish between instances of unanimous agreement and majority opinion. Arguing that divergent preferences are useful training signals (not undesirable noise), the authors develop a distributional reward modelling technique to utilise all preferences during training to model a distributional reward, and improve LLMs’ abilities to model human disagreements. Lastly, they point out deficiencies in the LLM-as-Judge evaluation pipeline, which punishes pluralistically aligned models by always expecting the modal responses (even when humans disagree), with consequences of penalising safety guardrails and clarification requests on underspecified prompts. Claims And Evidence: This paper presents 3 central claims: 1) human annotators meaningfully disagree in RLHF datasets such that these divergences are useful training signals (not noise), 2) BT reward modelling fails to differentiate between unanimous versus majority rule and falls short of pluralistic alignment, 3) LLM-as-Judge evaluation techniques suffer similar pitfalls and penalise pluralistically aligned LLMs with safety guardrails and which know to ask for clarification on underspecified prompts. All claims are well evidenced with experiments, concrete data examples, thorough analysis, both numerical and qualitative results. Methods And Evaluation Criteria: The training proposal is evaluated on extended versions of MultiPrefDisagreements and HelpSteer2-Disagreements datasets, for a variety of frontier LLMs. 
Evaluations are thorough with results reported for both preference accuracy and diverging ID AUROC metrics, which respectively capture alignment to the modal preference and pluralistic alignment. LLM-as-Judge evaluation is benchmarked on ChatbotArena (Arena-Hard) and is shown to unfairly penalise pluralistically aligned models when given a divisive example. The method and evaluation procedures are sound and reasonable. Theoretical Claims: N/A: This paper does not make theoretical claims. Experimental Designs Or Analyses: Training stage comparisons between the proposed Mean-Var Reward Models (KL), scalar-valued reward modelling (BT, MSE-Regression, Skywork, 70B-Reward) and Mean-Var Baseline (NLL, Independent) are thorough, well-documented and well-structured. Analysis is granular and discusses in detail the different sources, types and degrees of disagreement; this closely matches the proposed method, which does not assume independence of judgement and models the mean and variance of the preference distribution to capture the shape/spread (strength and spectrum) of opinions. Discussion on the limitations of LLM-as-Judge is similarly nuanced and reveals future directions for improvement. Supplementary Material: I have reviewed the supplement, including section A, replication details for reward modelling and mean-var modelling; section B, (which) LLMs as judges; section C, additional dataset statistics; section D: additional results for BT reward modelling on aggregated labels; section E, examples of highly divisive (versus not) prompts. Relation To Broader Scientific Literature: 1. **Important dataset releases (of individual preferences) -** The authors collaborated with creators of MultiPrefDisagreements and HelpSteer2-Disagreements to release the individual annotator preferences (before aggregation) to enable further work on training and evaluation under human disagreements. 2. 
**Significant and relevant -** This submission relates to frontier directions on modelling the preferences of different users and adapting LLM responses to be more helpful, user-aligned and target-specific. 3. **Important implications for benchmarking -** This paper confirms that mainstream LLM-as-Judge evaluation pipelines are mismatched with many desiderata (e.g. pluralistic alignment, AI safety, clarity of thought) of LLMs, and will hopefully motivate further research and advances in robust evaluation. Essential References Not Discussed: The related work section is thorough and satisfactorily engages with relevant work. Other Strengths And Weaknesses: To summarise the above, this paper is a worthwhile contribution that will hopefully pave the way for further work in pluralistic alignment. It is significant (with dataset and method contributions), clearly presented and soundly argued. While the idea that annotators disagree, or that reward modelling does not capture the full spectrum of human perspectives has been explored before, the existence of prior intuition does not detract from the merits of this work since it ventures beyond mere contemplation to contribute an original distributional reward modelling approach for LLMs. One possible weakness of this work is the lack of formal investigation of how BT or LLM-as-Judge collapses to the modal opinion, though this is likely beyond the scope of this article. Other Comments Or Suggestions: The following are more suited as comments or out-of-interest questions; discussing them could lead to a score increase but not addressing them fully will not lead to a score decrease: - The authors might consider engaging further with classic literature on social choice and voting theory, e.g. Arrow's impossibility theorem and the inability to simultaneously satisfy intuitive desiderata when attempting to model a spectrum of preferences with a social utility function. 
- Do the datasets also provide demographic attributes of the annotators? If so, are disagreements (variance in opinions) more pronounced inter (between) or intra (within) demographic groups? - How does distributional reward modelling compare to 1) training only on the 70% of non-diverging opinions, and to 2) in-context learning or context-aware techniques? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
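The Mean-Var (KL) reward models discussed in this review predict a distribution over annotator rewards rather than a single scalar. To make the objective concrete, the sketch below uses the closed-form KL divergence between two Gaussians as the per-example loss, matching a predicted (mean, variance) against the empirical annotator statistics. This is a hypothetical sketch: the function name and the exact objective are assumptions, not taken from the paper.

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ).

    In a hypothetical Mean-Var (KL) training loop, (mu_q, var_q) would be
    the empirical mean/variance of annotator reward labels for an example,
    and (mu_p, var_p) the model's predicted distribution; minimizing this
    divergence fits the model to the spread of opinions, not just the mode.
    """
    return 0.5 * (
        math.log(var_p / var_q)
        + (var_q + (mu_q - mu_p) ** 2) / var_p
        - 1.0
    )
```

The divergence is zero when the predicted distribution matches the annotator statistics exactly, and grows as the predicted mean or variance drifts away, so unanimous and divisive examples receive different supervision signals.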
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and suggestions! **Q1: The authors might consider engaging further with classic literature on social choice and voting theory.** Thank you for this suggestion! We agree that Arrow's impossibility theorem and social choice theory are relevant to our work, particularly to the discussions in Section 5 on pluralistic alignment and different practitioners' decisions in comply/refuse disagreements. In our revisions, we will include such works in these discussions. **Q2: Do the datasets also provide demographic attributes of the annotators? If so, are disagreements (variance in opinions) more pronounced inter (between) or intra (within) demographic groups?** We include annotator details in Appendix C. To summarize from there, these datasets do not include demographic information or annotator IDs due to privacy concerns. We are therefore unable to conduct such an analysis. **Q3: How does distributional reward modelling compare to 1) training only on the 70% of non-diverging opinions, and to 2) in-context learning or context-aware techniques?** We do not perform the first experiment; however, we do not expect it to have significant differences in Bradley-Terry or MSE-Regression reward models when trained with all or aggregated annotations. Training on all annotations, in particular, is perhaps closest to this setting, as the diverging and non-diverging examples are differentiated during training. Regarding your second question, our goal with our LM-as-Judge experiments was to assess the behavior of existing, standard LM-as-Judge benchmarks. We do not experiment with different prompting techniques to improve the LM-Judge’s predictions, but agree that this is an exciting area for future work and will add discussion for it along with our other results and recommendations (Section 5.3) in our revisions. 
--- Thank you again for your suggestions, and please let us know if you have any remaining suggestions or questions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I maintain that this work makes valuable contributions towards understanding (limitations of present) reward modelling for LLMs. It in fact goes one further to propose a dataset and method to better enable pluralistic alignment. I believe my original score of 4 (accept) is befitting of the manuscript's quality and potential (for facilitating future investigations); I maintain my score and affirm my belief that it is worthy of acceptance.
Summary: - This paper introduces two datasets consisting of annotations for potential reasons for disagreements in preferences and derives a taxonomy from these annotations. - The paper also studies the distribution of rewards across different modeling techniques. - The paper also compares single versus distributional reward modeling methods. - The paper finally evaluates LLM-as-a-judge in divisive examples. ## update after rebuttal The authors have addressed most of my comments. I remain concerned that the experiments across different portions of the paper were conducted on varying datasets, limiting the potential generalizability of the work. Claims And Evidence: Overall, the organization of the paper and thus the main point was challenging to follow. It was almost as if Sections 3-5 were each their own papers glued together. There was no contributions list in the work to evaluate whether claims were appropriately substantiated. Methods And Evaluation Criteria: - The paper needs to make clear where the individual datasets (e.g., MultiPref and HelpSteer2) come from and appropriately cite the authors when the datasets themselves are first mentioned. Relatedly, I think it’s also very disingenuous to say that this paper introduces two datasets when the only thing that it did was to subset for disagreement. - Evaluations across sections draw from a lot of different datasets and models, making it hard to follow where potential confounding issues might stem from (e.g., data or model-specific choices). Theoretical Claims: N/A Experimental Designs Or Analyses: More detailed questions about experimental design: - Section 2: How do you separate “style” (in taxonomy) from noise? Do annotators have the right background to understand and correctly determine the cause for diverging preferences? - Section 3: Can you show Figure 2 for the other models, as this is only Bradley Terry? To what extent are these trends dataset-specific? 
The text doesn’t discuss that High-Agreement Ties do not follow the same trend for Figure 2. - Section 4: Can you provide error bars for Table 3? Preference accuracy is not that different between single-value vs distributional reward models. - Section 5: The AUROC of the trained KL model was still relatively low, which means that it is a very imperfect classifier to use on WildBench data. Were any checks done to verify that the model is useful? The authors report results over 50 examples; do these results generalize more broadly? Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: There is increasing work on pluralistic alignment. This work aims to shed light on potential issues that may arise when performing reward modeling or using LLM-as-a-judge. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please address aforementioned experimental design questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback! We address each comment below. Please let us know if you have any remaining questions or concerns. **Clarifying Dataset Collection and Release** HelpSteer2 and MultiPref are cited. Note that MultiPref was made public prior to the publication documenting it, thus we communicated with the dataset authors to determine how to cite it for this submission. We will update these citations in our revisions. We also make it explicit that we do not collect annotations ourselves for this work on Page 1: “Note that we did not collect new datasets but instead are releasing the individual annotations of these existing datasets (which previously released only annotations aggregated across multiple annotators for the same task), with support from the dataset creators.” **Contributions List & Connections between Contributions** See our response to Reviewer 1 under **“Contribution List”**. **Sec 2: How do you separate “style” (in taxonomy) from noise?** When categorizing disagreement causes, we label instances with specific subclasses (i.e., Verbosity, Format, Complexity, Aesthetic Tastes). The “style” meta-category is used to describe subclasses where the responses do not differ in their interpretation of the prompt nor their high-level content, but rather in how the information is presented to the user. Below, we paraphrase our definitions of each “style” subclass: * **Style:** Where instances are labeled if both responses interpret the prompt similarly, but… * **Verbosity:** … differ in their level of detail or in including supplementary examples. * **Format:** … differ in how they organize their responses under lists or headings. * **Complexity:** … are targeted toward users with different levels of domain-expertise (e.g., technical jargon that appears in only one response). * **Aesthetic Taste:** … the prompt is open-ended for generation and differences are primarily in style / tone / creative choices. 
**Sec 3: Figure 2 for other models/datasets** These visualizations for all 8 models+datasets in Table 2 are in the Appendix (Figures 4 & 5). As Table 2 summarizes, we find similar trends across all settings. **Sec 3: Figure 2 – why do High-Agreement Ties not follow the same trend?** We expect opposite behaviors from reward models on High-Agreement Preferences and Ties. For High-Agreement Preferences, we expect reward models to predict a large gap in rewards of the two responses. For High-Agreement Ties, we expect reward models to predict a small or no gap in the rewards. **Our Table 2 results and visualizations (Figures 2, 4, & 5) demonstrate:** Standard reward models correctly capture this expected behavior on High-Agreement Preferences and Ties. Their predictions on examples with diverging preferences are indistinguishable from their predictions on examples with High-Agreement Preferences. They predict clear preference for one response in cases of annotator disagreement (further supported later by the Table 3 results). **Sec 4: Preference accuracy is not that different between single-value vs distributional reward models (Table 3).** We do not claim a large difference in preference accuracy. In our results discussion, we state “We find that, with the exception of the Mean-Var (NLL, Indep.) baseline, all systems perform comparably in Preference Accuracy.” (Section 4.2) Our goal with proposing distributional reward models is not to improve preference accuracy, but rather to develop a reward model that can identify diverging preferences without compromising on preference accuracy. **Sec 5: Utility of the relatively low AUROC model on WildBench data & Generalization** Our systems are imperfect at identifying diverging preferences. 
As such, we suggest that they can be used to assist benchmark authors “by identifying divisive prompts in LLM-as-Judge benchmarks so they can be further examined by benchmark authors and removed.” (Section 5.3) While future work may develop better methods for identifying divisive examples, our experiments are a proof of concept for such an approach. We examine the 50 most divisive examples identified by our model in Wildbench (an out-of-distribution dataset) and find that the majority of these instances are divisive prompts where the task is ambiguous or where it may be reasonable for systems to refuse or comply with the request, depending on the LLM developer’s specifications. We, furthermore, find that the LLM-Judge scores responses that interpret the request differently or decide to comply/refuse substantially differently on these examples. We provide examples of such instances from Wildbench in the Appendix (Tables 7 and 8). To further support this, we repeated this analysis over 700 sampled instances from ChatbotArena, analyzed the top 30 most divisive examples as identified by our system, and similarly found that 17 were divisive (ambiguous or reasonable to comply/refuse). We will include this analysis and provide examples in our revisions.
Summary: The authors examine diverging preferences in human-labeled datasets and present a taxonomy of disagreement sources. They show most disagreements stem from task underspecification and response style, not annotator errors, challenging the view that disagreements are mere noise. Standard reward modeling methods, like the Bradley-Terry model, overlook the difference between unanimous agreement and majority opinions, undermining pluralistic alignment. To address this, they propose methods to identify and mitigate diverging preferences in evaluations and training, promoting better LLM alignment. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical results. Experimental Designs Or Analyses: Experimental design is reasonable. Supplementary Material: N.A. Relation To Broader Scientific Literature: This paper is related to the disagreement study in NLP domain and model-based reward modeling. Essential References Not Discussed: References are properly discussed. Other Strengths And Weaknesses: **Strengths** - The paper provides a valuable analysis of diverging preferences in language model alignment by developing a clear taxonomy of disagreement sources and identifying task underspecification and response style as primary causes. - The proposed distributional reward model effectively captures diverging preferences, with experimental support. - The identification of bias in LLM-as-Judge evaluations on leading benchmarks offers meaningful insights for the community, enhancing the understanding of benchmark results. Other Comments Or Suggestions: Refer to *Other Strengths And Weaknesses* section. Questions For Authors: - Can you clarify the rationale behind selecting the specific values used to map the reward gap to various annotator preferences? (l247-255, right column) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and positive feedback. **Q: Can you clarify the rationale behind selecting the specific values used to map the reward gap to various annotator preferences?** Appendix A provides more details on how we select these hyperparameters (we select the best performing value on development data). Our rationale for selecting values is twofold: (1) results from MSE regression systems support that mapping rewards/preferences to linearly spaced intervals has strong performance, and (2) unlike “slight preference” or “tied” judgements, “significant preference” judgments may represent an arbitrarily large gap between the quality of the two responses, hence we allow the gap between rewards for such responses to also be unbounded. --- Thank you again for your careful review of our paper and feedback! Please let us know if we have addressed your remaining concerns or questions. We also would appreciate any additional suggestions or clarifications that might improve your assessment.
Summary: This paper investigates diverging preferences in human-labeled datasets used for reward modeling and language model evaluations. The authors develop a taxonomy of disagreement sources. Through empirical analysis of HelpSteer2 and MultiPref, they find that disagreements are not random noise but stem from systematic differences in annotator preferences. The paper further demonstrates that standard reward models fail to distinguish between high-agreement and diverging preferences. To address this, the authors propose a Mean-Variance reward model with KL-divergence training, which captures the distribution of annotator preferences. Additionally, the paper studies the impact of diverging preferences on popular LLM-as-Judge methods for evaluating LLMs and proposes a method for removing instances of diverging preferences in LLM-as-Judge benchmarks. Claims And Evidence: The first main claim is that diverging preferences are prevalent and not annotation noise. **Evidence:** The authors analyze preference pairs from two datasets, showing that 30-39% of them contain diverging preferences. They further categorize the reasons behind disagreement and provide statistics for each category. The other claim is existing reward models treat diverging preferences similarly to high-agreement preferences. The proposed Mean-Variance Reward Model better captures human preference distributions. **Evidence:** Experiments on Bradley-Terry and MSE Regression models show that they predict nearly identical reward differences for high-agreement and diverging cases. This holds even when trained on all annotator labels instead of aggregated preferences. Experiments show that the proposed Mean-Variance model improves Diverging ID AUROC by 0.16 over standard reward models, indicating better identification of diverging preferences. Methods And Evaluation Criteria: The proposed Mean-Variance model and evaluation criteria make sense to me. 
Theoretical Claims: No theorem provided in the main text. Experimental Designs Or Analyses: The experimental design makes sense and the results seem to be solid. Supplementary Material: No. Relation To Broader Scientific Literature: Focusing on the disagreement in preference, especially differentiating clear preference from preference with ambiguity, is novel compared to the prior literature. Essential References Not Discussed: Not identified. Other Strengths And Weaknesses: One weakness is that the taxonomy of disagreement sources developed in the first half of the paper seems to be detached from the second part where the new algorithm is proposed. The mean-variance model does not benefit from the taxonomy defined; only the empirical distribution of disagreement in data makes a difference. Other Comments Or Suggestions: See the Questions below. Questions For Authors: In section 4, training Mean-Var reward models requires a map between the reward difference and the preference annotations such as slightly preferred, significantly preferred; based on what is the range of reward difference such as (−0.5, 0.5), [0.5, 1.5) picked? How do you choose the turning points, and would changes of those points affect the model performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and positive feedback! We would like to address and clarify the following points in the review: **W1: Contribution List + Connection between taxonomy and reward modeling** We agree that our contributions and their connections could be made clearer. In our revisions, we will include a list of contributions and will strengthen the connections between the taxonomy and the experimental work to address this concern. To summarize, our goals and contributions are as follows: * **Goal 1:** Identify where disagreements in preference annotation come from. * **Contribution 1:** We analyze diverging preferences in two datasets and develop a taxonomy of causes. Contrary to standard modeling practices, we find the majority of disagreements are not driven by the correctness or appropriateness of the responses. They are, instead, due to factors such as underspecified prompts, verbosity, etc. We further work together with the dataset creators to release individual annotator judgments to support future efforts studying diverging preferences. * **Goal 2:** Understand how the LLM development pipeline is affected by diverging user preferences, focusing on the two most directly impacted areas: reward modeling and evaluation. * **Contribution 2:** We find that standard reward modeling approaches (e.g., Bradley-Terry models) and evaluation methods (LLM-as-Judge) both fail to capture diverging user preferences by predicting a clear preference for a single response, even when annotators disagree. * **Goal 3:** Suggest novel methods for identifying examples with diverging preferences in reward models and evaluations. * **Contribution 3:** We develop a novel distributional reward modeling method that achieves strong reward modeling performance while also outperforming existing methods at identifying examples with diverging preferences. 
We also demonstrate that this model can be used to identify problematic examples in LLM-as-Judge benchmarks where LM-Judges demonstrate strong preference toward a single type of response, even when annotators would disagree. Connection between taxonomy and reward modeling: The taxonomy (Contribution 1) serves as a necessary foundation to validate that the disagreements in our dataset are meaningful (and not just noise) before proceeding to model them (Contributions 2+3). Without establishing the types and patterns of disagreement first, any modeling efforts would lack proper grounding. **W2: Experiments on Specific Disagreement Types** Regarding experimentation on specific disagreement types, we faced practical limitations. It’s hard to do experiments on specific subsets of disagreement types when we don’t have the ability to label many instances. **Q1: Based on what is the range of reward difference such as (−0.5, 0.5), [0.5, 1.5) picked? How do you choose the turning points?** Appendix A provides more details on how we select these hyperparameters (we select the best performing value on development data). These values are also based on observing that MSE regression reward models, which map rewards/preferences to linearly spaced intervals, have strong performance. --- Thank you again for your careful review of our paper and your valuable feedback! Have we successfully addressed your concerns? We would appreciate any additional suggestions for clarification or modifications that might improve your assessment.
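To illustrate the interval mapping discussed in Q1 above, a reward gap can be bucketed into linearly spaced intervals with turning points at ±0.5 and ±1.5, where the outermost bins are unbounded so that "significant" preferences may reflect arbitrarily large quality gaps. The function and label strings below are hypothetical, offered only as a sketch of this mapping:

```python
def preference_from_gap(gap):
    """Map a reward difference r(A) - r(B) to a discrete preference label.

    Turning points at +/-0.5 and +/-1.5 follow the linearly spaced
    intervals discussed in the rebuttal: (-0.5, 0.5) is a tie,
    [0.5, 1.5) a slight preference, and [1.5, inf) a significant
    preference (unbounded, as large quality gaps are allowed).
    """
    if gap >= 1.5:
        return "A significantly preferred"
    if gap >= 0.5:
        return "A slightly preferred"
    if gap > -0.5:
        return "tie"
    if gap > -1.5:
        return "B slightly preferred"
    return "B significantly preferred"
```

With this bucketing, a gap of 0.6 yields a slight preference for A while a gap of -2.0 yields a significant preference for B; only the interior turning points are tuned, and the outer bins absorb all larger gaps.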
Fixing Value Function Decomposition for Multi-Agent Reinforcement Learning
Reject
Summary: The paper studies the individual-global max (IGM) principle in value-based multi-agent reinforcement learning (MARL). They introduce a novel characterization of function classes of value function approximators, referred to as IGM-complete. They show the equivalence between this class and a parameterization of the agents' critic, and use this parameterization to "fix" existing value-based methods, an approach they refer to as QFix. Experiments show the utility of QFix. ## Update after rebuttal I acknowledge that the authors answered quite a few of my concerns, but not all of them. I find the theoretical contribution of this paper insufficient, even after the rebuttal. The authors have shown me their empirical results in more detail, and I appreciate them. However, at this stage, I do not see this paper as a high-quality whole yet, and thus I stand by my weak reject. Claims And Evidence: It seems clear that the technical work done by the authors delivered stronger results in the targeted benchmarks. However, there are issues with the evidence provided by the authors for the benefits of the method, both theoretically and empirically. Lines 157-160 (right). *“... also demonstrate that QPLEX satisfies Definition 3.2 and its function class is IGM-complete, given sufficiently expressive models wi(h), bi(h), and λi(h, a).”* I do not see such a statement in the QPLEX paper. They just show that IGM can be defined in two ways. Line 162 (right). *“Practical implementations of value function decomposition methods often employ stateful joint values”* - such a claim should be heavily supported with citations; especially since using both state and history as arguments makes little sense. Definition 4.1. There is a problem with it, and you are not the only ones to blame. It seems that this definition (and Def 3.3) don’t handle the case when $A(h,a) < 0$ but $A_i(h_i, a_i) = 0$ properly. Indeed, IGM is of less use if this case is possible. 
Thus, NONE of them are actually equivalent to IGM (Def 3.1) as it demands that a global argmax can be formed from local argmaxes: thus, we would need $A_i(h_i, a_i)=0 \implies A(h, a)=0$, but your definition misses it. Equation (19). This parameterization is interesting, but it makes the results of Lemma 4.2 and Theorem 4.3 trivial at the same time. Yes, you can take $b(h)=V(h)$. But then, you can obviously define $f(u_1,...,u_N)=-\mathbf{1}(\exists u_i \neq 0)$, where $\mathbf{1}(\cdot)$ is an indicator function, and set $w(h,a)=-A(h,a)$. In the following claims, you say that *$Q_{IGM}$ is a minimal function class*. It is not clear what this means. If you mean that it requires as few parametric models as possible, my example above shows you that it is not true. Line 539 (Appendix). I don’t think you can take *“any $f(\cdot)$ that satisfies Eq (19)”* as it involves $w(h,a)$ that you haven’t defined yet. That can probably be fixed by defining $f(\cdot)$ and $w$ jointly as a pair of functions that satisfy Eq 19, although this may be a poor characterization. Theorem 4.4. What do you mean by sufficiently expressive? Also, I don’t think that you can make such a statement without unrealistic assumptions about your function class (like *it can represent any function*) or set constraints on the functions you try to approximate, like continuity. To see why, look at this example: if history is continuous, and $f(u_1, \dots, u_N) = |\{u_i \mid u_i \neq 0\}| =: N_{\text{nonzero}}$, then $w(h,a) = [Q(h,a) - b(h)] / N_{\text{nonzero}}$ is, in principle, discontinuous, and thus for sure you cannot implement it with a neural network. Methods And Evaluation Criteria: The authors demonstrate the improvement of performance of the fixees with QFix across a range of tasks. However, as the fixing network is applied on top of (not instead of) base fixee networks, it is unclear if the benefit comes from the fixing mechanisms or from additional neural computation. 
I would need to see additional ablation experiments to be convinced of the utility of QFix. Theoretical Claims: I checked the correctness of the proofs. With sorrow I confess that, as I described in Claims and Evidence, I do not find these results very strong, nor do I think their proofs are particularly rigorous. Experimental Designs Or Analyses: I think that the experiments are largely sound and of good quality. However, I believe the authors should ablate QFix against the base methods (like QMix and QPLEX) with bigger networks, to isolate the effect of the fixing network from the additional parameter count. Supplementary Material: I reviewed the provided proofs. Relation To Broader Scientific Literature: The contribution targeted by the authors is valid and impactful. Value function decomposition and IGM are important areas of study for MARL. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is extensive, dense with information, and easy to follow. The experimental range is wide, and the figures are appealing. Other Comments Or Suggestions: Please attach the important assumptions to your theoretical work, so that your claims can be stated rigorously. Also, please ablate QFix against the fixees with larger networks, to isolate the effect of the fixing network. With these changes, I will consider raising my score. Questions For Authors: Line 67 (left). *“However, WQMIX appears to conflate the possibility of exploiting state information during centralized training (which is correct) with the goal of learning the decision process for a team of fully observable agents (which is incorrect).”* - what do you mean by this? Line 58 (right). You missed the distribution p in your POMDP-defining tuple. Line 68 (right). You define joint history space before defining history (line 75). Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough feedback; we will use it to improve the clarity of the submission. # Direct Questions ## Re: WQMIX See our response to Reviewer `5gNj`. We will clarify this comment in the paper. # Other Comments ## Re: QPLEX statement The corresponding statement for QPLEX is Prop. 2 of [1]. ## Re: stateful joint values All common implementations employ stateful joint values; this includes: the QMIX code [3], the QPLEX code [4], and the popular `pymarl2` code [5]. The QMIX and QPLEX papers also repeatedly refer to joint history-state values (although their notation is also often inconsistent). We will add these more explicit references to the paper. We also note “using models of both state and history makes little sense” is a common misconception. The literature of RL [6,7], MARL [8,9], and value-decomposition [2] has shown in both theory and practice that (mis)using models of state (without history) can be highly problematic, and that models of history and state can be more appropriate. Seminal work in MARL is slowly adapting to this misconception, e.g., see Errata in [10]. ## Re: Definition 4.1 This is indeed a discrepancy between def 3.1 and defs 3.3, 4.1. However, the root cause is that def 3.1 prescribes a unique maximum, while defs 3.3, 4.1 don’t. If defs 3.3, 4.1 were combined with unique maxima, then all definitions would be equivalent. That said, the generalization to multiple maxima is useful, and we agree that defs 3.3, 4.1 should be clarified as requested. We will clarify this point in the paper, but we also note that this has no effect on the rest of the work, as QPLEX, QIGM and QFIX already enforce a double implication. ## Re: Eq (19), Lemma 4.2, Thm 4.3 That lemma 4.2 and thm 4.3 follow so naturally from eq (19) is a strength and sign of simplicity; not a weakness. Note: the given example is a special case of our proof for one specific choice of $f$, whereas our thms are valid for any choice of $f$. 
The more general proof is necessary to derive QFIX by replacing $f$ with **any** IGM fixee. ## Re: Minimalism of QIGM Our claims of minimalism are informed both quantitatively (by model sizes, Table 1) and qualitatively (eqs (22,25) are simpler than eq (16)). Note: we never claim that QIGM represents a minimal (small) function class; only that it is a minimal (simple) formulation of the IGM class. ## Re: proof of Thm 4.3 We understand the confusion. The text states "For any $f$ that satisfies the requirements of eq (19)". We mean the requirements associated with eq (19), not eq (19) itself, i.e., $f$ non-positive and zero iff all inputs are zero. In eq (19), $f$ and $w$ are not codependent. In line 539, $w$ is constructed as a function of $f$ only to prove IGM-completeness; we are not claiming that this is a necessary structure of $w$. ## Re: Sufficiently expressive models This is an appeal to the Universal Approximation Thm, and is consistent with other proofs in the literature [1]. We will clarify this point. See our response to Reviewer `zepP` for a detailed reply. ## Re: Ablation The proposed ablation is only feasible in one case: - VDN does not use a parameterized mixing model; the requested ablation is impossible for Q+FIX-{sum,lin}. - QPLEX is already IGM-complete and is never used as a fixee; the requested ablation is again impossible. - QMIX is the one case where the proposed ablation is feasible; we are working to run this ablation by making the Q+FIX-mono fixee smaller and/or QMIX larger. If we get preliminary results within the discussion period, we will post them. However, note that the performances of all Q+FIX variants are comparable, yet Q+FIX-{sum,lin} employ the **smallest** mixers by far. This hints at our mixing structure being a core contributor to performance regardless of fixee size; we expect the additional ablation to also confirm this. Also see our response to Reviewer `zepP` for a related topic.
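As a concrete sanity check of this structure (a minimal sketch of ours with arbitrary numbers, using the special case $f(u_1, \ldots, u_N) = -\sum_i u_i^2$; none of the values below come from the paper), the following verifies numerically that with any positive weight $w$ and any non-positive $f$ that is zero iff all inputs are zero, the joint greedy action coincides with the per-agent greedy actions:

```python
import itertools
import numpy as np

# Sanity check of the IGM property for a QIGM-style value:
#   Q(a) = b + w(a) * f(u_1(a_1), ..., u_N(a_N)),   with w(a) > 0,
# where u_i(a_i) = Q_i(a_i) - max_a' Q_i(a') <= 0 are individual
# advantages and f is non-positive and zero iff all inputs are zero
# (here f(u) = -sum_i u_i^2). All concrete numbers are arbitrary.

rng = np.random.default_rng(0)
N, A = 3, 4                                 # agents, actions per agent
Q_i = rng.normal(size=(N, A))               # individual utilities
u = Q_i - Q_i.max(axis=1, keepdims=True)    # advantages, all <= 0

b = 1.7                                     # arbitrary bias term
w = lambda a: 0.5 + 0.1 * sum(a)            # arbitrary positive weight

def Q_joint(a):
    f = -sum(u[i, ai] ** 2 for i, ai in enumerate(a))  # f <= 0
    return b + w(a) * f

joint_greedy = max(itertools.product(range(A), repeat=N), key=Q_joint)
local_greedy = tuple(int(Q_i[i].argmax()) for i in range(N))
assert joint_greedy == local_greedy  # joint argmax = per-agent argmaxes
```

Since $f < 0$ everywhere except where every advantage is zero, and $w > 0$, the unique joint maximum sits exactly at the tuple of individual argmaxes, regardless of how $w$ and $b$ are chosen.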
# Rebuttal Summary We believe we have addressed all concerns and, aside from minor clarifications, we reaffirm the correctness and rigor of our theory and results. We hope the reviewer will reconsider their evaluation positively and that they will let us know of any further concerns. 1. Wang et al. "QPLEX: Duplex Dueling Multi-Agent Q-Learning" ICLR 2021 2. Marchesini et al. "On Stateful Value Factorization in Multi-Agent Reinforcement Learning" AAMAS 2025 3. github.com/oxwhirl/pymarl 4. github.com/wjh720/QPLEX 5. github.com/benellis3/pymarl2 6. Baisero et al. "Unbiased Asymmetric Reinforcement Learning under Partial Observability" AAMAS 2022 7. Baisero et al. "Asymmetric DQN for Partially Observable Reinforcement Learning" UAI 2022 8. Lyu et al. "A Deeper Understanding of State-Based Critics in Multi-Agent Reinforcement Learning" AAAI 2022 9. Lyu et al. "On centralized critics in multi-agent reinforcement learning" JAIR 2023 10. Foerster et al. "Counterfactual Multi-Agent Policy Gradients" arXiv v3 2024 --- Rebuttal Comment 1.1: Comment: Thank you, Authors, for your rebuttal. Your statement that you will include the citations for state-history value functions makes me more optimistic about this work. However, my most crucial problems with this paper have not been addressed. > 1. I would have to see any preliminary results (possibly under an anonymous link) before I lean positively towards this paper. Crucially, an ablation which compares some (doesn't have to be all) variants of QFix to non-QFix methods, with bigger networks, is a prerequisite. > 2. It is not true that Definition 3.1 prescribes a unique maximum. Generally, $argmax$ is a set and IGM says that the argmax set of the global value function is the product of argmax sets of local value functions. For example, in the case of one-dimensional actions and $N$ agents, if $Q(s,a)=-(a_1 - 1)^2 (a_1 + 1)^2 \dots (a_N - 1)^2 (a_N + 1)^2$, the IGM from Definition 3.1 holds and there are $2^N$ maxima.
My original comment remains unchanged. > 3. Regarding Lemma 4.2 and Theorem 4.3 and 4.4: you claim the ease with which your results follow to be a strength. If your theoretical results are meant to be a contribution of your paper, the formulated problems shouldn't be too obvious to prove results about. If something follows too easily, it should be acknowledged. For example, I can now define a function class, let's say $Q(h,a) = b(h) - w(h,a)(u_1\cdot \dots \cdot u_N)^2$ where $w(h,a) > 0$ satisfies your conditions. Of course you can prove the same results about it and it is even simpler than your proposed class. This brings me to another point: words like "minimal" should be used with caution - minimality means something very specific. > 4. Regarding your use of universal function approximation and citation of QPLEX (which I am familiar with): the fact that the QPLEX paper does something does not make it correct. QPLEX, just like you in this rebuttal, refers to the universal function approximation theorem. But even that theorem (including the version cited by QPLEX (Csaji, 2021)) assumes that the approximated function is continuous. You do not, and I gave you an example where your theorem breaks. Before I consider raising my score, the theoretical limitations of this paper should be addressed, and its rigor improved. **References** Csaji, 2021. Approximation with Artificial Neural Networks. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the further feedback. Please see auxiliary figures at [1], which includes: - Fig 1: Updated results (now 5-6 runs per model per task, also shown as interquartile mean (IQM) [2]), of interest to rev. `5gNj`. - Fig 2: Probability of improvement [2], of interest to rev. `5gNj`. - Tab 1, Fig 3: Model size ablation, of interest to revs. `zepP`, `Cnks`. 1. We ran additional experiments for QMIX-big (QMIX with bigger size) and Q+FIX-mono-small (Q+FIX-mono with smaller size) on all the 5v5 maps. [1, Table 1] shows all mixer sizes.
In terms of size, QMIX-big is comparable to QPLEX and Q+FIX-mono, while Q+FIX-mono-small is comparable to QMIX. [1, Fig 3] contains the ablation results; to avoid clutter, only QMIX, QMIX-big, Q+FIX-mono and Q+FIX-mono-small are shown (other methods are shown in [1, Fig 1]). These results reaffirm that Q+FIX-mono performs well not because of model size, but often in spite of smaller models, and due to our mixing structure. For the final version of the paper, we will extend this ablation to 10v10 and 20v20. 2. We understand better now that $\argmax$ can itself describe a set of solutions, and does not intrinsically assume a unique maximal element; we agree completely with the reviewer, and will fix the definitions accordingly. We note that this does not affect QIGM or QFIX. 3. - We are unable to concretely understand the reviewer’s concern here; if they are saying that the lemmas/theorems are so obvious they need not be stated or proven, then we strongly disagree, and expect other reviewers would have requested formal proof. If they are saying that they do not meet a threshold of importance to be called theorems, but should, e.g., be reformulated as lesser results like propositions, then that is agreeable and we can make these minor changes. If the concern lies elsewhere, we would appreciate further clarification. - The example provided is not just another function class; it is very specifically a special case of QIGM for one specific function $f(u_1, \ldots, u_N) = - \sum_i u_i^2$. This is a perfectly valid special case of QIGM, but our result remains, strictly speaking, a generalization of any of the provided examples. The importance of proving the more general case is that it is necessary for QFIX. Without the general case over a general class of $f$ functions, we would not be able to define QFIX by having a fixee advantage take the role of $f$. Without proving lemma 4.2 and theorem 4.3 for a general class of functions $f$, QFIX would not exist.
- We agree that our use of the term "minimal" is not formal and can cause confusion; we will happily replace all mentions of "minimality" with something less formal like "simplicity". 4. We understand the concern better now; universal function approximation theorems (UATs) come in many forms, but not all UATs are exclusively formulated to approximate continuous functions, at least partially because not all refer to the same notion of approximation. The most well-known UAT, by Cybenko, is formulated in terms of uniform convergence, a very strong notion of approximation. However, there are other forms of UAT that use weaker notions of approximation and that are applicable to approximating non-continuous functions. E.g., Hornik’s Theorem [3, Theorem 1] is another popular UAT, and is based on $p$-norm convergence (with $p<\infty$), which proves approximation of $L^p$ functions; this includes large classes of non-continuous functions. In the same document, Hornik informally formulates a corollary that implies another form of approximation for functions that are merely measurable; this is a version of the UAT we can employ while making minimal assumptions on $Q$ and $Q_\text{fixee}$. We are happy to clarify these assumptions and conclusions more explicitly; Thm 4.3 is not a statement related to NNs, so it needs no adjustment (it fundamentally states that eqs (38, 39) are the values of $w, b$ sufficient to guarantee QIGM=Q for arbitrary Q). Thm 4.4 does need to be reformulated. We need to assume IGM values that are measurable, and fixees that are also measurable. We must also assume that the fixee’s preimage $A^{-1}_\text{fixee}(0)$ is a measurable set. All of these are fairly mild assumptions. Then, eq (39) is trivially measurable, and eq (38) is measurable as a whole, as it is a piecewise construction based on measurable functions on measurable partitions.
Since eqs (38, 39) are measurable, we can apply the corollary informally stated in the discussion section of [3], to justify using neural networks to learn these functions, with the corresponding approximation guarantees on compact subsets of the input space. **References** 1. https://anonymous.4open.science/r/qfix-icml-rebuttal-5C81/icml-2025-rebuttal.pdf 2. Agarwal et al., "Deep Reinforcement Learning at the Edge of the Statistical Precipice", NeurIPS 2021. 3. Hornik, "Approximation Capabilities of Multilayer Feedforward Networks", Neural Networks 4, 1991.
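For completeness, the $L^p$ universality result invoked above ([3], Theorem 1) can be paraphrased as follows (our paraphrase, not a quote):

$$
\forall h \in L^p(\mu),\ \forall \varepsilon > 0,\ \exists g \in \mathcal{N} :\quad
\left( \int_{\mathbb{R}^k} \lvert g(x) - h(x) \rvert^p \, d\mu(x) \right)^{1/p} < \varepsilon,
$$

where $\mathcal{N}$ is the class of single-hidden-layer feedforward networks with a bounded nonconstant activation, $\mu$ is any finite measure on $\mathbb{R}^k$, and $1 \le p < \infty$. Since $L^p(\mu)$ contains large classes of discontinuous functions, this notion of approximation covers the measurable constructions of eqs (38, 39), unlike uniform-convergence UATs that require continuity.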
Summary: In this paper, the authors address the problem of cooperative multi-agent reinforcement learning with value function decomposition methods. They propose a class of decomposition functions that are complete with respect to the IGM principle (the max of individual value functions matches the max of the joint value function). The proposed class of functions can be applied to previous value function decomposition methods, and the authors show how previous methods fit in the proposed theoretical framework. The theoretical method is used to design several new algorithms that bring simplification (in terms of number of parameters) compared to previous work. The proposed methods are evaluated on the SMACv2 multi-agent RL benchmark against other value decomposition methods. **Update after rebuttal** Most of my comments were addressed and I am more confident about the results after reading the rebuttal. This paper was interesting and I believe the theoretical discussion is helpful to the field. Claims And Evidence: The authors claim to "fix" the value decomposition problem, the problem being that the classes of decomposition functions used previously were either too complex or not expressive enough. Theorems show that the proposed class of functions encompasses the previous complex functions. Empirical results show that the proposed algorithms have similar performance to the more complex method (QPLEX). The claim that the proposed method is simpler is a bit harder to evaluate. The authors show that they can devise a value decomposition method with fewer parameters than QPLEX. The derivation of the method does seem a bit simpler than QPLEX's, but it is quite subjective in my opinion. A lot of the claim is about "simplicity" and "fixing" the previous work. Although the theoretical approach supports the claim that the proposed algorithm is mathematically sound, the "fixing" is a bit of a stretch since the performance gap does not appear that significant empirically.
Methods And Evaluation Criteria: The authors evaluate the methods on SMACv2, which is a common MARL benchmark. They use return as opposed to win rate, and they give a justification. I am not familiar enough with the benchmark to know if the justification is correct, but I don’t think it is unreasonable to look at returns. The baselines (VDN, QMIX, QPLEX) are relevant. Adding QTRAN would have brought more completeness, but it is still ok. It would also have been useful to mention a non-value-decomposition method, or at least remind the reader whether or not those methods are the state of the art on this benchmark (I remember that MAPPO was quite a strong contender). Theoretical Claims: They claim that the proposed value function decomposition scheme is IGM complete and can be used to make previous schemes IGM complete. I checked Lemma 4.2. I went through Theorem 4.3; the proofs make sense to me, as one can construct a function Q_IGM from any function in the IGM class. Experimental Designs Or Analyses: 3 runs per model is fairly low, and the “statistical precipice” paper recommends more, especially given how close the model performance is in Figure 2. In Figure 3 the authors average the performance across 27 runs; it is a bit unclear why they don’t show this directly in Figure 3. The authors mention that VDN fails to be a competitive baseline, but when one looks at Figure 2, VDN is not always the worst, and the asymptotic gap between VDN and the best method seems to be within the confidence interval. Supplementary Material: I went through part A, which was useful to understand the theorems. I looked at the curve with the win rates. Relation To Broader Scientific Literature: They position themselves as an improvement over the previous value decomposition methods VDN, QMIX, and QPLEX, which are correctly presented in the paper. Honorable mentions of other MARL methods could have been useful. Essential References Not Discussed: Not that I could think of.
Other Strengths And Weaknesses: Strengths: - The problem formulation and description of the different steps from previous work was very clear and useful for understanding the different innovations that happened in value decomposition methods. - The theoretical treatment of the value decomposition is comprehensive and provides some harmonization of previous work. I am not sure I would really call it a simplification, but at least it gives a sound theoretical framework and it shows well how QPLEX falls into it. Weaknesses: - The proposed theoretical form is helpful in ensuring IGM, but it does not guide us toward the best function class for solving the original problem of collaborative MARL. - Figure 2 shows that most of the variants provide the same performance. The theoretical analysis helped in simplifying the model but not in beating QPLEX (although perhaps solving some instability issues). I also find it a bit puzzling that the three Q+FIX methods have about the same performance. It would be great if the authors could elaborate on that aspect. Other Comments Or Suggestions: - I think the writing style could be more nuanced when comparing to the related work. The authors make a bold claim of “fixing value decomposition” and highlight the issues in previous work. Although the work of the paper is interesting and impactful, I think the writing could be a bit more modest and consider that the previous methods (even if they have issues) did help in coming up with the new idea. E.g. "QPLEX is more convoluted than our version" => "QPLEX is a specific case of our proposed decomposition scheme." - Definition 3.3 seems like a lemma or a property rather than a definition? Questions For Authors: - I did not understand the problem mentioned with WQMIX in the related work; could the authors further clarify or provide an example? - Why do the Q+FIX methods have the same performance on the benchmark while the design choices for w and b are different? Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. # Direct Questions ## Re: WQMIX The WQMIX theory [1] explicitly assumes fully-observable control (MMDP), and makes assumptions that do not hold for Dec-POMDPs, e.g., that decentralized policies can achieve the same optimal behavior as centralized policies. We will clarify this in the paper. ## Re: Similar Performance of Q+FIX variants We believe the similarity in performance of Q+FIX variants hints at the importance of our mixing: - When employing our mixing, it does not matter how "(un)sophisticated" the fixee is; we are able to elevate all fixees equitably. This demonstrates the effectiveness of our structure over model size, and suggests that smaller models may be preferable. - Comparing the stability of Q+FIX to the instability of QPLEX suggests that the complexity/size of QPLEX is a hindrance and that achieving IGM-completeness via simpler models is more effective. We also note that the models of $w, b$ are not different across Q+FIX variants; only the fixee is. Perhaps the reviewer is referring to the $w, b$ constructed in the proof of thm 4.3; however, that is a construction to prove IGM-completeness, and not a requirement. # Other Comments ## Re: Simplicity Our claims of simplicity are informed quantitatively (Table 1) and qualitatively (eqs (22,25) are simpler than eq (16)). The claims of "fixing" refer to the expansion of the representation class of VDN and QMIX, not performance. We will clarify both points in the paper. ## Re: Statistical Significance We share the reviewer's concern that modern RL suffers from issues of statistical significance due to the increasing complexity of evaluation. While at the time of submission we were limited to 3 seeds per model per task, we have continued to run more evaluations. We are now running the 5th seed, and the results are consistent, with higher significance.
We note that [3] does not prescribe a high number of seeds, but rather recognizes the practical limitations of evaluation ("3-10 runs are prevalent in deep RL as it is often computationally prohibitive to evaluate more runs"), and makes recommendations that are "easily applicable with 3-10 runs per task". We already adhere to all applicable recommendations: - confidence intervals over point estimates - aggregate results over tasks to increase significance (though we use mean, not IQM) When aggregating using IQM, the distinction between QPLEX and QFIX drops (though QFIX still remains in a slight lead); clearly IQM benefits QPLEX by ignoring its unstable runs. We will include the IQM results in the final paper as well, though we believe it's important to note that QPLEX remains less stable. It's not clear how to apply other recommendations that require each run to be summarized by a single scalar score; e.g., using final or maximal performance would be unfair, respectively against or in favor of QPLEX. We are happy to take suggestions. ## Re: Fig 3 We thank the reviewer for pointing out an issue in the presentation of fig 3. The intent was to aggregate results in accordance with [4], first normalizing returns per task separately (to avoid tasks with wider return ranges dominating over others), then aggregating across tasks. We have fixed the plot, and it is almost indistinguishable; this is reasonable, as the return ranges are "similar" (same magnitude). ## Re: Function Class for MARL We agree that whether value decomposition methods represent the best function class for coop MARL remains an open question. However, that question falls beyond the scope of our work: our goal is to improve upon value decomposition methods themselves.
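Returning to the aggregation point under "Re: Fig 3" above, the per-task normalization can be sketched as follows (a minimal illustration with made-up numbers; the exact normalization follows [4] and may differ in detail, e.g., in the choice of per-task bounds):

```python
import numpy as np

# Sketch of the described aggregation: normalize returns per task
# (here min-max over runs, one possible choice) so that tasks with
# wider return ranges do not dominate, then average across tasks.
# All numbers below are made up for illustration only.
returns = np.array([[10.0, 200.0, 3.0],    # rows: runs, cols: tasks
                    [12.0, 180.0, 4.0],
                    [ 8.0, 220.0, 2.0]])

lo, hi = returns.min(axis=0), returns.max(axis=0)
normalized = (returns - lo) / (hi - lo)    # each task scaled to [0, 1]
aggregate = normalized.mean(axis=1)        # one score per run
print(aggregate)                           # approx [0.5, 0.667, 0.333]
```

Without the per-task normalization, the second task (returns around 200) would dominate the plain mean; after normalization, every task contributes on the same [0, 1] scale.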
## Re: Relationship to QPLEX Though QPLEX and QFIX are both IGM-complete, QPLEX mixing is not strictly a special case of QFIX mixing: it is not possible to take the QPLEX models $w_i, b_i, \lambda_i$ and construct equivalent QFIX models $w, b$ s.t. QFIX and QPLEX values are equal for all inputs; for equality, the QFIX models must also depend on $Q_i$. In that sense, QFIX is not just a reparameterization of QPLEX. We will clarify this distinction in the paper. ## Re: Def 3.3 Defs 3.3, 4.1 are framed in accordance with Def 1 from [5]. However, we agree and will reframe both. # Rebuttal Summary We thank the reviewer for their feedback. We believe we have addressed all the stated concerns, and hope they will revisit their evaluation positively. 1. Rashid et al. "Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning" NeurIPS 2020 2. Marchesini et al. "On Stateful Value Factorization in Multi-Agent Reinforcement Learning" AAMAS 2025 3. Agarwal et al. "Deep Reinforcement Learning at the Edge of the Statistical Precipice" NeurIPS 2021 4. Papoudakis et al. "Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks" NeurIPS 2021 5. Wang et al. "QPLEX: Duplex Dueling Multi-Agent Q-Learning" ICLR 2021. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my concerns and have updated my score. I think the method is valuable. I am not familiar enough with MoE to judge if the application to MT-MARL is too straightforward as mentioned by Reviewer 8eg2; at least to me it is interesting. --- Reply to Comment 1.1.1: Comment: Firstly, we note that the reviewer’s response to our rebuttal appears to be related to another submission, not ours. During the rebuttal period, we were able to update our results in response to some concerns raised about statistical significance.
Please see auxiliary figures at [1], which includes: - Fig 1: Updated results (now 5-6 runs per model per task, also shown as interquartile mean (IQM) [2]), of interest to rev. `5gNj`. - Fig 2: Probability of improvement [2], of interest to rev. `5gNj`. - Tab 1, Fig 3: Model size ablation, of interest to revs. `zepP`, `Cnks`. We note that the authors of [2] claim that a probability of improvement (POI) that is above 50% with its entire CI indicates a statistically significant result; out of all methods, Q+FIX-sum is the only one to achieve this against all other methods. **References** 1. https://anonymous.4open.science/r/qfix-icml-rebuttal-5C81/icml-2025-rebuttal.pdf 2. Agarwal et al., "Deep Reinforcement Learning at the Edge of the Statistical Precipice", NeurIPS 2021.
Summary: The paper bridges theory and practice by proposing QFIX, a minimalist yet powerful framework for IGM-complete value decomposition. By extending prior methods with a simple fixing mechanism, QFIX achieves superior performance, stability, and scalability, setting a new standard for cooperative MARL algorithms. ## Update after Rebuttal The authors have addressed several of my concerns. However, the use of outdated experimental environments and baseline algorithms remains an unresolved issue. While the authors provided some justification for their choices, more recent studies in the MARL community have already adopted modern benchmarks and stronger baselines, making the current experimental setup less compelling. I believe it is important for the community to move away from legacy environments and evaluation protocols to ensure progress and fair comparison. As such, I am keeping my original score. Nevertheless, I believe that updating the experimental section would significantly improve the overall quality and impact of the paper. Claims And Evidence: This paper is theoretically innovative, with a complete proof process, and the authors have conducted tests on the SMAC benchmark, providing a certain degree of support for their claims. Methods And Evaluation Criteria: The QFIX series of methods and their variants proposed in this paper take into account the issue that value function decomposition in multi-agent reinforcement learning needs to satisfy the IGM property. Their methodological foundation is a minimal IGM formulation, which aligns with the requirements of the relevant problems. Meanwhile, the paper adopts SMACv2, a widely recognized benchmark platform in the field of multi-agent reinforcement learning, as the experimental environment, and evaluates performance using metrics such as average return, with win rate results further explained in the appendix.
These evaluation criteria effectively reflect the performance of the methods in collaborative tasks. Theoretical Claims: I didn't carefully check the proofs in this paper, but they seem to be right. Experimental Designs Or Analyses: The experimental benchmarks are reasonably chosen, but the ablation experiments may be insufficient. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to multiple strands of literature in multi-agent reinforcement learning, providing a new perspective for future related research. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. This article is written in a smooth and coherent manner, maintaining a high level of readability from theoretical derivation to experimental validation. The derivation process of QFIX, Q+FIX, and their variants progresses step by step, starting from the issues with VDN and QMIX, moving on to corrective approaches, and finally introducing an optimized additive correction strategy. This demonstrates strong continuity and coherence in the research line of thought. 2. The method is scalable, as QFIX adopts a "correction" network, which enhances the model's expressive power without altering the core structure of the original approach. It is compatible with VDN and QMIX and can be extended to other non-IGM-complete methods. Weaknesses: 1. The theoretical proof assumes that the network has sufficient expressive power, but in practical training, the performance is constrained by model capacity and optimization difficulties. The article does not explore the impact of different network architectures and hyperparameters on the final performance. 2. It is unclear whether the fixing network's improvement of the original method is due to the increase in parameters or the enhancement of the method's representational capacity.
Other Comments Or Suggestions: No Questions For Authors: What is the reason for the improvement in the method's performance due to the correction network? Is it because of the increase in parameters or the enhancement of completeness? This point does not seem to be confirmed by relevant experiments. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their kind words and positive feedback. # Direct Questions ## Re: Reason for the Improvement and Model Size We believe that the empirical results combined with the model sizes of Table 1 provide a compelling argument that the performance of Q+FIX is driven by its mixing mechanism rather than by mixer size, and in almost all cases **despite** smaller mixer size. Note that Table 1 shows the sizes of the **mixing** networks alone; this includes the size of the fixee models (when applicable), and excludes the individual value models. Moreover: - VDN uses no mixer (hence no entry in Table 1). Any parameterized mixing method must by definition have more parameters; this is unavoidable. - QPLEX employs by far the most parameters of any method, including all Q+FIX variants. - QMIX employs a mixing network of intermediate size (between VDN and QPLEX). Though Q+FIX-mono employs more parameters than QMIX, this is the only case of a parametric Q+FIX mixer being larger than any other parametric baseline mixer; again, Q+FIX-{sum,lin} employ significantly fewer parameters than QMIX. Finally, we note that the performances of all Q+FIX variants are similar, which suggests that **conditioned on the use of our mixing structure**, the quality and size of the fixee are not an important factor, and that it is specifically the QFIX mixing structure that drives performance. To practitioners, this suggests the use of the smallest models, i.e., Q+FIX-{sum,lin}. Though we believe the above already provides sufficient evidence for the effectiveness of our mixing structure over model size, we will also run an ablation on model size as requested by reviewer `Cnks`, for the one case where the ablation is feasible, to further prove this point.
# Other Comments ## Re: Assumption of sufficient expressive power The assumption of "sufficient expressive power" is an appeal to the Universal Approximation Theorem (UAT) [1, 2], and this methodology is consistent with other works in the literature, the most relevant being Proposition 2 from [3]. Naturally, any concrete architecture may fall short of UAT in practice; this is true of any deep learning model. We will clarify this point better in the text. Re: hyperparameterization: all baselines (VDN, QMIX, QPLEX) employ the parameterization provided by [4], from the corresponding codebase `pymarl2` [5]. As such, the baseline parameterization has already been optimized by prior work. Our mixing models, on the other hand, have not, yet we have found good results even when using the baseline defaults, and without spending significant computational resources towards hyperparameter optimization of Q+FIX. # Rebuttal Summary We believe we have addressed the reviewer's concerns and that no major concerns remain, and we hope the reviewer will revisit their evaluation accordingly. If the reviewer has further concerns, we will gladly use our allotted discussion time to receive further feedback and provide further information. 1. Cybenko "Approximation by superpositions of a sigmoidal function" MCSS 1989 2. Lu et al. "The Expressive Power of Neural Networks: A View from the Width" NeurIPS 2017 3. Wang et al. "QPLEX: Duplex Dueling Multi-Agent Q-Learning" ICLR 2021 4. Ellis et al. "SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning" NeurIPS (Datasets and Benchmarks Track) 2023 5. github.com/benellis3/pymarl2 --- Rebuttal Comment 1.1: Comment: The authors' response has partially addressed some of my concerns, for which I am grateful. However, I still have a few reservations beyond the impact of model size. First, I share the concerns raised by Reviewer Cnks, which have a significant influence on my overall evaluation of the paper.
In addition, I am puzzled as to why all the baselines used in the paper are from five years ago. Could QFIX be applied to more recent value decomposition algorithms as well? Moreover, most recent MARL papers accepted at top conferences typically demonstrate the generality of their proposed methods across multiple diverse environments—this is currently lacking in the submission. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the further feedback. Please see auxiliary figures at [1], which includes: - Fig 1: Updated results (now 5-6 runs per model per task, also shown as interquartile mean (IQM) [2]), of interest to rev. `5gNj`. - Fig 2: Probability of improvement [2], of interest to rev. `5gNj`. - Tab 1, Fig 3: Model size ablation, of interest to revs. `zepP`, `Cnks`. **General Concerns Shared with Reviewer `Cnks`** Please see our final response to Reviewer `Cnks`, which addresses all of their remaining concerns. To summarize: - We ran preliminary ablation results on the sizes of QMIX and Q+FIX-mono, which shows that Q+FIX-mono outperforms QMIX even when QMIX is made bigger and Q+FIX-mono is made smaller. This again confirms that the performance of Q+FIX is driven by the mixing structure. - The issue with definitions 3.4 and 4.1 are a trivial fix that we will perform, and have no consequence on the rest of the work. - The claims of IGM-completeness are easily formalized by making mild assumptions and employing other forms of universal function approximation that are applicable to wide sets of measurable functions, not just continuous ones. **Can QFIX Be Applied to Other Fixees?** QFIX can be applied to any other value function decomposition model; however it only really makes sense to apply it to fixees that are not already IGM-complete. That is part of why we focus on VDN and QMIX. 
Further, our results indicate that QFIX performs best when paired with simpler fixees, as indicated by Q+FIX-sum performing at least marginally better than the other variants.

**More Recent Methods**

Though methods newer than VDN, QMIX, and QPLEX do exist, to the best of our knowledge, virtually none have withstood the test of time, and VDN, QMIX, and QPLEX still represent the main reference points for value function decomposition methods to this day; e.g.:

- The MARL book from 2024 [3], in the chapter dedicated to value decomposition, only mentions VDN, QMIX, WQMIX, QTRAN, and QPLEX as notable value decomposition methods, i.e., all methods we have discussed in our submission (with a discussion of the limitations of WQMIX and QTRAN in our related work section). FACMAC is also mentioned, but only as a reference to a policy gradient method that employs critic factorization (where IGM is not even a necessary or desired condition).
- The SMACv2 paper from 2023 [4] only uses VDN and QMIX as evaluation baselines belonging to the value function decomposition family.
- The "rethinking implementation details" ICLR blogpost from 2023 [5] focuses exclusively on QMIX.

These are all primary, up-to-date reference points for value function decomposition, and they hardly focus on anything more than VDN, QMIX, and QPLEX (and sometimes WQMIX and QTRAN).

**Evaluation Environments**

Even though other evaluation environments do exist in the literature, SMAC variations (SMACv1, SMACv2, SMAX, etc.) remain by far the most popular environments for cooperative MARL (not least because of their high variability in terms of maps, setups, team sizes, procedurally generated scenarios, etc.). We disagree that an evaluation encompassing a significantly wider variety than ours is a common standard.
Among others, we can point to the following seminal works:

- Both QMIX papers [6, 7] (the conference and journal versions) perform evaluations exclusively on matrix games (of little interest to us) and SMAC scenarios.
- WQMIX [8] performs evaluations primarily on SMAC scenarios (though it does include one small evaluation on Predator-Prey, this is a small component of the evaluation).
- QPLEX [9] performs evaluations exclusively on SMAC scenarios.
- The "rethinking implementation details" ICLR blogpost [5] performs evaluations exclusively on SMAC scenarios.

All of these are seminal works from top-tier venues, and their methods were almost exclusively evaluated on SMAC.

**References**

1. https://anonymous.4open.science/r/qfix-icml-rebuttal-5C81/icml-2025-rebuttal.pdf
2. Agarwal et al., "Deep Reinforcement Learning at the Edge of the Statistical Precipice", NeurIPS 2021.
3. Albrecht et al., "Multi-Agent Reinforcement Learning: Foundations and Modern Approaches", MIT Press, 2024.
4. Ellis et al., "SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning", NeurIPS (Datasets and Benchmarks Track) 2023.
5. https://iclr-blogposts.github.io/2023/blog/2023/riit/
6. Rashid et al., "QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning", ICML 2018.
7. Rashid et al., "Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning", JMLR 2020.
8. Rashid et al., "Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning", NeurIPS 2020.
9. Wang et al., "QPLEX: Duplex Dueling Multi-Agent Q-Learning", ICLR 2020.
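As a concrete reference point for the value decomposition discussion above, here is a minimal NumPy sketch (our own illustration, not from the paper) of the simplest fixee, VDN's additive mixing, and the IGM consistency it trivially satisfies; QMIX and QFIX replace the sum with learned mixing structures.

```python
import numpy as np

def vdn_mix(per_agent_values):
    """VDN's additive mixing: the joint Q-value is the sum of per-agent
    utilities. QMIX generalizes this with a learned monotonic mixing
    network; QFIX (per the rebuttal) wraps such 'fixees'."""
    return float(np.sum(per_agent_values))

# Per-agent utilities: rows are agents, columns are that agent's actions.
q = np.array([[1.0, 3.0],    # agent 1
              [2.0, 0.5]])   # agent 2

# Decentralized greedy actions (each agent argmaxes its own utility) ...
greedy = tuple(int(np.argmax(q_i)) for q_i in q)

# ... coincide with the joint argmax of the mixed value: the IGM
# (Individual-Global-Max) condition, which holds trivially for a sum.
joint_best = max(
    ((a1, a2) for a1 in range(2) for a2 in range(2)),
    key=lambda a: vdn_mix([q[0, a[0]], q[1, a[1]]]),
)
assert greedy == joint_best == (1, 0)
```

The toy utilities `q` are made up for illustration; the point is only that additive mixing makes decentralized greedy action selection consistent with the joint optimum.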
Mastering Massive Multi-Task Reinforcement Learning via Mixture-of-Expert Decision Transformer
Accept (poster)
Summary: This paper studies the Massive Multi-Task Reinforcement Learning problem and proposes M3DT. When the number of tasks is large, it shows great performance improvement compared with former methods. Claims And Evidence: In section 3.1, the authors propose an insight "Reducing the learning task number, particularly to a sufficiently small scale can significantly enhance model performance" according to Figure 2. However, I do not feel the logic is smooth here. In my opinion, Figure 2 only shows that the more tasks there are, the more conflict and worse performance they have. This paper studies massive MTRL where the number of tasks is larger, say 160. Then Figure 2 does not tell us reducing 160 tasks to a small number of groups, say 20, can enhance the performance. First, the performance may highly depend on the grouping strategies. Second, there may still exist conflict within task groups. Thus, in terms of the presentation, this claim is not appropriate. Methods And Evaluation Criteria: Yes, the evaluation and methods make sense to me. Theoretical Claims: There is no theoretical analysis in this paper. Experimental Designs Or Analyses: This paper studies the massive multi-task RL setting where the number of tasks could be 160, which is sound to me. Supplementary Material: There is no attached supplementary material. However, I have checked the appendix for implementation. Relation To Broader Scientific Literature: This paper is highly related to current literature in this area. Essential References Not Discussed: I am not sure whether all essential references are included. Other Strengths And Weaknesses: 1. In Figure 4, the model architecture is drawn from bottom to top. However, the small figures for those tasks are drawn above the output layer with a "down arrow". This may raise some confusion. 2. The description of the method/algorithm is not clear, especially for the grouping part. 
I have the following questions but could not find the answers in the paper:
a) How is the number of groups determined?
b) What is the grouping frequency, e.g., grouping every epoch?
c) M3DT-G needs to calculate all gradients, so the computation cost is pretty high.
d) Is there any special treatment for tasks within the same group? For example, do the authors treat them as equally important and use linear scalarization, perhaps?
3. In Figure 5 on the left, why is there a performance drop from 98M to 123M?
4. I appreciate the ablation study in Section 5.2 to support the method design. However, I do not understand why the expert training and router training are separated, since they are trained simultaneously in the original MoE with a load-balancing loss. Why don't the authors simply group tasks and train the MoE layers (router + experts)?
5. Lastly, this paper studies the MTRL problem. Nevertheless, I feel the methods are general and can be applied to supervised MTL as well. What specific challenges in MTRL do the authors find and solve? The novelty in the current version looks like using grouping and MoE in MTL.
I am happy to change my mind if these questions are answered or if I have misunderstood the paper.
Other Comments Or Suggestions: Please check weaknesses.
Questions For Authors: Please check weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and questions, which will surely improve our paper.

>Q1. The logic in the insight "Reducing ..." is not smooth. Fig.2 does not tell us that reducing 160 tasks to a small number of groups can enhance the performance. First, the performance may highly depend on the grouping strategies. Second, there may still exist conflict within task groups.

A1. (1) As stated in Section 3.1 (bolded "Performance Variance"), we test on different task sets each run. The STD reflects the model's robustness to variations in task combinations. When the task count is reduced to 10, the model exhibits minimal STD, demonstrating that the good performance is **robust to the grouping strategies**. Note: the specific criteria for constructing task subsets can be found in our top-level response to Reviewer is3P. (2) The gradient similarity on 10 tasks is 0.29 in Fig.2, indicating **gradient conflicts still exist within task groups**. However, compared to massive tasks, the **gradient conflict is significantly reduced** when training on 10 tasks, and the model achieves better scores. Therefore, by interpreting Fig.2 from the **reverse perspective (from right to left)**, we derive our robust insight. Importantly, this insight does not advocate focusing solely on small-scale tasks. Given the predetermined number of tasks the model must solve, how to reduce the number of tasks to be learned is precisely the problem we raised in Section 3.2 and addressed in our method.

>Q2. In Figure 4, the model is drawn bottom-to-top, but the task sets appear above the output layer with downward arrows, which may cause confusion.

A2. Thanks for identifying this. We have uploaded the revised figure at https://anonymous.4open.science/r/ICML_rebuttal_11746/revised_fig4.JPG and will replace it in the revised manuscript.

>Q3. How to determine the number of groups?

A3. We align the number of task groups with the number of experts, with each expert dedicated to a distinct task subset.
Increasing the number of groups = more experts = more model parameters. Through the experimental analysis in Fig.5 and Fig.7, we identify the optimal group (expert) counts for different task scales, which reflects the trade-off between task-subset learning difficulty and gating difficulty. We will add these details to the revised manuscript.

>Q4. What is the grouping frequency?

A4. Task grouping is performed **only once, after completing the first stage**. The overall algorithm workflow is: backbone training in the first stage, task grouping, each expert individually training on its corresponding fixed task group in the second stage, and router training in the third stage. The algorithm concludes after the third stage. **Each stage executes once** in the entire training process, with detailed training iterations per stage specified in Appendix A.5. We will supplement this with pseudocode in the revised manuscript.

>Q5. M3DT-G needs to calculate all gradients. The computation cost is high.

A5. Since task grouping is performed only once, M3DT-G requires merely a single additional computation of all tasks' gradients followed by K-means clustering, which is negligible compared to the overall training cost.

>Q6. Is there any special treatment for tasks within the same group?

A6. We do **not** apply any special processing or treatment to any of the tasks.

>Q7. In Figure 5 (left), why is there a performance drop from 98M to 123M?

A7. As analyzed in Section 5.2 (lines 402-415) and Fig.7, the overall performance is a trade-off between expert performance and gating difficulty. When the expert count exceeds a threshold, further increasing it no longer improves expert performance, while the complexity of the router's weight assignment over experts continues to grow. Thus, the performance plateaus or even degrades.

>Q8. Why don't the authors simply group tasks and train the MoE?

A8.
Through experiments, we evaluated this ablation: group tasks and train the MoE (simultaneously training the router and the task-ID-matched expert while freezing the other experts), which achieved **71.21**, compared to 77.89 in Table 2. The performance drop primarily stems from the instability caused by alternating optimization of different experts. It also fails to leverage the second stage, where experts can be trained independently and in parallel, resulting in a significantly prolonged training duration.

>Q9. I feel the methods are general and can be applied in supervised MTL. What specific challenges in MTRL do the authors find and solve?

A9. In RL, task distinctions are more pronounced and the decision-making process is more sensitive compared to regression and classification. While scaling up model parameters has proven effective for handling numerous tasks in other fields, it falls short in RL. This is the core challenge that our paper addresses. Our method can be readily adapted to other supervised MTL domains by simply replacing the backbone with other task-specific models. We hope this work can inspire new research in MTL and advance the field.
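To make the one-time grouping step described in A4/A5 concrete, here is a minimal sketch (our own illustration, not the authors' code) of M3DT-G's procedure: compute a gradient vector per task, measure the gradient-similarity metric the rebuttal describes (conflict = 1 - similarity), and cluster tasks with K-means. A plain-NumPy Lloyd iteration stands in for whatever K-means implementation is actually used.

```python
import numpy as np

def gradient_similarity(task_grads):
    """Average cosine similarity between each task's gradient and the mean
    gradient across tasks; gradient conflict = 1 - this value."""
    mean_g = task_grads.mean(axis=0)
    mean_g = mean_g / np.linalg.norm(mean_g)
    units = task_grads / np.linalg.norm(task_grads, axis=1, keepdims=True)
    return float((units @ mean_g).mean())

def group_tasks(task_grads, k, iters=20, seed=0):
    """One-shot K-means over per-task gradient vectors (run once after the
    first-stage backbone training). Returns a group index per task."""
    rng = np.random.default_rng(seed)
    centers = task_grads[rng.choice(len(task_grads), size=k, replace=False)]
    labels = np.zeros(len(task_grads), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(task_grads[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = task_grads[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Two synthetic 'families' of tasks with near-orthogonal gradients:
grads = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = group_tasks(grads, k=2)
# Tasks 0/1 and 2/3 land in different groups, and within-group gradient
# similarity exceeds the similarity measured over all four tasks.
```

The toy gradients are fabricated; in M3DT each gradient would be the flattened backbone gradient for one task, and each resulting group is assigned to one expert.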
Summary: Transformer-based models have recently shown success in offline reinforcement learning by framing the problem as a sequence modeling problem. Moreover, offline multi-task reinforcement learning (MTRL) has benefited from the high capacity of these models for solving complex and diverse tasks. Nevertheless, as the number of tasks massively increases, the overall performance starts to drop tremendously. Naive scaling of the transformer-based models doesn't counteract the drop in performance caused by scaling the number of tasks; the performance of the model eventually saturates as the model capacity increases. Accordingly, we are not gaining from the scalability of transformer-based models in MTRL as we do in the supervised learning setting. In this work, the authors propose a different way to scale the model by learning different experts for different groups of tasks. With that, scaling the model takes the form of increasing the number of experts, hence increasing the number of tasks that can be learned. Moreover, this reduces the conflicts between tasks since the number of tasks per expert can be lower. The proposed method, named M3DT, consists of three stages of learning: a backbone transformer model, experts, and a router model. The approach has been evaluated on 160 tasks, a mix of tasks from Meta-World, DM Control, and continuous control problems from MuJoCo, against relevant and recent offline MTRL baselines with a transformer-based architecture.

Claims And Evidence:
- In my opinion, the claims presented in this work are quite clear and well-motivated.
- The claims are supported by two studies (in Figures 2 and 3), which makes them very convincing.

Methods And Evaluation Criteria:
- I find the proposed approach, named M3DT, sound for tackling the problem of multi-task reinforcement learning in the offline setting.
- The massive number of tasks used for evaluation is interesting and quite motivating for the effectiveness of the approach.
Theoretical Claims:
- There are no theoretical claims presented in this work.

Experimental Designs Or Analyses:
- I highly appreciate the amount and the diversity of the experiments and ablations done in this work.
- As mentioned before, I like the two studies presented in Figures 2 and 3.

Supplementary Material:
- I checked the supplementary materials.

Relation To Broader Scientific Literature:
- I believe this work is important to close the gap between supervised learning and reinforcement learning in terms of the scaling of transformer-based models.
- I agree with the authors that offline MTRL has been limited to dozens of tasks without looking into more realistic scenarios that we expect from offline RL.
- Hopefully, this will encourage the online MTRL setting to handle such massive numbers of tasks as well.

Essential References Not Discussed:
- I have no recommendations for papers to be cited.

Other Strengths And Weaknesses:
- I found some weaknesses in this work:
  - The limitations of this approach are not well-discussed.
  - One important limitation, in my opinion, is the training time, since M3DT requires 3 stages of training. I would advise the authors to indicate the training time for the proposed approach compared to the baselines.
  - I believe the future work was never clearly stated. It is important to understand how to fix some limitations of this work or how to extend it to future directions.
  - In Figure 7, it seems that the performance of the proposed approach eventually saturates even when the number of experts increases. I would encourage the authors to comment on that.

Other Comments Or Suggestions:
- In Section 2 (Preliminaries), subsection 2 (Multi-Task RL), second line, there is a small typo in funct(u)ions.

Questions For Authors:
- How long is the training for M3DT compared to the other baselines?
- The expert performance seems better than the performance with the router; why do we need a router then? Am I missing something?
- Why do you freeze the *predict head* while the corresponding input has been changed because of the training of the router in the third phase? I would be happy to raise my score if my concerns are addressed. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our work. We offer our responses to address your concerns as follows, and will supplement these details and correct the typo in the revised manuscript.

>Q1. The limitations are not well-discussed. It is important to understand how to fix some limitations or extend this work to future directions.

A1. We primarily focus on the challenges in MTRL, analyzing the scalability of task numbers and parameter size, and proposing a paradigm that increases model parameters to achieve task scalability. However: (1) We did not focus on fine-grained network architecture design, such as optimizing the expert and router architectures, which could potentially further improve performance and reduce model size. (2) While we leave held-out task generalization and continual learning unexplored, our method's modular parameterization and group-wise learning naturally support solutions like held-out task adaptation via expert learning, dynamic skill composition for held-out tasks, and forgetting mitigation for the router on past tasks. Addressing these areas presents promising future work. (3) Scaling experts with task count raises inference costs. While we prioritize performance, we experimented with activating the Top-k experts with fixed k to control inference overhead, but it degrades performance. A more tailored Top-k gating mechanism, better aligned with our three-stage training paradigm, could enhance our practical applicability.

>Q2. The training time for the proposed approach compared to the baselines.

A2. Our experiments were conducted on an RTX 4090. We train PromptDT-5M for 4e5 steps in the first stage, taking around 5.2 hours. In the second stage, all experts are trained independently on their dedicated task groups in parallel, taking around 1.8 hours (the time for training a single expert). In the third stage, we train the router for 4e5 steps, taking 17.2 hours.
The reason for the long training time in the third stage is our code's lack of parallel computation across experts; code optimization may reduce this training time. **The total training time of M3DT-174M is around 24.2 hours. For comparison, training PromptDT-173M and MTDT-173M each takes around 21.3 hours, while training HarmoDT-173M requires 95.6 hours.**

>Q3. In Fig.7, the performance eventually saturates even when the number of experts increases.

A3. As stated in Section 5.2 (lines 402-415), the performance plateaus for two main reasons: (1) the router's weight allocation becomes increasingly difficult with more experts; (2) each expert's task load becomes small enough to achieve optimal performance, preventing further improvement in individual expert performance (dashed line). Thus, the overall performance eventually saturates.

>Q4. The expert performance seems better than the performance with the router; why do we need a router then?

A4. We apologize for the confusion. The "Expert Performance" is obtained by manually selecting each test task's corresponding expert based on its task ID and individually evaluating their performance. However, this approach is impractical for large-scale tasks in our overall method evaluation and in real-world applications: (1) the specific task IDs are usually unavailable, and (2) manual expert selection is prohibitively expensive or impossible. Therefore, a unified policy is essential. The router automatically assigns weights to each expert based on the backbone's hidden states, integrating all sub-policies into a unified policy without needing task IDs. This necessitates the router. We consider it reasonable that the framework's overall performance may be lower than the ideal case of perfect expert switching, as increasing the number of experts raises the difficulty of optimal weight allocation. We will rename "Expert Performance" to "Oracle Expert Selection Performance" in the revised manuscript and add the above explanation.

>Q5.
Why freeze the predict head while the corresponding input has been changed because of the training of the router?

A5. **Through experiments, we evaluated the performance of M3DT-G with simultaneous training of both the router and the predict head, achieving a score of 76.86.** This result is slightly lower than our original algorithm's 77.89, but outperforms the other ablation variants. The rationale for freezing the predict head is: (1) The predict head trained in the first stage already contains knowledge of all tasks, and continued training of the backbone parameters across all tasks would lead to overfitting due to severe gradient conflicts (as shown in Fig.6). (2) In the second-stage expert training, we also freeze the predict head, so all learned experts' outputs remain compatible with the predict head's input. In the third stage, the router performs a weighted sum of all expert outputs (with weights normalized by softmax to sum to 1), which means the entire MoE's final output can be viewed as equivalent to a single expert's output, thereby preserving compatibility with the predict head's expected input.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my concerns and answering my questions. Accordingly, I will increase my score.

---

Reply to Comment 1.1.1: Comment: Thanks for your reply. We appreciate your recognition of our responses. Your comments have been very helpful in refining our revised version, and we will incorporate these discussions accordingly.
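A5's compatibility argument (the softmax-normalized weighted sum of expert outputs looks like a single expert's output to the frozen predict head) can be sketched in a few lines. This is an illustrative toy: shapes are simplified, and the 5-layer-MLP router is abstracted away as given logits.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax; outputs are non-negative and sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def moe_output(expert_outputs, router_logits):
    """Softmax-weighted sum of expert outputs (the third-stage routing).
    Because the weights sum to 1, the result is a convex combination of
    expert outputs, so it stays in the region the frozen predict head
    already accepts as 'one expert's output'."""
    weights = softmax(router_logits)   # (n_experts,)
    return weights @ expert_outputs    # (output_dim,)

# Two experts with 2-d outputs; equal logits give equal weights.
expert_outputs = np.array([[1.0, 1.0],
                           [3.0, 3.0]])
out = moe_output(expert_outputs, router_logits=np.zeros(2))  # -> [2., 2.]
```

Note that driving one logit much higher than the rest recovers (approximately) hard selection of a single expert, which is why the oracle "Expert Performance" discussed in A4 is a limiting case of this weighting.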
Summary: This paper introduces M3DT, a novel mixture‐of-experts (MoE) extension of the Decision Transformer designed to tackle the scalability challenges in massive multi-task reinforcement learning. The method leverages task grouping, dedicated expert modules, and a three-stage training mechanism to reduce gradient conflicts and enable effective parameter scaling, demonstrating improved performance across up to 160 tasks on challenging benchmarks. **Update after rebuttal**--I appreciate the authors’ thorough revision and the additional explanation addressing my concerns. I have decided to keep my original score. Claims And Evidence: The authors claim that naive parameter scaling fails as task numbers increase and that their proposed MoE approach, with its task-specific grouping and staged training, significantly mitigates performance degradation. These claims are supported by extensive experiments and ablation studies comparing M3DT with existing baselines, although additional discussion on the statistical significance of improvements would be beneficial. Methods And Evaluation Criteria: The method is well-motivated and appropriate for addressing multi-task RL scalability. The use of standard benchmarks (Meta-World, DMControl, Mujoco, though mixed!) and normalized performance metrics strengthens the experimental evaluation. But my main concern is how “massive” these benchmarks are. Theoretical Claims: While the paper offers empirical insights into gradient conflicts and parameter scaling, it lacks rigorous formal proofs. Strengthening the theoretical analysis, particularly regarding the benefits of the three-stage training mechanism, would add depth to the work. Experimental Designs Or Analyses: The experimental design is comprehensive, including comparisons with state-of-the-art DT-based baselines and detailed ablation studies. Also, I’m curious how online multi-task methods would perform in these “massive” tasks. 
Supplementary Material: The supplementary material appears extensive and includes additional ablation studies, detailed experimental setups, and analyses of task grouping strategies, which complement the main text effectively.

Relation To Broader Scientific Literature: M3DT builds clearly on recent advances in DT and MoE architectures for multi-task RL. It addresses a significant gap in scaling multi-task RL, and the authors relate their work to relevant literature, although a discussion of some recent MoE approaches in reinforcement learning could provide additional context.

Essential References Not Discussed: It would be great if the authors could discuss more about the closely related MoE-based multi-task RL approaches (i.e., Hendawy et al. (2023) and Huang et al. (2024)).

Other Strengths And Weaknesses: Strengths include a novel approach to reducing gradient conflicts, a well-structured three-stage training process, and robust experimental validations. On the downside, parts of the presentation are dense and could benefit from clearer exposition, and further theoretical justification of the methods would strengthen the paper.

Other Comments Or Suggestions:
- Font size in the figures is too small.
- Inconsistency: Mixture-of-Expert vs. Mixture-of-Experts.
- In Table 1 (and check the others as well), please double-check to bold the values that are within a statistically significant range.

Questions For Authors:
- Could you elaborate more on how you measured "gradient conflict"? Did you observe the frequent occurrence of gradient conflicts in cases with more tasks?
- Can you elaborate on how the three-stage training mechanism specifically prevents overfitting to dominant tasks?
- Could the approach be extended to online multi-task settings, and if so, what modifications would be necessary?
- When should we use M3DT-Random, and when M3DT-Gradient? What are the criteria suggested by the authors?
- Regarding the router component, what specific design choices (e.g., architecture of the MLP) led to its effectiveness in dynamically allocating expert weights? - The normalized score is a key evaluation metric across diverse tasks. Could you elaborate on the normalization process used for each benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are greatly appreciative of your recognition of our work. We offer our responses to address your concerns as follows, and will address the formatting issues noted in Other Comments in the revised manuscript.

>Q1. How "massive" are these benchmarks?

A1. Current standard MTRL algorithms typically handle a limited number of tasks: offline methods are generally confined to 50 Meta-World tasks, while some online methods are restricted to the MT10 tasks. In contrast, our approach integrates 160 tasks from Meta-World, DMC, and MuJoCo Locomotion.

>Q2. Strengthening the theoretical analysis would add depth to the work.

A2. Thanks for your suggestion. However, our method is proposed based on observed experimental phenomena, making theoretical analysis difficult. Neither the DT nor the MoE we use has been theoretically analyzed. While theoretical methods are rare in MTRL compared to MTL, our future work may focus on developing a theoretically grounded MTRL algorithm.

>Q3. How would online MT methods perform on massive tasks?

A3. Recent studies have shown that offline MTRL methods outperform online methods on the Meta-World MT50 benchmark, as seen in HarmoDT and MTDiff. This performance gap may widen significantly when scaling to larger task sets, where online methods often degrade. As the task number increases, the required wall-time overhead for environment interaction grows substantially, leading to a significant expansion in training time.

>Q4. Discuss more about MoE-based MTRL approaches.

A4. Thanks for your suggestion. We will supplement and discuss [1, 2] in the revised manuscript.

[1] MoE-Loco: Mixture of Experts for Multitask Locomotion
[2] Multi-Task Reinforcement Learning With Attention-Based Mixture of Experts

>Q5. How did you measure "gradient conflict"? Did you observe frequent gradient conflicts with more tasks?

A5. We compute the gradients for each task and calculate the mean gradient across all tasks.
The gradient similarity is defined as the average cosine similarity between this mean gradient and all tasks' gradients. During training, we record this metric every 1e4 steps to obtain a similarity curve. After smoothing, once training stabilizes, we use the plateau value as our final gradient similarity. Gradient conflict is calculated as (1 - gradient similarity). Yes, our results confirm that gradient conflicts intensify with more tasks.

>Q6. How does the three-stage training specifically prevent overfitting to dominant tasks?

A6. In the first stage, we terminate full-task training when performance plateaus and gradient conflicts peak (Fig.6, left) to avoid overfitting to dominant tasks. In the second stage, we use task grouping to reduce the number of tasks each expert learns, mitigating gradient conflicts and preventing overfitting. In the third stage, we use the well-trained experts to balance the router's learning on each task.

>Q7. Could the approach be extended to online MT settings, and what modifications would be necessary?

A7. M3DT can be readily extended to online MTRL by simply replacing the backbone DT's training objective and optimization with those of Online DT and iterating through all tasks for parameter optimization. However, as task numbers increase, the time cost of interacting with all tasks becomes prohibitive, hence we still recommend offline as the primary paradigm.

>Q8. When to use M3DT-Random or M3DT-Gradient? What are the criteria?

A8. M3DT-G consistently outperforms M3DT-R. The latter is designed as a baseline to validate that even random task grouping improves performance by reducing the learning task load per parameter subset. M3DT-G requires only two additional steps, computing gradients across all tasks and performing K-means clustering, which add only a small amount of training time. We recommend using M3DT-G as the default approach.

>Q9. Regarding the router, what specific design choices (architecture of the MLP) led to its effectiveness?

A9.
Thanks to our analysis of task quantity and the proposed algorithm, we achieve strong results using a basic 5-layer MLP with ReLU activations for the router, without any specialized design. We believe optimizing the router structure or routing mechanism could further enhance performance or extend our method to generalization and continual learning scenarios, which we leave as future work.

>Q10. The normalized score is a key evaluation metric across diverse tasks. What is the normalization process used for each benchmark?

A10. As stated in Appendix A.1, we perform score normalization separately for each domain's tasks. For MW, we use the success metric. For DMC, we map the original range [0, 1000] to [0, 100]. For Cheetah-vel, we map [-100, -30] to [0, 100]. For Ant-dir, we map [0, 500] to [0, 100]. For the latter two domains, scores outside the original range are clipped to 0 or 100. For the overall average normalized score across all tasks, we assign equal weight to each task, computing the algorithm's final score as the uniform average over all 160 tasks.
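The per-domain normalization described in A10 can be written down directly. A small sketch (our own code; the (low, high) ranges are taken from the answer above, and Meta-World's success rate is used as-is):

```python
import numpy as np

# (low, high) raw-score ranges per domain, from A10.
RANGES = {
    "dmc": (0.0, 1000.0),
    "cheetah-vel": (-100.0, -30.0),
    "ant-dir": (0.0, 500.0),
}

def normalized_score(raw, domain):
    """Linearly map a raw return onto [0, 100], clipping values that fall
    outside the domain's range (as done for Cheetah-vel and Ant-dir)."""
    lo, hi = RANGES[domain]
    return float(np.clip(100.0 * (raw - lo) / (hi - lo), 0.0, 100.0))

def overall_score(per_task_scores):
    """Equal-weight uniform average over all tasks (all 160 in the paper)."""
    return float(np.mean(per_task_scores))
```

Note the equal-per-task weighting: with 80 of the 160 tasks coming from the two locomotion families, this averaging choice is exactly what the final review below scrutinizes.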
Summary: The authors study the problem of distilling a large multi-task offline RL dataset into a single policy via Prompt Decision Transformer (DT). The authors first study the scalability of Prompt DT with respect to both tasks and model size. These experiments provide a (very clear) demonstration of the theory that multi-task RL datasets diminish performance on each individual task due to conflicting gradients, and show that larger models do not necessarily fix the problem. A common solution is to split parameters between tasks. This is usually done with separate output heads, but the authors take a different approach and use a Mixture of Experts model to split learning. This **M3DT** architecture is trained in a three-stage process (base model, experts, then router). Experiments find M3DT improves multi-task performance relative to several other DT methods and ablations.

## Update After Rebuttal

The authors' explanation of some missing details and additional results breakdowns do partially address my concerns. I think it is a mistake to set up an empirical multi-task experiment like this and make half the task set solvable by meta-RL methods that do not need to address multi-task RL challenges. Results emphasize an under-discussed method for task subset selection and score normalization. **The main conclusion from the results is that splitting the architecture of a multi-task policy into independent components improves performance, which is a well-established pattern. Mixture of Experts is a different and simple way to go about this (Fig 7), but the simplicity is somewhat undercut by a complex three-stage training process. New figures in the rebuttal confirm MoE performs worse than the joint single-task baseline.** I understand and appreciate the desire to avoid task IDs, but disagree that the original task set makes this an unachievable oracle.
The lack of task IDs would be a more valid limitation if randomized variants of Meta-World + DMC tasks were also being used, or if the domains were more difficult to distinguish from a single observation. The rebuttal task transfer results show limited zero-shot benefit from implicit task identification in MW+DMC, which is expected, but further support the idea that these tasks are not similar enough to demand ground-truth task IDs. I am very torn between a score of 2 and 3 but will maintain my original score of 2 partially because my concerns are quite different from other reviews. Claims And Evidence: The main claims of the paper are supported by many experiments. However, I have several questions and concerns about the conclusions from the experiments, discussed below. Methods And Evaluation Criteria: To aggregate results across domains with different reward functions, the authors report every result as a normalized score. However, they are missing breakdowns of the normalized score by domain. To reach 160 tasks, the authors use tasks from three domains. 50 tasks are from Meta-World, and 30 are from the DM Control Suite (DMC). These choices make sense because previous work has demonstrated the key gradient conflict issue here (especially in Meta-World). In the main text, the authors state the third domain is “Mujoco Locomotion,” and I think a reasonable guess from that description would be the standard gym tasks that are similar to the DMC. Appendix A.1.3 clarifies: **80 of the 160 total tasks used are minor variations of two locomotion problems**. **I am very skeptical that “40 tasks” from Cheetah-Vel and “40 tasks” from Ant-Dir should count as 80 tasks in the context of this paper**. A meta-learning paper (including the Prompt DT paper the dataset is taken from) would call this “40 tasks”, but this is a formality of the meta-RL problem statement. 
Meta-RL is interested in identifying an unknown MDP at test-time, while many of the baselines here are provided with the task ID upfront. The bar for a unique "task" is much lower in meta-RL. In any case, these are very saturated meta-RL benchmarks: **it is easy to solve all 40 of these tasks online with a small model and without task IDs. Gradient conflicts are a non-issue**. When the standard of a new task is that it creates a distinct optimization objective, as in this paper, I think we should treat these "80 tasks" as two tasks. This makes the 160 task count a bit misleading. **My main concern is that the normalized score is actually 50% (80 tasks / 160) reliant on the performance in these two problems**. The writing does not make this totally clear, but reporting the score for 120 and 160 tasks as separate metrics suggests this is the case. If so, it significantly changes many of the takeaways upon a re-read. For example: > **Conclusion**: *“In MTRL, increasing the model size of DT swiftly hits the performance ceiling”*. **Reinterpreted**: yes, maybe because 50% of the eval metric relies on 2 tasks where Prompt-DT has nearly perfect performance at its original model size. This could explain why the margins are thin across the paper. > **Conclusion**: *“In the clear trend where the performance degrades with increasing task numbers, the decline is pronounced when task number is relatively low (below 40 tasks), while it becomes much more gradual once the tasks reach a sufficiently large number (above 80 tasks).”* **Reinterpreted**: the diminishing rate of decrease is caused by repeatedly sampling the same task. This is complicated by Appendix A.3, where the authors clarify that task sampling is not uniform. Instead, the subsets are designed such that the average performance of single-task DTs is consistent with the average performance of single-task DT on all 160 tasks. 
If that metric is 50% Cheetah/Ant, and PromptDT performance is nearly 100% successful on those tasks, perhaps the samples are biased by Cheetah/Ant, but too few details are given to analyze this. The subsets used to measure performance vs. task count might be skewed in some way. > **Conclusion**: Random task groupings work almost as well as the more sophisticated gradient-similarity task groupings (Tables 1 & 2). **Reinterpreted**: randomly spreading the Cheetah/Ant tasks among the experts is fine because the conflicts between them are not significant enough to stop Transformers of this size from learning with standard online RL, much less supervised DT. > **Conclusion**: *“M3DT… shows diminishing performance gains after reaching 40 experts on 160 tasks.”* **Reinterpreted**: By the time we reach 40 independently trained models on 82 effective tasks, gradient conflicts are no longer an issue. We are approaching the baseline of learning a switch function between single-task expert models. Ideally, MTRL might be better than that method by positive transfer, but offline methods often treat it as an oracle. From that perspective, 48 experts work the best (Fig 5) because it is closest to 1 expert per task. The authors set out to distill a multi-task dataset into a single policy, but the solution is to set up a MoE architecture and scale it up 40x until it resembles training a separate policy for every task. **It would be very useful to see the results with Ant-Dir and Cheetah-Vel removed.** My apologies for the long discussion, but this is a rare case where I felt an appendix detail may substantially change my opinion on a paper's results, and that not all reviews may comment on this. Theoretical Claims: N/A Experimental Designs Or Analyses: Content that might normally go here was discussed in a section above. Supplementary Material: I read the Appendix. 
Relation To Broader Scientific Literature: This paper relates to important topics in multi-task optimization in RL and the use of large-scale sequence models for policy learning. Essential References Not Discussed: Nothing critical. Other Strengths And Weaknesses: The first few figures in this paper clearly demonstrate the core MTRL optimization challenge and would support many existing works in this area. The method is a good example of applying ideas from large-scale LLMs to RL, an emerging trend and an important area for study. This line of work involves training single-task agents from scratch on every task and then distilling their behavior into a single policy. The main benefit of this might be generalization to unseen tasks or at least a significant reduction in parameters to handle the training set (vs. switching between the single-task experts). The paper does not discuss generalization or fine-tuning to new tasks, and the architecture almost directly scales with task count. Figure 7 is a confusing result. The MoE experts outperform the overall model (including the router), which should be bad (why use the router then?), but the writing acts like this is falling short of an oracle upper bound. The standard upper bound is usually the aggregate results of (smaller) single-task specialist agents (e.g., Multi-Game DT, Gato). Because the experts are mostly redundant architectures to a base model that trains on its own, it is not 100% clear to me whether Figure 7 is reporting something different than this. Regardless, **Appendix A.3 makes it clear that, at some point, the authors trained a single-task Prompt DT model on all 160 tasks and used its score for balancing the dataset. Can we compare the performance of M3DT against those scores?** Other Comments Or Suggestions: The paper would benefit from evaluating on held-out tasks, which should be a key advantage of using Prompt DT as the base model rather than one-hot identifying every task. 
Minor: The Cheetah-Vel reference is [MAML](https://arxiv.org/abs/1703.03400), and the Ant-Dir reference is [ProMP](https://arxiv.org/abs/1810.06784). Questions For Authors: Please clarify whether the 80 Cheetah-Vel/Ant-Dir tasks receive equal weight in the normalized score as the 80 distinct Meta-World+DMC tasks. How would the main results change if Cheetah-Vel and Ant-Dir were completely ignored? Could you clarify Figure 7 and the aggregate score of the 160 single-task Prompt DTs? See the last paragraph of “Other Strengths and Weaknesses” for context. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough reading of our manuscript and your valuable comments regarding our task setup and method. These comments have helped us realize that we inadvertently omitted several critical details in our original manuscript. We believe that supplementing and clarifying this information will significantly enhance the quality of our paper, and we hope it addresses your concerns. **Regarding the use of Ant-dir and Cheetah-vel as 80 tasks**, our intention is not to propose a new benchmark, but to ensure sufficient task quantity to investigate how task scale affects MTRL scalability. To achieve methodological rigor in task selection, we follow three key criteria: (1) Performance Balance: based on single-task scores, the selected task subsets maintain average scores comparable to the full 160-task set, eliminating the influence of task difficulty. (2) Domain Ratio Preservation: the subset composition at different task scales approximately maintains the original domain ratio (MW:DMC:Ant:Cheetah=5:3:4:4), mitigating domain-specific biases. (3) Subset Diversity: we use distinct task subsets for each run with the same task scale to maximize task coverage, eliminating correlations between specific tasks. These criteria eliminate potential bias caused by the relative ease of the Ant and Cheetah tasks. We have separately reported the scores for MW+DMC and Ant+Cheetah tasks, which are consistent with the average scores across 160 tasks in the manuscript, validating the rationality of the task selection and the reliability of our findings. Please let us know if the link does not work: https://anonymous.4open.science/r/ICML_rebuttal_11746/revised_fig2.png >Q1. Calculation for Average Normalized Score A1. We assign equal weight to each of the 160 tasks and compute the average normalized scores. Ant and Cheetah collectively account for 50% of the average score. >Q2. "In MTRL, increasing ..." maybe because PromptDT has nearly perfect performance at original size. 
"In the clear ..." is caused by repeatedly sampling the same task. A2. In the separated results, both task scalability and parameter scalability exhibit the same trends as demonstrated in the original paper. For a detailed analysis, please see https://anonymous.4open.science/r/ICML_rebuttal_11746/revised_fig2.png. >Q3. "Random ..." maybe because the conflicts between them are not significant enough. A3. We conduct experiments by separately training PromptDT-1.47M on 80 MW+DMC and 80 Ant+Cheetah tasks to show gradient conflicts and normalized scores; see https://anonymous.4open.science/r/ICML_rebuttal_11746/Q4.png. The results show that training on either standalone domain exhibits significantly weaker gradient conflicts. >Q4. "M3DT..." M3DT appears to learn several single-task experts and a switch function. A4. M3DT fundamentally differs from training single-task policies and a switch function: (1) our router dynamically assigns weights to experts at the token level, rather than switching to the single best expert at the task level (dashed line in Fig.7), and Fig.7 demonstrates our router outperforms oracle switching when the expert count is small. (2) Reducing tasks per expert doesn't guarantee better performance, as evidenced by the drop when scaling from 48 to 56 experts (175M to 200M in the Fig.7 dashed line), which highlights our contribution of identifying the optimal task load for a given parameter budget and striking a trade-off between expert performance and gating difficulty. >Q5. How would the main results change if Cheetah-Vel and Ant-Dir were completely ignored? A5. M3DT shows superior performance on the more complex tasks (MW+DMC). See https://anonymous.4open.science/r/ICML_rebuttal_11746/domain_main_result.png >Q6. Evaluating on held-out tasks, a key advantage of Prompt DT rather than one-hot identifying task. A6. 
We first emphasize that both M3DT and PromptDT operate without task IDs and only use the trajectory prompt during held-in task evaluation (see Rebuttal A8), whereas MTDT and HarmoDT require task IDs to function. For held-out evaluation, see https://anonymous.4open.science/r/ICML_rebuttal_11746/generalization.png >Q7. At least a significant reduction in parameters(vs. switching single-task experts) A7. We think the core challenge in MTRL is that simply increasing parameters fails to address large-scale task sets. **Only after achieving competent performance at large-scale tasks does parameter reduction become meaningful**. But parameter efficiency also remains a valuable future work, particularly through refined architectural designs for experts and router. While single-task agents require 235.2M parameters in total with an average score of 84.35, our method achieves a 26% reduction in parameters. >Q8. Figure 7 is a confusing result. Why use the router? Compare the average single-task performance? A8. Please see Reviewer aD2s Q4A4. We add the averaged single-task performance in the revised Fig.7 at https://anonymous.4open.science/r/ICML_rebuttal_11746/revised_fig7.jpg
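The distinction drawn in A4, token-level soft routing versus task-level hard switching, can be illustrated with a toy sketch (our own illustration with made-up shapes, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model = 4, 8
token = rng.standard_normal(d_model)
expert_outputs = rng.standard_normal((n_experts, d_model))

# Token-level soft routing: every expert contributes to this token,
# weighted by a softmax over the router's logits.
router_w = rng.standard_normal((d_model, n_experts))
logits = token @ router_w
weights = np.exp(logits) / np.exp(logits).sum()
soft_out = weights @ expert_outputs  # convex combination of expert outputs

# Task-level hard switching (the oracle-style baseline): one expert is
# selected once and used exclusively for the whole task.
hard_out = expert_outputs[int(np.argmax(logits))]
```

The soft router can blend experts differently at every token, which is why it is not equivalent to learning a switch function over single-task experts.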
Diverse Prototypical Ensembles Improve Robustness to Subpopulation Shift
Accept (poster)
Summary: The paper introduces **Diversified Prototypical Ensemble (DPE)** to enhance robustness against subpopulation shifts. The method trains multiple diverse prototypes per class on top of a frozen feature extractor and enforces feature diversity through an **inter-prototype similarity (IPS) loss**. By restructuring the feature space, DPE ensures that minority subgroups receive dedicated representations, improving worst-group accuracy (WGA). Experiments across datasets like **Waterbirds and CelebA** show that DPE **outperforms prior reweighting-based approaches** in handling subpopulation imbalances. **Ablation studies confirm that prototype diversification and ensemble strategies drive these gains.** While effective, the authors highlight limitations in **prototype selection strategies and computational efficiency**, which require further optimization across diverse datasets. Claims And Evidence: 1. DPE improves WGA compared to re-weighted approaches. Tables 1 and 2 support this claim. 2. Figure 5 demonstrates that the IPS loss encourages diverse feature representations. Methods And Evaluation Criteria: 1. While DPE improves WGA, interpretability analysis is lacking. If t-SNE or feature visualization plots were shared, it would add to the understanding. 2. DPE builds on $ERM^*$ and not on ERM. Advanced data augmentations affect learned features in an ERM setting, as shown in works such as CutMix. DPE’s effectiveness on standard ERM should be tested to separate augmentation effects from prototype diversification. 3. How are the N prototypes determined? An ablation on N across datasets would clarify its optimal selection. 4. Only Living-17 is used as one of the BREEDS benchmarks. Other datasets such as Non-living-26 and Entity-30 should be addressed as well. These are subpopulation shift vision benchmarks. One can use datasets such as CelebA as a benchmark, but evaluation on either Non-living-26 or Entity-30 is required. 
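For concreteness, here is my own minimal sketch of a prototype classifier with an IPS-style diversity penalty (function names and the exact loss form are my assumptions; the paper's formulation may differ):

```python
import numpy as np

def prototype_logits(features, prototypes):
    """Score each sample against every prototype; the class score is the max
    over that class's prototypes. features: (B, D); prototypes: (K, N, D)."""
    sims = np.einsum("bd,knd->bkn", features, prototypes)  # (B, K, N)
    return sims.max(axis=2)  # (B, K)

def ips_penalty(prototypes):
    """Inter-prototype similarity: mean pairwise cosine similarity between
    prototypes of the same class. Minimizing it pushes prototypes apart,
    so each can specialize on a different subpopulation."""
    K, N, _ = prototypes.shape
    unit = prototypes / np.linalg.norm(prototypes, axis=2, keepdims=True)
    total, pairs = 0.0, 0
    for k in range(K):
        for i in range(N):
            for j in range(i + 1, N):
                total += float(unit[k, i] @ unit[k, j])
                pairs += 1
    return total / pairs
```

The training loss would then be classification loss plus a weighted `ips_penalty` term, applied on top of the frozen feature extractor.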
Theoretical Claims: No theoretical claims are made. Experimental Designs Or Analyses: The experimental section overall confirms DPE improves WGA against previous approaches. 1. For gaps in the analysis, please refer to the experiments related to Non-living-26 or Entity-30 and feature visualizations via t-SNE. 2. A major issue is the effect of $ERM^*$ as the feature extractor and not ERM. Experimental validation is required here. 3. Please provide standard accuracy as well. Without that, it is difficult to understand the drop for WGA. These experiments are missing and will help in understanding the process. Supplementary Material: No, did not review supplementary material. Relation To Broader Scientific Literature: Very relevant for the domain shift literature, assuming subpopulation shift falls as a sub-problem under domain shift. Essential References Not Discussed: Since prior work (e.g., hierarchical learning methods) has addressed subpopulation shift by structuring feature space, this paper should acknowledge that conceptual and ideally experimental overlap. While hierarchical classification relies on predefined label hierarchies, prototypical classifiers structure feature space through learned prototypes. Although implementation details differ, both approaches share the goal of improving generalization under subpopulation shifts, making the connection relevant. This becomes a novelty issue for this approach. 1. "Encoding Hierarchical Information in Neural Networks Helps in Subpopulation Shift", IEEE Transactions on Artificial Intelligence, 2023/ Fine Grained Visual Categorization (FGVC9) workshop, CVPR 2022. Other Strengths And Weaknesses: 1. Figure 2 is very well illustrated and conveys the goal of the paper. 2. Limitations are clearly explained. Other Comments Or Suggestions: 1. Why are ERM and ERM* on LIVING-17 different from the original BREEDS paper? 2. Please explain the numbers in Figure 3. In Table 2, the gap between ERM* and DPE is around 10% for LIVING-17. 
Why is it showing 18+ in Figure 3? 3. For Figure 5, what is the number of prototypes? Questions For Authors: Please address the gaps in experiments provided under Experimental Designs and Analyses. 1. Results on Non-living-26 2. Results of ERM plus DPE on any 2 datasets This should help us understand the efficacy of the approach. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive feedback on our submission. We appreciate the time you have dedicated to evaluating our work, and we are pleased that you recognize the strength of our method in improving worst-group accuracy (WGA) under subpopulation shifts. Your concerns are - (1) Unclear contribution of DPE versus augmentations in ERM* - (2) Need for ablations on the number of prototypes - (3) Missing evaluation on other BREEDS datasets - (4) Absence of standard accuracy reporting - (5) A missing reference to prior work involving hierarchical learning - (6) Lack of interpretability analysis (e.g., t-SNE). We address these points below with new experiments, clarifications, and discussion. Full tables are provided at https://github.com/anonymous102030411/anon. ### 1. ERM vs. ERM∗ (Backbone Confounding Issue): To directly evaluate whether DPE’s performance gains stem from its architecture or stronger backbones, we have retrained the feature extractors for all datasets using the original ERM configurations from Yang et al. (2023)—without additional augmentations or extended training. The results using DPE on these standard ERM features are now included as ERM + DPE in the revised Table 2 and Table 3 (see [Tables 2-3](https://shorturl.at/Dy4Ab)). Across all datasets, DPE still achieves substantial gains over baselines under the same backbone. For example, on Waterbirds, ERM yields 69.1±4.7 WGA, while ERM + DPE achieves 91.0±0.5. Similarly, on CivilComments, DPE improves WGA from 63.2±1.2 (ERM) to 71.5±0.6. **These findings confirm that the improvement comes from DPE itself, not just stronger pretraining**. The updated results are detailed in our response to reviewer [VrPG](https://openreview.net/forum?id=qUTiOeM57J&noteId=JePMn0uimT). ### 2. Prototype Count ($N$) Ablation Following the reviewer’s suggestion, we have expanded our ablation on the number of prototypes N, which will be included in the appendix. 
For a more detailed discussion of our analysis, we direct the reader to the response to reviewer ___rN4y___. ### 3. Additional BREEDS Evaluation – Non-living 26 and Entity-30 We thank the reviewer for this suggestion. BREEDS benchmarks such as Living-17, Non-living-26, and Entity-30 are important for studying subpopulation shifts as they construct domain splits based on fine-grained taxonomies, inducing subgroup variation. To address the comment, we conducted new experiments on Non-living-26 and Entity-30 using ERM and ERM + DPE, with and without ImageNet pretraining. Results show that DPE consistently improves worst-group accuracy (e.g., +3%–10%) across both source and target domains (see [Tables 4-5](https://shorturl.at/yuYnY)), These results validate DPE's robustness and generalization under complex subpopulation shift settings and will be included in the final version of the paper. ### 4. Reporting Standard Accuracy We thank the reviewer for highlighting the importance of reporting standard (average) accuracy to better contextualize the gains in worst-group accuracy (WGA). In response, we have included average accuracy results across all benchmarks in the appendix of the revised version. The average accuracy table corresponding to Table 2 is shown in [Table 6](https://shorturl.at/4CFih). The results demonstrate that both ERM + DPE and ERM* + DPE maintains comparable or improved average accuracy relative to ERM and ERM* across all datasets. These results confirm that the improvements in worst-group accuracy offered by DPE do not come at the cost of overall accuracy. On the contrary, DPE often improves both metrics. We include these results and observations in the revised manuscript. ### 5. Hierarchical Learning Literature We thank the reviewer for highlighting the connection to hierarchical representation methods. While DPE does not rely on explicit taxonomies, we agree it shares the goal of feature space structuring. 
We now cite and discuss "Encoding Hierarchical Information in Neural Networks Helps in Subpopulation Shift" in the Related Work section. We clarify that unlike those methods, DPE infers structure from data without pre-defined hierarchies, enabling application to a broader set of tasks lacking such annotations. ### 6. Feature Interpretability via t-SNE and Prototype Visualization To clarify the intuition behind the prototype ensemble’s ability to capture semantically meaningful and generalizable subpopulation features, we note that Figures 1.4 and 1.5 are real t-SNE visualizations from our Waterbirds experiments. They highlight prototypes aligned with semantic concepts (e.g., “small yellow bird”) rather than spurious ones (e.g., “land background”). We will include full-size t-SNE plots and additional examples from other datasets in the appendix. --- **Thank you for taking the time to review our work. If our answers resolve your concerns, we’d appreciate your consideration in raising the score. We're happy to clarify any remaining questions.** --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author rebuttal to my review. I acknowledge the response to my comments. The experiments are detailed. 1. I still do not see performance improvements for BREEDS benchmarks. For any subpopulation shift experiment, ImageNet pre-training based experiments are technically incorrect. The networks are already trained on features on which the shift happens. So I believe any ImageNet pre-trained experimental results should not be considered as valid results. 2. The positive result is that DPE does have benefits along with ERM as well and not only ERM*. I am increasing my score. --- Reply to Comment 1.1.1: Comment: We appreciate your follow-up and the score increase. Your feedback helped address the gap in our evaluation. We agree that ImageNet pre-trained models should not be used for the BREEDS benchmarks, and we will exclude the corresponding results in the revision. 
That said, our method consistently improves worst-group accuracy over the ERM baseline across all BREEDS datasets without ImageNet pre-training. We'll discuss the challenge of this benchmark in the final version.
Summary: This paper studies the subpopulation shifting problem. To alleviate the issue, motivated by the idea of ensemble learning, the author proposes using a mixture of diversified prototypical classifiers over the feature prototypes of the subpopulations to classify different subpopulations correctly. Extensive experiments have been conducted on standard datasets. The author provides a comprehensive comparison of the proposed method with state-of-the-art methods and justifies its effectiveness. Claims And Evidence: The paper's central claim is that explicitly encouraging diversity of the prototype-based predictors for each class could encourage subsequent ensemble members to capture the different decision boundaries for each subgroup, leading to better performance on classification under subpopulation shifting. The claim is reasonable, and the author provides extensive empirical studies to justify its effectiveness. Figure 2 provides a concise and clear demonstration of the motivation of the claim and the proposed method. The ablation study and error analysis further demonstrate that the proposed methods improve the classification performance as expected. Methods And Evaluation Criteria: Strengths: Overall, the paper is well-written and well-motivated. The proposed prototype-based method is relevant and sound, and the idea of ensembling diversified prototype-based predictors to alleviate the subpopulation shifting issue originates from the ensemble learning perspective but with further consideration of the subpopulation shifting problem. Extensive experiments have been conducted to verify the effectiveness of the proposed methods from different perspectives. Weaknesses: 1. In line 220, the author chooses `N` prototypes per class for `K` classes. However, the author does not elaborate on how `N` is chosen, and the impact of the choice of `N` on training performance is unclear. 
The author may want to provide further ablation studies to demonstrate it, as the choice of `N` should be highly relevant to whether we can exactly achieve diversified predictors in practice, e.g., when `N` is not large enough, it is hard to achieve diversified predictors as there are not sufficient prototypes to represent the feature space of the subgroups. 2. The cost of the proposed method has not been discussed. As the proposed method is prototype-based, the author must save the prototypes and predictors to perform inference at test time. Given that the compared methods are not prototype-based, the author needs to compare the computational and storage costs of the proposed method against the compared methods, so the reader can weigh the trade-off between performance and storage/computation cost across different kinds of methods and judge the practicality of the proposed method. 3. The missing relevant literature in subpopulation shifting. Please refer to the Essential References Not Discussed for details. Theoretical Claims: This is not a theory paper, and there is no theoretical analysis. Experimental Designs Or Analyses: The experimental designs and analysis are sound. Supplementary Material: I have reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: The submission is specific to addressing the subpopulation shifting problem and may not significantly impact broader scientific literature. Essential References Not Discussed: Paper [1] has not been cited/discussed in the paper. [1] tackles the subpopulation shifting problem in an incremental learning manner. In [1], the author proposes a prototype-based incremental learning method based on the generalized boosting theory [2,3] to learn new classifiers for novel subpopulations incrementally and combine the old and new classifiers sequentially. 
The insight of [1] is to calibrate the decision boundary over old and new subpopulations to correctly classify different subpopulations. Such a fine-grained modification of the decision boundary is achieved by optimizing the margin-enforce loss [2,3], which, in theory, is equivalent to performing ensemble learning via gradient boosting that minimizes the residual error of the previously learned classifier. The reviewer recognizes that the present submission is not a continual or incremental learning paper. However, considering that both [1] and the present submission have the same research goal of addressing subpopulation shifting and a similar insight on alleviating the subpopulation shift issue by ensemble learning and prototype-based methods, the reviewer believes the author must provide sufficient discussion of the current submission and [1] to properly acknowledge the existing literature and inspire the broader research community. [1] Balancing between forgetting and acquisition in incremental subpopulation learning. ECCV 2022 [2] Multiclass boosting: margins, codewords, losses, and algorithms. JMLR 2019 [3] Boosting: Foundations and Algorithms. Kybernetes 2013 Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive and constructive review. We appreciate your recognition of our well-motivated method, the novel application of prototype-based ensemble for subpopulation shift, and the thorough experimental validation. Your main concerns include - (1) lack of ablation on the number of prototypes ($N$) - (2) missing analysis of computational and storage cost - (3) absence of discussion on related work in incremental subpopulation learning and boosting-based prototype methods. We address each of these points below with new results, expanded discussion, and clarifications. Full tables are provided at https://github.com/anonymous102030411/anon. ### 1. Choosing the Number of Prototypes We thank the reviewer for pointing out the importance of prototype count in achieving effective diversification. This is indeed a key design parameter in our method, and we have now conducted an extended ablation study to better quantify its effect on model performance. For a discussion of this study, we direct the reader to our response to reviewer ___[rN4y](https://shorturl.at/LBOK6)___. ### 2. Computation Cost and Complexity We agree that evaluating memory and computational requirements is essential. The additional cost of DPE relative to baselines is minimal, as our method only introduces $N$ prototypes per class ($D$-dimensional). In our experiments, with $K=10$ to $K=100$, $D=1024,$ and $N=10$ to $20$, this overhead is negligible compared to the backbone encoder. Inference speed is largely unaffected, as matching features against prototypes adds minimal cost relative to the encoder’s forward pass. While prototype storage scales with $K$, our experiments confirm that memory and compute demands remain manageable in practical settings. 
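The overhead estimate is easy to verify with the quoted ranges (a minimal sketch; the helper name is ours):

```python
def prototype_params(K, N, D):
    """Parameters added by storing N D-dimensional prototypes for each of K classes."""
    return K * N * D

# Worst case quoted above: K=100 classes, N=20 prototypes, D=1024
extra = prototype_params(100, 20, 1024)
print(extra)  # 2048000, i.e. about 2M parameters, small next to a typical backbone
```

Even in the largest configuration, the added prototypes amount to roughly 2M parameters, which supports the claim that the overhead is negligible relative to the backbone encoder.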
We refer the reviewer to our response to ___[rN4y](https://shorturl.at/LBOK6)___, where we provide detailed benchmarking results, including inference speed and GPU memory usage (see **[Table 1](https://shorturl.at/KekLM)**). We will update the manuscript to clarify that computation cost remains low and include the full benchmarking table in the appendix. ### 3. Addition of Relevant Citations We appreciate the reviewer’s suggestion and acknowledge that [1] addresses subpopulation shifts in an incremental learning setting. It employs an ERM-trained feature extractor and incrementally updates the classifier by combining previous and newly trained classifiers, with margin-enforce loss focusing on hard examples. The update balance is determined by measuring prototype distortion under the new classifier. Our work differs from [1] as follows: - Subpopulation characteristics: While [1] assumes distinct, predefined subpopulations, our method operates without prior knowledge of subpopulation structures and successfully generalizes to unknown shifts (as shown in Table 1, main paper). - Training setup: We focus on distribution shift robustness within a single training set, unlike [1], which assumes incremental subpopulation learning. - Methodology: Instead of using prototypes to balance learning vs. forgetting, we employ prototype-based classification with explicit diversification to enhance subgroup discovery. In response to the reviewer’s feedback, we will cite [1] and supporting references in the revised manuscript, highlighting how prototype-based methods and margin-enforce loss from boosting theory refine decision boundaries for subpopulation shift challenges, while clearly delineating our contributions. We hope this clarification strengthens the contextual positioning of our work. ### Relation to Broader Scientific Literature The reviewer highlights the relevance of our work to addressing the subpopulation shift problem. 
We would like to underscore the generality of the subpopulation shift phenomenon: as noted in [3], challenges such as class imbalance, attribute imbalance, and spurious correlations are all instances of the more general phenomenon of subpopulation shift. By targeting these core issues, our approach has broad applicability across domains like medical imaging, fairness in hiring, and large-scale recognition tasks, ultimately leading to more reliable, inclusive models. [3] Yang et al., 2023. Change is hard: a closer look at subpopulation shift --- **Thank you for taking the time to review our work. If our responses resolve your concerns, please kindly consider raising your score. Please feel free to reach out with any remaining questions.** --- Rebuttal Comment 1.1: Comment: Thank the author for the reply. I have read the author's rebuttal and other reviews and resolved my concerns. Overall, this is a good paper with clear motivation, and the methodology is sound and novel. Thus, I increase my score to Accept. Please incorporate the rebuttal and the discussion of the relevant literature in the final version. I look forward to future work in this direction. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the thoughtful feedback and for increasing the score. We’re glad to hear that the motivation and novelty of our work were well-received. We will incorporate the rebuttal clarifications and the discussion of relevant literature into the final version.
Summary: This paper tackles the problem of subpopulation shift in machine learning, where the proportions of different subgroups within a dataset change between training and testing. The authors propose a novel method called Diversified Prototypical Ensemble (DPE) to improve robustness to such shifts. DPE combines prototypical classifiers with an ensemble approach and explicit diversification strategies. The core idea is to train multiple prototype classifiers per class, encouraging them to learn different decision boundaries that capture various subpopulations within each class. Diversification is achieved through an inter-prototype similarity loss (IPS) and bootstrap aggregation. The main result is that DPE significantly improves worst-group accuracy (WGA) on real-world datasets, both with and without subgroup annotations during training. Claims And Evidence: The main claim is that DPE improves robustness to subpopulation shifts, as measured by WGA, compared to existing methods. The evidence provided is primarily empirical, based on experiments on several benchmark datasets. While the results show consistent improvements, the evidence is not entirely convincing due to concerns about the fairness of the comparisons (see below). The claims regarding the importance of diversification are supported by ablation studies. Methods And Evaluation Criteria: The proposed method (DPE) is a reasonable approach for the problem. Combining prototypical networks with ensemble learning and explicit diversification is a novel and potentially effective strategy. The evaluation criteria, primarily worst-group accuracy (WGA), is appropriate for assessing robustness to subpopulation shifts. The choice of benchmark datasets (from Yang et al., 2023) is also standard in this area. Theoretical Claims: The paper does not present any formal proofs or theoretical claims. 
Experimental Designs Or Analyses: The authors state that their initial ERM training stage ("ERM*") uses stronger data augmentation and longer training than the ERM implementation in Yang et al. (2023). They use published results for several baselines (ERM, CRT, ReWeightCRT, DFR from Yang et al., 2023; RWY and AFR from their original papers), but it seems not all of them use this stronger protocol. This introduces a confounder, making it unclear whether improvements are due to DPE itself or the stronger ERM training. Supplementary Material: I reviewed part of the supplementary material, including additional results (ablation study extensions), and dataset descriptions. Relation To Broader Scientific Literature: The paper builds upon several areas of related work, including subpopulation shift, prototypical networks, and ensemble learning. Essential References Not Discussed: The paper appears to cover the most essential references for the core problem and proposed method. Other Strengths And Weaknesses: 1. The method introduces several hyperparameters: the number of prototypes (N), the temperature ($\tau$), the IPS loss weight ($\alpha$), and the size of the subsets used for training. While the authors provide some details on how these were chosen, the sensitivity to these parameters (except N) isn't thoroughly explored. 2. The paper relies heavily on intuition and empirical results. There's no theoretical analysis explaining why the diversification strategies lead to improved worst-group accuracy, or if and why the prototypes align well with relevant subpopulations. A more formal understanding of the method's properties would be valuable. Other Comments Or Suggestions: None. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review. We appreciate your recognition of the novelty and relevance of DPE, its effective combination of prototypical networks and ensemble diversification, and its strong empirical performance on worst-group accuracy across standard benchmarks. Your main concerns include:
- (1) potential confounding due to stronger ERM* training compared to baseline implementations
- (2) limited hyperparameter sensitivity analysis, particularly for τ, α, and subset size
- (3) unclear prototype-subgroup alignment.

We address each of these points below with new results, analyses, and clarifications. Full tables are provided at https://github.com/anonymous102030411/anon.

### 1. Ensuring a Fair Comparison with Prior Art

We thank the reviewer for raising this important concern. To isolate the effect of our method (DPE) from stronger training protocols, we retrained all feature extractors using the exact setup from Yang et al. (2023)—matching architecture, augmentation, and training schedule. Applying DPE on top of these retrained backbones (see **[Table 2a](https://shorturl.at/bUCeK)**) confirms that DPE outperforms all reported baselines, showing the gains stem from DPE itself, not the ERM backbone. We also report results for ERM* + DPE, which includes stronger augmentation and longer training (see **[Table 2b](https://shorturl.at/3sAov)**). While these features boost performance, **DPE’s advantage remains consistent across both ERM and ERM\* setups**.

Regarding fairness in comparing ERM* + DPE to prior work, we note that several baselines use enhanced protocols:
- DFR includes data augmentation.
- RWG and RWY use extended training and thorough tuning.
- CRT’s code includes augmentation, though not mentioned in the paper.
- CnC, SSA, and GAP lack public code, so augmentation use is unclear.

Thus, **comparing ERM\* + DPE to baseline reports is fair**, as many already benefit from stronger pipelines.
In this context, DPE still achieves state-of-the-art worst-group accuracy across nearly all benchmarks. To support transparency, we will include: - A clear discussion of ERM vs. ERM* in the paper. - A full performance table with both ERM + DPE and ERM* + DPE results. ### 2. Hyperparameter Tuning and Sensitivity We thank the reviewer for pointing out the importance of hyperparameter sensitivity analysis. In our study, hyperparameters related to DPE—specifically the inverse temperature (**1/τ**), and the IPS loss weight (**α**)—are tuned using a held-out subset of the validation set, which is split into training and validation folds for tuning. Once optimal hyperparameters are selected, the prototypical ensemble is trained on the full validation set using these tuned values. Therefore, the subset size is not a hyperparameter in our pipeline. To address the reviewer’s concern, we conducted a sensitivity analysis on inverse temperature (**1/τ ∈ {10, 20, 30, 40}**) and IPS loss weight (**α ∈ {1e4, 5e4, 1e5, 5e5}**). Results across three datasets (Waterbirds, MetaShift, Living17) are presented in the attached tables (see **[Tables 7-10](https://shorturl.at/JZufO)**). The findings indicate that (1) DPE is robust to both τ and α on Waterbirds and MetaShift, with low standard deviations across tested values; (2) Living17 shows greater sensitivity, likely due to its more challenging subpopulation structure. Nonetheless, even in this case, worst-group accuracy varies within an acceptable range. **These results confirm that DPE’s performance is generally stable across a reasonable range of hyperparameter settings**. We’ll include this ablation study in the final version of the paper. ### 3. Prototype-subgroup alignment We appreciate the reviewer’s desire for deeper theoretical insight. To better understand whether learned prototypes capture meaningful subgroup structure, we conducted an exploratory qualitative analysis. 
Specifically, we tasked a third party (ChatGPT) with identifying common traits among the closest samples to each prototype—without providing group labels. This process revealed recurring semantic and ecological patterns (e.g., habitat type, pose, morphology) within prototype clusters, despite not being explicitly supervised to discover such groupings (see **[Figure 1 and 2](https://shorturl.at/o6tUI)**). These findings suggest that Diversified Prototypical Ensembles (DPE) may encourage meaningful prototype-subgroup alignment, potentially contributing to improved worst-group performance. We now clarify this insight in the revised draft. --- **Thank you again for your time and thoughtful review. If our responses addressed your concerns, we’d appreciate your consideration in raising the score. Please don’t hesitate to let us know if any points remain unclear—we’re happy to provide further clarification.** --- Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns. Therefore I am raising my score to 3. --- Reply to Comment 1.1.1: Comment: Thanks for re-evaluating the submission. We’re pleased the changes resolved your concerns, and we’ll incorporate them into the final version.
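As a concrete illustration of the classifier family discussed in this thread, below is a minimal NumPy sketch of a distance-based prototypical classifier with an inverse-temperature parameter and an inter-prototype-similarity (IPS) style diversification penalty. This is a hedged reconstruction, not the authors' code: the exact distance, aggregation, and IPS formulas in the paper may differ, and the function names (`prototype_logits`, `ips_penalty`) are ours.

```python
import numpy as np

def prototype_logits(z, prototypes, inv_tau=20.0):
    """Class scores from negative squared distance to the nearest
    prototype of each class, scaled by an inverse temperature 1/tau.
    z: (D,) feature; prototypes: (K, N, D) for K classes, N prototypes each."""
    d2 = ((prototypes - z) ** 2).sum(-1)     # (K, N) squared distances
    return -inv_tau * d2.min(axis=1)         # nearest prototype per class

def ips_penalty(prototypes):
    """Mean pairwise cosine similarity among each class's prototypes;
    adding alpha * ips_penalty to the loss pushes prototypes apart."""
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    sims = np.einsum('knd,kmd->knm', p, p)   # (K, N, N) cosine similarities
    K, N, _ = sims.shape
    off_diag = sims.sum() - np.trace(sims, axis1=1, axis2=2).sum()
    return off_diag / (K * N * (N - 1))      # average over off-diagonal pairs

rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 5, 8))          # 3 classes, 5 prototypes, D=8
z = protos[1, 2] + 0.01                      # feature near a class-1 prototype
print(prototype_logits(z, protos).argmax())  # -> 1
```

A training loop would combine a cross-entropy on `prototype_logits` with `alpha * ips_penalty` over the ensemble; the specific α and 1/τ values are the hyperparameters tuned in the rebuttal above.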
Summary: This paper introduces the Diversified Prototypical Ensemble (DPE) to improve the robustness of machine learning models to subpopulation shifts. It replaces the standard linear classification layer with an ensemble of distance-based prototypical classifiers. A two-stage training scheme is used: first, a feature extractor is trained with ERM; then, the ensemble is fine-tuned on a held-out validation set. Empirical evaluations across nine diverse real-world datasets demonstrate that DPE achieves better worst-group accuracy than state-of-the-art methods, both when subgroup annotations are available and when they are unavailable. Claims And Evidence: The paper claimed, "These classifiers are trained using ... to maximize prototype diversity, ensuring robust subpopulation capture within each class". Although improved worst-group accuracy is provided, the evidence linking each learned prototype to a semantically meaningful subpopulation is somewhat ambiguous. Methods And Evaluation Criteria: For evaluation, the paper focuses mainly on worst-group accuracy when comparing with other methods; average accuracy is not fully reported. Theoretical Claims: There are no theoretical claims or proofs. Experimental Designs Or Analyses: The ablation study on the number of prototypes is somewhat insufficient. Why limit it to fewer than 15? I am curious what would happen if the number were enlarged further. The authors claimed increased complexity as a weakness; however, they did not conduct experiments comparing running time, speed, computational resources, etc. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper relates to work including prototypical networks, ensembling, and subpopulation shifts. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: One strength is that the paper is well organized and easy to follow. 
Some weaknesses include: no theoretical justification, insufficient evidence explaining why worst-group accuracy improves, and missing ablation studies on important hyperparameters. Other Comments Or Suggestions: NA Questions For Authors: NA. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our well-organized presentation, the strong empirical performance of DPE on worst-group accuracy, and its relevance to prototypical networks and subpopulation shift. You raised key concerns regarding:
- (1) ablation on prototype count and hyperparameter sensitivity,
- (2) missing runtime/resource analysis,
- (3) incomplete reporting of average accuracy,
- (4) limited evidence linking prototypes to meaningful subpopulations, and
- (5) lack of theoretical justification.

We address each of these below with new results, analyses, and clarifications. Full tables are provided at https://github.com/anonymous102030411/anon.

### 1. Further Ablation on the Number of Prototypes

We thank the reviewer for raising this important question. To further investigate the effect of the number of prototypes per class, we extended our ablation study beyond 15 prototypes and now report results up to 40 prototypes per class. Let $\text{WGA}_N$ denote the worst-group accuracy (WGA) when using $N$ prototypes per class. We compute the percentage improvement over the single-prototype case as:

$\Delta_N = \frac{\text{WGA}_N - \text{WGA}_1}{\text{WGA}_1} \times 100\%$

We evaluated this on four representative datasets—Waterbirds, CelebA, Metashift, and CheXpert—under both settings: with and without subgroup annotations. The average percentage improvements are as follows:

$\Delta_5 = 2.4\%$, $\Delta_{10} = 3.3\%$, $\Delta_{15} = 3.7\%$, $\Delta_{25} = 3.7\%$, $\Delta_{40} = 3.7\%$

These results show that worst-group accuracy increases rapidly with the number of prototypes up to $N=15$, but plateaus beyond that point, indicating *diminishing returns*. Specifically, increasing from $N=15$ to $N=40$ provides no further gain in WGA. We interpret this as empirical evidence that a moderate number of diversified prototypes (e.g., 10–15 per class) is sufficient to capture the key subpopulation structures. 
Beyond this range, new prototypes tend to overlap in latent space with existing ones, limiting additional benefit. We also note that larger ensembles increase computational and memory costs without proportional gains. These observations justify our choice of using $N=15$ in the main experiments, balancing performance and efficiency. Regarding the performance sensitivity to other important hyperparameters in our study, we direct the reviewer to our response to Reviewer [VrPG](https://shorturl.at/t93so) with more extensive results and discussion. ### 2. Computational Complexity Analysis We clarify that “complexity” refers to implementation (e.g., added hyperparameters), not compute overhead. DPE adds minimal training cost, as prototypes act like lightweight linear layers. Compared to DFR, we train 15 simple classifiers instead of one. At inference, the overhead is negligible relative to the feature extractor. Table 1 (See **[Table 1](https://shorturl.at/KekLM)**) compares DPE and DFR in sample throughput and memory usage. Increasing from 15 to 100 prototypes only slightly raises time per batch (0.0031s→0.0045s) and keeps GPU memory below 1 GB—well within typical deep learning budgets. With fewer classes, the relative increase in complexity is even smaller. The complexity analysis will be included in the appendices of the revised version. ### 3. Reporting Average Accuracy We appreciate the suggestion to include average accuracy alongside worst-group accuracy. We revised the paper to include average accuracy in Table 6 (see **[Table 6](https://shorturl.at/KQEil)**), demonstrating that DPE remains competitive on average accuracy while prioritizing worst-group robustness. ### 4. Clarifying the Link Between Learned Prototypes and Subpopulations Our core contribution is demonstrating that prototype diversity reliably improves robustness across a wide range of subpopulation shift benchmarks. 
We provide the new qualitative visualization showing that learned prototypes naturally cluster semantically related samples and often align with meaningful subgroups, even without explicit supervision (see **[Figure 1 and 2](https://shorturl.at/o6tUI)**). This supports our central claim that diversity in prototype space enables better subgroup coverage. Expanded ablations further support the empirical foundation of our method. ### 5. Theoretical Justification The efficacy of DPE is supported primarily through extensive empirical evidence. We provide an intuitive rationale: by diversifying decision boundaries within each class, DPE encourages models to rely on more robust features rather than spurious correlations. We highlight this in the revised discussion and consider a formal theoretical study to be an important direction for future work. --- **Thank you for taking the time to review our work. If we have answered your questions, then we would appreciate you considering raising your score. If anything is still unclear, we are happy to clarify.** --- Rebuttal Comment 1.1: Comment: My concerns are mostly addressed. Therefore I am glad to increase the score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our response. We're glad the revisions addressed your concerns and appreciate the updated score. Your comments helped clarify key points, and the changes will be included in the revised manuscript.
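To make the metrics used throughout the thread above concrete, here is a minimal sketch of worst-group accuracy (WGA) and the percentage improvement $\Delta_N$ from the rebuttal's formula. This is our own illustrative code with invented toy arrays, not the authors' implementation.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Worst-group accuracy: the minimum per-group accuracy."""
    return min(
        np.mean(preds[groups == g] == labels[groups == g])
        for g in np.unique(groups)
    )

def pct_improvement(wga_n, wga_1):
    """Delta_N = (WGA_N - WGA_1) / WGA_1 * 100%."""
    return (wga_n - wga_1) / wga_1 * 100.0

# Toy example with two groups; group 1 is the worst-performing one.
preds  = np.array([1, 1, 0, 0, 1, 0])
labels = np.array([1, 1, 0, 0, 0, 1])
groups = np.array([0, 0, 0, 0, 1, 1])
print(worst_group_accuracy(preds, labels, groups))  # group 1 accuracy: 0.0
print(pct_improvement(0.75, 0.70))                  # ~7.14% improvement
```

The averaged $\Delta_5, \Delta_{10}, \ldots$ values reported in the rebuttal are simply this quantity computed per dataset from the corresponding WGA numbers and then averaged.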
Identifying key amino acid types that distinguish paralogous proteins using Shapley value based feature subset selection
Reject
Summary: When studying the evolution of natural proteins, it can be helpful to distinguish related families where the sequences are similar but perform different functions. These are assumed to have diverged during evolution. A useful part of this workflow is to identify key amino acids that distinguish the two families. This paper proposes using Shapley values to select these, applies the selection technique to a number of paralogous pairs, and then confirms the selection by cross-referencing with the literature and the outputs of other computational techniques. Claims And Evidence: The authors confirm that the sets of amino acids identified by their method for various paralogous pairs agree with the outputs of other analysis workflows. There are few claims about one particular method outperforming another. Methods And Evaluation Criteria: There is no novelty in ML methods. The authors apply a Shapley-value feature-selection technique off the shelf. The application of this technique to studying paralogous proteins appears to be novel. However, such content would be a better fit for a biology venue. Unlike most ML papers, the results section does not contrast the performance of different methods; instead, it investigates the outputs of the method for particular protein families. Theoretical Claims: Not applicable Experimental Designs Or Analyses: I would have liked the experimental design to compare against alternative baseline methods for selecting the amino acids. It is unclear to me that the Shapley approach is required. Supplementary Material: Did not read Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: I found the overall problem statement confusing. 
When explaining the distinction between two paralogous families, the output is a subset of the 20 possible amino acids, i.e., something like {Y, W, T, A, V, K, P}, irrespective of where these amino acids appear in the protein sequence. How is such a high-level statement helpful for understanding the distinction between the families? As stated above, I feel that this paper would be better suited to a biology-focused venue, where the outputs of the method, not the specific details of the method, are the primary object of study. For example, the method could be validated on sets of well-studied paralogous proteins and then applied to less-studied ones. Other Comments Or Suggestions: None Questions For Authors: None

## After Authors Response ##

Thanks for the details. Given the high bar for acceptance to ICML, and the focus on ML methods (even for papers targeting a particular application domain), I continue to feel that this line of work would need further development to appear at ICML. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal:

## 1. How is the AFS, irrespective of where the amino acids occur in the sequence, helpful for understanding the distinction between the families?

The AFS is a data-driven prediction of the amino acid types that may play a role in the functional difference between paralogous proteins. Having computed the AFS, we used various domain-based methods to validate its significance, such as (a) multiple sequence alignment, (b) 3D structure analysis, and/or (c) supporting evidence from the biology literature (see Section 3.1, Role of the amino acids identified in AFS). We believe this evidence suggests that **amino acids with high Shapley values are more likely to play a role in the functional difference between the paralogs and, hence, can be targeted for site-directed mutagenesis experiments.**

## 2. Novelty and suitability to ICML?

The paper aligns well with the primary area of our submission (Applications->Chemistry). It proposes a computationally cheap, easy-to-run and data-lean ML pipeline for a well-defined and scientifically relevant application in the study of proteins. We agree there is limited novelty in the overall methodology. However, one novel element is the use of an SVM to partition the frequency-based features class-wise, which is distinct from its primary and traditional usage of predicting class labels. This partitioning is a major element in our pipeline to identify the class-wise importance of the amino acids. Experimental biochemists can quickly try the proposed ML pipeline as an initial data-driven step before investing in detailed wet-lab experimentation.

## 3. Comparison with other baseline methods?

As an established ground-truth feature subset/ranking is not available for the task, it is challenging to quantitatively evaluate the method or compare performances with alternative methods. 
We also agree with Reviewer YKE8, who points out, "*There isn't a good way to tell if this method or an alternative … would do better or worse than the proposed method*". For this reason, we have mainly supported our method with evidence from experimental biology literature that highlights the role of the AFS amino acids in the function/structure of the respective protein. However, AFS computed using SVEA was compared with MCI (an axiomatic approach, ICML 2021), an alternative feature ranking method (Sec 3.3 with details in Appendix Sec E.4). For 8 of the 15 paralog pairs, the AFS is the set of top-ranked features by MCI. For all 15 datasets, at least the top-3 MCI amino acids are in AFS. For 11 of these datasets, at least the top-5 MCI amino acids are in AFS.

## 4. Unclear why Shapley values are required.

Shapley values have an axiomatic foundation and hence are a principled approach to apportion the training loss among features. To score each feature, the Shapley value accounts for its marginal contribution to all possible feature interactions in linearly separating the data by class labels. (details in Sec. 2.2)
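The class-wise partitioning role of the SVM described in point 2 of this rebuttal could, under our reading, be sketched as follows. This is a hypothetical reconstruction, not the authors' pipeline: we assume a linear classifier over amino-acid-composition features whose weight signs assign each selected amino acid to one paralog class, and both the toy weights and the name `partition_afs` are ours.

```python
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

def partition_afs(afs, weights):
    """Split a selected amino-acid feature subset (AFS) into two
    class-wise subsets using the sign of a linear classifier's weights:
    positive weight -> associated with class P, negative -> class Q.
    `weights` maps amino acid -> learned linear weight."""
    afs_p = {aa for aa in afs if weights[aa] > 0}
    afs_q = {aa for aa in afs if weights[aa] <= 0}
    return afs_p, afs_q

# Toy weights standing in for a trained linear SVM on AAC features.
weights = {aa: w for aa, w in zip(AMINO_ACIDS, np.linspace(-1, 1, 20))}
afs = {"A", "Y", "K", "D"}
afs_p, afs_q = partition_afs(afs, weights)
print(sorted(afs_p), sorted(afs_q))  # -> ['Y'] ['A', 'D', 'K']
```

The point of the sketch is only the unusual use of the classifier: the SVM's decision boundary is read off feature-by-feature to label each AFS member with a class, rather than to predict labels for sequences.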
Summary: The authors introduce a Shapley-value based approach to identify key amino acids distinguishing paralogous proteins (P and Q). By utilizing a Shapley-based SVM classifier, they define amino acid feature subsets (AFS) for each protein: AFS(P) and AFS(Q). They train an SVM to differentiate these subsets and validate the method against traditional approaches like MSA, 3D structural analysis, and existing literature. An interesting finding is the observed "exclusion principle": if AFS1(P) and AFS2(P) for protein pairs P-Q and P-R are compared, they are identical except for one amino acid. Moreover, amino acids present in both AFS1 and AFS2 are generally excluded or have low Shapley values in AFS3 when considering the Q-R pair. Claims And Evidence: Strengths - The writing is clear and accessible, even for readers unfamiliar with Shapley methods. - The method logically extends from previous works, such as Tripathi et al. (2021) [1], and experimental results support the approach. - The SVM model is straightforward and scalable, accounting for potential differences in training sequences between P and Q. - Computational efficiency is a significant advantage, with the Shapley approximation running faster than traditional methods. Weaknesses - The paper could have provided more detailed comparisons with alternative computational methods for distinguishing paralogous proteins. In its current form, it is hard to assess the strength of the proposed algorithm. - The novelty may be limited for a main-track ICML submission, as the approach basically combines an SVM with a class-weighted SVEA from [1]- both well-established methods. - Figures 1 and 3, which present AA cutoff data, would benefit from being bar graphs rather than line graphs. The current presentation makes the x-axis confusing and less intuitive. 
Methods And Evaluation Criteria: Since the paper does not propose a new algorithm or methodology, it would be best evaluated at a computational biology conference with additional biological verification. Theoretical Claims: None. Experimental Designs Or Analyses: The experiments are straightforward applications of SHAP and SVM to a domain problem. There is no concern with the experimental design. However, the method is not compared to other potential baseline algorithms. Supplementary Material: no Relation To Broader Scientific Literature: The impact on the ML literature is very limited. Essential References Not Discussed: None. Other Strengths And Weaknesses: Discussed earlier. Other Comments Or Suggestions: Discussed earlier. Questions For Authors: The paper set the Shapley value cutoff based on the efficiency axiom. Could this cutoff be tuned based on prior knowledge of amino acids? For example, if an amino acid is known to have x_i^AAC~0, could N be adjusted to exclude {i}? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## 1. Could Shapley value cutoff be tuned based on prior knowledge of amino acids? E.g., when $x_i^{AAC}=0$ for some amino acid $i$? The cutoff, in principle, can be user-defined as it is only used to select the top-ranked features. The efficiency axiom based cutoff, i.e $\sum_i \phi_i/n$ , selects the top-ranked features with above-average Shapley values. Amino acids with $x_i^{AAC}=0$ will have a Shapley value of 0 (dummy player axiom) and hence will be excluded in the AFS. Adjusting $n$ to exclude dummy players will result in a higher cut-off and a smaller AFS. However, we did not find any dummy players in our 15 datasets. Hence, the cutoff for AFS doesn’t change (and hence, AFS doesn’t change). ## 2. Novelty and scope of the method for ICML? (limited ML impact) The paper is well suited to the primary area of our submission (Applications->Chemistry) as it has a well-defined and scientifically relevant application in the study of proteins. We identify a novel problem - identifying the amino acid types that distinguish a pair of paralog proteins. We agree there is limited novelty in the overall methodology. However, a novelty is in using SVM to class-wise partition the frequency based features, which is distinct from its primary and traditional usage to predict the class-labels. This partitioning is a major element in our pipeline to identify the class-wise importance of the amino acids. The proposed ML pipeline is computationally cheap and easy to run; hence, experimental biochemists can quickly try it as an initial data-driven step before investing in detailed wet-lab experimentation. ## 3. Is an application of SHAP and SVM. Our ML pipeline uses the SVEA algo (Tripathi et al., 2020) and SVM, but doesn’t use SHAP. 
(As discussed on page 2, line 103) SHAP scores the features for a given test instance based on a trained model, while SVEA learns feature scores from the training data by apportioning the training loss between the features based on their marginal contributions. The SVM here is used for class-wise partitioning of the feature subset, AFS, that was computed using SVEA.

## 4. Comparison with other baseline methods?

We compare the AFS computed using SVEA with MCI (an axiomatic approach, ICML 2021), an alternative feature ranking method (Sec 3.3 with details in Appendix Sec E.4). For 8 of the 15 paralog pairs, the AFS is the set of top-ranked features by MCI. For all 15 datasets, at least the top-3 MCI amino acids are in AFS. For 11 of these datasets, at least the top-5 MCI amino acids are in AFS. However, an established ground-truth feature subset/ranking is not available for the task that could be used to evaluate the method or compare performances with alternative methods. We also agree with Reviewer YKE8, who pointed out, "*There isn't a good way to tell if this method or an alternative ... would do better or worse than the proposed method*". Mainly for this reason, we relied on evidence from experimental biology literature that highlights the role of the AFS amino acids in the function/structure of the respective protein.
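The efficiency-axiom cutoff discussed in point 1 of this thread can be sketched as follows: the AFS is the set of features whose Shapley value exceeds the average $\sum_i \phi_i / n$. The Shapley values below are made up for illustration (the helper name `afs_from_shapley` is ours, not from the paper), but the selection rule matches the rebuttal's description, including the dummy-player behavior.

```python
def afs_from_shapley(phi):
    """Select features with above-average Shapley value.
    phi: dict mapping amino acid -> Shapley value.
    By the efficiency axiom the values sum to the total apportioned
    training loss, so the cutoff is that total divided by n."""
    cutoff = sum(phi.values()) / len(phi)
    return {aa for aa, v in phi.items() if v > cutoff}

# Toy Shapley values; a dummy feature (value 0) is never selected.
phi = {"A": 0.30, "K": 0.25, "Y": 0.20, "T": 0.10, "G": 0.05, "W": 0.0}
print(sorted(afs_from_shapley(phi)))  # cutoff = 0.15 -> ['A', 'K', 'Y']
```

Note that excluding the dummy player "W" from the count would raise the cutoff to 0.18 and shrink the AFS, which is the adjustment the reviewer's question contemplates.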
Summary: This article deals with a biological problem: distinguishing paralogous proteins. It is addressed as a set of binary pattern classification problems. The method designed to solve them is a pattern extraction method. It consists in identifying the amino acids characterizing the paralogs. This extraction is the result of a two-step process: 1- a Shapley value based feature selection (SVEA) identifies a set of discriminant amino acids (referred to as AFS); 2- a binary SVM partitions this set into two subsets associated with the two categories considered. Experimental results are provided on a set of 15 paralogous protein pairs. Claims And Evidence: The authors claim that their characterization of the paralogs by means of AFSs should limit the number of experiments to be performed by the biologist, notably site-directed mutagenesis experiments. This is questionable since it is unclear whether their method could be sensitive to the substitution of one single residue. Methods And Evaluation Criteria: The experimental results are provided on a small set of proteins (15 paralog protein pairs). Their statistical significance should be detailed. Theoretical Claims: This paper, primarily methodological, makes no theoretical claims. Experimental Designs Or Analyses: The experiments performed, involving a cross-validation, are technically sound. Supplementary Material: The supplementary material is rich and very helpful to understand the details of the method and its evaluation. Relation To Broader Scientific Literature: The feature extraction literature provides alternate options that could have been considered, at least to obtain reference performances. Essential References Not Discussed: I did not identify any such reference. 
Other Strengths And Weaknesses: No other strengths or weaknesses Other Comments Or Suggestions: No comments Questions For Authors: Could you explain why you did not address the problem directly as a multi-category pattern classification problem, using, for instance, a multi-class SVM equipped with a string kernel? Could you make it clear that your method is enough to limit the number of site-directed mutagenesis experiments to be performed by the biologist? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## 1. Why not address the problem as multi-category pattern classification, e.g., using a multi-class SVM with a string kernel? Classification of proteins is not our task. Our learning task is identifying the amino acid types (feature subset) that play a role in the functional difference of a given paralog pair. Hence, the SVM in our pipeline is a binary classifier trained to classify the paralog pair. However, its utilisation here is to partition the AFS class-wise rather than predict the class label. The string kernel (Leslie et al., 2002) computes a k-mer based $20^k$-dimensional feature vector from the sequence. As alluded to in Sec 4 Conclusion (pg 8, col 2, line 428), these features are high-dimensional. The Monte Carlo based approximation algorithm for Shapley values would require exponentially more sampling (in the number of features) for good approximations. For k=2 itself, the feature dimension is 400, which is higher than the number of samples for many of our paralog pair datasets, resulting in a high-dimension-low-sample-size (HDLSS) setting. ## 2. Is the method enough to limit the number of site-directed mutagenesis experiments by biologists? A ground truth feature subset/ranking is not available for this task. Therefore, having computed the AFS, we used various domain-based methods post hoc to validate its significance, such as (a) multiple sequence alignment and/or (b) 3D structure analysis, and/or (c) supporting evidence from biology literature (Sec 3.1 Role of the amino acids identified in AFS). We believe this evidence suggests that **amino acids with high Shapley values are more likely to play a role in the functional difference between the paralogs. 
As the AFS amino acids are present at limited positions compared to the entire sequence, site-directed mutagenesis experiments can focus on these amino acids to check which positions may have functional significance.** However, since the AFS is a data-driven computation, its quality may be subject to selection bias in the data and/or the level of granularity in the functional difference of the paralog pair. ## 3. Feature extraction literature provides alternate options that could have been considered, at least to obtain reference performances. We used MCI (an axiomatic approach, ICML 2021) as an alternative feature ranking method and compared the top-ranked amino acids (Sec 3.3 with details in Appendix Sec E.4). For 8 of the 15 paralog pairs, the AFS is the set of top-ranked features by MCI. For all 15 datasets, at least the top-3 MCI amino acids are in AFS. For 11 of these datasets, at least the top-5 MCI amino acids are in AFS. However, an established ground truth feature subset/rankings is not available for the task that can be used for evaluating the method or comparing performances with alternative methods. We also agree with Reviewer YKE8, who pointed out, ``*There isn't a good way to tell if this method or an alternative … would do better or worse than the proposed method*''. Mainly for this reason, we relied on evidence from experimental biology literature that highlights the role of the AFS amino acids in the function/structure of the respective protein. ## 4. Results are provided on a small set of proteins (15 paralog protein pairs). Their statistical significance should be detailed. In the absence of ground truth of the set of key amino acids, we don’t have a score for the method's correctness; hence, one can’t compute statistical significance for our setting. Statistical significance is usually viewed as an indicator of the robustness of the proposed scheme. 
To check the method's robustness, a diverse dataset of paralogous proteins was curated from UniProt (a public database), considering the number of sequences and manually reviewed labels available (Sec 1. Intro, col 2, line 30). The selected datasets of 15 paralog pairs show a range of sequence and function diversity. Sequence diversity has been discussed using the longest common subsequence score. As discerned from biology literature, the functional differences also range widely, from subtle (e.g., trypsin/chymotrypsin) to drastic (e.g., lysozyme c/$\alpha$-lactalbumin). More details are in Appendix Sec B (pg 15). ## 5. Unclear whether the method could be sensitive to the substitution of one single residue; do such situations arise when one is comparing paralogs? The functional difference in two paralogous proteins is considered to arise due to evolutionary changes in the sequences after gene duplication. Hence, we do not focus on single residue substitutions but on the subset of amino-acid types that can play a role in the functional differences between the paralogs. Even so, if the difference between the sequences of two paralog families is only in one residue, then the pipeline should identify this, as only two amino acid types will have a change in their composition. However, we do not have such examples in our datasets.
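For readers unfamiliar with Shapley-value feature attribution, the selection step at the heart of the pipeline can be illustrated with a minimal sketch. This is a generic exact computation over a toy additive value function; the names and the toy game are illustrative assumptions, not the paper's SVEA implementation, which uses Monte Carlo approximation and real classifier scores:

```python
import itertools
import math

def exact_shapley(features, value):
    # Exact Shapley values by enumerating all coalitions; feasible only for
    # small feature sets, e.g., the 20 amino acid types.
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                coalition = set(subset)
                # Probability that exactly this coalition precedes f
                # in a uniformly random ordering of the features.
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[f] += weight * (value(coalition | {f}) - value(coalition))
    return phi

# Toy additive "discriminability" game: each amino acid type contributes a
# fixed amount, so its Shapley value must equal its own contribution.
weights = {"W": 2.0, "N": 1.0, "V": 0.5}
phi = exact_shapley(list(weights), lambda S: sum(weights[f] for f in S))
```

In a real pipeline the value function would instead be, for example, the cross-validated accuracy of a classifier restricted to the chosen amino-acid-composition features, and the exact enumeration would be replaced by permutation sampling.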
Summary: The manuscript proposes a method for identifying which amino acid types are discriminative between paralogous families' sequences. The selected set of amino acid types, 5-10 amino acids out of 20, is called the amino acid feature subset (AFS). Feature relevance is determined using Shapley values. The biological relevance of the selected subset of amino acids is explored using multiple sequence alignments between families and literature support for loss of activity or increase of activity when an amino acid belonging to the feature subset is mutated. The method operates on marginal counts of amino acids from the sequence, and as a result it is agnostic to the position of amino acids. The proposed method is efficient to run and can produce candidates for mutational scanning. Claims And Evidence: The chief claim is that the AFS, when examined in the context of paralogs, identifies functionally relevant residues. Support from the literature, semi-quantitative since we do not know the false positive rate, seems encouraging but incomplete. Methods And Evaluation Criteria: The manuscript focuses on sequence alignments to highlight the AFS and their relevance. There isn't a good way to tell if this method or an alternative -- for example, some variant of divergence between distributions at each site -- would do better or worse than the proposed method. Classification scores are provided but they are less relevant to the question of whether the selected amino acids are significant to function. Theoretical Claims: N/A Experimental Designs Or Analyses: Analysis focuses on two aspects: between-paralog classification and consistency of the AFS between themselves and the literature. The analyses that are presented read as sensible. Supplementary Material: I read the supplementary material to understand paralog selection, looked at the algorithm description, classification results and sought the 3D structural alignment (that is sadly missing, E7). 
Relation To Broader Scientific Literature: The work aims to locate functionally relevant amino acids by studying closely related sequences. A prominent area is finding rare/causal disease variants using large corpora of sequence data -- including paralogs. I cannot say with certainty whether a convolutional network trained in such a manner has also identified context-specific amino acid types where context may be sufficient to identify paralogous families. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This is a simple method, easy to run and implement. It has a nice intuition behind it – given an alignment of paralogs, positions are less important than amino acid identities. The key question is whether it is sufficiently attractive to the broader ICML audience. Relating this to disease-causing variants would have been quite a bit more powerful. Other Comments Or Suggestions: Figure E7 is missing in the supplementary. I expected a 3D structural alignment and highlighting of the AFS. HLA families have plenty of examples of paralogs which would be easy to explain and validate. It is odd to skip them, since they are so heavily studied and deeply typed. Would HLAs not have interesting AFS? It might be the shortest route to a disease, such as MS. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## 1. Figure E7? Figure E7 is present on page 22, Appendix Sec E.1. It shows the alignment of sequences after 3-D structure alignment. The structural superimposition is not shown in the figure. The AFS amino acids are in bold in the alignment, and the contact points of hemoglobin tetramer are highlighted in yellow. We regret that the lack of a 3-D structure superimposition image may have caused confusion. We have an updated figure with superimposed structures, link to figure: <https://anonymous.4open.science/r/AFS_AAC_SVM-F3D9/Figure-E7-updated.png> ## 2. AFS for HLAs? This is a promising suggestion. However, the number of reviewed (SwissProt) entries available in UniProt for each HLA family is very low (1 per family). There are 22 entries for HLA genes: 7 from class I (A, B, C, E, F, G, H) and 15 from class II (DMA, DMB, DOA, DOB, DPA1, DPB1, DQA1, DQA2, DQB1, DQB2, DRA, DRB1, DRB3, DRB4, DRB5). The Class I genes are considered paralogs to each other, as are the Class II genes. **Each of these genes has only one reviewed entry, while we have considered paralog pairs in our datasets with at least 15 reviewed sequences available per family.** While there are many allelic variants (with single residue mutations) available for each entry, they are nearly identical and, hence, are not statistically helpful in computing the AFS. ## 3. Relating AFS to diseases? This is a good suggestion; below, we report some examples that link AFS amino acids to diseases. This is based on evidence from biology literature that suggests the role of these amino acids in the function/structure of the respective families. Thus, modification/mutation of these residues may be disease-causing as it could disrupt some important physiological processes relating to the function of these proteins. **Example 1**: $W$ has the highest Shapley value in $AFS_2(\text{Secretin})$ and $AFS_3(\text{Secretin})$ (Table 1). 
Mutating certain conserved $W$ leads to a loss in cell-surface expression of calcitonin gene–related peptide receptor (CGRPR), a secretin-like GPCR (Cary et al., 2022). A study in mice reports that disruption in CGRPR signalling accelerates Amyotrophic lateral sclerosis (ALS). (source: <https://pmc.ncbi.nlm.nih.gov/articles/PMC11107523/>) **Example 2**: (*AFS of hemoglobin-$\beta$* - from myoglobin vs hemoglobin-$\beta$ and hemoglobin-$\alpha$ vs hemoglobin-$\beta$) (Table 1) **(a)** $W$ with a significantly high Shapley value $\phi(W)$ (Figure 3(b)), is present in $AFS_3(\text{Hemoglobin-}\beta)$. It is also in $AFS_2(\text{Hemoglobin-}\beta)$ (Table 1). $W$ to $S$ and $W$ to $R$ mutation at a certain position in hemoglobin-$\beta$ has been reported to result in abnormal hemoglobins - Hemoglobin Hirose and Hemoglobin Rothschild, respectively. These have altered oxygen affinities and dissociation of hemoglobin tetramer to dimers. (Hb Hirose Source: <https://pubmed.ncbi.nlm.nih.gov/22548/>) (Hb Rothschild Source: <https://www.sciencedirect.com/science/article/pii/002228368090090X>) **(b)** $N$ is common in $AFS_3(\text{Hemoglobin-}\beta)$ and $AFS_2(\text{Hemoglobin-}\beta)$. Mutation/deletion of $N$ has been associated with $\beta$-thalassemia (Source: <https://pmc.ncbi.nlm.nih.gov/articles/PMC3633182/>) **(c)** Mutations to $V$ with the highest Shapley value in $AFS_2(\text{Hemoglobin-}\beta)$ are related to hemoglobin variants with altered structure and biochemical properties leading to varying physiological effects. (Source: <https://pmc.ncbi.nlm.nih.gov/articles/PMC3579210/>). An example is Hemoglobin Olympia, having $V$ at a particular position mutated to $M$. (Source: <https://pmc.ncbi.nlm.nih.gov/articles/PMC302263/>)
All-Purpose Mean Estimation over R: Optimal Sub-Gaussianity with Outlier Robustness and Low Moments Performance
Accept (oral)
Summary: Mean estimation is the following simple task: given samples from a probability distribution D on R, estimate the mean mu of D. Although mean estimation has been studied since the dawn of time, various elementary and fundamental questions about mean estimation are still being tackled. A natural goal is to obtain optimal *nonasymptotic* errors. That is, for a given number of samples n and error rate delta, design an estimator hat(mu) which satisfies an inequality of the form Pr( |hat(mu) - mu| < error ) \geq 1-delta, for as small a function error(n,delta) as possible. If D is Gaussian, the empirical mean gives the best possible estimator, but since the real world isn't really Gaussian, we would like to obtain estimators with such guarantees, ideally with the function error(n,delta) matching the one we would get for Gaussian D, under much weaker assumptions on D. Lee and Valiant in 2022 constructed a mean estimator which obtains the optimal error(n,delta) under only the assumption that D has bounded variance. Their estimator even adapts to the variance of D. While the Lee-Valiant estimator is highly robust in the sense of tolerating a broad class of underlying distributions D, the question this paper addresses is whether it is robust in other senses, in particular: -- adversarial contamination/data poisoning -- what if a small fraction of the sample are chosen adversarially? -- nonexistent variance -- what if even the variance of D does not exist? The authors prove that the Lee-Valiant estimator satisfies strong robustness guarantees in both these senses. First, they show that it is resilient to an \( \eta \)-fraction of adversarially corrupted samples in the strong contamination model, achieving the optimal error bound \( O(\sigma \sqrt{\eta}) \). Second, they establish that when \( D \) has only a finite \( z \)-th moment for \( z \in (1,2) \), the estimator attains the minimax-optimal error rate, matching known lower bounds. 
Third, they show that it satisfies *neighborhood optimality*, meaning that it adapts to the structure of the underlying distribution and achieves the best possible error beyond worst-case guarantees. Finally, they prove that the estimator is asymptotically normal, converging efficiently to the true mean as the sample size grows. These results show that the Lee-Valiant estimator is not just optimal in the standard setting but also robust across a range of challenging conditions, making it a strong candidate for practical use over alternatives like median-of-means or trimmed means. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: no Experimental Designs Or Analyses: n/a Supplementary Material: no Relation To Broader Scientific Literature: Robustness in statistics is a long-running theme with thousands of publications. Prior works have developed estimators which achieve all of the guarantees of the Lee-Valiant estimator separately, but as far as I am aware no single estimator was known so far (until this work) which has the sub-Gaussian error guarantee (including the right constant), robustness to adversarial contamination, and minimax optimal rates under weaker moment assumptions. Essential References Not Discussed: n/a Other Strengths And Weaknesses: The paper is well written and elegantly argues for the usefulness of the Lee-Valiant estimator. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
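As a point of comparison for the alternatives mentioned above, the classical median-of-means estimator fits in a few lines. This is an illustrative stdlib-only sketch, not the Lee-Valiant estimator, and it does not carry the sharp-constant guarantee discussed in the review:

```python
import random
import statistics

def median_of_means(samples, num_blocks):
    # Split the samples into equal-size blocks, average each block,
    # and return the median of the block means. Any leftover samples
    # (len(samples) % num_blocks) are ignored for simplicity.
    block_size = len(samples) // num_blocks
    block_means = [
        statistics.fmean(samples[i * block_size:(i + 1) * block_size])
        for i in range(num_blocks)
    ]
    return statistics.median(block_means)

# Gaussian bulk plus a few wild outliers: with 30 blocks, at most 5 block
# means are contaminated, so the median of block means stays near the truth.
random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(10000)] + [1e6] * 5
random.shuffle(data)
estimate = median_of_means(data, num_blocks=30)
```

The empirical mean of `data` is pulled to roughly 500 by the five outliers, while the median-of-means estimate stays close to 5; this is the kind of contamination robustness that the paper establishes, with optimal constants, for the Lee-Valiant estimator.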
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work!
Summary: This paper considers the problem of designing "all-purpose" mean estimation algorithms that can be applied to a variety of scenarios. To be specific, the authors consider the estimator by Lee & Valiant (2022). This algorithm was previously shown to be optimal in the standard finite-variance i.i.d. setting. This paper further gives the result that Lee & Valiant (2022)'s algorithm is optimal in other settings. To be specific, they prove that the algorithm is robust to data corruption, and also optimal for distributions that have infinite variance. Claims And Evidence: The proof of robustness to corruption is given in Thm 2.2 and 2.3. Optimality under infinite variance is proven in Theorem 2.4. Methods And Evaluation Criteria: No evaluations are involved in this work. It is a learning theory work. Theoretical Claims: I didn't check the proof correctness in detail. Experimental Designs Or Analyses: No experiment in this work. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: Not sure how this work is related to the broader scientific literature. Essential References Not Discussed: No missing essential references. Other Strengths And Weaknesses: Strengths: Proves the optimality of the previously proposed algorithm under some scenarios. The optimality results are important to the community. Weakness: The term "all-purpose" may be an overstatement. I am not sure why the robustness under certain scenarios can be regarded as "all-purpose". Other Comments Or Suggestions: I would suggest that the authors give more explanation of why the robustness under the mentioned scenarios can be viewed as "all-purpose". It might be good to emphasize why those scenarios are important. Questions For Authors: The studied estimator is 1-d. Is there a multi-dimensional estimator proven, or shown in experiments, to be efficient? Are there any other setups worth considering, other than contamination and infinite variance? 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review of our paper. We answer your questions below. **Beyond 1-d**: Currently, even the construction of a 2-d mean estimator achieving the analogous guarantees of Fact 1.1 (i.e. with sharp constants) is an *open problem* in the field, so no such estimator is known at this moment. On the other hand, Lee and Valiant have a different paper named "Optimal Sub-Gaussian Mean Estimation in Very High Dimensions", which studies the regime where the "effective dimension" of a distribution is much larger than $\log^2 1/\delta$, and yields $1+o(1)$-tight estimation error. The robustness of that estimator is basically trivial to analyze, although the extra error term is different from the $\sqrt{\eta}$ term we get in 1-d: that other estimator does not handle low-dimensional distributions well, and the $\sqrt{\eta}$ term from data corruption is a "low-dimensional phenomenon". **Other setups, the phrase "all-purpose"**: We chose to study adversarial corruption, distributions with infinite variance, and asymptotic normality, because these are compelling and fundamental notions of performance that one could hope for in a mean estimator. These properties are the focus of widely known or folklore results on mean estimation. Thus we are motivated to ask whether all these properties can still hold in addition to the sharp performance guarantee achieved by the recent Lee-Valiant estimator. There are (of course) many other settings that are worth considering, for example what if the samples are not i.i.d., but many desiderata might be incompatible with the extremely strong 1+o(1) performance guarantee of the Lee-Valiant estimator. As for the name "all-purpose", we also weren't completely happy with the naming -- we wanted to convey the meaning of "Swiss-army knife" but without the clunkiness of the phrase. We welcome any suggestions!
Summary: This paper concerns the problem of estimating the mean of i.i.d. real-valued samples. The authors study an estimator due to Lee & Valiant 2022 and show that this estimator enjoys several properties not known before. These include - optimal robustness against adversarial outliers - optimal accuracy for heavy-tailed distributions - optimal adaptation to unknown distributions Claims And Evidence: All claims come with rigorous proofs. Methods And Evaluation Criteria: NA Theoretical Claims: I didn't check the proofs in the appendix. Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. I'm not sure I understand the difference between Theorems 2.2 and 2.3. Besides the technical nuances in $\delta'$ vs. $\delta$, what's the fundamental reason to prove one in addition to the other? They seem to have essentially the same quality given that neither $222$ in Theorem 2.2 nor $135$ in Theorem 2.3 is expected to be sharp. 2. Line 146-149 "Comparing Theorem 2.2 with Theorem 2.3, the former asks for a failure probability $\delta' > \delta$, but correspondingly has a smaller sub-Gaussian error term (since $\log\frac{1}{\delta'} < \log\frac{1}{\delta}$)". Do you mean "latter" instead of "former", since there is no $\delta'$ in Theorem 2.2? 3. After reading the first 8 pages, I'm still confused about what neighborhood optimality means. It was not really defined in Definition 2.5. I suggest the authors formally define this somewhere in the main text. 4. The convergence equation $\sqrt{n} \hat{\mu} \to \sqrt{n} \bar{X}$ in Theorem 2.8 and elsewhere is inaccurate since the RHS is $n$-dependent. Do you mean something like $|\sqrt{n} \hat{\mu} - \sqrt{n} \bar{X}|\to 0$? Other Comments Or Suggestions: NA Questions For Authors: 1. 
Unless I missed it, this paper doesn't seem to discuss at all what happens for mean estimation beyond *one dimension*, which is a natural question to ask and potentially more relevant to modern statistics. Robustness guarantees for high-dimensional mean estimation can be quite hard to derive. Even defining the notion of median is nontrivial, as briefly alluded to in the paper. I wonder if anything in the spirit of Lee & Valiant 2022 can be (or has been?) done. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review and questions. Here, we answer the questions raised. **Thm 2.2 vs 2.3**: The difference is indeed subtle, but neither theorem implies the other, and since different readers might consider one or the other "more natural", we included both, to avoid leaving readers with lingering questions. (We consider Theorem 2.2 as being more natural; but 2.3 is more similar to what has appeared in prior literature.) **Line 146-149**: Thanks for catching the typo. **Neighborhood Optimality**: Yes, we unfortunately had to make the conscious choice to omit the full definition, given the submission page limit. The definition is rather subtle, and is an adaptation of "local minimax" from the statistics literature. Essentially, the notion means "local admissibility" or "local Pareto efficiency" --- the performance of an estimator is neighborhood optimal if, even after we restrict attention to the "local neighborhood" of a distribution $D$, no other estimator can outperform our estimator simultaneously on all distributions in that neighborhood. We will include the definition (including the choices of local neighborhoods) in the main paper if accepted, using the extra page space. **Convergence equation**: we use the notation $X \to Y$ to mean $|X - Y| \to 0$ as you pointed out. We originally thought it is a reasonably common notation, but we are happy to further clarify in the paper to remove ambiguity. **Beyond 1-d**: Currently, even the construction of a 2-d mean estimator achieving the analogous guarantees of Fact 1.1 (i.e. with sharp constants) is an *open problem* in the field, so no such estimator is known at this moment. On the other hand, Lee and Valiant have a different paper named "Optimal Sub-Gaussian Mean Estimation in Very High Dimensions", which studies the regime where the "effective dimension" of a distribution is much larger than $\log^2 1/\delta$, and yields $1+o(1)$-tight estimation error. 
The robustness of that estimator is basically trivial to analyze, although the extra error term is different from the $\sqrt{\eta}$ term we get in 1-d: that other estimator does not handle low-dimensional distributions well, and the $\sqrt{\eta}$ term from data corruption is a "low-dimensional phenomenon".
Summary: This paper can be seen as a follow-up to the seminal work of Lee & Valiant (2022), which proposed an optimal mean estimator for distributions over the set of real values, i.e. $\mathbb{R}$. Based on that, this paper further shows that the mean estimator of Lee & Valiant (2022) is ``all-purpose'', that is, it is robust to an $\eta$-fraction of corruptions under the strong contamination model; meanwhile, for heavy-tailed distributions with bounded $z$-th moment ($z\in(1,2)$), it has optimal estimation error compared to a lower bound in Devroye et al. (2016). Furthermore, it also shows that all estimators with similar error guarantees are neighborhood optimal, hence solving an open question raised in Dang et al. (2023). Finally, the paper also proves that the estimator is asymptotically normal and efficient, further supporting its performance in many learning scenarios. Claims And Evidence: All claims are very clear with convincing evidence. Methods And Evaluation Criteria: The method, or the algorithm, follows from prior work. The main contribution is the theoretical analysis for robust settings. Theoretical Claims: The theorems and lemmas, and the proof ideas, look sound to me. The algorithm remains unchanged. The paper aims to show that even when the algorithm learns from a corrupted set of data samples $\tilde{X}$, an important ``influence parameter'' $\tilde{\alpha}$ calculated based on $\tilde{X}$ will be lower bounded, hence upper bounding the influence from the corrupted samples. Experimental Designs Or Analyses: N/A Supplementary Material: I checked theorem 2.3 in the appendix; however, I didn't read the proofs closely. Relation To Broader Scientific Literature: This work solves an important problem by extending the performance guarantees of the mean estimator proposed by Lee & Valiant (2022) to the robust mean estimation literature. 
More importantly, it shows that the nice feature of the optimality in leading constant terms of the error rate still carries over to the robust setting. Essential References Not Discussed: Sufficiently discussed. Other Strengths And Weaknesses: The paper brings very strong theoretical guarantees for robust mean estimation in one-dimensional space. It provides insights for the optimality of the estimator proposed by Lee & Valiant (2022) and median-of-means. The theoretical analysis and discussion look sound to me and the paper is well written. Other Comments Or Suggestions: N/A Questions For Authors: Can you comment on the broad impact of the analysis for the estimation of heavy-tailed distributions? In addition, if these techniques are applicable to higher-dimensional estimation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. To answer your question regarding high(er)-dimensional mean estimation: currently, even the construction of a 2-d mean estimator achieving the analogous guarantees of Fact 1.1 (i.e. with sharp constants) is an *open problem* in the field, so no such estimator is known at this moment. Furthermore, the neighborhood optimality notion is also only understood in 1-d. Hence, it's hard to say if our techniques will directly extend to high(er) dimensions, since our analyses do depend on the precise estimator being studied. That said, asymptotic normality should be easiest to prove for any eventual 2-d mean estimator with sharp constant guarantees, since it's a much easier guarantee than Fact 1.1-style results. Also, we speculate that our low-moment analysis of the 1-d Lee-Valiant estimator can plausibly extend to 2-d (or constant-d), if the eventual 2-d estimator subsumes the 1-d Lee-Valiant estimator and is also naturally analyzed via the maximin linear programming games capturing Chernoff Bounds (cf Section 6.2). About the broader impact of our analysis: while we hope other authors will be intrigued by our results and try to prove analogous results in other settings, the techniques in our paper are very tailored to the specifics of the estimator---for the sake of a tight analysis. Let us know if there are more specific aspects of this that you might like to see discussed in the final paper.
COMRECGC: Global Graph Counterfactual Explainer through Common Recourse
Accept (poster)
Summary: The authors focus on the question of generating global counterfactual explanations. They introduce the problem from a theoretical perspective. They introduce the problems of FCR and FC, which correspond to the Finding Common Recourse and Finding Counterfactual problems, respectively. They introduce these problems from a high-level theoretical perspective. They provide several theoretical guarantees, such as proving that FCR is an NP-hard problem and that there exists an approximation for the FC problem with the given constraints. This motivates their algorithm, which uses a graph embedding, a multi-head vertex-reinforced random walk to find CFs, and finally clustering to obtain common recourses. After designing their algorithm they implement it on several real-world datasets. They use the datasets MUTAGENICITY, NCI1, AIDS, and PROTEINS. In particular, most of these datasets consist of molecules. They train a base GNN (GCN) and experiment with parameters $\theta$ and $\Delta$. They then compute the coverage with respect to the cost. Claims And Evidence: S1. The paper is well written and well founded. The main idea is novel and uses theoretical justification to motivate their framework. S2. Their reasoning behind design choices is mostly sound. S3. They provide initial experiments that showcase some of the promising behavior of their framework. O1. There is a glaring lack of experiments. It is understandable that the authors introduce a novel problem and emphasize that in their experiments; however, the main metrics are coverage and cost. These experiments are good at showing their method's desired properties but they do not showcase mainstream metrics and experiments such as validity, sparsity, and potentially some other metrics. All these metrics are not necessary but they would bolster their framework's strengths. O2. Another issue is the lack of experiments. There are numerous types of datasets that have not been experimented on. 
Focusing primarily on molecular data leaves a glaring question for the readers: experimentally how does this framework work on other types of graphs (such as networks, etc.)? Not addressing this question also raises questions of how their framework works on graphs with many more nodes, edges, etc. This reviewer believes, in the current state of this paper, that these experiments do not prove this method's superiority over existing Counterfactual methods nor do it showcase the full behavior of their framework on graphs it is likely to encounter in the real world. Methods And Evaluation Criteria: As stated before, several issues with their experimentation or lack thereof are: O1. Their is a glaring lack of experiments. It is understandable the authors introduce a novel problem and emphasize that in their experiments however the main metrics are coverage and cost. These experiments are good at showing their methods desired properties but they do not showcase mainstream metrics and experiments such as validity, sparsity, and potentially some other metrics. All these metrics are not necessary but they bolster their framework's strengths. O2. Another issue is the lack of experiments. There are numerous types of datasets that have not been experimented on. Focusing primarily on molecular data leaves a glaring question for the readers: experimentally how does this framework work on other types of graphs (such as networks, etc.)? Not addressing this question also raises questions of how their framework works on graphs with many more nodes, edges, etc. This reviewer believes, in the current state of this paper, that these experiments do not prove this method's superiority over existing Counterfactual methods nor do it showcase the full behavior of their framework on graphs it is likely to encounter in the real world. Theoretical Claims: no issues. Experimental Designs Or Analyses: For the graph embedding algorithm why did the authors just use a standard backbone of a GCN. 
It would be interesting to see various architectures trained and how they affect the framework. The design of the current experiments is correct. Supplementary Material: no issues. Relation To Broader Scientific Literature: This paper works on generating global counterfactual explanations, specifically extending this to the notion of common recourses, which adds several benefits. Most existing works use local counterfactual explanations, so the authors explore a relatively under-explored area. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Please refer to comments above. Other Comments Or Suggestions: Please address the weaknesses. Questions For Authors: Can the authors address why they chose not to include experiments on other types of graphs, such as networks, or at a larger scale? Also, why did the authors not include more metrics to assess the counterfactual nature of their explanations, such as validity? There are several standard metrics in graph interpretability papers that are absent here. Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: >Q1: validity and sparsity metrics are missing. **Answer:** These two metrics have been considered, but under a different form, in the definition of common recourse (page 2 of our paper). **Validity:** It measures how often the recourse suggestion changes the model’s prediction. In our definition of recourse we access the oracle, so every counterfactual candidate is a counterfactual, and the validity of all our recourse is $1$. **Sparsity:** It measures how many edges or nodes need to be altered to reach a common recourse. - We used a normalized GED in the paper. The sparsity is exactly equal to $Sparsity(G,G') = 1 - \text{NormedGED}(G,G')$ for two graphs $G,G'$. Therefore sparsity is directly embedded in the definition of common recourse, where we explicitly limit the counterfactual to have sparsity at least $1-\theta$, which in our experiments is typically $0.90$ to $0.85$. - We use the cost metric in our experiments, defined in Section 2.1, which accounts for the total distance from the covered input graphs to their closest attained counterfactual. Therefore the cost measures an aggregation over $1-Sparsity$. >Q2: Add more datasets, networks, graphs with many more nodes. **Answer:** As suggested, we provide additional experiments. We study the IMDB-BINARY and IMDB-MULTI datasets (Yanardag & Vishwanathan, 2015). IMDB-BINARY features movies from two genres (Action and Romance). IMDB-MULTI includes movies from three genres (Comedy, Romance, and Sci-Fi). Since our current method only handles binary classification, we consider the following labels: Comedy and non-Comedy (i.e., Romance and Sci-Fi). The parameters for COMRECGC are the same as for the experiments in Section 4 and Table 3.
*The results show that our method outperforms the baseline GCFExplainer in coverage on both datasets, while preserving a similar cost (higher coverage and lower cost are better).*

**Table 1: Results on the FCR problem for the task of explaining the GIN trained model.**

| | **IMDB-BINARY** | **IMDB-BINARY** | **IMDB-MULTI** | **IMDB-MULTI** |
|-|-|-|-|-|
| | Coverage | Cost | Coverage | Cost |
| **GCFExplainer** | 76.5% | 8.33 | 19.9% | **7.65** |
| **COMRECGC(Ours)** | **80.9%** | **8.10** | **21.9%** | 7.70 |

>Q3: Testing the approach on different GNN architectures.

**Answer:** As suggested, we provide experiments with different architectures. We train GAT (Velickovic et al., 2018), GraphSAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019) GNN models for a binary classification task, each consisting of three convolutional layers, a max pooling layer, and a fully connected layer, following the literature (Vu & Thai, 2020). The models are trained with the Adam optimizer (Kingma & Ba, 2014) and a learning rate of 0.001 for 1000 epochs. The training/validation/testing split is 80%/10%/10%. The training/validation/testing accuracy tables, alongside the result tables for the experiments, are available at: https://anonymous.4open.science/r/COMRECGC-3E4E/tables_additional_experiments.pdf. We test COMRECGC against the GCFE baseline. The parameters for the experiments and methods are the same as in Section 4 and Table 3 of the paper, in particular $\theta = 0.1$ ($0.15$ for Proteins) and $\Delta = 0.02$.
**The results show that our method outperforms GCFExplainer in terms of coverage on all datasets for the GAT, GraphSAGE, and GIN model explanations, while often offering a lower cost (higher coverage and lower cost are better).**

**Table 2: Results on the FCR problem for the task of explaining the GAT trained model.**

| | **NCI1** | **NCI1** | **Mutagenicity** | **Mutagenicity** | **AIDS** | **AIDS** | **Proteins** | **Proteins** |
|-|-|-|-|-|-|-|-|-|
| | Coverage | Cost | Coverage | Cost | Coverage | Cost | Coverage | Cost |
| **GCFExplainer** | 24.4% | 5.26 | 47.3% | **5.82** | 27.6% | 7.12 | 42.6% | 10.54 |
| **COMRECGC(Ours)** | **35.6%** | **5.02** | **55.7%** | 6.05 | **30.7%** | **6.89** | **42.9%** | **10.27** |

---

**Table 3: Results on the FCR problem for the task of explaining the GraphSAGE trained model.**

| | **NCI1** | **NCI1** | **Mutagenicity** | **Mutagenicity** | **AIDS** | **AIDS** | **Proteins** | **Proteins** |
|-|-|-|-|-|-|-|-|-|
| | Coverage | Cost | Coverage | Cost | Coverage | Cost | Coverage | Cost |
| **GCFExplainer** | 32.8% | 4.86 | 46.5% | **5.46** | 20.3% | 7.38 | 68.6% | 11.53 |
| **COMRECGC(Ours)** | **47.9%** | **4.76** | **50.9%** | 5.90 | **21.5%** | **7.16** | **69.4%** | **11.51** |

---

**Table 4: Results on the FCR problem for the task of explaining the GIN trained model.**

| | **NCI1** | **NCI1** | **Mutagenicity** | **Mutagenicity** | **AIDS** | **AIDS** | **Proteins** | **Proteins** |
|-|-|-|-|-|-|-|-|-|
| | Coverage | Cost | Coverage | Cost | Coverage | Cost | Coverage | Cost |
| **GCFExplainer** | 31.2% | 5.13 | 30.4% | **6.05** | 14.7% | 7.68 | 47.3% | 12.21 |
| **COMRECGC(Ours)** | **45.6%** | **4.58** | **33.7%** | 6.41 | **16.6%** | **7.34** | **48.6%** | **11.32** |

---

Rebuttal Comment 1.1: Comment: The authors have addressed most of my concerns, and I will increase the final rating.
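As an aside on the sparsity relation used in Q1 of the rebuttal above ($Sparsity(G,G') = 1 - \text{NormedGED}(G,G')$), here is a minimal sketch using networkx's exact `graph_edit_distance`; the normalization by combined graph size is an illustrative assumption, not necessarily the paper's exact choice:

```python
import networkx as nx

def normed_ged(g1, g2):
    # Exact graph edit distance, normalized by the combined size of
    # both graphs (this normalization is an assumption for illustration).
    ged = nx.graph_edit_distance(g1, g2)
    size = (g1.number_of_nodes() + g1.number_of_edges()
            + g2.number_of_nodes() + g2.number_of_edges())
    return ged / size

def sparsity(g1, g2):
    # Sparsity(G, G') = 1 - NormedGED(G, G'), as in the rebuttal.
    return 1.0 - normed_ged(g1, g2)

path = nx.path_graph(3)    # 0-1-2
cycle = nx.cycle_graph(3)  # triangle: one edge insertion away from the path

print(sparsity(path, path))   # identical graphs -> 1.0
print(sparsity(path, cycle))  # 1 - 1/11, since GED = 1 and total size = 11
```

Note that exact GED is exponential in graph size, so this sketch is only tractable for tiny graphs.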
Summary: In this study, the authors have formalized the problem of generating global counterfactual explanations for Graph Neural Networks (GNNs) with common recourse. Considering the NP-hard nature of the FCR and FC problems, the authors have developed COMRECGC, a method specifically designed to extract high-quality common recourse explanations. Experiments on real-world datasets show that COMRECGC consistently generates global common recourse explanations of significantly higher quality compared to the baseline methods. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes. All parts. Relation To Broader Scientific Literature: The key contributions of the paper are important to the broader scientific literature. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. The idea of this paper is very novel. The authors formalize the FCR problem. They provide a generalized version of the FCR problem, named FC, and derive an approximation algorithm for a constrained version of FC. 2. Sufficient experimental results have proven the effectiveness of the method proposed by the authors. 3. The authors have made the source code publicly available. Other Comments Or Suggestions: 1. Besides GCN, the authors could attempt more GNN architectures to verify the generalizability of the method. 2. The writing needs to be improved. Since this paper involves many new concepts, the authors should illustrate them by giving more examples. Questions For Authors: 1. The authors have provided a complexity analysis. What, then, is the actual running efficiency of the model? 2. What does "x" in Table 6 represent? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your strong support and encouragement! >Q1: Testing the approach on different GNN architecture. **Answer:** We provide the following **additional experiments**: - **Experiments on different GNN architectures:** GAT, GraphSAGE and GIN on the datasets of the paper for solving the FCR problem. - **Experiments on 2 additional datasets, IMDB-BINARY and IMDB-MULTI:** we have compared our method to the GCFE counterfactual mining baseline for solving the FCR problem. The results and methodology are available in **our response to Reviewer oYnL**, the tables are also available at: https://anonymous.4open.science/r/COMRECGC-3E4E/tables_additional_experiments.pdf. >Q2: The writing of the authors needs to be improved. Since this paper involves many new concepts, the authors should illustrate them by giving more examples. **Answer:** We will add examples in the appendix, in particular for the FCR and FC problems introduced. >Q3: The authors have analyzed the Complexity Analysis. Then, what is the actual running efficiency of the model. **Answer:** We provide the time complexity analysis of our method in Section 3.4, and the running time in Appendix D.3. We will emphasize the reference to the Appendix in the complexity analysis section. >Q4: What does ”x” in Table 6 represent? **Answer:** The Table illustrates raising $\theta$ from $0.1$ to $0.15$, which dramatically increased the number of recourse entering the clustering stage, making DBScan, the clustering algorithm we use for our experiment, intractable. Thank you, we will add this comment to the caption. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. Can you provide the detailed examples of Q2? --- Reply to Comment 1.1.1: Comment: Thank you! We will include the examples in the revised version. 
### Example for FC problem

#### **Setup**

- Given a set of input graphs: $\mathbb{G} = \{G_1, G_2, G_3\}$
- And counterfactual graphs: $\mathbb{H} = \{H_1, H_2, H_3, H_4\}$
- Each counterfactual $H$ is obtained by applying a recourse $f$, i.e., a graph transformation, to an input graph $G$ according to the following table:

---

#### **Finding Counterfactuals to Maximize Common Recourse Coverage**

- Suppose the recourse transform the input graphs as follows:

| Input Graphs | Recourse | Counterfactual Graphs |
|---------------|--------------|----|
| $G_1$ | $f_1$ | $H_1$ |
| $G_1$ | $f_2$ | $H_2$ |
| $G_2$ | $f_3$ | $H_3$ |
| $G_3$ | $f_1$ | $H_4$ |
| $G_3$ | $f_3$ | $H_1$ |

We will write $f_1(G_1) = H_1$. We say that an input graph $G \in \mathbb{G}$ is covered by recourse $f$ if: i) both $H$ and $f$ have been chosen within the budget, and ii) $f(G) = H$.
- Given budgets $R = 2$ (recourse) and $T = 2$ (counterfactuals), the goal of the FC problem is to select $2$ counterfactuals in $\mathbb{H}$ that yield the best coverage of $\mathbb{G}$ using at most $2$ recourse.
- Suppose we choose counterfactual graphs $H_1$ and $H_3$; then the best two recourse to pick are $\mathbb{F}_{\mathbb{H}^*} = \{ f_1, f_3 \}$, which allows us to cover the three input graphs as $f_1(G_1) = H_1$, $f_3(G_3) = H_1$, and $f_3(G_2) = H_3$. An intuitive way to see this problem is as a "max 2-budget 2-cover problem", where 2-cover means that to "cover" an element, one has to cover it within both budgets.

---

### Example for FCR problem

#### **Setup**

- Suppose we have three reject input graphs: $\mathbb{G} = \{G_1, G_2, G_3\}$
- We are given a budget $R = 2$, meaning we can choose at most 2 recourse to transform as many graphs as possible into accept graphs.

#### **Recourse and Coverage**

- Suppose we have a set of possible recourse $\mathbb{F} = \{f_1, f_2, f_3, f_4\}$, where each $f_i$ is a graph modification.
- Assume the recourse successfully change the classification for the following subsets of the graphs:

| Recourse Action | Affects Graphs |
|---------------|--------------|
| $f_1$ | $\{G_1, G_2\}$ |
| $f_2$ | $\{G_2, G_3\}$ |
| $f_3$ | $\{G_1, G_3\}$ |
| $f_4$ | $\{G_1, G_2, G_3\}$ |

- The goal is to choose at most **$2$** recourse to maximize the number of graphs they apply to. An optimal selection is $\mathbb{F}^* = \{ f_4 \}$, which covers all three graphs on its own.
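The FCR toy instance above is a budgeted maximum-coverage problem; since the coverage objective is submodular, the standard greedy rule gives the usual $(1-1/e)$ approximation. A minimal sketch on this exact instance (greedy here is a generic stand-in, not COMRECGC's full pipeline):

```python
def greedy_max_coverage(cover, budget):
    """Pick up to `budget` recourse, each step taking the one that
    covers the most not-yet-covered input graphs."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(cover, key=lambda f: len(cover[f] - covered))
        if not cover[best] - covered:
            break  # no remaining recourse adds coverage; stop early
        chosen.append(best)
        covered |= cover[best]
    return chosen, covered

# Coverage sets taken from the FCR example table above.
cover = {
    "f1": {"G1", "G2"},
    "f2": {"G2", "G3"},
    "f3": {"G1", "G3"},
    "f4": {"G1", "G2", "G3"},
}
chosen, covered = greedy_max_coverage(cover, budget=2)
print(chosen, sorted(covered))  # ['f4'] ['G1', 'G2', 'G3']
```

Greedy picks $f_4$ first and then stops, matching the optimal selection $\mathbb{F}^* = \{f_4\}$ in the example.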
Summary: The paper introduces COMRECGC, a framework for generating global counterfactual explanations for graph neural networks through common recourse. Unlike local counterfactual explanations, which are instance-specific, this approach seeks to find a small set of transformations (recourse) that can convert multiple "reject" graphs into "accept" graphs, thereby providing model-level insights. The authors formalize two novel problem settings - FCR and FC - and prove their NP-hardness. COMRECGC combines a multi-head vertex reinforced random walk to explore the graph edit space for counterfactuals with a clustering approach to identify common recourse patterns. The method is evaluated on four real-world datasets, where it substantially outperforms baseline approaches in terms of coverage (the fraction of input graphs explained) while maintaining comparable or lower recourse cost. ## update after rebuttal I have updated my scores Claims And Evidence: I found the following main claims in this paper. 1. FCR and FC are NP-hard; these claims are well supported by theoretical analysis. 2. The proposed method outperforms baseline methods on the FCR and FC problems; this is supported by experiments. However, only GCN is used, so the results could be more convincing. 3. They also mention that the method is worth considering for applications such as drug discovery or computational biology; there is not much evidence, such as a case study, to support that. Methods And Evaluation Criteria: Yes. The methods and evaluation approach are suitable for the counterfactual explanation problem. The multi-head vertex reinforced random walk effectively explores the graph edit space, while the clustering approach sensibly identifies common patterns among recourse. Theoretical Claims: I briefly checked the proofs.
I don't see problems, although some parts are missing, like the analysis of the FCR problem. Experimental Designs Or Analyses: The authors use four benchmark datasets and provide both quantitative and qualitative results. I have the following concerns. 1. Only a single GNN model (3-layer GCN) is used throughout all experiments. Testing the approach on different GNN architectures (such as GraphSAGE, GAT, or GIN) would have strengthened the generalizability claims of the method. 2. The authors only compare with GCFEXPLAINER and mention that it already outperforms other baselines. However, I think the settings are different, since the results of GCFEXPLAINER on these datasets differ from those in the original paper, so it is not clear whether GCFEXPLAINER is still the best baseline in the new setting. A direct comparison with other counterfactual explainers under the same experimental conditions would provide stronger evidence. 3. The ablation study in Table 8 effectively demonstrates component contributions but doesn't fully analyze why clustering is more important for some datasets than others. The authors briefly mention that clustering is "important on large datasets such as MUTAGENICITY, but less so on the smaller PROTEINS." However, the PROTEINS dataset is not significantly smaller than AIDS, yet shows different behavior. This inconsistency is not adequately addressed. 4. The paper uses fixed values for key parameters (θ=0.1 or 0.15, Δ=0.02) across most experiments, with limited sensitivity analysis in the appendix. More extensive parameter tuning would strengthen the robustness claims. Supplementary Material: I reviewed the appendix; no other supplementary materials are provided. Relation To Broader Scientific Literature: The paper builds upon several research threads in the broader explainability and graph neural network literature.
Its concept of global counterfactual explanations extends work on local counterfactual explanations for GNNs to the model level, addressing the interpretability gap when dealing with numerous local explanations. The common recourse framework connects to actionable recourse research but innovates by identifying patterns across instances rather than just instance-specific actions. The multi-head vertex reinforced random walk is derived from the classic vertex-reinforced random walk. Essential References Not Discussed: There are some papers not discussed in the paper. [1] He, Kangjia, et al. "Learning counterfactual explanation of graph neural networks via generative flow network." IEEE Transactions on Artificial Intelligence (2024). [2] Chhablani, Chirag, et al. "Game-theoretic counterfactual explanation for graph neural networks." Proceedings of the ACM Web Conference 2024. 2024. [3] Qiu, Dazhuo, et al. "Generating robust counterfactual witnesses for graph neural networks." 2024 IEEE 40th International Conference on Data Engineering (ICDE). IEEE, 2024. [4] Verma, Samidha, et al. "InduCE: Inductive counterfactual explanations for graph neural networks." Transactions on Machine Learning Research (2024). [5] Kang, Hyunju, Geonhee Han, and Hogun Park. "Unr-explainer: Counterfactual explanations for unsupervised node representation learning models." The Twelfth International Conference on Learning Representations. 2024. Other Strengths And Weaknesses: Strengths: 1. A new problem formulation of the common recourse problem for graph counterfactual explanations. 2. The theoretical framework is well-developed, with clear problem definitions and complexity analysis. 3. It effectively combines random walk exploration with clustering to produce useful explanations, and the empirical results demonstrate clear improvements over existing approaches. Weaknesses: 1. The organization needs improvement. I think a significant weakness is that the paper is not self-contained.
Several important concepts and algorithms are only briefly described, with details relegated to appendices, making it difficult to fully understand the method without constantly referring to other sections. The related work section is also in the appendix. 2. Some key references are missing. 3. The experiments are not comprehensive enough to fully support the effectiveness of the proposed method. Other Comments Or Suggestions: I think factual global methods like XGNN and PGExplainer are also related to this topic. I suggest the authors add some discussion of them. Questions For Authors: I don't have specific questions. Please correct me if my understanding in the above sections is wrong. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: >Q1: Some part of the analysis of the FCR problem is missing. **Answer:** We will add the following in Appendix B.2: Let us define $f$ as the function that associates to a set of common recourse its total coverage. We prove that $f$ is submodular: Let $A \subseteq B$ be sets of recourse, and let $r$ be a recourse. Suppose $G$ is a graph covered by $r$ but not by any recourse in $B$. Since $A \subseteq B$, $G$ is not covered by any element of $A$. Therefore $f_A(\{r\}) \geq f_B(\{r\})$. >Q2: Different GNN architectures. **Answer:** We provide the following **additional experiments**: - Experiments on different GNNs: GAT, GraphSAGE and GIN for solving the FCR problem. - Experiments on 2 additional datasets, IMDB-BINARY and IMDB-MULTI. The results and methodology are available in **our response to Reviewer oYnL**. >Q3: Comparison with GCFExplainer with incorrect setting. **Answer:** - The results of GCFExplainer are different from the original paper since GCFExplainer aims at identifying global counterfactuals to explain, while the FCR and FC problems consist in finding common recourse, i.e., graph transformations, to explain a GNN. - The comparison between our method and GCFExplainer is valid, because we define counterfactuals and recourse in the same way, with the same values for the parameters $\theta$ and $\Delta$ that define counterfactuals and common recourse. - We think a comparison between our method and typical CE baselines is unfair. A typical CE explainer associates to each reject graph a single CE graph, which is often a subgraph. Both our method and GCFExplainer mine at least 10 times more counterfactuals, and are not limited to subgraphs. Those counterfactuals are then used to build common recourse through clustering (see Algorithms 1, 2, 3 on page 5 of the paper for more details). Therefore, adding classical CE explainer baselines is not relevant. >Q4: Table 8. **Answer:** Thank you!
Although the AIDS and Proteins datasets contain 1837 and 1113 graphs respectively, the number of "reject" graphs is 1473 in AIDS, while it is 366 for Proteins. We are explaining those reject graphs by providing 100 recourse in Table 8. So the difference between the two datasets when looking at COMRECGC with or without clustering is possibly explained by the "sparsity" of the Proteins reject graphs, and the difficulty of explaining the whole dataset with only 100 recourse. >Q5: Fixed values for key parameters **Answer:** You are right! We have two values of each parameter ($\theta=0.1$ or $0.15$, $\Delta=0.02$ or $0.04$). The parameters $\theta$ and $\Delta$ are crucial in the common recourse definition and in the FCR and FC problems. We agree that a finer sensitivity analysis is interesting, especially since the coverage for the NCI1 and Mutagenicity datasets almost doubles when the common recourse parameter $\Delta$ goes from $0.02$ to $0.04$. The choice of $\theta$ and $\Delta$ is specific to each dataset and application, and we provide a few values to showcase their impact. A systematic sensitivity analysis is relevant for the overall method, and comprehensive experiments will be included in the revision. We provide new experiments for the sensitivity of $\Delta$. **We observe that a higher $\Delta$ allows for a more comprehensive explanation, as we are allowing similar recourse to be considered "common" more easily. The cost naturally rises as more explanations are covered.**

**Table 1: Results of our method on the FCR problem for the task of explaining the GCN model on the Mutagenicity dataset for different values of $\Delta$.**

| $\theta$ | $\Delta$ | Coverage | Cost |
|---------|--------|----------|-------|
| 0.1 | 0.02 | 51.8% | 5.63 |
| 0.1 | 0.025 | 67.15% | 6.99 |
| 0.1 | 0.03 | 75.35% | 7.59 |
| 0.1 | 0.035 | 83.88% | 8.03 |
| 0.1 | 0.04 | 90.0% | 8.11 |

>Q6: Key references are missing.
**Answer:** We will include the references to the **global factual and local CE literature**: Recent advancements in counterfactual explanations (CE) for GNNs include He et al.'s (2024) generative flow network, Chhablani et al.'s (2024) game-theoretic approach, and Qiu et al.'s (2024) robust counterfactual witnesses. Verma et al. (2024) introduce InduCE, an inductive method for unseen data, while Kang et al. (2024) focus on unsupervised node representation learning. *While these approaches address local counterfactuals, the task of generating global counterfactual explanations for GNNs remains relatively unexplored.* >Q7: Paper is not self contained. **Answer:** We will add the related work section to the main body of the paper. The page limit for the paper is definitely a hard constraint, and we chose to prioritize the definitions of the FCR and FC problems, as well as the concept of common recourse, in our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. I have a follow-up question for Q3. While I understand your point about the differences between typical CE explainers and your approach, I'm still concerned about the limited comparison. Even if traditional CE methods typically provide one counterfactual per graph, couldn't they be adapted for comparison purposes? This type of comparison would provide stronger evidence of your method's advantages over adapted existing approaches. Without such comparisons, it's difficult to fully assess the advancement your method represents beyond GCFExplainer alone. --- Reply to Comment 1.1.1: Comment: Thank you! You are right, we have to perform some modifications. Specifically, one can generate a common recourse explanation from a set of graphs using the clustering algorithm (Algorithm 1, page 5 in the paper) for **the local counterfactual baselines**. We simply have to filter out the graphs that are not valid counterfactuals. Then the common recourse are formed based on the $\theta$ and $\Delta$ parameters.
We extend Table 3 in the paper with **additional experiments**, with the popular local counterfactual baseline CF-GNNExplainer \[*Lucic et al., 2022*\] added:

**Table 4: Results on the FCR problem for the task of explaining the GCN trained model for different explainers. The settings are the same as for Table 3 in our paper.**

| Method | **NCI1** | **NCI1** | **Mutagenicity** | **Mutagenicity** | **AIDS** | **AIDS** |
|----------------------|----------|------|------------|------|--------|------|
| | Coverage | Cost | Coverage | Cost | Coverage | Cost |
| **CF-GNNExplainer** | 9.1% | 7.5 | 12.4% | 8.90 | 0.1% | **0.35** |
| **GCFExplainer** | 21.4% | 5.75 | 20.6% | 6.91 | 14.2% | 6.97 |
| **COMRECGC(Ours)** | **42.9%** | **4.95** | **51.8%** | **5.63** | **34.7%** | 6.74 |

**Findings:** The coverage obtained by building common recourse explanations using only CF-GNNExplainer-generated counterfactuals is noticeably worse than with counterfactual mining methods such as ours. This is likely explained by a number of reasons, such as:
- Generating fewer counterfactual graphs (~2,500 graphs for CF-GNNExplainer on Mutagenicity vs 50k+ for GCFExplainer and our method),
- Only considering subgraphs of input graphs, and
- Not taking into account the $\theta$ and $\Delta$ parameters of the CR explanation.
- Please note that GCFExplainer (our main baseline) has been shown to outperform the local counterfactual baselines, and our method outperforms GCFExplainer.
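The submodularity argument from Q1 of this rebuttal can also be checked exhaustively on a small instance; a brute-force sketch over all nested pairs $A \subseteq B$ (the coverage sets below are made up for illustration):

```python
from itertools import chain, combinations

# Toy coverage sets: recourse index -> input graphs it covers.
cover = {0: {"G1", "G2"}, 1: {"G2", "G3"}, 2: {"G1", "G3"}, 3: {"G1", "G2", "G3"}}

def f(S):
    """Total coverage of a set of recourse S."""
    covered = set()
    for r in S:
        covered |= cover[r]
    return len(covered)

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

# Check the marginal-gain inequality f_A({r}) >= f_B({r})
# for every nested pair A subset of B and every recourse r.
ok = all(
    f(set(A) | {r}) - f(set(A)) >= f(set(B) | {r}) - f(set(B))
    for B in subsets(cover)
    for A in subsets(B)
    for r in cover
)
print(ok)  # True: coverage is submodular on this instance
```

Coverage functions are submodular in general, so the check passes here by construction; the point is just to make the rebuttal's inequality concrete.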
Summary: This paper designs an algorithm, COMRECGC, for global graph counterfactual explanation (CE). It considers finding common recourse (FCR) explanations to address the limitations of existing graph CE methods (relying on experts for recourse directions; separating the process of finding recourse directions from data fitting). The paper includes a complexity analysis, and the code is publicly available. The experiments compare common recourse explanations and graph CE, and show that COMRECGC provides comparable performance in global CE. Evaluations on benchmark datasets demonstrate strong performance. ## update after rebuttal I appreciate the authors for their response, which addresses some of my concerns. I stick to the original score of weak accept. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: It would be beneficial for the experiments to include additional evaluations of efficiency, particularly in terms of time and space complexity. Providing a more detailed analysis of computational cost would help assess the practical feasibility and scalability of the proposed approach. Theoretical Claims: The paper provides theoretical analysis for the FC problem and the complexity of the proposed method. Experimental Designs Or Analyses: The experimental design and analysis appear generally sound, with appropriate methodologies used to validate the proposed approach. However, the improvement in terms of cost does not seem significant. A more in-depth evaluation of efficiency, such as a detailed comparison of efficiency and robustness with baselines, would strengthen the analysis and further enhance the impact of the results. Supplementary Material: Yes, I have reviewed the appendix. Relation To Broader Scientific Literature: The paper is related to some graph-related scientific discovery and explanation fields.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The point of involving the FCR and FC problems is novel, and the source code link is provided. But in general, the improvement compared with existing GCE baselines does not seem very significant. Other Comments Or Suggestions: A few typos were found; the paper perhaps needs further proofreading. Questions For Authors: Could the authors provide any (empirical) evaluation of the model's efficiency? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your remarks; we appreciate the support for our work. >Q1: Could you include additional evaluations on efficiency, particularly in terms of time and space complexity? **Answer:** We make the following notes:
* **Time complexity:** We have provided the time complexity analysis of our method in Section 3.4 of the paper, and the running time in Appendix D.3.
* **Space complexity:** The space complexity is upper bounded by the number of recourse we are generating through the random walk. The resulting complexity is $O(n|G|)$, where $|G|$ is the number of input graphs and $n$ is the number of top-visited counterfactuals during the random walk. We will add this discussion to the paper.

>Q2: Could the authors provide any (empirical) evaluation for the model efficiency? **Answer:** We would like to mention that we have provided the time complexity in the paper (see the answer above). If by efficiency you mean effectiveness, we have performed extensive experiments (please see Sec. 4 and Appendix D). To strengthen the claim of our model's effectiveness further, we provide the following **additional experiments**:
- **Experiments on different GNN architectures:** GAT, GraphSAGE and GIN on the datasets of the paper for solving the FCR problem.
- **Experiments on 2 additional datasets, IMDB-BINARY and IMDB-MULTI:** we have compared our method to the GCFE counterfactual mining baseline for solving the FCR problem.

The results and methodology are available in **our response to Reviewer oYnL**; the tables are also available at: https://anonymous.4open.science/r/COMRECGC-3E4E/tables_additional_experiments.pdf. >Remark 1: Few typos are found and perhaps need a further proofread. **Answer:** We will go over the paper and fix the typos. Thanks!
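Regarding the $O(n|G|)$ space bound in Q1 above: only the $n$ most-visited counterfactuals need to be retained per input graph. A generic sketch of that kind of bounded bookkeeping with `heapq` (the names and visit data are hypothetical, not the paper's implementation):

```python
import heapq
from collections import Counter

def top_visited(visits, n):
    """Keep only the n most-visited counterfactuals, so memory stays
    O(n) per input graph rather than growing with the walk length."""
    return heapq.nlargest(n, visits.items(), key=lambda kv: kv[1])

# Hypothetical visit sequence of counterfactual candidates during a walk.
walk = ["H1", "H2", "H1", "H3", "H1", "H2", "H4"]
visits = Counter(walk)
print(top_visited(visits, 2))  # [('H1', 3), ('H2', 2)]
```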
Adaptive Sample Sharing for Multi Agent Linear Bandits
Accept (poster)
Summary: The paper studies an adaptive sample sharing problem for multi-agent linear bandits, where agents' true parameters may differ. The authors propose a separation test, which detects the stopping time for beneficial collaboration. The authors provide both a separation-time upper bound and a cumulative pseudo-regret upper bound. The authors also compare the pseudo-regret with existing methods. Finally, the authors conduct experiments using both synthetic and real-world data. Claims And Evidence: I have several confusions when reading the paper. 1. Can you elaborate on the bias-variance tradeoff in Figure 1? Why is the orange ellipsoid inside the blue ellipsoid? 2. I'm confused about the connection between Table 1 and Theorem 6.4. How do you get the $\sqrt{T}\log(T/d)$ upper bound from Theorem 6.4? Methods And Evaluation Criteria: The paper studies the separation time and cumulative regret of the proposed algorithm, which looks reasonable to me. Theoretical Claims: I feel the theoretical parts are not well presented. More specifically, in Theorem 6.4, what is $v(\delta, T)$? How do you quantify $n_i^e(t)$? The authors should add a remark discussing the magnitude of the bound. Currently, the upper bound is not very clear to me. Experimental Designs Or Analyses: I do not find big issues in the experiments. This paper is mostly a theoretical work. Supplementary Material: The theoretical proofs look mostly clear to me. Minor issue: in the appendix table of contents, the real section names are not presented. Relation To Broader Scientific Literature: This research is related to the broader multi-agent collaborative learning literature, where multiple agents collaboratively share data for learning purposes. Essential References Not Discussed: I am not very familiar with the collaborative learning literature in linear bandits, so I'm unsure whether essential references are missed.
Other Strengths And Weaknesses: The paper provides a detailed upper-bound analysis of the pseudo-regret, the number of misassigned agents, and the separation time. They also provide a lower bound on the separation time. However, a lower bound for the pseudo-regret is missing. How close is your algorithm's upper bound to the lower bound? Other Comments Or Suggestions: I'm curious why the authors define "pseudo-regret" instead of "regret". The definition looks the same as the standard regret. Questions For Authors: In Figure 1, how do you compare BASS with CLUB and SCLUB? The constant terms of SCLUB and CLUB depend on $M$, and your algorithm's upper bound has different parameters. It's unclear why BASS has a tighter upper bound. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive and constructive comments. Below, we address your concerns and provide detailed answers to your questions. 1/ The orange ellipsoid (representing the collaborative estimation) corresponds to a reduced estimation variance, as it yields a smaller Mahalanobis radius of the confidence ellipsoid. Note that the smaller ellipsoid might not be contained in the bigger one; we will provide a more general illustration to underline that. However, notice that the center of the orange ellipsoid differs from that of the blue ellipsoid due to the introduced bias, illustrating the tradeoff between bias (OLS estimate deviation) and variance (confidence-ellipsoid radius reduction). 2/ We will add a section in the appendix to properly detail the steps to obtain Table 1 from Theorem 6.4 and Lemma 6.1. The proof is similar to CLUB and SCLUB, and the improvements stem from both the Mahalanobis separation criterion and a refined analysis following Abbasi-Yadkori et al. (2011). 3/ The definition of $\nu$ remains unchanged across all theorems: it is the regret upper bound without sample sharing. Considering the upper bound of Theorem 6.4, the first term corresponds to the regret-minimization improvement due to the variance reduction (represented by $\mu$), and the second term corresponds to the cost of sharing (i.e., the bias of the shared OLS estimate due to misassigned agents). This second term features an interplay between the determinant and the exponential term in $n_i^e(t)$, which overall results in a $\mathcal{O}(\frac{4L}{\delta \rho_{\min}\sqrt{T}^{\gamma^2R^2 - 1}})$ regret. This leads to an overall behavior of the upper bound in $\mathcal{O}(\sqrt{T} \log (T/d))$, as depicted in Table 1. 4/ Thank you for pointing out the inconsistency in the appendix table of contents; we will revise the naming of the appendix sections accordingly. 
5/ To derive a lower bound on the cumulative pseudo-regret, we consider the optimal case of $N$ collaborative agents sharing the same bandit parameters, forming a single, known cluster. This leads to: $\frac{1}{N}\mu(\delta, T) \leq R_{i,0,T}^{\mathrm{cluster}}$. We will further compare the additional terms stemming from the clustering estimation in the upper bound of Theorem 6.4 to this lower bound to underline the cost of clustering. 6/ As defined in [1], pseudo-regret uses expected rewards, while regret uses realized rewards; in our paper we use Bubeck's definition. 7/ Thank you for pointing out the typo in Table 1: the proper constant term should be $d\sqrt{\frac{M}{N}}$ for CLUB and SCLUB, which is equivalent to $\frac{d}{\sqrt{\rho_{\min}N}}$ for balanced cluster sizes. As mentioned in 2/, the differences (the $\sqrt{d}$ factor and the $T/d$ inside the logarithm) are due to the use of a Mahalanobis metric combined with a refined analysis (i.e., applying the Elliptical Potential Lemma to the design matrix $\mathbf{A}_i$). [1] Bubeck, S., & Cesa-Bianchi, N. (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1), 1-122. Additional comments: If you have any additional questions about the paper and the theoretical and empirical analysis, we would be happy to provide further clarification. Please let us know if you need any additional information, clarification or modification that we could provide for you to improve your score. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. I decide to increase my score to 3.
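For concreteness regarding point 6/, the pseudo-regret in Bubeck & Cesa-Bianchi's sense compares expected rewards against the best fixed arm; a minimal linear-bandit sketch (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def pseudo_regret(theta, pulled_arms, arm_set):
    """Cumulative pseudo-regret over T rounds for a linear bandit.

    Uses expected rewards <theta, x> rather than realized (noisy) rewards,
    following Bubeck & Cesa-Bianchi (2012).
    theta: true bandit parameter; pulled_arms: arms chosen at rounds 1..T;
    arm_set: the (static) set of available arm vectors.
    """
    best = max(float(x @ theta) for x in arm_set)  # best fixed arm in expectation
    return sum(best - float(x @ theta) for x in pulled_arms)
```

Replacing the expected rewards with realized rewards in the sum above would give the standard (realized) regret instead.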
Summary: This paper considers the problem of multi-agent linear bandits in a collaborative setting where the aim is to maximise the cumulative reward across all agents. In this problem, each agent has the same (static) set of arms but a different parameter. The idea is that if the parameters of two agents are close enough together, then pooling their results to create a single parameter estimator is better than learning independently. Whereas previous works assume that the agents form clusters in which the intra-cluster parameter distances are within a certain threshold, this work has no such assumptions. They also show that in the clustering case their algorithm theoretically outperforms existing algorithms for that case. The algorithm is also relatively efficient. They also give the results of experiments, both on real and synthetic data, showing that their algorithm outperforms others. Claims And Evidence: I have not read the proofs so can't confirm Methods And Evaluation Criteria: Yes Theoretical Claims: I have not read the proofs Experimental Designs Or Analyses: I did not check Supplementary Material: I did not read the supplementary material. Relation To Broader Scientific Literature: There has been much work on collaborative multi-agent linear bandits. This paper gives the first algorithm that makes no assumptions on the distances between the parameters of the agents. Most other works assume a clustering assumption, and this work also shows that the same algorithm has a state-of-the-art result there as well (albeit only a slight improvement on the previous state of the art). These two facts combined make this work a significant contribution to the field. Essential References Not Discussed: Unknown Other Strengths And Weaknesses: I feel the result is strong. But also, as far as I am aware from interacting with ChatGPT, the original clustering algorithm of Claudio Gentile et al. treats all agents in a cluster the same once the cluster has been learnt. 
If significant time passes, the strategy of playing each agent in a given cluster the same is non-optimal, and it is better to learn them independently; the algorithm of this paper appears to eventually learn them independently, which seems to be another strength (although I guess other clustering algorithms have been proposed that do this). Other Comments Or Suggestions: I very much like the way the paper is presented - building up from the fundamental principles of two agents sharing observations and the single-agent stochastic linear bandit. However, in my subjective opinion, I believe it is best to place the full theoretical result as early as possible in the work (which would be before these sections). Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their very positive and insightful comments. Thank you very much for your kind appreciation of the paper's results. Thank you for the suggestion regarding the structure of the paper; we will revise it, as we agree it improves the overall logical flow. Additional comments: If you have any additional questions about the paper and the theoretical and empirical analysis, we would be happy to provide further clarification.
Summary: This paper studies the collaboration of multiple heterogeneous agents in addressing a linear bandit problem. It proposes an adaptive sample-sharing approach to dynamically determine whether agent pairs should cooperate (utilize each other's observations). Based on this technique, the paper proposes the BASS algorithm to address the multi-agent linear bandit problem with a tighter theoretical regret bound and better empirical performance. #### After rebuttal This is a nice piece of work. The reviewer would like to see the final version with all the comments addressed accordingly. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The reviewer did not check the proofs in detail, but went through the detailed explanations in the main text and believes the intuitions preceding the theoretical results make sense. Experimental Designs Or Analyses: Yes, the simulations look reasonable to the reviewer. Supplementary Material: No Relation To Broader Scientific Literature: The idea proposed in this paper may initiate a broader impact in the community of sequential decision-making and reinforcement learning. Essential References Not Discussed: No Other Strengths And Weaknesses: Overall, the reviewer believes the paper makes a fair contribution, and thinks the approach can be extended to broader areas, such as other heterogeneous multi-agent bandit settings, and MARL as well. ### Strengths: - New approach and better results: the adaptive approach discussed in Sections 3.2 and 5 is new to the reviewer, and the BASS algorithm based on this technique is also novel in the literature, with slightly tighter regret bounds. - Clear writing: the authors did a good job of presenting their work, starting with a toy two-agent example before extending to multiple agents, and with synchronous actions before generalizing to heterogeneous actions. 
The illustrative figures (Figures 1 and 2) also help the reviewer better appreciate the fundamental idea behind the approach. ### Weakness: - Some sections can be merged; for example, Section 4 should be part of the Preliminaries, and Section 7 should just be a remark after Theorem 6.4 instead of a full section. - Some presentation/writing needs further clarification; see "Other Comments" below. Other Comments Or Suggestions: In Section 5: - The presentation logic also confuses the reviewer a bit. In the end, the action sequences, i.e., the $\bm A_i$, are different across agents. Why does the section start from the general case in Eqs. (3) and (4), then go to the homogeneous-sequence case in Lemma 5.2, and then go back to the general case in Definition 5.1? Is there a challenge in extending from Lemma 5.2 to Definition 5.1? - In the equation of Definition 5.1, the reviewer is a little bit lost: what is $s$ in the definition, and where does $t$, as an input to $\Psi$, appear in the equation? In Section 6: - In Algorithm 1's Line 3, are the neighbors $\hat{\mathcal{N}}$ determined by the updated graph $\mathcal{G}_t$ in Line 9, or by the environment? Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive and constructive comments. Below, we address your concerns and provide detailed answers to your questions. 1/ Thank you for the remark; we will revise the structure of our paper, as we agree it improves the overall logical flow. 2/ There is indeed a challenge in analyzing the effects of asynchronous pulling. It leads to differently shaped ellipsoids and requires a rescaling term of the form $\mathbf{A}_i^{-1} \mathbf{A}_j$, which makes the proof much more difficult. Therefore, we restrict our analysis to the synchronous setting. In Definition 5.1, we use the general case because this definition is used in the practical implementation of the algorithm, which allows for empirical evaluation of our method in the asynchronous setting. 3/ Thank you for pointing out this typo: the correct definition considers the minimum w.r.t. $s$ and compares it to 0: $$ \Psi(i, j, t) = \mathbf{1}\left\{ \min_{s \in \, ]0, 1[} \; \frac{\gamma^2}{4} - \sum_{l=1}^d \mu_{tl}^2 \frac{s (1 - s)}{\beta_{i,t}^2 + s (\beta_{j,t}^2\eta_{tl} - \beta_{i,t}^2)} < 0 \right\} \enspace. $$ We will correct this and update the corresponding subscripts. 4/ The graph $\mathcal{G}_t$ is updated and stored on a central server, which first requests the design matrix and the regressand of each agent in the neighborhood of agent $i$, and then sends the collaborative OLS variables to agent $i$, which pulls an arm. Hence the neighborhood is determined by a central server that manages the sample-sharing logic within the agent network. Note that our work focuses on sample efficiency rather than communication; however, sample sharing in a general decentralized setting is a promising line of future work. Additional comments: If you have any additional questions about the paper and the theoretical and empirical analysis, we would be happy to provide further clarification. 
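The corrected indicator in 3/ can be checked numerically; below is a minimal sketch using a grid search over $s \in (0,1)$ (the helper name and the vectorized interface are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def separation_test(mu, eta, beta_i, beta_j, gamma, n_grid=999):
    """Numerical sketch of the corrected separation indicator Psi(i, j, t).

    mu:  length-d vector of per-coordinate terms (mu_{tl})
    eta: length-d vector of per-coordinate scaling terms (eta_{tl})
    beta_i, beta_j: confidence-ellipsoid radii of agents i and j
    gamma: separation threshold
    Returns True when the minimum over s in (0, 1) of the tested expression
    is negative, i.e. the agents are declared separated.
    """
    s = np.linspace(0.0, 1.0, n_grid + 2)[1:-1]          # open interval (0, 1)
    # denominator beta_i^2 + s * (beta_j^2 * eta_l - beta_i^2), shape (n_grid, d)
    denom = beta_i**2 + np.outer(s, beta_j**2 * eta - beta_i**2)
    terms = (mu**2) * (s * (1.0 - s))[:, None] / denom   # shape (n_grid, d)
    objective = gamma**2 / 4.0 - terms.sum(axis=1)
    return bool(objective.min() < 0)
```

A closed-form or scipy-based minimization over $s$ would be more precise; the grid search is only meant to make the structure of the test concrete.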
--- Rebuttal Comment 1.1: Comment: The reviewer thanks the author for the response. The reviewer will hold their positive evaluation on the paper.
Summary: This paper considers a multi-agent linear bandit problem in which each agent seeks to estimate its own linear parameter (so that they can minimize regret) while all agents select arms from a shared set. In this setting, agents are allowed to share reward observations with other agents to reduce the uncertainty of parameter estimates at the cost of increasing the bias of the estimates. To construct proper collaboration sets for agents, the authors leverage the idea of an overlapping-ellipsoid test, proposing an OFUL-based algorithm with a Mahalanobis distance-based agent similarity criterion. This paper provides theoretical guarantees for the cumulative pseudo-regret, the agent separation time (length of communication), and the expected number of misassigned agents. The proposed algorithm is also numerically evaluated on both synthetic and real-world datasets. Claims And Evidence: This paper claims three key contributions: no parameter assumption, an anisotropic approach, and analysis. However, as one of the main contributions, the algorithm description in Section 6.1 appears too brief. Could the authors provide a more detailed explanation to enhance clarity? Methods And Evaluation Criteria: This paper employs appropriate metrics, such as cumulative pseudo-regret and clustering scores, to evaluate performance. However, while the number of communications (i.e., agent separation time) appears to be considered in the theoretical analysis, it is not reported in the numerical study. Could the authors clarify this or provide the corresponding simulation results? Theoretical Claims: - In Theorem 6.2, the term $ \beta(\delta, {\bf A}_{T_s}) $ is not explicitly specified. - In Theorems 6.3 and 6.4, the terms $ \nu(\delta, T) $ are not explicitly specified. Could the authors clarify whether they take the same form as in Theorem 6.2? - I am seeking a better understanding of the general regret upper bound for all agents presented in Theorem 6.3. 
Could the authors summarize the assumption-free bounds in a manner similar to Table 1 for clarity? Providing these clarifications in the paper would enhance readability. Experimental Designs Or Analyses: Could the authors provide comments on the results presented in Table 3? In particular, could they offer an explanation or intuition for why almost all benchmark algorithms received zero scores? Additionally, could the authors discuss whether the approaches proposed in this paper could be incorporated into these algorithms? Supplementary Material: I am aware of the BASS algorithm implementation details and complexity analysis in the Appendix. Relation To Broader Scientific Literature: This work may bring attention to the potential of incorporating the Mahalanobis distance as a similarity measure in the multi-agent regret-minimization research community. Essential References Not Discussed: I am not aware of essential references that were not discussed. Other Strengths And Weaknesses: Strength: - This paper proposes an interesting algorithm for multi-agent linear bandits and provides extensive theoretical analysis. Weakness: - Some of the results are not presented clearly. Other Comments Or Suggestions: N/A Questions For Authors: - I am trying to better understand the claim that, unlike "most" existing approaches, this work does not rely on any assumptions about the structure of the bandit parameters. Do works that assume an agent cluster structure (e.g., Gentile et al. 2014, Ban & He 2021, Wang et al. 2023) impose assumptions on the parameters? Could the authors clarify this point? - I am puzzled by the discussion following Lemma 5.1, which states that since we consider synchronous pulling, we have ${\bf A}_{i,t} = {\bf A}_{j,t} = {\bf A}_t$. My understanding is that even if agents $i$ and $j$ are synchronous, if they pull different arms, then ${\bf A}_{i,t} \neq {\bf A}_{j,t}$. Could the authors clarify this point? 
Please let me know if I have misunderstood anything. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive and constructive comments. Below, we address your concerns and provide detailed answers to your questions. 1/ We will add a specific section in the appendix, featuring a flow chart, where we will expand the description of the algorithm's major steps: a/ the sample sharing (L 2-4), b/ the UCB pulling (L 5-6), c/ the local-variable updates (L 7), and d/ the graph $\mathcal{G}_t$ update (L 9). 2/ The focus of this paper is to analyze the effect of sample sharing on regret minimization in the general case, i.e., without any assumption on the agents' clustering structure. To that end, we ignore considerations such as communication cost. However, for completeness, we will add a section in the appendix presenting the empirical communication cost in the federated setting (using a central server), along with a theoretical upper bound in the case where the clustering structure is assumed (since in that setting, the number of agents per cluster can be quantified). 3/ The term $\beta(\delta, \mathbf{A}_t)$ is stated in Theorem 4.1 and corresponds to the confidence-ellipsoid radius of the OLS estimate. We will restate and clarify the definition of $\beta(\delta, \mathbf{A}_t)$ in Theorems 5.1, 6.1, 6.2, 6.3, and 6.4, and underline that the definition of $\nu$ remains unchanged across these theorems. 4/ A particularity of the sample-sharing problem is that it unfolds in two distinct phases: one where sharing is beneficial, and another where independent parameter estimation is preferable. Theorems 6.2 and 6.3 present the cumulative regret at the transition point between these phases, highlighting the gain from sharing before switching to an independent regime. Table 1 reports the asymptotic regret in the independent setting, after collaboration has ceased. Thus, the results of Theorems 6.2 and 6.3 cannot be presented in the same form. 
However, the upper bound can be summarized as a tradeoff between the acceleration of the first term (in $\mu$), compared to the independent regret ($\nu$), and the bias from data sharing, captured in the second term. 5/ The other algorithms perform poorly in terms of clustering. Rather than accurately recovering the cluster structure, other approaches focus on regret minimization, either by design or through regret-oriented hyperparameter optimization. Figure 8 in Appendix R.3 shows the progressive degradation of the clustering score. Notably, no existing work in the literature provides theoretical guarantees or empirical validation regarding the quality of the clustering. 6/ Beyond the analysis of sample sharing itself, the most significant aspect to revisit is the use of the Euclidean distance criterion (as in Gentile et al. (2014); Ban & He (2021); Wang et al. (2023)), which is effectively equivalent to the test proposed by Gilitschenski et al. (2012) with the ellipsoids $\mathcal{E}(\hat{\boldsymbol{\theta}}_i, \lambda_{\min}(\mathbf{A}_t)\mathbf{I}, \tilde{\beta})$ and $\mathcal{E}(\hat{\boldsymbol{\theta}}_j, \lambda_{\min}(\mathbf{A}_t)\mathbf{I}, \tilde{\beta})$, thereby ignoring most of the arm-pulling history. 7/ We will add a summary table in the appendix outlining the different clustering assumptions considered in the literature and mentioned in our related work: * no assumption: our paper, except in "Regret analysis with clustered parameters". * same bandit parameter within a cluster: our paper with Assumption 6.1. * same bandit parameter within a cluster and a minimum distance between clusters: Gentile et al. 2014, Li et al. 2016, Li & Zhang 2018, Wang et al. 2023, Yang et al. 2024. * close bandit parameters within a cluster and an overlapping-clusters definition: Ban & He 2021. Note that our primary intention is to derive a principled rule for sharing samples between agents without introducing assumptions on their distribution. 
We then extend our results to the clustered setting to provide a fair comparison to existing work. 8/ In the synchronous setting, all agents in a cluster select an arm based on the same shared history. Since UCB is a deterministic rule, two agents using the same history will select the same arm. Therefore, their design matrices will also be identical. We will add an additional section in the appendix to better explain this aspect of our setting. Additional comments: If you have any additional questions about the paper and the theoretical and empirical analysis, we would be happy to provide further clarification. Please let us know if you need any additional information, clarification or modification that we could provide for you to improve your score. --- Rebuttal Comment 1.1: Comment: I greatly appreciate the authors' clarification in response to my questions, as well as their plan to incorporate additional material into the paper. I continue to maintain my positive recommendation for this work.
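Point 8/ above (determinism of UCB under a shared history) can be made concrete with a minimal OFUL/LinUCB-style arm-selection sketch; the helper name and the constant confidence radius are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def ucb_arm(A, b, arms, beta):
    """Select the UCB-optimistic arm given a (possibly shared) history.

    A: regularized design matrix; b: regressand (sum of x_s * r_s);
    arms: candidate arm vectors; beta: confidence-ellipsoid radius.
    The rule is deterministic: identical (A, b) inputs yield identical arms,
    so synchronous agents fed the same shared history keep identical
    design matrices round after round.
    """
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b                       # OLS-style estimate
    scores = [float(x @ theta_hat) + beta * float(np.sqrt(x @ A_inv @ x))
              for x in arms]
    return int(np.argmax(scores))
```

Since `argmax` over the same scores always returns the same index, two agents with identical `(A, b)` necessarily pull the same arm, which is the inductive step behind $\mathbf{A}_{i,t} = \mathbf{A}_{j,t}$ in the synchronous setting.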
TIMING: Temporality-Aware Integrated Gradients for Time Series Explanation
Accept (spotlight poster)
Summary: The paper first reveals that Integrated Gradients (IG) effectively captures important points but has been underestimated in previous research due to traditional evaluation metrics' inability to account for the directional information of feature importance. The authors therefore propose novel evaluation metrics, CPD and CPP, which comprehensively assess attribution methods, and re-evaluate the baselines. Then, based on the effectiveness of directional attribution methods, they introduce Temporality-Aware Integrated Gradients (TIMING) for time series data. TIMING overcomes the limitations of traditional IG in handling complex temporal dependencies and out-of-distribution issues by integrating temporality-aware dynamic baselines into the attribution calculation, and extensive experiments demonstrate that TIMING outperforms the baselines while maintaining IG's efficiency and theoretical properties. Claims And Evidence: Yes, the evidence is clear. Methods And Evaluation Criteria: The proposed method and evaluation criteria are well-aligned with the challenges of time series XAI. CPD and CPP provide nuanced evaluation metrics, while TIMING offers a theoretically grounded, temporally adaptive solution based on IG. However, there are several points that need to be explained: * It is unclear how the random retention strategy can solve the OOD problem (line r213). I have reviewed Table 5, but it does not effectively indicate that the OOD issue has been alleviated. * What if the mask $M \in [0,1]$ were a soft mask for sampling? * It seems that all masks $M_{t,d}$ are independent and identically distributed for each feature. Should we consider temporal continuity in the explanation, e.g., a simple constraint such as $|M_{t,d} - M_{t+1,d}| < \theta$? * Is there any relationship between CPD and AUP, and how can both metrics be improved simultaneously? * Why does a large $n$ in Table 7 actually perform worse, and how should we select $n$ for a specific real dataset? 
Theoretical Claims: The method is based on Integrated Gradients, so there is no specific theory to check. Experimental Designs Or Analyses: The experiments are complete, and most of them were conducted on MIMIC-III. My question is why gradient-based methods (IG, GradSHAP, etc.) have lower CPD and CPP. Is it because their highlighted important features are too sparse, as in Figure 10? If only a small number of features are considered, is there a potential risk? Supplementary Material: The supplementary materials are complete. BTW, could you share the performance of the black-box classifiers without masking strategies? Relation To Broader Scientific Literature: The paper's contributions build upon and address limitations in the time series explainable AI (XAI) literature. Prior work (Tonekaboni et al., 2020; Crabbe & Van Der Schaar, 2021; Enguehard, 2023) focused on unsigned attributions, emphasizing feature importance magnitudes but ignoring directional effects (positive/negative contributions). This contrasts with signed attribution methods like Integrated Gradients (IG) (Sundararajan et al., 2017), widely used in non-temporal domains, which inherently encode directionality. However, prior evaluations of time series XAI methods (e.g., simultaneous masking of high-attribution points) failed to account for directional cancellation effects, leading to underestimation of IG's efficacy. The proposed CPD/CPP metrics address this gap by tracking incremental attribution impacts, resolving the cancellation issue and aligning with evaluation principles. Essential References Not Discussed: No. All the prior related references that seem relevant to the proposed evaluation metrics (CPD and CPP), the improvement of IG for time series data (TIMING), and the issues with existing evaluation methods and limitations of applying IG to time series data have been appropriately covered and cited within the paper. 
Other Strengths And Weaknesses: The paper makes valuable contributions by re-evaluating and enhancing gradient-based methods for time series XAI, challenging prior norms, and introducing principled metrics. While some aspects of TIMING's novelty may be similar to existing IG with ensemble learning, the work meaningfully advances the field's understanding of directional attribution in temporal data. Other Comments Or Suggestions: Do the definitions of CPD and CPP need to be distinguished in the formulas (lines r123 and r141)? Questions For Authors: Please see the methods and evaluation criteria. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your valuable feedback and have responded to each of your questions below. --- [Q1] Random retention strategy to mitigate OOD. [A1] Thank you for this comment. Integrated Gradients (IG) often interpolates through out-of-distribution (OOD) regions. Our proposed random retention strategy (RandIG) addresses two key issues concurrently: it reduces OOD occurrences by partially retaining original inputs and captures cases involving disrupted temporal dependencies, which are especially critical in time series contexts. Table 5 illustrates RandIG’s improved performance over IG in addressing these concerns. Furthermore, our TIMING method equipped with segment-based masking strategy preserves segment-level information, thus more effectively mitigating the OOD issue compared to RandIG, as shown in Table 5. --- [Q2] Soft Mask ($M \in [0,1]$) & Temporal Continuity. [A2] Our segment-based sampling approach generates binary masks with inherent temporal continuity. Unlike independent sampling (RandIG), our method preserves local temporal structure, which enhances overall performance by maintaining contextual relationships between contiguous segments. While soft masks ($M \in [0, 1]$) show promise, we currently face challenges in handling gradients for fractional mask values. Our current method relies on gradients from zero-masked positions, and integrating soft masks would require sophisticated theoretical and computational approaches, marking an important avenue for future research. --- [Q3] Relationship between CPD and AUP. [A3] CPD measures feature influence on model predictions, while AUP evaluates alignment with the data generation process. In XAI methods, explaining model behavior is the primary focus. Higher CPD doesn't always correlate with higher AUP. When models focus on different features, more accurate attribution might actually decrease AUP. 
This complexity reflects real-world challenges in model interpretation, where perfect alignment with data-generation processes is rare. TIMING demonstrates model faithfulness by prioritizing accurate model explanations over detecting specific data features, highlighting the nuanced nature of explanation methods. --- [Q4] Worse effect of a large n and selection criteria. [A4] Thank you for your question. In Table 7, while $(n, s_{min}, s_{max}) = (100, 10, 48)$ results in a lower score, this is simply a natural outcome of using a large $n$: when $n$ is large, many points remain at each step in our modified baseline $(1 - M) \odot x$, so some points receive few (or no) gradient calculations over the path. Consequently, attribution quality can appear lower. Regarding how to choose $n$ for real-world datasets, our hyperparameter tuning suggests the following when $n_{samples}=50$: - Multivariate: $(n, s_{min},s_{max})=(2D, 1, T)$ often performs well. - Univariate: ensure that $2n \times s_{max} \approx T$. Nonetheless, as discussed in our paper, TIMING is robust to different hyperparameter choices. The main principle is to avoid retaining too many points in the masking, which can undermine its effectiveness. --- [Q5] Reason for gradient-based methods' better performance - due to sparsity? [A5] Thank you for this insightful question. We believe these high CPD and low CPP scores are not simply due to sparsity. If that were the case, their advantage would likely drop when shifting from K=50 to K=100, since adding more features makes it harder to rely on only a few key points. Instead, in our MIMIC-III experiments, they actually maintain or improve performance in the K=100 setting across various architectures. Additionally, these methods stay robust under 20% masking across multiple datasets, suggesting they truly identify features that drive model predictions, rather than relying on sparsity alone. --- [Q6] Performance of black-box classifiers. [A6] Thank you for your question. 
We will include the requested results in the Appendix of our paper. We also note that the significantly improved performance on certain datasets (e.g., Boiler), compared to the results reported in TimeX++, appears to be largely due to the normalization step in our data preprocessing. - [Table for the single-layer GRU with 200 hidden units.](https://postimg.cc/kBD66nYm) - [Table for different black-box classifiers.](https://postimg.cc/0MzmgDMb) --- [Q7] The formulas should distinguish between the CPD and CPP definitions. [A7] We agree that distinguishing these two definitions would improve clarity. We propose the following notation: $$ \text{CPD}(x) = \sum_{k=0}^{K-1} \left\| F(x^{\uparrow}_k) - F(x^{\uparrow}_{k+1}) \right\|_1, \qquad \text{CPP}(x) = \sum_{k=0}^{K-1} \left\| F(x^{\downarrow}_k) - F(x^{\downarrow}_{k+1}) \right\|_1, $$ where $x_k^{\uparrow}$ and $x_k^{\downarrow}$ denote the input after the removal of the top-$k$ points with the highest and lowest absolute attributions, respectively.
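The CPD definition above can be sketched in code; this assumes "removal" is implemented as zero-masking (a convention chosen here for illustration) and the helper name is hypothetical:

```python
import numpy as np

def cpd(model, x, attributions, K):
    """Sketch of Cumulative Prediction Difference (CPD), per the definition above.

    model: callable F mapping an input array to a prediction (scalar or vector).
    attributions: signed attribution scores with the same shape as x.
    At step k, the point with the k-th largest |attribution| is removed
    (here: zero-masked) and |F(x_k) - F(x_{k+1})|_1 is accumulated.
    CPP would be identical but remove the lowest-|attribution| points first.
    """
    order = np.argsort(-np.abs(attributions), axis=None)[:K]  # top-K by |attr|
    x_k = x.astype(float).copy()
    total, prev = 0.0, np.asarray(model(x_k), dtype=float)
    for idx in order:
        x_k.flat[idx] = 0.0                    # remove next most-attributed point
        cur = np.asarray(model(x_k), dtype=float)
        total += float(np.sum(np.abs(prev - cur)))
        prev = cur
    return total
```

Because the points are removed one at a time and each incremental change is accumulated, opposite-signed contributions cannot cancel out the way they can under simultaneous masking.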
Summary: The paper addresses the explainable AI (XAI) problem for time series. It proposes the CPD and CPP evaluation metrics, discovers that traditional Integrated Gradients (IG) performs well, and then presents the TIMING method. Experiments show that TIMING outperforms baseline methods in multiple aspects. Claims And Evidence: Most claims are supported by evidence, but the claim that TIMING's segment-based masking is the best lacks sufficient proof. Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable. CPD, CPP, and TIMING's segment-based masking suit the needs of time series XAI, and the evaluation with multiple datasets and metrics is comprehensive. Theoretical Claims: The proofs of other theoretical properties assume certain conditions that may not hold in all practical situations, weakening the theoretical foundation of TIMING. Experimental Designs Or Analyses: The experimental designs are sound, comparing with 12 baseline methods and conducting ablation studies. Supplementary Material: The authors did not submit any supplementary materials. Relation To Broader Scientific Literature: The proposed metrics and methods build on existing research and improve time series XAI. Essential References Not Discussed: No essential references are missing, and the literature review in the paper is comprehensive. Other Strengths And Weaknesses: - Strengths: The evaluation metrics and the TIMING method are novel, the experiments are comprehensive, and the theoretical basis is solid. - Weaknesses: The generalization ability of TIMING needs to be enhanced, and the focus on directional attributions may be unnecessary in some tasks. Other Comments Or Suggestions: The paper should further explore the practical impact of TIMING's incompleteness and add real-world case studies. Questions For Authors: - Q1: Have you considered comparing with more complex masking strategies? What would the results be? 
- Q2: How do you suggest users interpret the differences between TIMING attributions and model outputs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your helpful feedback and have addressed each of your comments below. --- [Q1] Insufficient evidence for TIMING's segment-based masking superiority & comparison with complex masking. [A1] Our results across real-world (Table 2, 3) and synthetic (Table 4) datasets show that TIMING consistently outperforms IG, demonstrating the practical benefits of segment-based masking within the integration path. The ablation study in Table 5 further confirms that our "segment-based" masking approach, which considers temporal dependencies by simultaneously masking consecutive points, surpasses RandIG's random masking of individual points. While our segment-based random masking performs well, it could be enhanced through "data-dependent" strategies: - Replacing masked portions with counterfactuals to calculate attributions as points contribute in the "opposite direction." - Mining temporal motifs or shapelets and adjusting masking probabilities to preserve these patterns, analyzing how breaking typical patterns affects point contributions. These approaches offer promising directions to refine our segment-based masking and better capture temporal dependencies. --- [Q2] Proofs rely on assumptions not always valid in practice, weakening TIMING’s theoretical foundation. [A2] Thank you for the constructive feedback. TIMING relies on three assumptions: - (S1) Partial derivatives are bounded along all of the paths from $\tilde{x}(M)$ to $x$. - (S2) $M$ follows a probability distribution. - (S3) $Pr(M_{t,d}=1)>0$. We agree that assumption S1 might raise concerns in practical scenarios. However, it's commonly adopted in ML theory [1][2] and typically holds for modern neural networks. Moreover, these assumptions primarily serve to justify that multiple Integrated Gradients (IG) computations can efficiently share a single IG path. 
Therefore, any potential gradient divergence would impact IG in general, rather than specifically undermining TIMING’s theoretical foundation. Assumptions S2 and S3 are practically reasonable. If any assumptions remain problematic or unclear, please let us know. [1] Diederik P. Kingma and Jimmy Ba. "Adam: A Method for Stochastic Optimization." ICLR 2015. [2] Bernstein et al. "signSGD: Compressed Optimisation for Non-Convex Problems." ICML 2018. --- [Q3] Weakness: Limited generalization of TIMING; directional attribution not always necessary. [A3] TIMING inherits gradient-based feature attribution estimation from Integrated Gradients (IG). As a general XAI method, TIMING can be broadly applied to time series. It is important to note that TIMING is not specialized solely in direction estimation, but naturally provides both magnitude and direction. Our experiments support this. For fair comparison with non-directional state-of-the-art time series XAI methods, we "absolutized" our attributions to compare magnitude estimation. Even so, TIMING achieved superior SOTA performance. Thus, TIMING outperforms methods focusing only on magnitude and offers the additional benefit of directional information. --- [Q4] Explore practical impact of TIMING's incompleteness; add real-world examples. [A4] Thank you for your valuable feedback about the practical impact of TIMING's theoretical incompleteness and for suggesting additional real-world case studies. While completeness is a desirable theoretical property, our research demonstrates that TIMING's approach prioritizes accurately identifying the most influential features in time series data, which is typically more critical in practical applications. For instance, in healthcare scenarios, correctly identifying influential features is crucial for doctors and patients to trust the model, whereas completeness may be of lesser practical significance. --- [Q5] Interpretation of differences between TIMING attributions and model outputs. 
[A5] Thank you for this important question. In standard Integrated Gradients (IG), due to completeness, attributions directly represent individual contributions summing exactly to $f(x) - f(x')$. However, these standard IG attributions are less reliable for time-series data. In TIMING, without the condition $M_{t,d}=1$, the attribution reflects the expected model-output difference over various masked samples:
$$
\begin{align*}
\sum_{t,d}\mathbb{E}_{M \sim G(n, s_{min}, s_{max})}\left[\text{MaskingIG}_{t,d}(x,M)\right] &= \mathbb{E}_{M \sim G(n, s_{min}, s_{max})}\left[\sum_{t,d}\text{MaskingIG}_{t,d}(x,M)\right] \\ &= \mathbb{E}_{M \sim G(n, s_{min}, s_{max})}\left[f(x)-f((\mathbf{1}-M) \odot x)\right]
\end{align*}
$$
Thus, users can interpret TIMING’s attributions as the expected influence of each element over all baselines retaining specific segments. However, without conditioning on $M_{t,d}=1$, attribution values may become biased due to variations in masking frequency. By imposing $M_{t,d}=1$, TIMING ensures consistent evaluation of each feature’s contribution, resulting in more accurate and reliable interpretations for time-series data.
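The per-mask identity in [A5] — that for a fixed mask $M$ the attributions sum to $f(x) - f((\mathbf{1}-M)\odot x)$ — can be checked numerically. The sketch below uses a toy linear model (our assumption, chosen because IG has an exact closed form for linear functions), not the paper's classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)              # linear "model" f(x) = w @ x
f = lambda z: float(w @ z)

x = rng.normal(size=8)
M = rng.integers(0, 2, size=8)      # binary mask; masked points drop to 0
baseline = (1 - M) * x              # masked baseline (1 - M) ⊙ x

# For a linear model, IG along the straight path from `baseline` to `x`
# is exact: attribution_i = (x_i - baseline_i) * w_i.
attr = (x - baseline) * w
```

Here the attributions of the unmasked (retained) points are exactly zero, and the masked points' attributions sum to the prediction gap between `x` and its masked baseline, matching the completeness-per-mask identity above.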
Summary: This paper proposes an improved version of the integrated gradient for time series tasks. The paper also challenges previous metrics for evaluating time series explainability and accordingly proposes two improved metrics for better evaluation. Overall, I think it is a good paper. ## update after rebuttal Thanks for the response, which addressed my concern and helped me better understand the paper. So, I decided to raise my score to 4. Claims And Evidence: The authors claim that simultaneously masking out will raise issues in time series data, which makes sense to me. Their motivation is good, but I am still trying to understand Fig 1. The proposed method also supports their motivation, but if I understand correctly, the authors only use random segmentation masking to characterize the temporal dependencies? Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The theoretical analysis in the paper mostly reaffirms known properties of IG rather than introducing new insights. Experimental Designs Or Analyses: - The experiments followed some standard flow consistency with the previous works, which I believe is sound. - From my observation, IG is almost better than other baselines. Can authors comment on this? Supplementary Material: I briefly read all of them. Relation To Broader Scientific Literature: This paper made a bridge between IG and the time series community. Essential References Not Discussed: Can you also include the recent time series multiple-instance learning framework in the discussion, since I noticed they mention having some explainability? - Inherently interpretable time series classification via multiple instance learning, ICLR'24 - TimeMIL: Advancing multivariate time series classification via a time-aware multiple instance learning, ICML'24 Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: One concern is this method may rely on the segment window size and number of windows. 
I saw the authors conduct such an ablation, but is there possibly a better way to generate the window? Questions For Authors: - What is the y-axis in Fig.1? - Can you elaborate more about the ground truth? How is it formatted? Can it be visualized? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful insights and have addressed each of your comments individually below. --- [Q1] Clarification on Figure 1. (particularly its y-axis) [A1] Figure 1 has two components. The upper portion shows ground truth signed attributions on the y-axis. Two XAI methods estimate attributions, with features arranged by absolute importance. Method (a) has perfect feature selection by absolute value but alternating signs, while method (b) has consistent signs but inaccurate absolute values. The lower portion shows evaluation results after removing top-K features. Panel (c), the conventional prediction difference metric, suffers from cancellation effects, making method (b) appear superior due to sign alignment. Panel (d), our cumulative prediction difference, compares consecutive predictions to mitigate cancellation and enable accurate comparison. We appreciate your feedback and will clarify this in the final draft. Let us know if you have further questions. --- [Q2] Using only random segmentation to characterize the temporal dependencies? [A2] Thank you for your observation. Your understanding is correct. As stated in our paper, we began with a random retaining strategy to simulate scenarios where temporal dependencies are disrupted while keeping intermediate path values close to x to address out-of-distribution issues. Building on that, we introduced segment-based masking to better suit time series data. Retaining several segments helps preserve segment-level information, allowing the model to handle both preserved and disrupted temporal relationships. Although this approach is simple, we find it powerful for time series. Nonetheless, customizing the masking strategy for specific datasets or developing more complex methods remains a promising direction for future work. --- [Q3] IG outperforms other baselines, any comments? 
[A3] Your observation aligns with our central hypothesis: gradient-based methods like IG, DeepLIFT, and GradSHAP can identify both positively and negatively important points (similar to method (a) in Fig. 1), but conventional evaluation metrics underestimate their effectiveness due to cancellation effects when simultaneously removing top-K points. Our CPP and CPD metrics address this limitation by preventing cancellation during evaluation. Results confirm that gradient-based baselines outperform state-of-the-art time series XAI methods like TimeX++ and ContraLSP when evaluated with metrics that avoid cancellation. Also, TIMING improves upon state-of-the-art results by incorporating a random masking strategy during the IG path. This novel approach leads to practical performance enhancements, as demonstrated by our experimental results. --- [Q4] Comparison to recent time series multiple-instance learning framework. [A4] Thank you for pointing out multiple instance learning approaches for time series interpretability [1][2]. While they are indeed relevant, they rely on specific ante-hoc explainable architectures rather than providing model-agnostic post-hoc explanations. As a result, we did not include them as our baselines. However, we recognize their importance for time series interpretability and will incorporate them in the related works section. [1] Early et al., "Inherently Interpretable Time Series Classification via Multiple Instance Learning." ICLR 2024. [2] Chen et al. "TimeMIL: Advancing Multivariate Time Series Classification via a Time-aware Multiple Instance Learning." ICML 2024. --- [Q5] Concern about dependency on window hyperparameters & alternatives? [A5] Indeed, our method currently depends on segment window size and the number of windows. However, as demonstrated by our ablation study, TIMING is robust and maintains stable performance across a wide range of segment configurations, significantly mitigating sensitivity concerns. 
Additionally, as discussed in Reviewer ynvH-[A4], we only need to carefully consider cases involving the retention of too many points during masking. Nonetheless, we agree that adaptive window generation could offer further advantages. As provided in Reviewer sX8p-[A1], exploring such adaptive or data-driven window methods is a promising direction for future work. --- [Q6] Clarify the format and visualization of ground truth. [A6] The ground truth in our synthetic datasets consists of binary labels explicitly marking input points responsible for output generation. These labels are clearly defined during data construction, enabling direct visualization of ground truth saliency maps. However, as mentioned in our paper, it may differ from what the trained model actually considers important, even if the model achieves high accuracy. Since our approach exactly follows ContraLSP, please refer to their visualization and construction process detailed in Figures 12–13 and Appendix D.2 of that work. We will include additional detailed explanations and visualizations of ground truth construction in the appendix.
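For context on the segment-based random masking discussed in [A2], here is a minimal sketch of a segment mask generator in the spirit of the $G(n, s_{min}, s_{max})$ distribution mentioned elsewhere in this discussion. The exact sampler (overlap handling, mask-vs-retain convention) is our assumption, not the authors' code:

```python
import numpy as np

def random_segment_mask(T, n_segments, s_min, s_max, rng):
    """Length-T binary mask where n_segments random contiguous segments
    (each of length in [s_min, s_max]) are marked 1; segments may overlap."""
    mask = np.zeros(T, dtype=int)
    for _ in range(n_segments):
        length = int(rng.integers(s_min, s_max + 1))
        start = int(rng.integers(0, T - length + 1))
        mask[start:start + length] = 1
    return mask

rng = np.random.default_rng(0)
m = random_segment_mask(T=50, n_segments=3, s_min=4, s_max=8, rng=rng)
```

Masking contiguous segments, rather than independent points, is what lets the mask disrupt or preserve whole temporal patterns at once.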
Summary: The authors introduce Temporality-Aware Integrated Gradients which addresses the reliability issues of naive IG in the time series setting by applying a random retaining strategy to partially retain certain data points with a segment-based mask. The theoretical properties of this approach are explored and comprehensive evaluation is provided. Perhaps the more helpful contribution is the introduction of the cumulative prediction difference and cumulative prediction preservation metrics which provide more fair comparison between attribution methods for time series. These evaluations show that in real-world tasks like MIMIC-III the original interpretability methods like IG and GradSHAP can still perform quite well. This work highlights once again how difficult it is to accurately evaluate time series interpretability methods and continues to advance the field. Claims And Evidence: The authors put forward a number of well-substantiated claims C1. Current interpretability evaluation metrics are limited because of their focus on magnitude rather than direction of attribution or focusing on performance drop which shifts the explanatory focus to the data. This claim of limitation is a hard one to substantiate - and perhaps is best done in conjunction with evaluation of human-in-the-loop interpretability. However the measurement of sign-aligning bias is a reasonable way to show the misdirected attribution of current evaluations. C2. TIMING outperforms baseline methods on proposed CPD/CPP metrics across multiple datasets. This is well evaluated and the results are presented clearly in the paper C3. TIMING is computationally efficient and competitive with standard interpretability methods like GradSHAP/IG for its performance. This is validated with elapsed time on MIMIC-III though no theoretical analysis of time/memory complexity is completed. 
Methods And Evaluation Criteria: 1/ CPD and CPP metrics: These metrics address a real limitation in existing evaluation approaches by sequentially removing features rather than simultaneously, preventing the cancellation of positive and negative attributions. This approach aligns better with how models actually use features. 2/ TIMING algorithm: The method enhances IG by incorporating temporal dependencies through segment-based masking, which is appropriate for time series data where relationships between points matter. The randomization strategy mitigates out-of-distribution issues in the integration path. 3/ Evaluation approach: The authors evaluate on both synthetic (Switch-Feature, State) and real-world datasets (MIMIC-III, PAM, Boiler, etc.) across multiple domains, using both their proposed metrics and conventional metrics. The comparison against 12 baseline methods is comprehensive. It could be helpful to understand more about the selection criteria for the tasks, as some tasks from various sources have been left out (e.g., delayed spike from Leung et al., seqcomb/freqshapes from Liu et al. (2024b)), and the paper incorrectly attributes the state dataset to Liu et al. when it was actually developed by Tonekaboni et al. 4/ Ablation studies: The ablation study in Table 5 effectively shows the value of segment-based masking over point-wise masking. Theoretical Claims: TIMING maintains theoretical properties of IG around sensitivity and implementation invariance, though not completeness. These claims have been proven in the appendices. I reviewed the proofs and did not see any issues. Proposition 4.4 can be supported through the addition of an appropriate counterexample for additional thoroughness and seems an adequate tradeoff for gaining temporal awareness and the added performance benefit. Experimental Designs Or Analyses: Comparative evaluation showed TIMING performance against up to 12 other methods on the synthetic and real-world tasks. 
The authors compared CPD, CPP, accuracy, cross-entropy, sufficiency, and complexity as well as CPD under different substitution strategies. For synthetic datasets AUP/AUR was also considered. Isolated effects of segment-based masking versus random point masking were evaluated with ablation studies. In addition to the primary evaluations on GRU, some CNN and Transformer experiments were done to show generalization across model types. Hyperparameter sensitivity was also evaluated. Computational efficiency was analyzed in terms of raw time, although this analysis could be expanded on to evaluate the theoretical compute complexity of the algorithm. Qualitative review of explanations generated by TIMING was shown in the appendices. Supplementary Material: I reviewed the appendices, spending particular time on the algorithm design and proofs in Appendix B and C. Relation To Broader Scientific Literature: The paper positions its work within the broader research of model interpretability for time series. It compares modality-agnostic methods (FO, AFO, GradSHAP, DeepLIFT, LIME) and time-series approaches (FIT, WinIT, Dynamask, Extrmask, ContraLSP, TimeX++). Essential References Not Discussed: All the primary and essential works seem to be covered. Some adjacent work on applying multiple instance learning to time series interpretability and the theoretical implications of that work for evaluating interpretability methods could be added. Early et al. (2023) could be a good place to start for this. Additionally, engaging with counterfactual and human-centered interpretability methods could help expand the applications of this work. Other Strengths And Weaknesses: S1. The theoretical analysis connecting TIMING to IG is rigorous and backs up the empirical evidence provided. S2. The identification of sign cancellation in evaluation metrics is an important insight. S3. Experiments are completed comprehensively across diverse datasets. W1. 
Absence of comparison to other explainability paradigms beyond attribution methods. Other Comments Or Suggestions: N/A Questions For Authors: Q1. Did the authors consider whether other gradient-based methods suffer from similar problems to IG and if these temporality modifications could apply more broadly to that class of interpretability methods? Q2. How could this method be extended to the time series regression setting? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful feedback. Below, we address each comment individually. --- [Q1] C3. Theoretical analysis of time/memory complexity. [A1] As shown in Fig. 4, TIMING demonstrates high efficiency compared to baselines like LIME, FO, AFO, and modern time series XAI methods, which rely on expensive forward passes or mask optimization (Compare IG and TIMING in Figure 4). The algorithm's time complexity is $O(b + m)$, with $b$ representing forward/backward pass cost and $m$ for mask sampling. Parallelizing integration steps significantly enhances efficiency, with mask sampling cost remaining negligible compared to gradient computation. Memory complexity of $O(n_{samples} \times (T \times D + B))$ matches Integrated Gradients, ensuring TIMING maintains high efficiency while providing superior attribution quality for time series explanation tasks. --- [Q2] Selection criteria for the task. [A2] We based our task selection on established benchmark datasets from ContraLSP (the previous SOTA method). To expand to diverse real-world datasets, we also included datasets from Table 4 of TimeX++ (another SOTA time series XAI work). We prioritize practical utility over ground truth identification. Conventional metrics like AUP and AUR assume perfect feature learning, which is unrealistic, as models can predict correctly by focusing on different features. Our CPD and CPP metrics provide a more nuanced evaluation of attributions. Even on synthetic datasets, our method demonstrates superior performance, justifying our use of the two classification datasets from ContraLSP for synthetic data evaluation. --- [Q3] Attribution correction for the state dataset. (Liu et al. to Tonekaboni et al.) [A3] We initially cited ContraLSP (Liu et al.) for the state dataset since they utilized it in their experiments. 
However, as the reviewer suggests, the dataset was actually developed by Tonekaboni et al; it was further modified in Dynamask (Crabbe & Van Der Schaar), which was subsequently adopted by Extrmask, ContraLSP, and our paper. We will revise the citations to FIT (Tonekaboni et al.) and Dynamask (Crabbe & Van Der Schaar) in our final draft. --- [Q4] Strengthening Proposition 4.4 with a counterexample. [A4] Thank you for this valuable suggestion. As noted in Proposition 4.4, developing IG variants for time series indeed leads to sacrificing completeness. However, we argue this trade-off is justified in scenarios dependent on temporal relationships. For clarity, consider a scenario involving $x=(x_1,x_2,x_3)$ where $x_1<x_2<x_3$ and function $f(x)=(x_3 - x_2)I(x_3>x_2)+(x_2 - x_1)I(x_2>x_1)$. Standard IG returns zero attributions for $x_2$ due to its interpolation path structure. In contrast, TIMING explores multiple partial interpolation paths, identifying meaningful gradients for $x_2$ and thus reflecting temporal dynamics more accurately. We will integrate a refined counterexample into our final version accordingly. --- [Q5] Comparison to other explainability paradigms. [A5] Related to multiple instance learning, please refer to reviewer Mpsu-[A4]. Regarding your suggestions about counterfactual and human-centered methods, we agree these are valuable directions. That said, they extend beyond our current focus on time series–specific, model-agnostic XAI. We appreciate your feedback and may explore these broader approaches in future studies. --- [Q6] Extension of temporality modifications to other methods. [A6] Thank you for this insightful question. Our main focus was addressing issues of Integrated Gradients (IG) in time-series contexts, leading to the development of TIMING. As suggested, we additionally tested our masking strategy on other gradient-based methods by adjusting baselines and computing approximations of expectations. 
DeepLIFT’s CPD scores remain unchanged (only -0.001 at K=100), whereas GradSHAP exhibited slight improvements (+0.016 at K=50; +0.020 at K=100), likely due to its similarity to IG. We suspect that our segment-retaining approach is especially well-suited for IG, and other interpretability methods might require tailored solutions. Nonetheless, we believe these temporality modifications could be broadly beneficial and consider their extension an exciting direction for future work. --- [Q7] Extension to time series regression. [A7] TIMING naturally extends to regression settings by generating attributions for each time point in parallel. When the output is a vector, the method creates a comprehensive attribution map that measures each point's contribution to individual output elements. [Experiments on the MLO-Cn2 dataset](https://postimg.cc/McYK3435) [1] verify TIMING's superior CPD performance compared to existing baselines, confirming that our approach extends correctly to regression. [1] Jellen et al. "Effective Benchmarks for Optical Turbulence Modeling." arXiv preprint, 2024.
µnit Scaling: Simple and Scalable FP8 LLM Training
Accept (poster)
Summary: The paper introduces a new 8-bit training method for LLMs, by keeping all tensors close to unit variance. The method has fewer hyper-parameters than earlier methods, and is more straightforward to implement. It also enables more accurate hyperparameter transfer from small models to large ones. Claims And Evidence: The main claims of the paper are well supported: - Claim: Proposed method achieves comparable accuracy to bf16 training. Authors compare against equivalent models trained with bf16 at several scales (up to 13b) and show training loss is within ~1%. Only thing to make this claim stronger would be to train for more steps. Training steps are a bit short, but this is understandable and probably due to compute limitations. - Claim: Proposed method is more practical than previous fp8 training methods The method has only 3 hyperparameters which compare favorably to comparable methods, and these parameters also transfer better from small to large models, making the method more practical. It also looks easier to get stable training with this method. I'm convinced. Methods And Evaluation Criteria: Metrics are training loss and standard LLM benchmarks, which is the right set of metrics here. Training steps could be longer, if more compute is available, to make a stronger case. Theoretical Claims: Theoretical claims look correct. Experimental Designs Or Analyses: Experiment design is mostly straightforward as it should be. The setup and presentation for the hyperparameter transfer results are rigorous and convincing. Supplementary Material: I did not. Relation To Broader Scientific Literature: This is the part I'm most unsure about, as I haven't followed the recent literature very closely. Most of the methods presented (e.g. post-layer norm, unit variance initialization and modifications) have been used in prior work, and the authors cite those works where appropriate. 
It is hard for me to judge which changes introduced here result in the improvements over prior work like \mu P. For instance, Table 1 is informative in showing changes w.r.t. the standard transformer; a similar table would be very useful to compare against \mu P and SP as well. For example, what modifications induce the stability observed over SP? Which method allows reducing hyperparameters over \mu P? Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: What modifications induce the stability observed over SP? Which method allows reducing hyperparameters over \mu P? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback on our work and are glad that they find the method’s theoretical basis and empirical results rigorous and compelling. ## Training duration > Only thing to make this claim stronger would be to train for more steps. Training steps are a bit short, but this is understandable and probably due to compute limitations. We agree with the reviewer that training for a longer duration would have been nice to make our claims even stronger. As the reviewer has anticipated, we had only a limited amount of compute and had to design our 1B, 3B, 7B, and 13B training runs accordingly. We are excited to see future work applying µS at even larger scales. ## Changes vs. SP and µP and their roles > For instance table 1 is informative to show changes wrt standard transformer, a similar table would be very useful to compare against \mu P and SP as well. We agree that denoting differences between µS and other training methods is useful. Table 1 in the paper shows changes that µS makes relative to the standard transformer (i.e., standard parametrization, or SP), which is the main comparison we aim to make. As a supplement to Table 1, we have prepared another table that enumerates differences between µS and other training methods (see table: https://imgur.com/a/f4dglps), which we plan to include in the appendix of the final paper. > What modifications induce the stability observed over SP? Numerical stability is a result of variance preservation in µS, making tensors better representable with low-precision formats. The subtle point we would like to emphasize is that variance preservation is an AND function; many components of µS all work in conjunction to achieve it. Unless the *entire* residual block preserves variance, the model doesn’t preserve variance. 
The entire model needs to be variance preserving in order to achieve better numerical stability over SP, and this is a conjunction of several modifications: linear layer scaling factors, post-branch-norm, fixed residual modification, and unit variance initialization. > Which method allows to reduce hyperparameters over \mu P? We need to tune fewer hyperparameters with µnit Scaling than µP because: - **Improved stability leads to more simplicity with µS** - As discussed earlier, improved stability is the result of several components of µS together. Because training is more stable, we do not need to tune more hyperparameters to achieve reasonable performance with µS. Of course, if we added more hyperparameters and multipliers to tune it is reasonable to expect marginally better performance, but this is simply not necessary with the µS approach. - **Design choices eliminate extraneous hyperparameters** - Enforcing near-unit variance in µS models by design eliminates hyperparameters related to initialization and individual layers. The examples below contrast hyperparameters tuned with µP (see Section F.4 of Tensor Programs V) with µS. - µP tunes the weight initialization scale. µS initializes weights from $\mathcal{N}(0,1)$ by design. - µP tunes the attention temperature. µS maintains queries and keys with near-unit variance by design and does not require this. - µP tunes the embedding multiplier. µS keeps activations near-unit variance by design and avoids this. As a result, **training performant LLMs with µS only requires three hyperparameters**: learning rate, weight decay, and the residual coefficient.
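The interplay between unit-variance initialization and the static $1/\sqrt{\text{fan-in}}$ linear layer scale can be illustrated with a quick numerical check. This is a toy sketch of the variance-preservation idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, batch = 1024, 1024, 4096

W = rng.normal(size=(fan_out, fan_in))   # weights initialized from N(0, 1)
x = rng.normal(size=(fan_in, batch))     # unit-variance input activations

y = (W @ x) / np.sqrt(fan_in)            # static 1/sqrt(fan_in) output scale

# Each output is a sum of fan_in unit-variance products, so the static
# scale restores near-unit variance regardless of width.
```

Because the scale is fixed at construction time rather than learned or dynamically computed, it adds no tuning burden and keeps tensors representable in FP8.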
Summary: This paper presents µnit Scaling (µS), a straightforward and scalable FP8 training method. It addresses the root causes of numerical instability in conventional transformer blocks and proposes effective solutions to mitigate these issues. The µS approach incorporates Square-root Softmax Attention and Post-Branch-LayerNorm within transformer blocks, along with zero-shot hyperparameter transfer, enabling hyperparameters tuned on smaller networks to be applied directly to larger models. µS demonstrates stable training for LLMs up to 13B parameters in FP8 precision without dynamic scaling and achieves a 25-33% throughput increase compared to NVIDIA’s Transformer Engine. Claims And Evidence: Most of the key claims in the paper are well-supported by experimental evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the core problem of FP8 training for LLMs. The design effectively addresses stability issues, improves efficiency, and reduces tuning costs. However, extending experiments to diverse architectures, tasks, and hardware platforms would provide stronger evidence of µnit Scaling’s robustness and versatility. Theoretical Claims: The paper introduces a modification to the conventional softmax attention mechanism by applying a square root to the softmax scores. The provided proof establishing the relationship between sequence length and variance is sound, and the advantages of using Square-root softmax attention are effectively demonstrated through both theoretical analysis and experimental results. Experimental Designs Or Analyses: The experimental design effectively demonstrates µnit Scaling’s strengths in FP8 training stability, throughput, and hyperparameter transfer. However, limited benchmark diversity restricts insights into µS’s broader applicability. Supplementary Material: I reviewed the appendix of the paper. 
The appendix described the detailed algorithm settings, activation function choices and activation outliers. Relation To Broader Scientific Literature: Square-root Softmax Attention: Conventional softmax attention mechanisms in Transformers are known to amplify large activations, which can destabilize FP8 precision due to overflows in matrix multiplication. The Square-root Softmax Attention in this paper offers a lightweight yet effective stability solution for FP8. Post-Branch-LayerNorm: Conventional Transformer architectures place LayerNorm before the residual connection. µS adapts post-branch LayerNorm ideas specifically for FP8 precision, ensuring better variance control in deep LLMs. Residual Modification Schemes: Research on Deep Residual Networks has demonstrated that scaling residual branches can mitigate gradient explosion or vanishing gradients. This paper introduces fixed residual modification that stabilizes variance across deep Transformer layers. Essential References Not Discussed: Most of the related works are cited and discussed in the paper. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written with minimal typos. 2. Figures in the paper are well designed and easy to understand. 3. The introduction and demonstration of the method are sufficient. Weaknesses: 1. The layout of the article increases the difficulty of reading. 2. Lack of experimental evidence demonstrating the necessity of different components of the method. Other Comments Or Suggestions: Summarizing the symbols and variable names used in the paper in the appendix can improve the readability of the article. Questions For Authors: 1. Are there ablation experiments on the role of different components of µS? 2. Can µS be effective in models from other modalities or for different types of tasks? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
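The variance argument behind Square-root Softmax Attention can be sketched numerically: with ordinary softmax weights $a$, the output $\sum_i a_i v_i$ over i.i.d. unit-variance values has variance $\sum_i a_i^2 \le 1$, which shrinks as more positions are attended, whereas $\sqrt{a_i}$ weights satisfy $\sum_i (\sqrt{a_i})^2 = \sum_i a_i = 1$ and so preserve unit variance. The i.i.d.-values setup is our simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def attn_output_var(weight_fn, T, trials=20000):
    """Empirical variance of sum_i w_i * v_i for i.i.d. unit-variance
    values v and attention weights derived from random scores."""
    scores = rng.normal(size=T)
    a = np.exp(scores) / np.exp(scores).sum()   # softmax over T positions
    w = weight_fn(a)
    v = rng.normal(size=(trials, T))
    return (v @ w).var()

T = 256
var_softmax = attn_output_var(lambda a: a, T)   # collapses toward 0 as T grows
var_sqrt = attn_output_var(np.sqrt, T)          # stays near 1 for any T
```

This is why the sequence-position-dependent variance decay cannot be corrected by any single per-tensor scale, while the square-root modification fixes it pointwise.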
Rebuttal 1:

Rebuttal: We appreciate the reviewer's comments on our work, and are glad that the ideas we presented are clear.

## Ablating components of µS

> Are there ablation experiments on the role of different components of µS?

Most interventions in µS are **uniquely determined by simple math and the design goals** of:
1. Enforcing near-unit variance in all tensors
2. With negligible overhead
3. While enabling hparam transfer

In the few cases where there are degrees of freedom, we either adhere to common best practices or perform ablations. We've added a new section at the start of the Appendix that spells out the origins of each component, and we are modifying the text to make this clearer. An abbreviated version of this section is as follows:

- **Unit variance init** - Necessary to ensure that weight tensors have unit variance.
- **Linear layer static scales** - Since we don't scale down the weights by $\frac{1}{\sqrt{\text{fan in}}}$, we have to scale linear layer outputs by this factor to maintain unit variance outputs.
- **Learning rate scaling** - Based on µP, this is the unique way of scaling LR with model width that enables hparam transfer with the above weight initialization and static scaling factors.
- **Weight decay scaling** - We adhere to the best practice of using decoupled weight decay from prior work, since coupled weight decay complicates hparam transfer. This is noted so our µS "recipe" is fully self-contained.
- **FP8 hidden layers** - Standard practices for FP8 training, again included for completeness.
- **Post-Branch-Norm** - As shown in Section 2.1, masked self-attention has diminishing variance with sequence position. This cannot be corrected with per-tensor scaling factors. Norm placement *can* correct this, so this degree of freedom has ablation results in Fig. 4b.
- **"Fixed" residual modification** - Maintaining unit variance in the residual stream requires a weighted sum, but how to weight the branch and stream is a degree of freedom. We use the same weights across all residuals based on the results in Fig. 5.

Omitting one or more of these modifications would prevent unit variance and/or hparam transfer, as shown by the mathematical analysis. While it may be interesting to explore scenarios where training doesn't completely fail without some components, these results would not be useful enough to warrant inclusion–especially since these components are easy to implement and have minimal overhead.

Another subtlety is that variance preservation and hparam transfer are AND functions. Unless the *entire* residual block preserves variance, the model will not. Similarly, weight init *and* static scales *and* learning rate must all work in conjunction to get hparam transfer. **The above modifications are not independent, additive tweaks–they are a minimal set to achieve the desired properties.**

While the existing results already provide ample justification for the components of µS, to be completely sure that we address the reviewers' comments, **we performed further experiments ablating norm placement and residual modification choices.**

- Norm placement: Pre- vs. Post-branch norm with µS models (see https://imgur.com/a/qhq9CTP)
  - Post-branch-norm converges better than pre-norm with µS in FP8. Supports the theoretical motivations for post-branch-norm from Section 2.1 (i.e., maintaining residual stream variance).
  - Supplements our existing norm ablation in Fig. 4b.
- Residual modification: Fixed, running-mean, and standard residual modification (see https://imgur.com/a/eByqlYO)
  - µS models that use standard residuals (i.e., no coefficients) do not properly converge. Supports the theoretical motivation for fixed residual modification from Section 2.2 (i.e., maintaining stream variance).
  - Supplements our existing residuals ablation in Fig. 5.

**In the final paper, we will compile all of these ablation results into a single subsection to clarify the contribution of the µS components. We hope this addresses the reviewers' feedback about ablation studies.**

## Other modalities, architectures, and tasks

> Extending experiments to diverse architectures, tasks, and hardware platforms would provide stronger evidence of µnit Scaling's robustness and versatility.

> Limited benchmark diversity restricts insights into µS's broader applicability.

> Can µS be effective in models from other modalities or for different types of tasks?

We agree that applying our ideas to more modalities and architectures would be interesting. However, we believe improving LLM training is already enough scope to have a large, real-world impact. Further, nothing in µS is specific to H100s – µS is useful whenever the properties listed in Fig. 1 are desirable.

## Article layout improvements

> The layout of the article increases the difficulty of reading.

If the reviewer could please elaborate on what we can improve, we would be glad to address it.
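To make the first two components listed in the rebuttal above concrete (unit variance init plus the $\frac{1}{\sqrt{\text{fan in}}}$ static output scale), here is a minimal NumPy sketch. It is an editorial reconstruction of the idea, not code from the paper.

```python
# Sketch (not the paper's code): unit-variance weight init keeps the weight
# tensor itself FP8-friendly, and a static 1/sqrt(fan_in) output scale restores
# unit variance at the layer output for unit-variance inputs.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, batch = 1024, 512, 2048

W = rng.standard_normal((fan_out, fan_in))   # unit-variance init (no 1/sqrt(fan_in) baked in)
x = rng.standard_normal((batch, fan_in))     # unit-variance activations

y = (x @ W.T) / np.sqrt(fan_in)              # static scale restores unit variance

print(f"weight var: {W.var():.3f}")          # ~1
print(f"output var: {y.var():.3f}")          # ~1
```

Without the static scale, the output variance would be roughly `fan_in`, which is exactly the kind of growth that pushes tensors out of FP8's representable range.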
Summary: This paper introduces µnit Scaling (µS), a method for efficient FP8 training of large language models without requiring dynamic scaling factors or extensive hyperparameter tuning. µS builds on Unit Scaling to maintain unit variance in weights, activations, and gradients, ensuring stable low-precision training. It enables hyperparameter transfer across model sizes and eliminates the need for mixed-precision layers, allowing all hidden linear layers to compute in FP8. The method achieves training speedups of up to 33% while maintaining quality comparable to higher-precision baselines.

Claims And Evidence: The claims made in this submission are generally supported by Fig. 2, Fig. 7, and Table 5.

Methods And Evaluation Criteria: The evaluation criteria make sense for the problem and are aligned with previous papers.

Theoretical Claims: Did not completely check the correctness of the proofs.

Experimental Designs Or Analyses: The proposed µnit Scaling (µS) combines µP and Unit Scaling. However, the main results did not include these closely related baselines: µP, u-µP, and Unit Scaling.

Supplementary Material: No

Relation To Broader Scientific Literature: https://arxiv.org/pdf/2407.17465

Essential References Not Discussed: None

Other Strengths And Weaknesses:
Strengths:
(1) It is interesting to see the theoretical analysis of the attention output variance.
(2) This paper is well-organized and easy to follow.
Weaknesses:
(1) The novelty is limited. The proposed µnit Scaling (µS) scheme combines previously published µP and Unit Scaling.
(2) Lack of baselines. µP, u-µP, and Unit Scaling are strongly related techniques that should be included in the main results like Table 5 and Figure 7.
(3) Lack of ablation studies. The proposed µnit Scaling (µS) scheme contains several modifications as shown in Table 1. However, there are no ablations to show the contribution of each of these modifications.

Other Comments Or Suggestions: None

Questions For Authors:
(1) Could you include µP, u-µP, and Unit Scaling baselines in your main results?
(2) Could you include ablation studies for the proposed µnit Scaling (µS)?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We greatly appreciate the reviewer's helpful feedback and questions.

## Novelty of µS

> The novelty is limited. The proposed µnit Scaling (µS) scheme combines previously published µP and Unit Scaling.

While µS does build on ideas from both of these methods, µS involves modifications that are not present in either and obtains desirable properties that neither of these methods achieve (cf. Figure 1: https://imgur.com/a/v1oA2cH). In particular, µS achieves hparam transfer with fewer extra hparams than µP and has better FP8 numerics at scale than Unit Scaling. We elaborate on specific differences below.

### µP

µP enables hparam transfer from smaller to larger models. **However, µP suffers from numerical instabilities even with 16-bit formats**. As described in Section 7.4 of their paper, numerical issues caused frequent divergences that required them to train their GPT-3 in FP32. In contrast, µS provides hparam transfer even in FP8. µP also requires tuning many more hparams than µS (6 vs. 3, see Table 3), as well as changes such as $\frac{1}{d}$ attention and zero-initialization for some layers, which µS does not impose. **In summary, µS is a simpler solution than µP for hparam transfer on top of enabling FP8 training.**

### Unit Scaling

Unit Scaling facilitates low precision training by maintaining near-unit variance of weights, activations, and gradients through unit variance weight initialization and per-operation static scaling factors. However, as we show in Section 2.1, the masked self-attention operation central to LLMs has diminishing output variance over sequence position, **which Unit Scaling doesn't address**. Our work is the first to identify this issue and uses post-branch-norm to address it. We demonstrate LLM training up to 13B parameters in FP8, while the largest model trained in the Unit Scaling work was 340 million params (BERT Large). µS also enables hparam transfer, which Unit Scaling does not. **In summary, µS fixes key numerical issues that Unit Scaling does not, enables hparam transfer, and scales FP8 training to much larger model sizes**.

### u-µP

Concurrent work on u-µP also builds on both µP and Unit Scaling. However, **unlike µS, u-µP cannot keep all hidden layers in FP8**, requiring "critical matmuls" in transformer blocks to stay in BF16. u-µP also requires tuning many more hparams than µS (7 vs. 3, see Table 3). Key architectural modifications such as post-branch-norm and fixed residual coefficients permit µS to scale better. **In summary, unlike u-µP, µS provides full FP8 training for large LLMs and does so with fewer hparams.**

## Ablating components of µS

> Lack of Ablation studies.

Please refer to the "Ablating components of µS" section of our response to Reviewer NZBg.

## Additional baselines

> Lack of Baselines. µP, u-µP, and Unit Scaling are strongly related techniques that should be included in the main results

We have attempted comparison to Unit Scaling (US), but on models 7B and larger, US models were unable to converge in FP8 (see this representative figure at 7B model size: https://imgur.com/a/lJAMKmU). Further, our comparison of pre-norm vs. post-norm in FP8 showed that pre-norm (which US models use) had worse convergence (see here: https://imgur.com/a/qhq9CTP). In light of these findings, and because SP was more important to compare with, we allocated resources towards SP baselines at large scales instead of US.

We did not compare to a µP baseline because of the difficulty of training µP models in 16-bit formats, let alone with FP8 (see "µP" subsection above). Being forced to train in FP32 means that 1) we can't obtain an apples-to-apples comparison, and 2) collecting results for this baseline would be extremely expensive.

We did not compare to a u-µP baseline since this work was done concurrently with our own. Given more time/compute resources, we agree that this comparison would be interesting, but unfortunately it is not feasible to complete. Similar to our work, the u-µP work also only uses SP as a baseline.

While assessing these additional baselines could be valuable, we note that **our *demonstrated* results are stronger than any existing method's *claimed* results**. Even if we were to tune the hparams of these methods and achieve good hparam transfer and FP8 training quality, these methods would not surpass our µS results. This is because the focus of all methods, including ours, is on *qualitative* rather than quantitative properties–existing methods cannot fully preserve BF16 accuracy *even more* or keep the optimal hparams *more* unchanged with width. µS already matches BF16 quality with FP8 and achieves near-perfect hparam stability, leaving room for only tiny improvements on these axes. Such small improvements would not outweigh our method's **reduced hyperparameter count**, **faster training**, and **alignment between training and inference precisions**.
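The residual-stream variance issue behind the post-branch-norm and fixed-residual discussion above can be demonstrated with a toy simulation. This is an editorial sketch under the simplifying assumption that each block's branch emits independent unit-variance outputs; it is not the paper's exact scheme.

```python
# Toy demonstration: a plain residual update x <- x + branch grows the stream's
# variance by ~1 per layer, while a weighted sum x <- a*x + b*branch with
# a^2 + b^2 = 1 keeps it near 1 (assuming independent unit-variance branches).
import numpy as np

rng = np.random.default_rng(0)
width, depth = 4096, 24
a = b = 1.0 / np.sqrt(2.0)                 # one fixed choice with a^2 + b^2 = 1

x_plain = rng.standard_normal(width)
x_fixed = rng.standard_normal(width)
for _ in range(depth):
    branch = rng.standard_normal(width)    # stand-in for a unit-variance block output
    x_plain = x_plain + branch             # variance grows roughly linearly with depth
    x_fixed = a * x_fixed + b * branch     # variance stays near 1

print(f"plain residual var after {depth} layers: {x_plain.var():.1f}")  # ~depth + 1
print(f"fixed residual var after {depth} layers: {x_fixed.var():.2f}")  # ~1
```

Variance that grows with depth is benign in FP32 but quickly exhausts FP8's dynamic range, which is why some form of stream-variance control matters for full-FP8 training.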
An efficient implementation for solving the all pairs minimax path problem in an undirected dense graph
Reject
Summary: This paper considers the all pairs minimum bottleneck edge problem. For this problem, given a weighted undirected graph $ G $ with weights $ w $, the goal is to compute:
$$ d_{bot}(s, t) = \min_{p\in \mathcal{P}(s, t)} \max_{e\in p} w(e) $$
for all pairs $ (s, t) \in V(G) \times V(G) $, where $ \mathcal{P}(s, t) $ is the set of all paths from $ s $ to $ t $.

The "contribution" of this paper is virtually nonexistent. It merely re-implements an algorithm explicitly detailed by Liu (2023), with absolutely nothing meaningful and no new insights! The implementation itself is trivial, so much so that it fits entirely within a figure and could easily be assigned as an exercise to a first-year undergraduate student.

Even worse, the provided implementation is not only simplistic but also inefficient! For a paper titled "An efficient implementation..." this is just unacceptable. For example, to initialize a ``max_weight`` variable, the author does ``import sys`` then ``maxW = sys.maxsize`` instead of ``maxW = float('inf')`` or ``maxW = np.inf``!! I invite the AC or any of the other reviewers to look at the bad quality of the codebase in the supplementary section.

To prove this point even further, I wrote a new implementation of **another algorithm** (that the authors don't even consider or mention in the paper). I implemented the new algorithm in **20 minutes**, and it runs **40 times faster** than the authors' implementation on their **largest dataset**, reducing the runtime from **28 seconds** to under **1 second** on my machine. I've attached my implementation below for reference.

Frankly, this is one of the worst papers I have read in the past five years. It should have never been written, let alone submitted. Its premise (that there exist "no efficient implementations" for this problem) is fundamentally flawed, and the paper is so full of incorrect claims, unsubstantiated evidence, and a terrible experimental setup that it leaves nothing of value to salvage.
**I never write reviews this harsh**, but the sheer lack of merit in this work makes it impossible to hold back. If this paper is accepted to ICML, I will seriously reconsider reviewing for the conference in the future.

Claims And Evidence: The following claim, which serves as the motivation for the entire paper, is absolutely absurd:

> "There are several theoretical outcomes which claim the APPD matrix can be solved accurately in \( O(n^2) \) (Sibson, 1973; Demaine et al., 2009; 2014; Alon & Schieber, 2024). However, there is no code implementation of these algorithms, which implies they are impractical."

The notion that the absence of an implementation automatically renders an algorithm impractical is not just flawed, it is *completely unscientific*. Theoretical work is not obligated to provide implementations, and the lack of one does not imply impracticality. This is a deeply misguided and intellectually lazy argument. To directly refute this claim, I have implemented an $O(n^2)$-time algorithm myself, and it runs **40 times faster** than the authors' implementation.

Methods And Evaluation Criteria: The authors choose arbitrary benchmark datasets without providing any justification for their selection. There is no explanation as to why these particular datasets were chosen. Also, every single dataset consists solely of sets of Euclidean points, which makes absolutely no sense in the context of the problem. Why restrict the evaluation to only complete graphs when real-world graphs are rarely complete? A proper evaluation should include normal graphs with varying structures, rather than artificially constrained cases that fail to reflect practical applications.

Theoretical Claims: There are literally no theoretical claims in the paper.

Experimental Designs Or Analyses: The experiments are **NOT** sound. They compare implementations in C++ with Python, which makes no sense whatsoever.
Supplementary Material: Yes, I looked at the entire code base, and even implemented a better algorithm than the authors' implementation.

Relation To Broader Scientific Literature: There is zero contribution of this paper to the scientific literature.

Essential References Not Discussed: The author has no idea about the relevant literature. For example, the MST reduction easily gives an $O(n^2)$ time algorithm that is "practical", and an even simpler $O(n^2 \log n)$ time algorithm using binary lifting would be faster for smaller $n$ values.

Other Strengths And Weaknesses: There are quite literally no new contributions in this paper, and it should've never been written.

Other Comments Or Suggestions: Here is my implementation, which significantly improves on your result. You call it using ``ultra_fast_wide(X)``:

```python
import numpy as np
import numba
from sklearn.metrics.pairwise import pairwise_distances


@numba.njit(cache=True, fastmath=True)
def prim_mst(D):
    """
    Optimized Prim's MST using Numba JIT for dense graphs.
    Returns edges as list of (u, v, weight) tuples.
    """
    n = D.shape[0]
    in_mst = np.zeros(n, dtype=np.bool_)
    parent = np.full(n, -1, dtype=np.int64)
    key = np.full(n, np.inf, dtype=D.dtype)
    key[0] = 0.0
    for _ in range(n):
        # Find minimum key vertex not in MST
        u = -1
        min_val = np.inf
        for i in range(n):
            if not in_mst[i] and key[i] < min_val:
                min_val = key[i]
                u = i
        if u == -1:
            break
        in_mst[u] = True
        # Update neighbors
        for v in range(n):
            if not in_mst[v] and D[u, v] < key[v]:
                key[v] = D[u, v]
                parent[v] = u
    # Build edge list
    edges = []
    for v in range(1, n):
        u = parent[v]
        edges.append((u, v, D[u, v]))
    return edges


def build_csr_adjacency(n, mst_edges):
    """Convert MST to compressed sparse row (CSR) format"""
    edge_counts = np.zeros(n, dtype=np.int32)
    for u, v, _ in mst_edges:
        edge_counts[u] += 1
        edge_counts[v] += 1
    ptr = np.zeros(n + 1, dtype=np.int32)
    ptr[1:] = np.cumsum(edge_counts)
    adj_edges = np.empty(ptr[-1], dtype=np.int32)
    adj_weights = np.empty(ptr[-1], dtype=np.float64)
    positions = np.zeros(n, dtype=np.int32)
    for u, v, w in mst_edges:
        for _ in range(2):  # Add both directions
            idx = ptr[u] + positions[u]
            adj_edges[idx] = v
            adj_weights[idx] = w
            positions[u] += 1
            u, v = v, u  # Swap for reverse direction
    return ptr, adj_edges, adj_weights


@numba.njit(parallel=True, cache=True, fastmath=True)
def compute_bottleneck_matrix(n, ptr, adj_edges, adj_weights):
    """Numba-optimized BFS for all pairs bottleneck calculation"""
    bottleneck = np.zeros((n, n), dtype=np.float64)
    for src in numba.prange(n):
        visited = np.zeros(n, dtype=numba.boolean)
        max_edges = np.zeros(n, dtype=np.float64)
        queue = np.empty(n, dtype=np.int32)
        queue_weights = np.empty(n, dtype=np.float64)
        front = back = 0
        # Initialize BFS
        visited[src] = True
        queue[back] = src
        queue_weights[back] = 0.0
        back += 1
        while front < back:
            u = queue[front]
            current_max = queue_weights[front]
            front += 1
            # Process all neighbors
            start = ptr[u]
            end = ptr[u + 1]
            for i in range(start, end):
                v = adj_edges[i]
                weight = adj_weights[i]
                if not visited[v]:
                    new_max = max(current_max, weight)
                    visited[v] = True
                    max_edges[v] = new_max
                    queue[back] = v
                    queue_weights[back] = new_max
                    back += 1
        bottleneck[src] = max_edges
    return bottleneck


def ultra_fast_wide(X):
    n = X.shape[0]
    distance_matrix = np.round(pairwise_distances(X), 15)
    mst = prim_mst(distance_matrix)  # Use Numba-optimized prim_mst
    # Convert MST to CSR format
    ptr, adj_edges, adj_weights = build_csr_adjacency(n, mst)
    # Compute bottleneck matrix with Numba
    return compute_bottleneck_matrix(n, ptr, adj_edges, adj_weights)
```

Questions For Authors: No questions. I've spent enough time on this already.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
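As an editorial aside for readers following this exchange: any implementation of $d_{bot}$ (the reviewer's above or the authors') can be validated on small inputs against a brute-force oracle. The function name below is not from either party's code.

```python
# Brute-force O(n^3) reference for the all-pairs minimax (bottleneck) distance,
# via the Floyd-Warshall-style recurrence over the (min, max) semiring:
#   M[i][j] = min(M[i][j], max(M[i][k], M[k][j])).
# Useful as an oracle to validate faster MST-based implementations.
import numpy as np

def minimax_reference(D):
    """D: symmetric (n, n) array of edge weights for a complete graph."""
    M = D.astype(float).copy()
    np.fill_diagonal(M, 0.0)
    n = M.shape[0]
    for k in range(n):
        # Relax every pair through intermediate vertex k.
        M = np.minimum(M, np.maximum(M[:, k][:, None], M[k, :][None, :]))
    return M

# 4-node example: the direct 0-3 edge costs 9, but the path 0-1-2-3 has
# bottleneck max(1, 2, 3) = 3.
D = np.array([[0, 1, 8, 9],
              [1, 0, 2, 8],
              [8, 2, 0, 3],
              [9, 8, 3, 0]], dtype=float)
M = minimax_reference(D)
print(M[0, 3])   # 3.0
```

The result is symmetric by construction, so it also serves as a quick check of the symmetry property debated in the rebuttal below.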
Rebuttal 1:

Rebuttal: I am very glad that there finally appears a reviewer who challenges the contributions of the paper with real and solid code, not with cheap talk. Thank you!

I have tested Reviewer t5C9's code; it is indeed faster. However, the acceleration is due to the use of Numba JIT (Just-In-Time compilation). It is NOT because Reviewer t5C9's code/algorithm is faster than my code/algorithm. If we disable Numba JIT in Reviewer t5C9's code, it is about seven times slower than my code.

Some information for readers: Numba is a high-performance compiler for Python that accelerates numerical computations by compiling Python functions into fast machine code using LLVM. The JIT (Just-In-Time) compilation feature of Numba allows Python functions to be compiled at runtime, significantly improving their execution speed, especially for loops and array operations. Readers can test Reviewer t5C9's code with Numba JIT disabled, and compare with my code.

Moreover, **Reviewer t5C9's code is logically inefficient**. Since **the graph is undirected, the all-points path distance (APPD) matrix $ M $ is symmetric**, meaning **only half of the values need to be computed**, while the other half can be directly copied. However, Reviewer t5C9's code fails to copy the computed values to their symmetric positions in the matrix. Instead, **it redundantly recalculates each entry, making the code/algorithm inefficient**. For example, after computing $ M[p, q] $, the code does not copy this value to $ M[q, p] $ but instead recalculates it. Readers can refer to the `compute_bottleneck_matrix` function in Reviewer t5C9's code for further details. So, Reviewer t5C9's implementation is less efficient than mine. Nevertheless, nice try!

Question: If this paper is accepted to ICML, I will seriously reconsider reviewing for the conference in the future.

Response: Please do not leave even if the paper is accepted by ICML.
We need valuable reviewers like Reviewer t5C9 for top conferences like ICML. Reviewer t5C9 is the first reviewer I have ever seen who is willing (and able) to do some real coding work to verify the contribution of a submitted paper, even if his/her opinion is biased and unfair.

Question: To prove this point even further, I wrote a new implementation of another algorithm (that the authors don't even consider or mention in the paper).

Response: Which algorithm? Published where?

Question: For a paper titled "An efficient implementation...." this is just unacceptable. For example, to initialize a max_weight variable, the author does: import sys then maxW = sys.maxsize instead of maxW = float('inf') or maxW = np.inf

Response: By saying "An efficient implementation," I mean the logic of the code is efficient, not the code itself. This dirty work is left to people like Reviewer t5C9, because I am not good at this trivial optimization work. Such work is left to people who are good at it.

---

Rebuttal Comment 1.1:

Comment: Thank you for your rebuttal. However, your response contains multiple inaccurate or misleading statements that warrant clarification. I will address them in detail.

# On the Role of Numba JIT in Performance

Your central defense against the performance gap is that my implementation is only faster due to the use of Numba JIT. That is *precisely the point*. Your paper claims to provide an efficient implementation of the all-pairs minimum bottleneck edge problem. An efficient implementation is not merely a high-level description of logic; it is measured by actual runtime performance and code quality. **Leveraging available optimization tools such as Numba, just as one would use optimized BLAS libraries or compiler intrinsics in C++, is standard practice in practical algorithm engineering**.
The fact that my implementation, with commonly available and well-documented tools, achieves a 40× speedup over yours demonstrates that your claim ("no efficient implementation exists") is **objectively false**. Disabling Numba to simulate a worst-case performance environment is not a meaningful benchmark, particularly when your own implementation was not tested under such constraints. Your comparison is neither fair nor scientifically relevant.

# On Redundant Computation of Symmetric Entries

You are correct that the APPD matrix is symmetric. The choice not to copy values from (i, j) to (j, i) in my implementation was deliberate. The code uses a parallel-for structure (numba.prange) across all sources to facilitate scalable parallel execution, **so your objection is completely absurd**. Copying symmetric values post hoc would add branching or locking overhead in certain parallel environments, and the trade-off was consciously made in favor of simplicity and scalability. More importantly, **even with this redundancy, the runtime remains dramatically better than yours**. This further undermines your claims about the efficiency of your own approach.

# On "Logic vs. Code"

You state: "By saying 'efficient implementation,' I mean the logic of the code is efficient, not the code itself." This distinction is inconsistent with both the title and the stated goals of your paper. ICML is a venue that values practical contributions, especially in the context of implementation and empirical evaluation. If the actual code is inefficient, and the implementation performs poorly, then the core claim of your paper, that there are no efficient implementations and yours fills this gap, is invalid. One cannot simultaneously argue that 1) no efficient implementation exists in the literature and 2) your implementation fills this void, while making no actual attempt to optimize the implementation. That is logically inconsistent.
# On Algorithmic Novelty

You refer to your implementation as non-trivial and claim to present a novel contribution. Yet, you omit widely known and practical approaches such as MST-based reductions and simple BFS, all of which are well-documented in the literature and used in practice. You ask which algorithm my implementation uses. It is based on a standard MST reduction and a BFS-based traversal of the resulting tree. This method is widely known, and **its omission from your literature review or experimental comparisons is concerning**. If you were unaware of it, that reflects a gap in your understanding of the space you are trying to contribute to. **If you were aware but chose not to include it, that undermines the objectivity of your experimental setup**.

# On Dataset Choice and Experimental Setup

You did not respond to my point about your arbitrary and narrow dataset choices. Restricting evaluation to Euclidean complete graphs does not reflect practical usage scenarios for bottleneck path problems. Real-world graphs are often sparse, heterogeneous, and structurally diverse. Evaluating only on synthetic Euclidean graphs without justification or analysis severely limits the credibility of any empirical claim of efficiency or generality.

# Conclusion

Your rebuttal unfortunately does not meaningfully engage with the core criticisms raised in the original review:

1) The implementation you present is not competitive.
2) The claim that no efficient implementations exist is demonstrably false.
3) The paper lacks comparisons with practical, known algorithms.
4) The experimental setup is unmotivated and narrow in scope.

Instead, the rebuttal deflects from these substantive issues with arguments about implementation details that, when scrutinized, only reinforce the original concerns.
**You're free to use any Python library, including Numba, to speed up your implementation, but currently it's nowhere near as fast as my implementation.**

**The paper as written does not meet the standards of novelty, rigor, or contribution expected at ICML, nor any of the top 10 or even top 20 conferences for AI/ML.**

---

Reply to Comment 1.1.1:

Comment:

1. The time complexity of Algorithm 4 (MMJ distance by Calculation and Copy) is **$O(\frac{1}{2}n^2)$** (half of $O(n^2)$), because it can elegantly copy the computed values to their symmetric positions. The time complexity of Reviewer t5C9's code/algorithm is **$O(n^2)$**, because it can NOT **elegantly** copy the computed values to their symmetric positions. **This does NOT depend on Reviewer t5C9's choice; it is an intrinsic demerit of the BFS-based method. This will be a big difference when the graph is very large, e.g., billions or even trillions of nodes.**

2. The merit of Reviewer t5C9's code/algorithm is that it can be easily and straightforwardly accelerated by parallel computing; the workload for each processor is balanced. Algorithm 4 (MMJ distance by Calculation and Copy) can also be accelerated by parallel computing. However, it is not easy and straightforward. The workload for each processor is unbalanced. It needs some effort to balance the workload. This is a demerit of Algorithm 4.

3. I have implemented two versions of Algorithm 4 which are accelerated by parallel computing in Python. **The performance is comparable to Reviewer t5C9's code.** I have tested the code; it needs 4.579s for calculating the APPD matrix of data 136 (N=10,000), where Reviewer t5C9's code needs 3.046s, on my desktop computer. The code has much room to be further optimized. However, since I am not good at these dirty optimization techniques, I will leave the remaining dirty work to other people who are interested.

4. Although parallel computing can improve the speed, it is not always desirable.
**It trades computing power for speed.** In some situations, where computing power is limited, parallel computing is unusable.

Question: Yet, you omit widely known and practical approaches such as MST-based reductions and simple BFS, all of which are well-documented in the literature and used in practice.

Response: Please indicate the **exact** paper title, authors, page number, algorithm description, and pseudocode. **I will compare your code with the pseudocode of the algorithm step-by-step.**

Question: Real-world graphs are often sparse, heterogeneous, and structurally diverse.

Response: How do you know real-world graphs are often sparse? How many real-world graphs have you ever seen?

The following are the two versions of Algorithm 4 accelerated by parallel computing; readers can test them and even further optimize them. Since the code is more than 5,000 characters, I provide it via an anonymous URL: https://drive.google.com/file/d/1yuZnagrymvm3vqe0smBt-UUICyPLkZxq/view?usp=drive_link
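For readers, the "calculate once, then copy to the symmetric position" idea debated in this exchange has a classical realization: compute minimax distances incrementally while growing an MST with Prim's algorithm. The sketch below is a generic textbook construction, not necessarily the paper's Algorithm 4 or either party's code; each off-diagonal pair is computed exactly once and mirrored.

```python
# O(n^2) scheme for the APPD matrix of a dense graph: run Prim's MST, and when
# vertex k joins the tree via edge (parent[k], k) of weight w, every vertex j
# already in the tree satisfies
#   M[k][j] = M[j][k] = max(M[parent[k]][j], w),
# because the MST path from k to j passes through parent[k].
import numpy as np

def appd_prim(D):
    """D: symmetric (n, n) dense weight matrix. Returns the minimax matrix M."""
    n = D.shape[0]
    M = np.zeros((n, n))
    in_tree = [0]                          # Prim's tree, seeded with vertex 0
    best = D[0].copy()                     # cheapest edge into the tree per vertex
    parent = np.zeros(n, dtype=int)
    remaining = set(range(1, n))
    while remaining:
        k = min(remaining, key=lambda v: best[v])
        remaining.remove(k)
        w = best[k]
        for j in in_tree:                  # compute once, copy symmetrically
            M[k, j] = M[j, k] = max(M[parent[k], j], w)
        in_tree.append(k)
        for v in remaining:                # standard Prim relaxation
            if D[k, v] < best[v]:
                best[v] = D[k, v]
                parent[v] = k
    return M

D = np.array([[0, 1, 8, 9],
              [1, 0, 2, 8],
              [8, 2, 0, 3],
              [9, 8, 3, 0]], dtype=float)
print(appd_prim(D)[0, 3])   # 3.0
```

The inner `for j in in_tree` loop runs over tree sizes 1, 2, ..., n-1, which is where the "half of $n^2$" pair count comes from; each computed value is immediately mirrored rather than recomputed.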
Summary: This paper presents an implementation of an existing algorithm (Liu, 2023) for the all pairs minimax path problem for undirected dense graphs.

Claims And Evidence: As claimed, this paper provides an implementation of the algorithm proposed in (Liu, 2023).

Methods And Evaluation Criteria: There is no problem.

Theoretical Claims: I am aware that this paper includes a proof of the correctness of the existing algorithm (Liu, 2023), but I did not verify it. I am not sure why this paper includes the proof. If the authors of this paper think that the explanation of the correctness in (Liu, 2023) is wrong, this paper should explain which part of (Liu, 2023) is incorrect.

Experimental Designs Or Analyses: There is no problem.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The major problem of this paper is that it does not contain any contribution. In general, providing an implementation of a known algorithm (Liu, 2023) is not considered a contribution, even though I acknowledge that no implementation is available for the algorithm proposed in (Liu, 2023).

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal:

Question: I am not sure why this paper includes the proof. If the authors of this paper think that the explanation of the correctness in (Liu, 2023) is wrong, this paper should explain which part of (Liu, 2023) is incorrect.

Response: The (Liu, 2023) paper did not include a proof of correctness. This is one of the five contributions of this paper. See my rebuttal to Reviewer ZPcY.
Summary: In this paper the minimax path problem is studied. The input is an undirected graph, and the goal is to compute a minimax path between all pairs of vertices. Here, in an s-t path, the edge with the highest weight is the bottleneck edge; in other words, the goal is to compute, for each vertex pair, a path with the lowest-weight bottleneck edge. This paper provides an implementation of a simple O(n^2)-time algorithm from a previous paper based on a spanning tree of the input graph. The main contribution of the paper is providing this implementation. Claims And Evidence: The only claim in this paper is that this is the first implementation of the O(n^2)-time algorithm. I think this is true. Methods And Evaluation Criteria: No. The evaluation is performed only on a very small data set. Moreover, for the other algorithms for this task, the C++ implementation is always faster; however, for the O(n^2)-time algorithm only a Python implementation is provided. Thus, a C++ implementation of this algorithm could be faster as well. Theoretical Claims: The only theoretical claim is the correctness of the O(n^2) algorithm, and the proof is correct. Experimental Designs Or Analyses: See: Methods And Evaluation Criteria Supplementary Material: No Relation To Broader Scientific Literature: The contribution is very little. The only contribution is an implementation. This implementation requires only 20 lines of simple Python code and could be done by a student in a short time. Essential References Not Discussed: No Other Strengths And Weaknesses: See: Relation To Broader Scientific Literature: very little contribution; no important strength Other Comments Or Suggestions: "Algorithm 4" as a name in the abstract is not very meaningful; footnote 3 violates double-blind conditions Questions For Authors: No questions. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for the evaluation. "The first O(n^2) implementation for calculating the APPD matrix" and "theoretical proof of correctness" are not small contributions for a fundamental problem in graph theory, the minimax path (widest path) problem. Question: footnote 3 violates double-blind conditions. Response: Footnote 3 is the URL of the official code of the paper "Min-Max-Jump distance and its applications," which has been heavily discussed in the submitted paper. This does not violate double-blind conditions. Note that two different papers can share the same title; this does not mean they are the same paper.
Summary: The paper studies, given a graph G, the all-pairs minimax path problem. Here, the cost of a path between two nodes u and v of the graph is simply the cost of its largest-cost edge, and the minimax path is the smallest-cost such path between u and v. It is well known that the path between two nodes in the minimum spanning tree is the minimax path between them. Although several algorithms with an execution time of O(n^2) are known, the authors claim that none of them have an implementation. The authors then present an implementation of an existing algorithm for the problem. Claims And Evidence: The paper is pretty straightforward. The writing could be clearer, but the simple nature of the algorithm means that the ideas are easy to verify. Methods And Evaluation Criteria: The methods and evaluations seem sufficient. Theoretical Claims: There were no non-trivial theoretical claims. Experimental Designs Or Analyses: Experimental designs seem sufficient. Supplementary Material: NA Relation To Broader Scientific Literature: The authors discuss prior work, much of which claims to achieve the same bound as presented in this paper. The authors' main contribution is to also provide an implementation. Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper provides an implementation of a very simple prior result. A very straightforward implementation does not meet the criteria for publishable work (in my opinion). The problem is of only peripheral interest to the ML community. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 1
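For context, the MST-based technique the reviews describe (tree paths are minimax paths; dense Prim's plus a per-root tree traversal gives O(n^2) overall) can be sketched directly. This is a hedged illustration of the standard approach, not the paper's Algorithm 4 ("MMJ distance by Calculation and Copy"):

```python
def all_pairs_minimax(W):
    """All-pairs minimax path distances for a dense undirected graph.

    W is an n x n symmetric weight matrix. Returns M with
    M[u][v] = min over u-v paths of the maximum edge weight on the path.
    Standard approach: the minimum spanning tree preserves minimax paths,
    so build an MST with dense Prim's (O(n^2)), then for each root
    propagate bottleneck values down the tree (O(n) per root).
    """
    n = len(W)
    INF = float("inf")
    in_tree = [False] * n
    best = [INF] * n       # cheapest edge connecting each vertex to the tree
    parent = [-1] * n
    best[0] = 0
    adj = [[] for _ in range(n)]   # adjacency list of the MST
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            w = W[u][parent[u]]
            adj[u].append((parent[u], w))
            adj[parent[u]].append((u, w))
        for v in range(n):
            if not in_tree[v] and W[u][v] < best[v]:
                best[v] = W[u][v]
                parent[v] = u
    # Bottleneck to a child = max(bottleneck to its parent, edge weight).
    M = [[0] * n for _ in range(n)]
    for root in range(n):
        stack, seen = [(root, 0)], {root}
        while stack:
            u, b = stack.pop()
            M[root][u] = b
            for v, w in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, max(b, w)))
    return M
```

On the toy graph `W = [[0, 1, 10], [1, 0, 2], [10, 2, 0]]`, the minimax distance between vertices 0 and 2 is 2 (via the path 0-1-2), not the direct edge weight 10.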
Rebuttal 1: Rebuttal: Contributions of the paper: 1. It provides the first code implementation for solving the all-pairs minimax path problem (widest path problem) in an undirected dense graph in $O(n^2)$ time. 2. It provides the fastest code implementation for solving the all-pairs minimax path problem (widest path problem) in an undirected dense graph. 3. It provides a theoretical proof of the correctness of Algorithm 4 (MMJ distance by Calculation and Copy). 4. It identifies and verifies the warm-start merit of Algorithm 1 (MMJ distance by recursion), a key merit that enables Algorithm 1 to calculate all-pairs shortest paths (APSP) efficiently in dynamic graphs [1]. 5. It explores how Algorithm 4 (MMJ distance by Calculation and Copy) can be accelerated by parallel computing, which is not straightforward. [1] Liu, Gangli. "Solving the all pairs shortest path problem after minor update of a large dense graph." arXiv preprint arXiv:2412.15122 (2024).
Rényi Neural Processes
Accept (oral)
Summary: This paper identifies an important issue in Neural Processes (NPs) where prior misspecification arises from the fact that the conditional prior and posterior share parameters, which can degrade uncertainty estimates. The authors propose Rényi Neural Processes (RNPs) as a solution, replacing the KL divergence in the NP objective with the Rényi divergence. This modification introduces a tunable hyperparameter $\alpha$, which allows adjusting the influence of the prior and helps mitigate misspecification. One of the key contributions of this work is showing how RNPs can bridge variational inference (VI) and maximum likelihood (ML) objectives. The paper also provides both theoretical support and empirical results. Experiments on datasets like MNIST, SVHN, and CelebA show improved log-likelihoods and more reliable predictions, particularly in cases where the prior is misspecified. Claims And Evidence: The main claims in the paper are: 1. Prior misspecification is an issue in NPs, and appears because of the parameter-sharing assumption. 2. Using Rényi divergence instead of KL divergence mitigates this issue by allowing more flexibility in how the prior affects the posterior. 3. The proposed approach unifies VI and ML objectives, offering a more general training framework for NPs. 4. RNPs consistently outperform standard NPs across a variety of tasks and settings. These claims are well-supported both theoretically and experimentally. The paper presents detailed derivations and proofs, showing how the Rényi divergence modifies the posterior updates. Methods And Evaluation Criteria: The authors test RNPs on: 1. 1D regression using different kernels. 2. Image inpainting tasks on MNIST, SVHN, and CelebA. 3. Ablation study with prior misspecification. The experiments also include ablation studies to analyze the impact of Monte Carlo sampling, the number of context points, and the choice of $\alpha$. 
Overall, the methodology is well thought out and effectively demonstrates the strengths of RNPs. Theoretical Claims: The paper demonstrates that replacing KL divergence with Rényi divergence allows for more robust posterior updates while still maintaining a connection to standard NP objectives. One of the most interesting theoretical insights is that RNPs naturally interpolate between VI and ML objectives. By adjusting $\alpha$, the model can shift between behavior similar to KL-based variational inference ($\alpha \approx 1$) and maximum likelihood estimation ($\alpha \approx 0$). Experimental Designs Or Analyses: The experiments are well designed and include a good mix of standard benchmarks. The main findings are: 1. RNPs achieve better log-likelihoods than baseline NPs. 2. RNPs improve performance in the presence of prior misspecification, particularly in cases with noisy or distribution-shifted contexts. 3. Tuning $\alpha$ appropriately leads to noticeable gains, but default values (e.g., $\alpha = 0.7$ for VI, $\alpha = 0.3$ for ML) seem to work well across tasks. One strength of the experimental setup is the inclusion of ablation studies that analyze different aspects of RNPs, including Monte Carlo sampling and the number of context points. Supplementary Material: The supplementary material was not reviewed. Relation To Broader Scientific Literature: This work builds on the Neural Processes (NP) family and connects it with ideas from robust divergence measures in variational inference. It references key prior work, including: 1. The original NP papers by Garnelo et al. 2. Extensions like Attentive NPs (ANP) and Transformer NPs (TNP) 3. Research on robust divergences like Rényi divergence and $\alpha$-divergence The contribution is well-motivated, but the paper could benefit from a deeper discussion of alternative robust divergences (e.g., $\alpha$-divergence, $f$-divergence) and why Rényi divergence is particularly well suited for NPs. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Identifies a misspecification problem in standard NPs and provides a simple but effective solution. 2. Theoretical contributions are rigorous and clearly explained. 3. Extensive experiments on multiple datasets and settings. 4. Ablation studies provide useful insights into the behavior of the method. Weaknesses: 1. Computational cost. Monte Carlo sampling adds overhead, which could be an issue for large-scale applications. 2. Hyperparameter tuning. Tuning $\alpha$ is important, and the method might require careful tuning in some cases. Other Comments Or Suggestions: Consider adding more intuition or visualizations to help explain why Rényi divergence improves posterior estimation. Also, a discussion on scalability would be useful: can this method be made more efficient for larger datasets? Questions For Authors: 1. Did you try other robust divergences, for example, $f$-divergence? If so, how did they compare? 2. Would RNPs work on larger datasets or real-time applications? Are there any known limitations? Code Of Conduct: Affirmed. Overall Recommendation: 4
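The VI–ML interpolation noted under Theoretical Claims can be checked numerically on a toy conjugate-Gaussian model, where the marginal likelihood is known in closed form. A hedged numpy sketch of the generic Rényi variational bound (not the RNP objective itself; the toy model and all names are ours): as $\alpha \to 1$ the bound reduces to the ELBO, while $\alpha = 0$ yields a consistent Monte Carlo estimate of $\log p(x)$, the ML objective.

```python
import numpy as np

def renyi_bound(x, alpha, mu_q, sigma_q, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the generic Renyi variational bound
        L_alpha = 1/(1-alpha) * log E_q[(p(z, x) / q(z))^(1-alpha)]
    for the toy model z ~ N(0, 1), x | z ~ N(z, 1), q(z) = N(mu_q, sigma_q^2).
    alpha -> 1 recovers the ELBO (KL-based VI); alpha = 0 gives a
    consistent estimate of log p(x), the maximum-likelihood objective.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(mu_q, sigma_q, n_samples)
    log_prior = -0.5 * (z**2 + np.log(2 * np.pi))
    log_lik = -0.5 * ((x - z) ** 2 + np.log(2 * np.pi))
    log_q = -0.5 * (((z - mu_q) / sigma_q) ** 2 + np.log(2 * np.pi)) - np.log(sigma_q)
    log_w = log_prior + log_lik - log_q  # log p(z, x) - log q(z)
    if abs(alpha - 1.0) < 1e-8:          # alpha -> 1 limit: the ELBO
        return log_w.mean()
    scaled = (1.0 - alpha) * log_w
    m = scaled.max()                     # log-mean-exp for stability
    return (m + np.log(np.exp(scaled - m).mean())) / (1.0 - alpha)
```

With x = 1.0 and a deliberately misspecified q (mu_q = 0.3, sigma_q = 1.0, versus the true posterior N(0.5, 0.5)), the alpha = 0 bound lands near the exact log p(x) = log N(1; 0, 2) ≈ -1.516, while the ELBO (alpha = 1) sits below it by the KL gap.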
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the effectiveness and theoretical rigor of our work. We appreciate their effort in helping us improve the efficiency and practicality of the work. # Computational costs. As shown in Supp Table 7 of the paper, we have already compared the wall clock time between our RNP and the VI objectives, and no significant differences were observed. Our computational complexity is linear in the number of MC samples, which is also comparable to the VI objective. We will add more clarifications for efficiency in sec 5.4. # Hyperparameter tuning of $\alpha$ and adaptive $\alpha$ tuning. Since cross-validation can be computationally expensive, we have suggested the heuristic in sec 5.3 to gradually anneal $\alpha$ from 1 to 0 without hyperparameter tuning. We have already shown in Supp Table 5 that our RNP consistently outperformed competing approaches. We will elaborate further on this heuristic in sec 5.3. # Scalability on larger datasets or real-time applications. We would like to clarify that NPs are generally scalable to large datasets due to minibatch training and the fast inference of the framework, and we have tested all our NP models on a sizeable image dataset, CelebA. To further improve efficiency, we suggest adopting variance reduction methods such as control variates and Rao-Blackwellization [1] that could require fewer MC samples. Additionally, for attention-based NPs, we suggest using efficient attention mechanisms, e.g., Nyströmformer [2], which uses a low-rank approximation of the attention matrix. We leave further efficiency improvements for future work. [1] Ranganath, R. et al, 2014, Black box variational inference. In AISTAT. PMLR. [2] Xiong, Y. et al, 2021, Nyströmformer: A nyström-based algorithm for approximating self-attention. In AAAI. # $f$-divergence results. We have added comparison results using the $f$-variational bound of [3]. 
More specifically, by specifying a convex function $f$ and its dual $f^*$, the $f$-divergence connects several divergences including the KL and Rényi divergences. Based on eq (8) in [3], we have the objective $L_f(\phi, \theta) = \mathbb{E}_{q(z; \phi)}[f^*(\frac{p(z, Y_T|X_T, C; \theta)}{q(z; \phi)})] \geq f^* (p(Y_T|X_T, C))$. We chose the posterior $q(z;\phi)= q(z|C, T; \phi)$ like NPs do and compared our RNP objective with two functions. $\chi^2$ divergence [3]: $f(u)= \frac{1}{u} - u, f^*(t)=t^2 -1$, and Jeffreys divergence [4]: $f(u) = (u-1)\log u, f^*(t)=(t-1)\log t$. | Method | Set | Objective | RBF | Matern 5/2 | Periodic | |:------:|:-------:|:---------:|:-------------:|:-------------:|:----------:| | NP | context | $\chi^2$ | 0.64±0.01 | 0.52±0.03 | -0.49±0.01 | | | | Jeffreys | -0.00±0.00 | -0.01±0.01 | -0.59±0.01 | | | | RNP | **0.78±0.01** | **0.66±0.01** | -0.49±0.00 | | | target | $\chi^2$ | 0.21±0.02 | 0.07±0.01 | -0.68±0.00 | | | | Jeffreys | -0.23±0.00 | -0.27±0.00 | -0.61±0.00 | | | | RNP | **0.33±0.01** | **0.16±0.01** | -0.62±0.00 | The results provide additional support that the RNP objective improves the baseline models. We also tested f-divergence minimization approaches based on the bound of Nguyen et al [5], such as that proposed in [4]. However, as with many GAN-type objectives, they led to unstable training. Finally, unlike our approach, it is unclear how the f-divergence minimization methods proposed in the literature can be extended to improve the robustness of maximum likelihood-based NPs. [3] Wan, N. et al, 2020, F-divergence variational inference. NeurIPS. [4] Nowozin, S. et al, 2016, f-gan: Training generative neural samplers using variational divergence minimization. NeurIPS. [5] Nguyen, X. et al, 2010, Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory. # Intuition or visualizations for RNP posterior estimation. 
In Fig 1 (a) we have visualized the two posterior distributions obtained by our RNP and VI objectives. The VI objective produced an overestimate of the posterior variance, indicating a strong regularization from the prior model. As a result, it fitted the periodic data with a large variance and over-smoothed mean functions as shown in Fig 1 (b), whereas RNP dampens the prior regularization and obtains a posterior with much smaller variance, thereby encouraging the likelihood model to be more expressive, as illustrated in Fig 1 (c).
Summary: The paper presents a novel approach for training neural processes. By replacing the conventional KL divergence with the Rényi divergence, this allows the model to adapt when confronted with a misspecified prior, therefore enabling more robust inference. This paradigm is somewhat analogous to the utilisation of a hierarchical prior. Experiments are conducted on a mix of regression and image tasks, showing improvement in log likelihood performance over existing techniques. Claims And Evidence: The claims made are generally well supported by the empirical evidence Methods And Evaluation Criteria: Yes, a suitable selection of benchmarks and NP architectures is shown. Theoretical Claims: No, while some theoretical background is given, the key results here are empirical. Experimental Designs Or Analyses: The experiments span a good range of datasets and in most cases are presented alongside suitable uncertainty estimates. In Tables 1 and 2, currently only the numerically highest performing log likelihood is set in bold, but in many cases there is no statistically significant difference to the second strongest method. I would recommend applying bold only to results that outperform at a statistically significant level (and explicitly stating the chosen level). This is particularly significant in Table 2, where the L_ML is bolded twice when in neither case is it significantly superior. The captions are a little sparse, for example Figure 3 simply reads "Hyperparameter tuning" and Figure 4 is "Ablation study". I'd recommend ensuring that the captions to the tables and figures are self-sufficient. Figure 3 is somewhat lacking in uncertainty quantification, presumably this is showing the outcome for only a single split. It's therefore not clear how much of the functional form is stochastic in nature. 
Supplementary Material: Reviewed some extended experiments Relation To Broader Scientific Literature: This work seeks to build on previous studies in the development of Neural Processes. Essential References Not Discussed: When introducing the Rényi divergence on page 2, it would be appropriate to cite the paper which first proposed it, which was Rényi 1961. Other Strengths And Weaknesses: A novel, well motivated and clearly presented paper. I think the main weakness relates to how some of the experimental results are missing details. It's not clear to me why Figure 2 shows just the Lynx data but not the Hare. Do the results in Table 2 relate to the data from both species or is it also just for the Lynx? Other Comments Or Suggestions: Figure 3 seems to illustrate alpha going up to 2.0 but in the text it is mentioned that it is restricted to alpha<1. Perhaps this figure would be more informative if we can highlight (as a horizontal dotted line for example) the vanilla alpha=1 value. I would also recommend maintaining a similar dynamic range on the y axis between different panels, as this will illustrate, e.g., that the vanilla NP on RBF is much more sensitive to the choice of alpha. It might be of interest to comment on the relation between using the Rényi divergence and using a hierarchical prior, in that the latter would explicitly evaluate a range of priors, while in the Rényi case the alternative forms of the prior are implicit. And a small typo: "The limitations of our framework lie in drawing multiple samples...." presumably ought to read "A limitation of our framework lies in drawing multiple samples...." Questions For Authors: No further questions at this stage! Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's efforts in helping us refine the details and acknowledge the original literature. We agree that a rigorous analysis and self-sufficient figures would strengthen the soundness of our work. # Significance tests in Table 1 and 2. All the results presented in the paper were reported with error bars (i.e., standard deviations). As the reviewer suggested, we performed two-sample t-tests between our RNP and the second best method and highlighted in bold the results with significant improvements at **p value $<$ 0.05**. We observed that in Table 1 we significantly improved on the RBF, Matern and Periodic datasets for most of the methods. Here we show the updated Table 2. In Table 2, the ML objective is no longer significantly superior to our RNP objective on the D\_train EMNIST dataset. | Objective | D_train (Lotka-Volterra) | | Misspec D_test (Hare-Lynx) | | D_train EMNIST (class 0-10) | | Misspec D_test (class 11-46) | | |:---------:|:-------------------------:|:-------------:|:---------------------------:|:--------------:|:----------------------------:|:---------:|:-----------------------------:|:-------------:| | | context | target | context | target | context | target | context | target | | L_ML | 3.09±0.22 | 1.98±0.11 | -0.59±0.47 | -4.44±0.41 | 1.54±0.05 | 1.56±0.07 | 0.03±0.97 | -0.20±0.57 | | L_RNP | 3.32±0.15 | **2.12±0.06** | -0.17±0.31 | **-3.63±0.09** | 1.52±0.08 | 1.47±0.12 | 0.96±0.18 | **0.70±0.15** | # Missing details for the Hare Lynx dataset. We apologize for the missing details. Table 2 in the paper reported the results for both species as we treat them as a 2-dimensional input system with feature correlations (see sec 5.2). We left out the Hare plots in Fig 2 originally because when we tried to plot both the Lynx and Hare results in a single graph, the uncertainty intervals of the two species overlapped heavily and impeded visibility. 
We will add the Hare results using different colors and opacity in the final version. We will also add more descriptions of this dataset in sec 5.2 and the supplementary section for better clarification. # Relationship to hierarchical prior models. We thank the reviewer for raising this interesting point. We believe our approach to misspecification in neural processes based on the Rényi divergence is fundamentally different from hierarchical approaches. By introducing additional latent variables in a hierarchical fashion, one aims to have a more flexible marginal prior model. For example, under some mild conditions, mixture models are known to be able to approximate any continuous density. However, usually, that additional flexibility comes at the cost of more complex inference. Our approach maintains the original prior but deals with model misspecification through dampening with the $\alpha$ parameter. As pointed out by Reviewer Fmj6, it is more closely related to robust Bayesian inference methods based on Gibbs/tempered posteriors. We will discuss this in the final version. # Updating figures, captions, typos and references. - **Captions of Fig 3 and Fig 4.** We will add captions for them to be self-sufficient. Fig 3: cross-validation is used to select the optimal $\alpha \in [0, 2]$. Fig 4: We investigated how MC sample sizes and the number of context points affect test log-likelihood. - **Uncertainty quantification in Fig 3.** We have actually plotted the uncertainty intervals but some of them were occluded by thicker lines of mean values (see Fig 3 (a) TNPD $\alpha = 1.5$ for better visibility). We will change the color and the thickness of the intervals to make them more evident. - **$\alpha$ range and dynamic plot range in Fig 3**. Fig 3 showed that $\alpha >1$ impedes NP training due to an overestimate of the posterior variance (please check the comment for Reviewer aNVZ ``The effect of $\alpha$ values on training'' for more details). 
Therefore, we recommend tuning $\alpha \in (0, 1)$ in the text. As suggested by the reviewer, we will highlight the results using the KL objective, which corresponds to $\alpha = 1$, in the plot and use a consistent dynamic range across the panels. - **Adding reference**. We will add the original Rényi divergence paper for reference. - **Correcting typos**. We will fix the typo in the limitation discussion and proofread the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, I'm glad the feedback was helpful, and that Tables 1 & 2 have been strengthened. With regards to the hierarchical prior - my suspicion here was that for any given value of alpha (and for a given dataset), there exists an implicit alternative prior that would generate the same posterior as that value of alpha. Thus marginalising or tuning alpha could be deemed equivalent to marginalising or optimising a hyperprior. (I only mention this as it might counteract any criticism or concerns that this approach - as with tempered posteriors - is no longer fully Bayesian.) --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for increasing your initial score. We sincerely appreciate your helpful feedback and thoughtful defense of our work. We will make sure to discuss the hyperprior on alpha in the revision. Sincerely, The Authors
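The bolding criterion discussed in this rebuttal (two-sample t-tests at p < 0.05 against the second-best method) can be reproduced from the reported mean ± std summaries alone. A hedged sketch using SciPy's `ttest_ind_from_stats` (Welch's test); the per-cell run count `n_runs` is our own assumption, since the rebuttal does not state how many seeds were used:

```python
from scipy.stats import ttest_ind_from_stats

def significantly_better(mean1, std1, mean2, std2, n_runs, alpha=0.05):
    """Decide whether method 1 significantly outperforms method 2
    (higher mean log-likelihood) using Welch's two-sample t-test
    computed from reported mean +/- std summaries."""
    _, p = ttest_ind_from_stats(mean1, std1, n_runs,
                                mean2, std2, n_runs,
                                equal_var=False)
    return bool(mean1 > mean2 and p < alpha)
```

For example, 0.33 ± 0.01 versus 0.21 ± 0.02 over a hypothetical 5 runs is a significant improvement, while identical summaries are not.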
Summary: The paper introduces Rényi Neural Processes (RNPs), a modification of Neural Processes (NPs) that replaces the standard Kullback-Leibler (KL) divergence with the Rényi divergence to mitigate prior misspecification. The authors argue that parameter coupling between the prior and posterior in traditional NPs leads to biased variance estimates and propose RNPs as a more flexible alternative. The method is tested on regression and image inpainting tasks, showing improved log-likelihood performance compared to state-of-the-art NP variants. _(for any missing input on any of the fields, please refer to the **Strengths and Weaknesses** or the **Other comments or suggestions** sections)_ Claims And Evidence: - The parameter coupling in standard NPs leads to prior misspecification and degraded performance. - *Evidence*: Theoretical derivations and empirical evaluations demonstrate that standard NPs overestimate posterior variance, leading to oversmoothed predictions. - Using Rényi divergence provides a tunable mechanism to reduce the impact of prior misspecification. - *Evidence*: The proposed RNPs consistently outperform standard NPs and other NP variants across multiple benchmarks. - RNPs improve generalization without modifying model architectures. - *Evidence*: The method is applied to existing NP models (ANP, VNP, TNP-D) with measurable improvements. Methods And Evaluation Criteria: - The primary modification is substituting KL divergence with Rényi divergence in the NP objective. - Evaluations focus on predictive log-likelihood, testing the approach on 1D Gaussian process regression and image inpainting (MNIST, SVHN, CelebA). - Comparison against NP, ANP, VNP, and other baselines. - Additional experiments assess robustness under prior misspecification (e.g., noisy context points, domain shifts). Theoretical Claims: - "New insight into prior misspecification in NPs through the lens of robust divergences". 
- Proves that the standard NP prior model is misspecified due to parameter coupling. - Establishes that Rényi divergence generalizes KL divergence and allows tuning of prior penalization. - Demonstrates that RNPs unify variational inference (VI) and maximum likelihood (ML) approaches, bridging two common objectives. Experimental Designs Or Analyses: - Well-structured with comprehensive baselines. - Includes ablation studies on hyperparameter tuning ($\alpha$ selection) and Monte Carlo sample size. - Tests both parameterization-induced and context-induced prior misspecification. - Considers real-world applications (Hare-Lynx dataset for time series forecasting). Supplementary Material: - The appendix contains detailed derivations, pseudo-code for RNP training and inference, and additional ablation studies. - Includes proofs of theoretical results and hyperparameter selection strategies. Relation To Broader Scientific Literature: - Builds on extensive prior work in Neural Processes, Variational Inference, and robust divergences. It seems to cover the most relevant literature for the proposed method. - Connects with literature on robust Bayesian inference and alternative divergences (e.g., $\alpha$-divergence, f-divergence). Essential References Not Discussed: - A deeper discussion on the connection to PAC-Bayes approaches and other Bayesian robustness techniques could strengthen the theoretical grounding. - Explicit comparisons with f-divergence-based variational inference methods would be useful. - Regarding the choice of $\alpha$, the authors could consider mentioning [1], which discusses the impact of $\alpha$ in variational inference. - Related to the previous work, [2] seems strongly related to this contribution since they show that more flexible models in combination with robust divergences may fix prior misspecification issues. Not essential, although maybe worth mentioning here. 
- Some references to robust inference outside the NP community could provide a broader perspective. As a suggestion, the authors may consider mentioning other works that make use of robust divergences in Bayesian inference, like [3] (which, for instance, also mentions NPs as a particular case of their model). [1] Rodríguez-Santana, et al. "Adversarial $\alpha$-divergence minimization for Bayesian approximate inference." Neurocomputing 471 (2022): 260-274. [2] Santana et al. "Correcting Model Bias with Sparse Implicit Processes." ICML 2022 Workshop "Beyond Bayes: Paths Towards Universal Reasoning Systems" arXiv preprint arXiv:2207.10673 (2022). [3] Ma, et al. (2019, May). "Variational implicit processes". In International Conference on Machine Learning (pp. 4222-4233). PMLR. Other Strengths And Weaknesses: **Strengths:** - Well-motivated theoretical derivations to bolster the theoretical claims made in the text. The supplementary work provides a great deal of detail on the needed calculations needed to understand the method. - The approach is clearly motivated and well-explained, with a strong connection to prior work. - The implementation of the method is straightforward and could be integrated into existing NP frameworks. - The proposed approach provides strong empirical results across multiple benchmarks. - The paper is well-structured and written, making it easy to follow. **Weaknesses:** - Although the empirical results are strong, I fear that the initial idea of the paper might be a bit incremental. It is aided by the extensive theoretical derivations provided, but nonetheless, the core idea of using Rényi divergence to fix prior misspecification is not really that novel (similar efforts with similar empirical results have been achieved, e.g. see [1]). - There is no clear way to choose the optimal value of $\alpha$ for a given problem since it is dataset-dependent. This could be a limitation in practice, requiring cross-validation. 
Other Comments Or Suggestions: - Investigating adaptive $\alpha$ selection methods could reduce the need for cross-validation. - Notation can sometimes be cumbersome (e.g. Eqs 3, 5 and 6). Maybe it would be beneficial to simplify it for readability. EDIT: I increased my initial score after reading the authors' response. Questions For Authors: 1. In general, what are the implications inside Bayesian modelling of constructing the prior in a data-dependent fashion? Does this not imply a form of overfitting? Maybe a discussion on this and its relationship with extensions of the Bayesian inference framework as in the (already mentioned in the paper) Generalized Variational Inference. 2. Following the previous question, what theoretical guarantees can be provided on the inference results for the proposed method? In particular, I wonder about the soundness of the inference process and the quality of the uncertainty estimates. 3. How does the choice of $\alpha$ affect uncertainty calibration in practice? Ethical Review Concerns: The paper mentions potential misuse in reconstructing missing images, which could raise privacy concerns. Other than that, no immediate ethical risks beyond standard considerations in probabilistic modeling. In fact, more accurate uncertainty estimates could lead to more responsible decision-making in AI applications. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our "well-motivated theoretical derivations" and "strong empirical results". # Incremental novelty We would like to clarify to the reviewer that our work goes beyond using RD for prior misspecification. We are the first to identify prior misspecification in the realm of NPs due to the parameter coupling between the prior conditional and the approximate posterior, and our theoretical and empirical analysis justifies the use of robust divergences. Secondly, we introduce a new objective that unifies the VI and MLE objectives for NPs. # Adaptive $\alpha$ value selection Please kindly refer to the comments for Reviewer GkqA for this and the f-divergence related questions. # Results on uncertainty calibration We have added results for the continuous ranked probability score (CRPS) w.r.t. different α values for NP and ANP on the RBF dataset. CRPS is commonly used to measure forecast accuracy, and lower scores indicate better calibration. The results showed that the CRPS calibration improved as α increases from 0 to 0.9. | α | NP | ANP | |----------|---------------|---------------| | 0.0 | 0.1427±0.0016 | 0.0978±3e-4 | | 0.3 | 0.1452±0.0029 | 0.0942±2e-4 | | 0.7 | 0.1370±0.0018 | 0.0827±5e-4 | | 0.9 | 0.1321±0.0014 | 0.0815±5e-4 | | 1.0 | 0.1484±0.0023 | 0.0965±3e-4 | # Theoretical discussions We sincerely appreciate the reviewer's thoughtful feedback. While some of the questions raised by the reviewer extend beyond the immediate scope of this paper and address broader aspects of NP research, we recognize their importance and agree these points warrant further investigation. We are delighted to make every effort to clarify their concerns. - **Robust Bayesian inference and PAC-Bayes** Indeed, the literature on robust Bayesian inference is rich and we thank the reviewer for pointing out the need to discuss this. In particular, the relation to PAC-Bayes is fascinating. 
In short, dealing with model misspecification from a Bayesian perspective appears in the literature under several disguises, for example, Gibbs posteriors, tempered posteriors and fractional posteriors [1,2]. These approaches essentially weigh the likelihood to temper its influence. Connections with PAC-Bayesian bounds controlling generalization in statistical learning have also been explored previously, most notably in [2]. These connections are extended by explicitly analyzing variational approximations of PAC-Bayes/Gibbs posteriors [3]. We will expand on this in the final version. However, it is important to emphasize that our focus here is on neural processes; these approaches remain exciting directions for future work. [1] Grunwald, P., et al. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Analysis, 2017. [2] Bhattacharya, A., et al. Bayesian fractional posteriors. The Annals of Statistics, 2019. [3] Alquier, P., et al. On the properties of variational approximations of Gibbs posteriors. JMLR, 2016. - **Theoretical guarantees of RNP** The theoretical soundness of our work rests on two parts: the frequentist consistency properties of the vanilla NPs using the KL divergence, and the relaxation of assumptions with our Rényi divergence. For the first part, we kindly refer the reviewer to Sections 3.3–3.5 of the thesis [4] for the prediction map approximation theory of NPs. Briefly, in the limit of infinite data, a variational family of prediction maps can recover the mean map of the true NP and the sum of the variance map and observation noise under some regularity assumptions. In the case of limited data, more assumptions, including the input size, the compactness of the variational family, the boundedness of the data, and the boundedness of the stochastic process, are required to guarantee consistency.
For the second part, consistency holds under milder regularity conditions than those required for the KL divergence [5], including a uniformly bounded prior distribution and a locally asymptotically normal likelihood model. [4] Bruinsma, W. (2022) Convolutional Conditional Neural Processes. Apollo - University of Cambridge Repository. [5] Jaiswal, P., Rao, et al., 2020. Asymptotic Consistency of α-Rényi-Approximate Posteriors. JMLR, 21(156). - **Data-dependent priors** We agree with the reviewer that it might sound un-Bayesian to have data-dependent priors, but in the case of NPs it is not only valid but necessary when viewed through the lens of hierarchical Bayes or meta-learning: the prior is drawn from a hyper-prior. When we condition on context data, we are effectively doing Bayesian inference over a latent variable that defines the function. # Fixing typos and adding references We will add the recommended references as well as [6] to the main text. We will also simplify some equation notations. [6] Rodríguez-Santana et al., 2022. Function-space Inference with Sparse Implicit Processes. ICML.
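For readers wishing to reproduce CRPS numbers like those reported at the top of this rebuttal: for a Gaussian predictive distribution, CRPS has a well-known closed form (Gneiting and Raftery's formula). A minimal sketch, assuming per-point Gaussian predictions; the function name and interface are illustrative, not the authors' code:

```python
import math
from statistics import NormalDist

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    evaluated at observation y; lower values indicate better calibration."""
    z = (y - mu) / sigma
    std = NormalDist()  # standard normal
    return sigma * (z * (2 * std.cdf(z) - 1) + 2 * std.pdf(z) - 1 / math.sqrt(math.pi))
```

A dataset-level score such as those in the table above would average this quantity over all (prediction, observation) pairs.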
Summary: The paper replaces the Kullback–Leibler (KL) divergence in standard neural processes (NPs) with the Rényi divergence to mitigate the issue of prior misspecification. The proposed Rényi neural process (RNP) has a tuning parameter $\alpha>0$ that penalizes the misspecified prior and unifies variational inference ($\alpha=1$) and maximum likelihood estimation ($\alpha=0$) in the same framework. The cause of prior misspecification in NPs is explained and the mitigation by RNP is well illustrated. The paper further investigates the robustness of the Rényi divergence applied to other state-of-the-art (SOTA) NP algorithms and demonstrates advantages of the Rényi divergence in NP applications. Claims And Evidence: The claims are well explained and there is enough numerical evidence to support the claims. Methods And Evaluation Criteria: The paper includes comprehensive numerical studies. However, they are focused only on the objective function. It would be nice to include other metrics, e.g. relative error against the ground truth, in one of the comparisons. Theoretical Claims: Yes. The proofs appear correct. Experimental Designs Or Analyses: Most of the experiments are well designed. I do have the following questions: 1. Figure 1: what controls the smoothness and the correlation strength of RNP? Is it all governed by $\alpha$? BTW, the labels in the caption are wrong: '(c)' should be '(b)' and '(d)' should be '(c)'. 2. Figure 2: What is the advantage of RNP (b) over VI (a) in ANP? The estimates have similar smoothness, and both miss the truth between 0.5 and 1. 3. How does $\alpha$ affect the training? Is there any particular value that poses challenges in training, e.g. large gradients or slow convergence? Supplementary Material: Yes, I reviewed all of them, particularly the additional numerical results. Results in Figures 5, 6, 7 lack explanation.
Relation To Broader Scientific Literature: The paper provides a good contribution to neural process algorithms by exploring a divergence more robust to prior misspecification. Essential References Not Discussed: None. Other Strengths And Weaknesses: The application of the Rényi divergence to neural processes is novel. The contribution that unifies VI and ML is appreciated. Other Comments Or Suggestions: Two sentences on page 5 under "Robust divergence" seem incomplete: "KL divergence ... between the posterior..." between the posteriors or between the posterior and the prior? "Several other ... features are noise or the existence of outlier."? Questions For Authors: See above. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our innovation of unifying the objectives and for carefully reviewing details, including the supplementary materials and numerical results. # Additional evaluation metrics. We have additionally reported the relative errors for two baseline methods, NP and ANP, on three GP regression datasets. Our objective still generally outperformed baseline models on this new metric. We will incorporate more results in the revised version. | Method | Set | Objective | RBF | Matern 5/2 | Periodic | |:------:|:-------:|:---------:|:-------------:|:-------------:|:-------------:| | NP | context | L\_VI | 2.56±0.12 | 5.79±0.73 | 2.88±0.09 | | | | L\_ML | 2.47±0.14 | 4.63±1.42 | 2.87±0.08 | | | | L\_RNP | **2.09±0.10** | **4.48±1.16** | **2.77±0.10** | | | target | L\_VI | 4.80±0.40 | 3.59±0.12 | 6.42±0.35 | | | | L\_ML | 4.52±0.36 | 3.33±0.07 | 6.42±0.43 | | | | L\_RNP | **4.23±0.35** | 3.40±0.16 | **6.22±0.25** | | ANP | context | L\_VI | 0.17±0.01 | 0.28±0.03 | 2.76±0.39 | | | | L\_ML | 0.18±0.01 | 0.24±0.03 | 2.92±0.38 | | | | L\_RNP | 0.17±0.01 | 0.25±0.06 | **0.52±0.08** | | | target | L\_VI | 2.51±0.06 | 2.42±0.05 | 8.81±0.43 | | | | L\_ML | 2.55±0.07 | 2.40±0.04 | 8.27±0.86 | | | | L\_RNP | **2.09±0.05** | **1.89±0.03** | **6.15±0.31** | # The effect of $\alpha$ values on training. $\alpha > 1$ usually impedes training as it focuses too much on improving the mass-covering of the posterior, resulting in an overestimate of the posterior variance (please refer to Fig 3 in the paper for details). Regarding the convergence rate, [1] showed that under mild regularity assumptions, it is governed by $\sqrt{n}$, with $n$ being the number of samples, rather than by the $\alpha$ value. [1] Jaiswal, P., Rao, et al., 2020. Asymptotic Consistency of $\alpha$-Rényi-Approximate Posteriors. JMLR, 21(156). # Missing clarifications and typos. - **Smoothness in Fig 1**.
The difference between L_RNP and L_NP in smoothness and correlation strength is controlled solely by $\alpha$. Due to the misspecified prior regularization in the standard KL, vanilla ANPs (Fig 1b) struggle with over-smoothed predictions. Our objective dampens such regularization and encourages a more expressive likelihood model that better fits the data. - **RNP advantage in Fig 2**. RNP indeed does not show a significant advantage over VI in Fig 2 (a) and (b). However, our main claim in Fig 2 is that RNP achieves better uncertainty estimates than the ML objective. - **Typos and missing captions**. We thank the reviewer for pointing out the typos in the captions of Fig 1, which we will correct in the revision. We will also add explanations for Figs 5, 6, 7 in the supplementary material and fix the typos in Sec 4 (related work).
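As background for the α discussion in these reviews, the standard definitions from the Rényi variational inference literature (e.g. Li and Turner, 2016; written here in generic notation, not necessarily the paper's own) are:

```latex
% Renyi divergence and its KL limit:
D_{\alpha}(q \,\|\, p) = \frac{1}{\alpha - 1} \log \int q(z)^{\alpha}\, p(z)^{1-\alpha}\, \mathrm{d}z,
\qquad \lim_{\alpha \to 1} D_{\alpha}(q \,\|\, p) = \mathrm{KL}(q \,\|\, p).

% Variational Renyi bound: recovers the ELBO (VI objective) as \alpha \to 1
% and the marginal log-likelihood (ML objective) at \alpha = 0.
\mathcal{L}_{\alpha}(q) = \frac{1}{1-\alpha} \log \mathbb{E}_{q(z)}\!\left[\left(\frac{p(x,z)}{q(z)}\right)^{1-\alpha}\right].
```

This interpolation is what underlies the summary's statement that the proposed objective unifies VI ($\alpha=1$) and MLE ($\alpha=0$) in the same framework.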
Toward Data-centric Directed Graph Learning: An Entropy-driven Approach
Accept (poster)
Summary: This paper proposes a general data-centric directed graph online knowledge distillation framework called EDEN. The framework achieves data-centric machine learning, guided by the proposed hierarchical encoding theory for graph-structured data. The paper conducts experiments to validate the efficacy of the proposed method. Claims And Evidence: It is not clear why "real-world digraphs commonly exhibit hierarchical structure" (LHS of line 206, line 207); the authors should at least give some examples or references. Methods And Evaluation Criteria: The method seems valid but requires lightweight adaptation due to scalability issues. There are only two datasets for link prediction, though, which may not be sufficient (but is acceptable). Theoretical Claims: The claims seem correct, but I did not check carefully. Experimental Designs Or Analyses: The designs and analyses seem valid. Supplementary Material: I roughly glanced through it. Relation To Broader Scientific Literature: There seem to be good relations to existing literature, with novel contributions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The abstract is somewhat too long, and the paper is not friendly to non-experts in this specific field. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Claims And Evidence** We sincerely apologize for the insufficient explanation in our initial submission, which may have caused confusion. We kindly ask you to refer to our response to Reviewer RNnj Q2, where we provided an example of the hierarchical structure of directed graphs in the real world using citation networks. We also elaborated on our motivation for employing trees as a hierarchical data structure for data organization and learning, which is inspired by other relevant references. If possible, we will include this real-world example of hierarchical structures in directed graphs in Sec 1 and the appendix of the revised submission, along with additional references, to provide a clearer presentation. We hope that the above responses address your concerns and enhance your confidence in our manuscript. **Q2: Methods And Evaluation Criteria** We sincerely appreciate your valuable insights, and please allow us to offer the following justifications: (1) Scalability Issues and Lightweight Alternative: Your concerns about scalability issues regarding our proposed EDEN are valid; indeed, we recognize this as the main bottleneck, which we plan to optimize further in future work, building on the lightweight alternative of EDEN in Sec 3.4. We kindly refer you to our response to Reviewer RNnj Q1 & Q2, which includes the theoretical motivation and implementation details of the current lightweight alternative, as well as the discussion on potential future efficiency improvements and the revision plans in the new submission. (2) Possibly Overlooked Link-level Test: In response to your concerns regarding the absence of results for the link prediction task, we apologize for the lack of a direct indication in the main text, which may have caused these results to be overlooked. In fact, we have already provided additional link-level evaluations for all datasets in Appendix A.14 of the initial submission.
We fully intend to address this in the subsequent editing process. Once again, we thank your valuable comment and hope our response can address your concern. **Q3: Other Strengths And Weaknesses** Thank you for your comments. We acknowledge that the current abstract may be more extensive than typical abstracts. Our aim was to provide detailed explanations to help readers understand the intended task and the design of EDEN. However, we recognize the need for improvements and will refine the abstract during the editing process to enhance clarity and conciseness while ensuring intuitive interpretations of EDEN’s core contributions for future readers.
Summary: This paper introduces entropy-driven digraph knowledge distillation (EDEN), a data-centric framework for representation learning on directed graphs. EDEN addresses the limitations of current directed Graph Neural Networks by leveraging directed structural measurements to construct a hierarchical knowledge tree (HKT), refined using mutual information of node features. This enables topology and profile knowledge transfer through a tree layer-wise distillation process. For downstream tasks like node classification and link prediction, EDEN employs a random-walk based technique. Experiments show that EDEN, as a plug-and-play module, improves the performance of existing methods. Claims And Evidence: The motivation for the proposed method lacks sufficient clarity. The claim that existing methods "fail to fully explore the potential correlations between directed topology and node profiles at the data level" is not adequately substantiated. Illustrating how these limitations impact downstream tasks like node classification or link prediction would provide a clearer rationale for the necessity of the proposed method. Methods And Evaluation Criteria: The paper suffers from a lack of necessary details, which hinders readability and comprehension. Many technical definitions, such as the structural measurements (Eqs 1-3), are presented without intuitive explanations. These shortcomings make the paper challenging to grasp and weaken its overall accessibility. To comprehensively assess the scalability and efficiency of EDEN, it is crucial to include larger datasets in the evaluation, particularly to test its performance on large-scale graphs. Theoretical Claims: N/A Experimental Designs Or Analyses: Are there any recent methodologies from 2024 that could serve as baselines, and if so, should they be included to ensure a more comprehensive and up-to-date evaluation?
Supplementary Material: I recommend including the commands to re-implement the experimental results in the README, with the hyperparameters for all experiments. Relation To Broader Scientific Literature: The topic is quite relevant. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See all sections above. Other Comments Or Suggestions: See all sections above. Questions For Authors: See "Experimental Designs Or Analyses" Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Due to the word limit imposed by the new regulations of ICML 2025 rebuttal, we have not provided detailed references, but we will gladly supply them in our subsequent discussions if needed. **Q1: Claims And Evidence** We sincerely apologize for any concerns that may have arisen. We kindly refer you to our response to Reviewer RNnj Q1, which addresses similar issues. Based on this, we provide the following clarifications and outline our revision plans, hoping to further alleviate your concerns. (1) Important Directed Edges: Recent studies [LoG 2024, ICDE 2024, WWW 2025] have shown that considering edge directionality provides a novel approach to addressing the heterogeneity issue. These studies conducted extensive empirical analyses, highlighting the significant impact of extracting directed graph knowledge to enhance downstream task performance. (2) Our Contribution: Although they are effective, these methods vary in data knowledge extraction and lack a unified framework. Therefore, we propose EDEN, which aims to achieve more efficient data-centric graph learning and further enhance downstream task performance (i.e., fully explore the potential correlations between directed topology and node profiles at the data level). Based on this, inspired by your insightful comments, we have conducted additional experiments to further demonstrate the improvements of our framework on existing studies. 
| Models | | Empire | | | | Rating | | | |--------------|----------|-----------|-----------|----------|----------|-----------|-----------|----------| | | Node-C | Existence | Direction | Link-C | Node-C | Existence | Direction | Link-C | | ADPA | 79.3±0.4 | 67.3±0.5 | 54.2±0.4 | 59.0±0.5 | 44.8±0.5 | 77.8±0.4 | 83.7±0.3 | 64.5±0.5 | | ADPA+EDEN | 81.7±0.4 | 68.6±0.4 | 55.3±0.6 | 60.7±0.3 | 46.6±0.3 | 79.4±0.5 | 85.2±0.2 | 66.0±0.6 | | MAP++ | 79.5±0.3 | 67.6±0.6 | 54.7±0.5 | 59.8±0.4 | 45.4±0.5 | 78.5±0.3 | 84.4±0.3 | 65.6±0.4 | | (MAP++)+EDEN | 81.4±0.6 | 69.0±0.5 | 56.4±0.4 | 61.2±0.4 | 47.5±0.4 | 79.8±0.3 | 85.6±0.4 | 66.8±0.5 | We apologize if our initial presentation caused any misunderstandings. In the revised submission, we will reorganize the motivation, references, and experimental results based on the above clarification for a clearer presentation. **Q2: Methods And Evaluation Criteria** Please allow us to offer the following explanations and revision plans, which we trust will effectively address your concerns and enhance your confidence in our manuscript. **Enhancing the Readability** We plan to add intuitive explanations and background about formulas and introduce a table that includes mathematical symbol definitions. Taking Eq. (1–3) as examples, we will provide more background on the concept of Shannon entropy and graph mining. **Scalability Test and Performance** We fully agree on the importance of enhancing EDEN's scalability. However, as the first data-centric digraph learning framework, some complexity is inevitably introduced. We kindly ask for your understanding in this regard. That being said, we introduce a lightweight alternative in Sec 3.4, which has been evaluated on the million-scale Arxiv and WikiTalk and renders it comparable to or even superior to the best baseline (Table 1 and Figure 3). We also plan to provide a more detailed discussion on potential efficiency optimizations, as elaborated in our responses to Reviewer RNnj Q1. 
**Q3: Experimental Designs Or Analyses** Thanks for your valuable suggestions. We add ADPA (ICDE 2024) and MAP (WWW 2025) as additional baselines to the already existing Dir-GNN (LoG 2024) and HoloNet (ICLR 2024) as follows: | Node-level | CoraML | CiteSeer | WikiCS | Tolokers | Arxiv | | ---------- | -------- | -------- | -------- | -------- | -------- | | ADPA | 83.2±0.6 | 64.4±0.6 | 80.2±0.5 | 79.5±0.3 | 67.8±0.5 | | MAP++ | 83.5±0.4 | 64.7±0.7 | 79.8±0.4 | 80.1±0.3 | 68.2±0.4 | | EDEN | 84.6±0.5 | 65.8±0.6 | 81.4±0.3 | 81.3±0.2 | 69.7±0.3 | | Link-level | | Slashdot | | | WikiTalk | | | ---------- | --------------- | -------------- | ------------ | --------------- | -------------- | ------------ | | | Existence (AUC) | Direction (AP) | Link-C (ACC) | Existence (AUC) | Direction (AP) | Link-C (ACC) | | ADPA | 90.9±0.1 | 92.6±0.1 | 86.0±0.1 | 94.8±0.2 | 90.8±0.1 | 90.6±0.1 | | MAP++ | 91.2±0.0 | 92.7±0.1 | 86.4±0.1 | 94.7±0.1 | 90.6±0.1 | 90.5±0.0 | | EDEN | 91.8±0.1 | 93.1±0.0 | 87.1±0.2 | 95.4±0.1 | 91.7±0.1 | 91.0±0.1 | **Q4: Supplementary Material** Thank you for your notification, we will include implementation details in the README to guide any interested researchers to implement our method. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I still have the following concerns: Q1: What does "data-centric" mean in your response to Reviewer RNnj Q1? Q2(1): The revision plan does not convince me that the revised paper will be sufficiently readable. Overall, the motivation and method sections lack necessary explanations and clear relationships. For example, simply introducing the concept of Shannon entropy does not aid in understanding Eq. (1). You should provide a detailed explanation of why topology uncertainty is relevant to your motivation. Q2(2): When you refer to "EDEN" in Table 1, are you indicating the lightweight version? 
Although the authors discuss methods to accelerate EDEN in Section 3.4, these methods seem rough, and it is unclear whether they will impact the model's performance. Q3: Many graph embedding methods perform well on directed graphs, such as: [1] Scalable Graph Embeddings via Sparse Transpose Proximities. [2] ELTRA: An Embedding Method based on Learning-to-Rank to Preserve Asymmetric Information in Directed Graphs. I recommend that the authors conduct a comprehensive comparison with SOTA graph embedding methods. Minor New Concerns: The method "NSTE" is cited alongside the paper "Directed Graph Auto-Encoders". Is this a mistake? --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thorough review and apologize for any confusion. Please allow us to provide the following supplementary clarification, which we hope will strengthen your confidence in our work. **Q1** **Why and What is Data-centric Graph ML** Conventional model-centric approaches prioritize GNN design complexity while neglecting the data itself. Therefore, we advocate treating graphs as knowledge sources, where higher-order graph patterns—e.g., homophily, heterophily, motifs, and communities—can provide topology- and semantic-aware insights in model design and computation. **Existing but Insufficient Studies** - Dir-GNN [LoG 2024] – Utilizes **directed edges** to design directed message passing augmented with additional trainable parameters. - ADPA [ICDE 2024] – Utilizes **directed neighborhood connectivity** to obtain graph operators. These operators are used to obtain directed propagated messages and design trainable message aggregators. - MAP [WWW 2025] – Utilizes **node degrees and directed label patterns** to improve complex-domain message passing by optimizing the Magnetic Laplacian for each edge (determines the strength of edge directionality). **Q2(1)**: We acknowledge your concerns and recognize that this point involves an abstract concept that may not be immediately intuitive. 
Nonetheless, we are committed to presenting it clearly and accessibly. To that end, we offer the following clarification, along with further revision plans. **Topology Uncertainty and Our Motivation** The core of our work is grounded in the observation that real-world relational data often contain structural noise arising from dynamic evolution (e.g., spurious edges and missing links), resulting in topology uncertainty [Nature Physics 2012]. We quantify this uncertainty using structural entropy, defined as the information needed to describe the graph via random walks and communities [TOIT 2016]. By minimizing this metric, we construct the HKT to reorganize node affiliations, thereby effectively denoising the graph. **Revision Plan** (1) In the appendix, we will formalize topology uncertainty through structural entropy and highlight that minimizing structural entropy reduces graph noise and reveals data knowledge. Moreover, we will provide a real-world case study (citation network). (2) In Sec 3.1, we will emphasize that topology uncertainty is relevant to our motivation and justify that structural entropy minimization can filter graph noise while revealing data knowledge via the HKT. (3) In Sec 3.1-3.2, we will refine the neural mutual information estimator to emphasize its capacity for serving a dual role as both a predictive module and a further graph-denoising mechanism driven by the HKT. **Q2(2)**: **Experimental Results** All reported results are obtained with the lightweight EDEN, ensuring fair comparison. **Revision Plan for Lightweight EDEN Implementation Details** Para. 1: We will provide a detailed algorithm based on a Monte Carlo approach in the appendix, along with additional theoretical motivation and implementation details (Reviewer RNnj Q2). Para. 2: Based on Eq. (7–9), we will formalize the equation of class-specific prototypes and provide an intuitive explanation in the appendix. Para.
3: We will expand the description of the computation-friendly directed graph learning function and provide its formal equations. **Performance Impact** In response to your valuable comments, we have conducted additional experiments as follows: | Node-C | Tolokers (Acc) | Tolokers (Time) | Rating (Acc) | Rating (Time) | | ---------- | -------- | -------- | -------- | -------- | | EDEN (Ori.) | 82.1±0.3 | 240.8±6.5 | 46.8±0.3 | 132.3±4.9 | | EDEN (Light.) | 81.3±0.2 | 72.1±3.6 | 46.3±0.4 | 57.6±2.2 | **Q3** Motivated by your constructive suggestion, we have conducted the following experiments. | | CoraML (N-C) | CoraML (L-C) | WikiCS (N-C) | WikiCS (L-C) |Arxiv (N-C) |Arxiv (L-C) | | ---------- | -------- | -------- | -------- | -------- | -------- | -------- | | STRAP [KDD 2019] | 80.8±0.3 | 71.3±0.4 | 77.5±0.4 | 78.7±0.4 |66.9±0.5 |75.4±0.4 | | ELTRA [CIKM 2023] | 82.2±0.6 | 72.2±0.5 | 78.6±0.4 | 80.4±0.5 |67.6±0.6 |76.9±0.5 | | PSNE [CIKM 2024] | 81.7±0.4 | 72.8±0.3 | 78.2±0.2 | 79.8±0.4 |68.3±0.4 |77.8±0.4 | | EDEN | 84.6±0.5 | 75.2±0.5 | 81.4±0.3 |83.5±0.2 | 69.7±0.3 |80.2±0.2| Graph embedding methods do not explicitly leverage label supervision, resulting in lower performance compared to the semi-supervised baselines. However, their effectiveness under limited label settings underscores their utility. Therefore, we will provide a detailed comparison in the revised submission. **Minor New Concerns** We apologize for the confusion. This cited paper presents an autoencoder framework, differing from the semi-supervised paradigm. Thus, we only apply its encoder component, as described in Section "Neural Source and Target Encodings" (NSTE) in the original paper.
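To make the structural-entropy notion in the rebuttal above concrete: in the one-dimensional case of structural information theory, a graph's entropy is the Shannon entropy of the degree-proportional stationary distribution of a random walk. A minimal sketch (our illustrative simplification for undirected graphs; EDEN's directed, HKT-based formulation is more involved, and the function below is not the authors' code):

```python
import math
from collections import Counter

def structural_entropy_1d(edges):
    """One-dimensional structural entropy of an undirected graph:
    H1(G) = -sum_i (d_i / 2m) * log2(d_i / 2m),
    i.e. the uncertainty of the node visited by a stationary random walk."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    two_m = sum(deg.values())  # sum of degrees = 2 * number of edges
    return -sum((d / two_m) * math.log2(d / two_m) for d in deg.values())
```

Per the rebuttal, minimizing a hierarchical generalization of this quantity over candidate trees is what drives the HKT construction that reorganizes node affiliations.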
Summary: This paper focuses on data-efficient representation learning for directed graphs and presents a novel online knowledge distillation framework based on a hierarchical tree structure. Leveraging this framework, the authors introduce EDEN, a new method that can be employed as a plug-and-play module to improve performance of existing directed graph neural networks or as an entirely new neural network architecture. The paper elucidates the novelty of the proposed knowledge distillation framework for graph learning and the effectiveness of the EDEN method through detailed textual presentations, charts, and relevant theoretical proofs, in a manner that is both reader-friendly and comprehensible. Moreover, extensive experiments have been conducted to validate the practicality of the proposed method. Claims And Evidence: This work explicitly highlights in Section 1: Introduction that existing directed graph neural network methods fail to fully leverage the rich knowledge embedded in data, thereby imposing a limited upper bound on performance. Building on this observation, the authors conduct an in-depth analysis of the issue. In Section 2: Preliminaries, they define knowledge from two dimensions—semantic and topological—using a hierarchical tree structure. This forms the basis for constructing a novel online knowledge distillation framework that paves the way for data-efficient graph learning. The evidence supporting the framework is meticulously presented and thoroughly substantiated in Section 3: Methodology, Section 4: Experiments, and Appendix. Methods And Evaluation Criteria: For the Methods, the authors provide visual representations through Figure 1 and Figure 2, which facilitate reader comprehension. Additionally, they offer a granular exposition of the methods, relying on the formal problem definitions, equations, presentations, and theoretical proofs detailed in the Preliminaries and Methodology sections. 
Regarding the Evaluation Criteria, the authors elaborate on the experimental settings in the Experiments section, as demonstrated in Appendix A.10–A.13. Theoretical Claims: In this paper, the authors identify relevant theoretical foundations and conduct a rigorous theoretical analysis. For instance, in the Preliminaries section, the concept of the hierarchical knowledge tree, inspired by hierarchical coding systems, is introduced. The related literature is comprehensively cited, and a clear problem definition is provided for ease of understanding. Furthermore, in the Methodology section, the authors perform a detailed theoretical analysis of the hierarchical online data knowledge distillation process from the perspective of neural mutual information estimation, thereby demonstrating its feasibility. Experimental Designs Or Analyses: The experimental designs and analyses in this work are both comprehensive and sound. Specifically, the performance evaluations across 14 datasets of varying scales and four downstream tasks demonstrate the superiority of this work. Building on this foundation, the ablation studies and sensitivity analyses are sufficiently thorough, providing robust evidence for the contributions of each module. Additionally, the authors leverage the appendix to offer supplementary convergence analyses, further enhancing the thoroughness and completeness of the experimental work. Supplementary Material: The authors provide the code in the supplementary materials, which further completes this work. Relation To Broader Scientific Literature: This work, taking directed graph representation learning as an example, proposes a universal data-efficient online knowledge distillation framework for graph learning. This is an inspiring attempt. Subsequent research in related fields can build upon the hierarchical knowledge tree defined by the authors to make more attempts and promote the vigorous development of data-centric graph learning. 
Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. Novel and Well-Motivated: The paper focuses on data-efficient graph learning, an issue of critical importance for the future development of the graph learning community. Building on this motivation, the authors introduce a novel framework for online data distillation using a hierarchical tree structure. This approach is highly intuitive and natural, leveraging the inherent properties of tree structures in a way that aligns well with human understanding. I believe this framework will be highly beneficial for future research in this area. 2. Complete Methodology and Robust Theory: The authors provide a detailed exposition of the proposed online data distillation framework and the new method built upon it. The presentation is clear and easy to follow, making the complex concepts accessible to readers. The methodology is further enriched with clear definitions and thorough theoretical analysis, enhancing the interpretability and credibility of the proposed approach. 3. Significant and Comprehensive Results: As a general method for data-efficient graph representation learning, the proposed EDEN method demonstrates substantial performance improvements when used as a plug-and-play module for existing methods. Additionally, it achieves state-of-the-art performance when employed as a standalone neural network architecture. The authors conduct extensive experiments across multiple datasets, downstream tasks, and backbone models, providing solid evidence of the method's effectiveness. The experimental results are robust and comprehensive, supporting the claims made in the paper. 4. Well-Presented: The authors make excellent use of figures and clear presentations to simplify complex concepts, making the paper easy to understand. I appreciate the effort made by the authors to ensure that the content is well-organized and clearly communicated. Weaknesses: 1. 
Enhanced Explanations for Accessibility: The authors provide detailed theoretical analysis in the methodology section, which is commendable. However, understanding these complex formulas may still be challenging for readers who are not well-versed in the field. Could the authors consider offering more intuitive descriptions or visual aids to lower the barrier to entry for a broader audience? 2. Detailed Discussion of Future Directions: While the authors acknowledge the current limitations of their work, the discussion is somewhat brief. A more detailed exploration of promising future research directions would be valuable. Specifically, outlining potential avenues for achieving more data-efficient graph learning could provide valuable insights for the community. 3. Thorough Review Needed: The authors are encouraged to conduct a comprehensive review of the entire manuscript to avoid potential typographical errors and minor mistakes. Ensuring the accuracy and clarity of the text will enhance the overall quality of the submission. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **W1: Enhanced Explanations for Accessibility** We appreciate your in-depth feedback and acknowledge that certain sections, particularly the formula interpretations, require background knowledge, which may pose challenges for readers. To improve readability, we will incorporate more intuitive explanations in the revision process, as suggested, ensuring greater accessibility for our audience. Specifically, we plan to collect the notation used in the formulas into a table in the main document to guide our readers. We will also read through the manuscript and add more intuitive explanations of the function and meaning of each formula to make them easy to comprehend. We thank you again for your valuable comments to enhance the readability of our work. **W2: Detailed Discussion of Future Directions** We sincerely appreciate your suggestions and acknowledge that some content may be brief due to our focus on presenting methodological details within the space constraints of the main document. We are committed to providing more valuable insights for the further development of this field, and we plan to provide additional information either within the main document or in the appendix, with clear cross-references for readers. Specifically, we intend to open a new discussion in the appendix on potential proposals to further enhance the efficiency of EDEN, with theoretical support and valuable related works to inspire interested researchers. We also plan to add a clear indication in Sec 3.4 to guide our readers to this particular section in the appendix. Thank you! **W3: Thorough Review Needed** Thank you for your notification. We will thoroughly review the manuscript to eliminate errors and ensure both accuracy and clarity of the content. More importantly, we will ensure that each formula is well interpreted for our readers, with clearer notation in the table for their convenience.
--- Rebuttal Comment 1.1: Comment: I commend the authors for their detailed revision plan, which addresses my concerns. After carefully reviewing other reviewers' feedback, I recommend expanding the discussion on computational efficiency optimizations and research motivations in the revised manuscript. The machine learning field is shifting from model-centric to data-centric approaches, and this work could provide valuable insights. Accordingly, I have raised my score and voted for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback. Rest assured that we will meticulously follow your suggestions to enhance further the presentation of our paper in the revised version and address any potential concerns. We are pleased to know that our work has been recognized for its value in highlighting data-centric approaches within the graph ML community. Your endorsement means a great deal to us.
Summary: The authors propose a complex but effective method called EDEN, tailored for directed graphs: it first builds a coarse-grained Hierarchical Knowledge Tree (HKT), then refines the HKT with knowledge flow within it. The method is evaluated on 14 graphs and the results validate its effectiveness. Claims And Evidence: Claim 1: The author primarily discusses the limitations of Graph Neural Networks (GNNs) on undirected graphs, highlighting issues such as suboptimal data representations and inefficient learning at the model level. Claim 2: Additionally, the author argues that existing methods fail to fully capture the potential correlations between directed graph topology and node profiles at the data level. To address this, the proposed approach enhances existing methods by incorporating directed graph knowledge. Evidence 1: The author demonstrates that directed graphs provide richer and more diverse knowledge in terms of topology and node profiles, supporting this claim with case studies. Evidence 2: Extensive empirical studies are conducted in the experimental section to validate the proposed method. Methods And Evaluation Criteria: Yes. Theoretical Claims: No; since I lack the background knowledge, I may not be able to check all the details of the derivations. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: I still question whether such computationally expensive operations are truly necessary. Essential References Not Discussed: I am not sure. Other Strengths And Weaknesses: My main concern is the necessity of such a complex tree design and refinement process, which ultimately relies on a "Monte Carlo" method to enhance efficiency. 1. Could you give a real case that represents the so-called digraph data knowledge? Does it really exist? Other Comments Or Suggestions: 1.
In Figure 2 it is hard to distinguish the words and the intended message, and Figure 1 is quite small. Questions For Authors: See above. Ethical Review Concerns: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Due to the word limit imposed by the new regulations of the ICML 2025 rebuttal, we have not provided detailed references, but we will gladly supply them in our subsequent discussions if needed. **Q1: Relation To Broader Scientific Literature** We sincerely apologize for any confusion that may have been caused. Please allow us to clarify our motivations and methodology. **Motivation** Recent studies have shown that considering the directionality of edges provides a novel perspective to address the long-standing heterogeneity challenges in graph ML and achieves impressive performance. These studies collectively emphasize the necessity of extracting implicit data knowledge from directed graphs to drive the model's learning: - Dir-GNN [LoG 2024] facilitates homophilous aggregation by **directed edges**. - ADPA [ICDE 2024] derives high-order propagation operators from **directed neighboring connectivity**. - MAP [WWW 2025] optimizes the strength of directed edges by **node degrees and directed label patterns**. Despite their effectiveness, the data-centric concept remains unformulated, and optimization perspectives vary. Therefore, we aim to propose a unified framework, which leverages trees as the carriers of data knowledge and drives learning. Inspired by Reviewer zpYj Q3, we have conducted additional experiments, which illustrate the superiority of our framework. **Complexity** We acknowledge that the current EDEN may limit scalability. However, this is somewhat inevitable in the early stages of establishing a unified framework. We kindly ask for your understanding in this regard. That being said, we have made practical efforts to simplify EDEN. In Sec 3.4, we introduced a lightweight EDEN, derived from three orthogonal perspectives, and successfully applied it to the million-level Arxiv and WikiTalk. Table 1 and Figure 3 indicate that our method demonstrates a certain efficiency advantage. We also recognize the need for further efficiency improvements.
If possible, we plan to further discuss promising optimization directions in the appendix. For instance, we will explore the possibility of integrating the HKT construction with graph partitioning techniques to enable parallel processing. We trust that the above response will address your concerns and enhance your confidence in our manuscript. If you require additional clarification, please do not hesitate to contact me. **Q2: Other Strengths And Weaknesses** **Why Tree** (1) Graph Mining: We adopt the tree structure for its intuitiveness in revealing hierarchical knowledge within structured data. This concept has been theoretically substantiated in prior studies [STOC 1988, TOIT 2016, ICML 2024]. We kindly suggest that you refer to Sec 2.2 for a detailed elaboration. (2) Graph Learning: In some recent studies [NeurIPS 2021, NeurIPS 2024, ICLR 2024], researchers have abstracted the core of GNN as the tree-based message passing. They have theoretically demonstrated its effectiveness. Based on these insightful studies, the motivation for adopting a tree-based framework is its versatility and significant potential for data-centric graph learning. Additionally, we optimize the efficiency of this framework in Sec 3.4, rendering it comparable to or even superior to the best baseline shown in Sec 4. **Why Monte Carlo** This technique has recently been widely employed in tree-based numerical computation [SIGMOD 2023, WWW 2024]. In this paper, we construct HKT by minimizing structural entropy via a greedy algorithm, which iteratively selects optimal sets for leaf Combining and Lifting. By integrating the Monte Carlo method, we approximate solutions through random sampling, avoiding exhaustive enumeration of all branches for subsequent greedy selection. **Real Case** (1) Research Perspective: As previously mentioned in our response to Q1, the data knowledge derived from directed graphs provides key insights that improve model computation and yield superior performance. 
(2) Practical Perspective: Take interdisciplinary citation networks (e.g., AI4Science) as an example. In this context, at the first (leaf) level of HKT, we use edge directions to model intra- and inter-field citations. Based on this, HKT endows us with the ability to organize hierarchical knowledge. Specifically, at higher levels of the tree, we can interpret this as progressively more abstract concepts, such as research groups, institutes, and universities. Explicitly representing these concepts can be regarded as revealing data knowledge and providing insights to enhance tree-based graph learning. We trust that our response effectively addresses any potential concerns. Additionally, we plan to enrich Sec 1, 2.2, and 3.4, and the appendix, with more detailed motivation, background, and intuitive interpretations of the technologies. **Q3: Other Comments Or Suggestions** We sincerely appreciate your suggestion and plan to reformat the figure to a vertical layout to enhance clarity.
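As an aside on the Monte Carlo point above: the sampling-instead-of-enumeration idea the authors describe can be sketched in a few lines. This is a hypothetical illustration only; `entropy_reduction` and the function names are invented here and are not the paper's code.

```python
import random

def greedy_step_exhaustive(candidates, entropy_reduction):
    # Exhaustive greedy step: score every candidate Combining/Lifting operation.
    return max(candidates, key=entropy_reduction)

def greedy_step_monte_carlo(candidates, entropy_reduction, n_samples=32, seed=None):
    # Monte Carlo greedy step: score only a random subset of candidates,
    # trading exactness of the greedy choice for far fewer evaluations.
    rng = random.Random(seed)
    subset = rng.sample(candidates, min(n_samples, len(candidates)))
    return max(subset, key=entropy_reduction)
```

With enough samples the two steps coincide; with few samples, per-step cost drops from O(|candidates|) evaluations to O(n_samples), which is the efficiency gain the rebuttal appeals to.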
Distributed Event-Based Learning via ADMM
Accept (poster)
Summary: This paper introduces an event-triggered distributed learning method using ADMM to reduce communication in federated learning (FL) while handling non-i.i.d. data distributions. The key contributions claimed include: - A communication-efficient approach that reduces the number of message exchanges using an event-based trigger. - A convergence analysis for convex and nonconvex settings, with accelerated rates in the convex case. - Robustness to communication failures, both theoretically and experimentally. - Empirical validation using MNIST and CIFAR-10, showing up to **35% communication savings** over baselines like FedAvg, FedProx, SCAFFOLD, and FedADMM. The approach is interesting and contributes to reducing communication in distributed learning, but the empirical evaluation is limited to small-scale image classification datasets. ===== After rebuttal. ======= Thanks for the authors' responses. My questions and concerns have been addressed. I tend to accept this submission, so I have increased my score. Claims And Evidence: - Claim: "Our method reduces communication while remaining agnostic to data distribution." - **Partially valid:** These are general advantages of FL methods, and it's unclear whether the event-triggered mechanism offers a **significant advantage** beyond existing FL techniques. - Claim: "We ensure convergence by setting a time-varying $\Delta_k \to 0$." - **Unclear in practice:** The authors do not provide a **practical tuning strategy** for $\Delta_k$, which is critical for real-world implementations. - Claim: "Our work is the first to analyze communication failures in event-triggered ADMM." - **Partially valid:** While the paper contributes to understanding failures in event-based ADMM, prior work in **randomized communication reduction** and **partial client participation** should be acknowledged.
Methods And Evaluation Criteria: - The event-triggered ADMM approach is reasonable, but the paper misses discussions on prior ADMM-based FL methods and SDE-based convergence analyses. - Missing Related Work: - **FL methods using SDE/OPE tools**: - Liang, J., Han, Y., Li, X., Liu, Y., Zhang, Y., & Lin, Z. (2022). Asymptotic behaviors of projected stochastic approximation: A jump diffusion perspective. NeurIPS 2022. (Jump diffusion perspective) - Deng, W., Zhang, Q., Ma, Y. A., Song, Z., & Lin, G. (2024). On Convergence of Federated Averaging Langevin Dynamics. UAI 2024. (Federated Averaging Langevin Dynamics) - Glasgow, M. R., Yuan, H., & Ma, T. (2022). Sharp bounds for federated averaging (local SGD) and continuous perspective. AISTATS 2022. (Sharp bounds for FedAvg) - **Federated ADMM studies**: - Swaroop, S., Khan, M. E., & Doshi-Velez, F. (2025). Connecting Federated ADMM to Bayes. ICLR 2025 (to appear). - Chen, Y., Blum, R. S., & Sadler, B. M. (2022). Communication efficient federated learning via ordered ADMM in a fully decentralized setting. CISS 2022. - Experimental limitations: - Only small-scale datasets (MNIST, CIFAR-10) are tested. - The paper ignores asynchrony and stragglers, which are critical in real-world FL. Theoretical Claims: - Theoretical results rely on strong convexity: - The claimed accelerated rates are only valid under strong convexity, limiting applicability to deep learning models. - The nonconvex case analysis (Theorem 2.3) provides only a slow $O(1/k)$ rate, which does not demonstrate a significant advantage over existing FL methods. - The analysis is for deterministic optimization while practices favor stochastic optimization. - Interpretability issue in Algorithm 1: - The term $d_{k+1}^i$ is not well explained. 
- **Question:** Could the authors clarify how this term fits into the computation and **why it is needed?** Experimental Designs Or Analyses: - Communication savings are demonstrated, but scalability is unclear: - The method is tested on small networks. What happens when scaling to hundreds or thousands of clients? - There is no study on network delays or heterogeneous computing capabilities. - No clear study on the benefits of event-triggered communication: - The results show communication savings, but is there a trade-off in terms of final model performance? - **Question:** Could the authors summarize the **practical benefits** of event-triggered communication? Supplementary Material: The appendix contains proof details and additional experiment setup and results. I didn't check very carefully, though the proof seems to be correct. Relation To Broader Scientific Literature: - The paper didn't discuss some prior work in ADMM variants for FL, particularly: - ICLR 2025 (Swaroop et al.): Connections between Federated ADMM and Bayesian inference. - CISS 2022 (Chen et al.): Ordered ADMM for fully decentralized FL. - Misses FL literature using SDE-based analysis: - UAI 2024 (Deng et al.): Federated Averaging Langevin Dynamics. - NeurIPS 2022 (Liang et al.): OPE/SDE tools in FL. - The novelty claim on communication failures seems to be overstated since prior work already discusses robust FL under partial participation. The author should highlight the difference between them. Essential References Not Discussed: See previous report for missing references. Other Strengths And Weaknesses: **Strengths:** - The idea of event-triggered ADMM is interesting and could be useful for communication-limited FL settings. - The paper provides some convergence guarantees, though they rely on strong convexity assumptions. - The communication savings are demonstrated empirically. **Weaknesses:** - Experiments are too simple (MNIST, CIFAR-10) and may not generalize. 
- Some references are missing (SDE, ADMM in FL). - Unclear benefits of event-triggered communication beyond saving messages. Other Comments Or Suggestions: - Clarify the interpretation of $d_{k+1}^i$ in Algorithm 1. - Provide a practical tuning strategy for $\Delta_k \to 0$. - Discuss how this approach generalizes to asynchronous/decentralized FL settings. Questions For Authors: See the other comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer for their thorough review and constructive feedback. Below, we address the specific points raised. - **Justification of Experiment Design and Generalization:** The point of our manuscript is to present a distributed optimization/learning algorithm that is both communication-efficient and robust, even under heterogeneous data distributions (e.g., each agent having one digit in MNIST, shown in Fig. 8). The experiments were designed to demonstrate the feasibility of our method in these scenarios. Additionally, we show that our approach scales to larger networks, with examples using 100 clients to train a CNN on CIFAR-10 (Fig. 3) and 50 agents that communicate over a graph (Fig. 12). Our theoretical result (Corollary 2.2) guarantees that the method can indeed scale to large networks. Numerical experiments align with these theoretical predictions, highlighting that our approach is effective even in the presence of 100 agents or more. - **Missing References:** Thank you for bringing these papers to our attention. We will incorporate them into the related work section of the revised manuscript. The paper by Swaroop et al. (2025) is very recent, so it was not included in our initial submission. Regarding Chen et al. (2022), which discusses the ordered ADMM variant, we cite other relevant works related to our ADMM variant. We will include Chen et al. (2022) in the revised manuscript. We will also add references to alternative convergence techniques, such as Liang et al., 2022 and Deng et al., 2024. Additionally, (Glasgow, 2022) proves FedAvg is suboptimal under heterogeneity, which aligns with the findings of (Li, 2020c) that we already cite, but we will include this reference as well. - **Communication vs Performance Trade-off:** Our framework allows explicit trade-offs between communication load and solution accuracy, which we demonstrate through experiments in Fig. 8, 9, 11, 12 (App. G). 
These curves clearly show the relationship between the communication threshold and the resulting model accuracy. - **Benefits of Event-triggered Communication:** Our framework, based on ADMM with event-triggered communication, not only saves communication but also ensures convergence under heterogeneous data distributions. This is not guaranteed by many other federated learning methods. We also provide a bounded error guarantee, which can be controlled by a single threshold value ($\Delta$). Our numerical examples (see, e.g., Fig. 3, as well as 8, 9, 11, 12) highlight that the event-triggered communication strategy requires much less communication for a given target accuracy than a vanilla communication strategy that randomly communicates among neighbors (in the context of our ADMM-based approach). **Clarifications** - **Interpretation of $d_{k+1}^i$:** The term $d_{k+1}^i$, defined in line 189, encapsulates the combined effect of the local primal and dual updates. Its change essentially quantifies the deviation of the local state from the last communicated state and serves as the key signal for triggering communication (see (2)). - **Practical Tuning Strategy for $\Delta_k$:** Thank you for raising this point. We discuss a tuning strategy in line 347, where we suggest a time-varying schedule, such as $\Delta_k = \Delta_0 / (k+1)^t$ with $t > 0$. Convergence is further analyzed in App.F. Alternatively, the communication threshold can be actively tuned based on feedback from the system, as in [1]. [1] Cummins, M., Er, G. D., Muehlebach, M. “Controlling Participation in Federated Learning with Feedback” (arXiv:2411.19242). - **Generalization to Decentralized Settings:** We demonstrate in App. G (see Fig. 11 and Fig. 12) that our algorithm operates over a network of agents, supporting decentralized FL systems. 
- **Covering Asynchrony, Network Delays and Heterogeneous Computing Capabilities:** We already discuss the adaptability of our approach to asynchronous systems at the end of the manuscript (line 412). While initially framed as a synchronous method, event-based communication naturally extends to asynchronous settings, enabling robust operation in real-world scenarios with varying network reliability and synchronization. We also propose to model the effect of stragglers, network delays and heterogeneous computing capabilities as communication drops, which are effectively managed via our reset strategy. Proposition 2.1, Corollary 2.2 and Theorem 4.1 provide theoretical guarantees for convergence under these conditions. --- We believe we have addressed all the questions and provided the necessary clarifications. We kindly request raising our paper’s score based on the responses provided. If there are further questions or additional clarifications needed, please let us know.
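For concreteness, the send-on-delta trigger and the decaying schedule $\Delta_k = \Delta_0 / (k+1)^t$ discussed in this rebuttal can be sketched as follows. This is a minimal scalar illustration with invented names, not the paper's implementation; the actual trigger is the one defined in the paper's condition (2).

```python
def threshold(k, delta0=1.0, t=0.5):
    # Time-varying schedule Delta_k = Delta_0 / (k + 1)^t with t > 0,
    # so the trigger threshold vanishes as iterations proceed.
    return delta0 / (k + 1) ** t

def should_communicate(d_local, d_last_sent, k, delta0=1.0, t=0.5):
    # Send-on-delta rule: communicate only when the local quantity has
    # drifted from the last transmitted value by more than Delta_k.
    # (In the vector case, abs would be replaced by a norm.)
    return abs(d_local - d_last_sent) > threshold(k, delta0, t)
```

Early on the threshold is loose and most exchanges are skipped; as the threshold decays, communication becomes more frequent and the residual error shrinks, which matches the convergence-to-a-neighborhood results in the paper.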
Summary: The authors present an event-based distributed agent framework that leverages a relaxation of the Alternating Direction Method of Multipliers (ADMM) to provide a framework with a reduced communication cost. Claims And Evidence: The authors make sufficient claims to support their idea and also list its limitations, which is notable. Overall, I feel that the evidence provided is enough to verify the authors' claims. Methods And Evaluation Criteria: The method is compared against a suite of recent methods, which seems sufficient given the application at hand. Theoretical Claims: The theoretical claims are sound, but in order to follow them fully one has to delve into the appendix. I feel that _critical_ claims and ideas should be present in the main text. Experimental Designs Or Analyses: Overall the experimental design is acceptable; however, I would appreciate an ablation study, as well as the use of data drawn from biased/shifting distributions, to see how robust the framework is in such conditions. Supplementary Material: Very little; it is too long. Relation To Broader Scientific Literature: The paper reduces the communication overhead in a distributed system when local models undergo substantial transformation. It is heavily related to convex safe zones [1], and it would be great for the authors to compare against similar literature, which is missing from the text. [1] Distributed query monitoring through convex analysis: Towards composable safe zones, Garofalakis et al. Essential References Not Discussed: I feel that the paper lacks discussion of the broader topic mentioned above, but other than that the authors have included a sufficient amount of references. Other Strengths And Weaknesses: While the paper proposes a significant reduction in the communication cost, it is still missing important guarantees required in such a system.
The authors also listed most of them, but I feel that not catering for bad actors and gradient poisoning attacks limits the system's applicability. It would be great if the authors discussed potential methods to mitigate these concerns. Other Comments Or Suggestions: Nothing of note. Questions For Authors: Most of my theoretical concerns were answered in the previous segments. However, I could not find the code to replicate the experiments in the supplied submission. Is there any reason why the authors did not include the associated artefacts? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the time and effort invested in evaluating our work. Below, we address each of the points raised. - **Robustness to Biased/Shifting Distributions:** Our experiments already use heterogeneous datasets with inherent bias and distribution shifts among agents (e.g., each agent having one digit in MNIST, shown in Fig. 8 and other cases in Fig. 3,9,11 and 12). - **Relevance to Safe Zone Design:** We acknowledge the relevance of reducing communication overhead through local conditions in distributed systems. We will incorporate the corresponding reference and the following discussion in the revised version of our manuscript: Our approach similarly monitors local models through local constraints (our communication event definition, see (2) in the paper), which collectively guarantee the global condition of overall error remaining bounded. While thresholding can be seen as a subset of such local constraints, our methodology is directly inspired by the send-on-delta (SoD) concept (Miskowicz, 2006) and event-based control literature, where communication is triggered by significant state changes. - **Bad Actors and Gradient Poisoning Attacks:** Our method is compatible with existing techniques that mitigate issues related to bad actors and gradient poisoning. Addressing all possible challenges in federated learning is beyond the scope of a single approach. However, our event-based methodology can be integrated with robust aggregation or anomaly detection methods [1,2] to improve security without compromising communication efficiency. We will add these methods into our discussion as possible compatible solutions. [1] Yin, D., Chen, Y., Kannan, R. &amp; Bartlett, P. (2018). Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates. Proceedings of the 35th International Conference on Machine Learning, 80:5650-5659 [2] Pillutla K., Kakade S. M. and Harchaoui Z. 
(2022) "Robust Aggregation for Federated Learning," IEEE Transactions on Signal Processing, 70:1142-1154 We thank the reviewer for bringing this up. - **Code in the Submission:** All details related to our implementation can be found in App G. We will release our code if the manuscript gets accepted. However, we are happy to share the current version of the code upon request. --- We believe we have addressed all points and agree to add further discussions to the final version. We kindly request raising our paper’s score based on our responses. If further clarifications are needed, we are happy to provide them and refine our work.
Summary: This paper proposed an ADMM-style algorithm with event-triggered communication for minimizing the sum of a smooth, possibly nonconvex function and a closed proper convex regularizer subject to linear equality constraints. This general problem formulation subsumes the typical consensus optimization framework, which enables one to apply the proposed algorithm to the typical federated learning problem setup (parameter server) and decentralized communication topologies. Theoretical guarantees are provided using a Lyapunov analysis. In the strongly convex setting, this work can achieve an accelerated linear convergence with a dependence on the square root of the condition number, up to a neighbourhood of the minimizer that depends on the threshold of communication triggering, as well as the periodic "resets" required for handling dropped messages. In the non-convex case, a fast O(1/k) rate is proven. The proposed method involves exactly solving nested optimization problems ("argmin"); however, in practice this is replaced with a few stochastic gradient steps. Empirical results are provided training an MLP on MNIST, a ConvNet on CIFAR10, and linear regression with L1 regularization in the appendix, comparing to established federated learning baselines. Claims And Evidence: All claims are well supported; theoretical results are the primary contribution and are relatively impressive. Empirical results are convincing, with sufficient details provided in the appendix. Methods And Evaluation Criteria: The main claim is to reduce the number of required communication rounds. Thus focusing on validation accuracy of downstream tasks as a function of communication rounds is appropriate. Theoretical Claims: Coarsely went through the proofs in the appendix, which look ok. Experimental Designs Or Analyses: Yes, checked all experiments, including those in the appendix. All appear sound. Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: Paper is well situated within the broader literature on federated learning. Essential References Not Discussed: Nothing major, but the approach to handling dropped packets is a bit brutish. Essentially, exact synchronization is needed to mitigate the effect of dropped packets, and the frequency of this synchronization is reflected in the residual error of the convergence bounds. There are already tools in the gossip literature for dealing with dropped packets by tracking running sums (Hadjicostis et al., IEEE TAC 2016). I am curious whether such approaches can be combined with the proposed method to better handle dropped packets. As can be seen in Figure 10 on page 33 of the Appendix, dropped packets require significant synchronization to establish fine-grained convergence. Hadjicostis et al., "Robust distributed average consensus via exchange of running sums," IEEE TAC, 2016. Other Strengths And Weaknesses: Well-written paper with solid results, convincing empirical experiments, a well-documented experimental setup in the appendix, and several additional ablations in the appendix. Other Comments Or Suggestions: N.A. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and recognition of our work’s strengths. We reviewed (Hadjicostis et al. 2016), and their approach using running sums offers an interesting alternative to periodic synchronization. While our method ensures strong theoretical guarantees, exploring tools from the gossip literature could be a valuable direction for future work to further mitigate communication drops. We appreciate the reviewer’s insights and are happy to provide further clarifications if needed.
Policy-labeled Preference Learning: Is Preference Enough for RLHF?
Accept (spotlight poster)
Summary: This paper introduces PPL, a new method for learning policies from human preferences based on the regret-based model of preferences. The authors note a key distinction between the regret model used in prior work (CPL by Hejna et al.) and propose an improvement upon it to consider the current policy. The authors evaluate the method on a number of tasks with heterogeneous and homogeneous policy data. Claims And Evidence: The claims of the paper are generally well supported. The authors do a good job: * Explaining the differences with prior work * Deriving a new update rule * Evaluating on a number of tasks. There are a few missing citations that I believe should be added, and adjustments to be made, which I left to the essential references section. Methods And Evaluation Criteria: The authors present a nice small-scale example of where the previous preference model might be incorrect in Fig 3. The method's derivation is intuitive and makes sense. One weakness I would like to point out is that though the authors spend time deriving their method, the final practical implementation is not easily graspable from the main body of the paper, and is instead most easily found in the appendix. I would encourage the authors to include a derivation of the final objective within the text of the main paper; I think this would make the work much more understandable and stronger. Theoretical Claims: * The theoretical claims around preference models are supported by apt examples. Note that I did not carefully check all proofs provided in Section 3.2, but I did scan the derivations in the Appendix. I believe some of the results need proper attribution to prior works (see later section). * While the theoretical results are interesting, it was less clear from an intuitive perspective why we should care about them. It would help if the authors could provide more support for what insights these results offer. * Some of the results could be explained a bit more.
Why is the expectation in Theorem 3.6 taken over only preferred segments? This does not seem to be explained. * I do not believe that all claims made deserve the level of distinction provided. For example, I'm not sure that Theorem 3.4 constitutes a theorem. A similar result was considered a Proposition in "Show, Don't Tell: Aligning Language Models with Demonstrated Feedback" by Shaikh et al. Experimental Designs Or Analyses: **Reproduction** The experimental design is good -- it considers two cases of preference data from heterogeneous or homogeneous policies. The authors went the extra mile reproducing the original results from CPL. However, the performance on the authors' own datasets for CPL is far worse than on the original CPL datasets, in fact so much so that SFT often performed better than CPL. In particular, performance doesn't go up at all for CPL in Fig 5 even in the homogeneous case. One question I have is whether the authors re-trained the oracle policy with the behavior segments used for data added to the replay buffer, as done in the original CPL paper, to ensure more accurate advantage function estimates. **Performance** The authors' method performs favorably in comparison to prior methods. Supplementary Material: I did a quick review of the supplementary material, and it is generally useful in understanding the method. Relation To Broader Scientific Literature: The authors sufficiently cover related work, aside from the points brought up in the next section of the review. Essential References Not Discussed: **Regret-based Preference Model** The regret-based preference model was not first introduced by Hejna et al. It was instead introduced by Knox & Hatgis-Kessell et al. in "Models of human preference for learning reward functions" published in 2023. This was further explored in "Learning optimal advantage from preferences and mistaking it for reward" by Knox et al.
It would be nice if the authors could attribute the regret-based preference model to these works, rather than CPL, which used it to derive their algorithm. **Theoretical Justification** * The derivation of Lemma 3.2 bears high similarity to "From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function" by Rafailov & Hejna et al., published in COLM 2024, and should probably be attributed there. * Theorem 3.4 seems highly related to Proposition A.1 from "Show, Don't Tell: Aligning Language Models with Demonstrated Feedback" by Shaikh et al., published in ICLR 2024. The only distinction between this and Theorem 3.4 is the use of the full discounted MDP. However, up to notation and changing to the discounted KL, they are the same. Other Strengths And Weaknesses: Strengths: The paper identifies a theoretical weakness, develops a method to address it, and demonstrates improved performance. Weaknesses: some questions remain around the dataset generation and evaluation procedure. The presentation could be improved, particularly in providing the final practical method and explaining the theory. Finally, some critical citations are missing. Should the authors address the weaknesses listed (which I believe should be very doable), I am willing to raise my score. Other Comments Or Suggestions: * Include a discussion of why the presented theory is valuable. If space is a limitation, I would rather have this in the main text and delegate some of the intermediary results to the appendix. * Include the final practical objective in the main text. I believe this is currently in a later section of the appendix. Questions For Authors: * Why is CPL performance on the re-created datasets so low if the authors used the same procedure? * Could the authors clarify if they trained their label-generating oracle policy with the trajectories used to sample preferences in the replay buffer of the SAC policy? This would lower the TD-error when estimating the advantage. 
* Why are only positive segments used for Theorem 3.6? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their time and thorough evaluation of our paper. We have organized our responses to your comments below. If any of our responses fail to address the intent of your questions or if you have remaining concerns, please let us know. **1. Final practical implementation** We agree that it would be beneficial for readers to have a clearer view of the final implementation in the main text, and we will revise to address this. **2. Regret-based preference model attribution** Thank you for clearly outlining the relationships with the literature. We will revise and add a reference to [Knox et al. 2022] to clearly attribute the origin of the regret-based preference model as referenced in CPL. **3. Including the meaning of theoretical results** We appreciate your insight that providing intuitive explanations for our theoretical results can further enhance the reader's understanding. We have endeavored to include interpretive statements following each theoretical result. Could you please kindly indicate if there are specific parts of the theoretical results that you believe would benefit from additional clarification? **4. Theoretical justification** - **(Lemma 3.2):** We have confirmed that [Rafailov et al. 2024a] derived the same theoretical results as [Rafailov et al. 2024b], and we will cite this in the paragraph explaining Lemma 3.2. - **(Theorem 3.4):** While [Shaikh et al. 2024] presented their results in a contextual bandit setting, our Theorem 3.4 extends this to MDPs. This extension is non-trivial due to the presence of stochastic transitions. Though [Zeng et al 2024] achieved results more similar to ours than [Shaikh et al. 2024], their proof applies only to deterministic transitions. Thus, Theorem 3.4 makes a distinct theoretical contribution by addressing environmental stochasticity. **5. Theorem 3.6** First, Corollary 1 of [Hejna et al. 
2024], which motivated Theorem 3.6, showed that maximizing the optimal advantage is equivalent to learning the same optimal policy as maximizing the reward. In this context, Theorem 3.6 was written to make it intuitively easier to understand what kind of policy is learned from a reinforcement learning perspective when maximizing negative regret. The reason for considering only the preferred segment here is to interpret this intuitive meaning more clearly. For less preferred segments, the PPL objective can be expressed as: $\arg \min\_{\pi\_{\psi}}\Big(\mathbb{E}\_{\zeta^- \sim \mathcal{D}}[-Reg^{\pi^-}\_{\pi\_{\psi}}(s^- ,a^-)-\alpha \log \pi\_{\psi}(a^- |s^-)]\Big) \equiv \arg\max\_{\pi\_{\psi}}\Big( \mathbb{E}\_{\zeta^- \sim \mathcal{D}}[\bar{D}\_{KL}(\pi^- || \pi\_{\psi} ; s^-, a^-)]\Big)$ However, since the left-hand side represents cost minimization rather than reward maximization in traditional reinforcement learning, it is difficult to interpret intuitively. To avoid this ambiguity, we only dealt with the preferred-segment perspective. **6. Dataset Generation and Evaluation Procedure** - **Dataset Generation:** During SAC training, we saved checkpoints of models showing 20%, 50%, and 100% success rates. We loaded each policy model to generate 20K segments, and for dense cases, randomly sampled 2.5K segments. For the labeling procedure, we used $V^{\pi^*}(s_k)-V^{\pi^*}(s_0) + \sum_{t=0}^{k-1} r(s_t,a_t)$ from CPL, and calculated $V^{\pi^*}$ using 64 MCMC samples with a 100% success rate critic $Q^{\pi^*}$. For a fair comparison, all algorithms were trained with the same labels, and when policy labels were needed in PPL, logits were calculated using policy information stored during rollout. - **Evaluation Procedure:** We conducted evaluations following the same method as CPL: (1) run 25 evaluation episodes, (2) calculate averages across 8 neighboring checkpoints, (3) average across 4 seeds, and (4) report the maximum values in the table. 
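For additional concreteness, the scoring rule above can be sketched in a few lines of Python (a simplified, hypothetical illustration; the array layout and function names are ours, not the exact implementation, and `values[t]` stands in for the Monte Carlo estimate of $V^{\pi^*}(s_t)$):

```python
import numpy as np

def segment_score(values, rewards, k):
    # CPL-style label score for a length-k segment:
    # V*(s_k) - V*(s_0) + sum_{t<k} r(s_t, a_t)
    return values[k] - values[0] + rewards[:k].sum()

def preference_label(seg_a, seg_b, k):
    # Prefer the segment with the higher score (0 = A preferred, 1 = B preferred).
    score_a = segment_score(seg_a["V"], seg_a["r"], k)
    score_b = segment_score(seg_b["V"], seg_b["r"], k)
    return 0 if score_a >= score_b else 1
```

Both segments are scored with the same oracle value function, so all algorithms can then be trained on identical labels.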
For clarity of experimental design, we will add the provided response to the appendix. **7. Retraining the SAC oracle for accurate advantage estimation** In the CPL dataset creation process, data segments are added to the replay buffer of the oracle SAC, which is then re-trained; this helps lower the TD error and improve the accuracy of advantage estimation. While we do not adopt the same procedure, we used only a single SAC oracle and trained all algorithms with the same labels. Therefore, it is difficult to conclude that the inaccuracy of advantage estimates caused performance degradation only for CPL. One likely reason for CPL's struggle with noisy labels appears to be similar to the likelihood mismatch issue discussed earlier. Since CPL does not consider policy labels, it assumes all training data segments originate from an optimal policy. We suspect that when noisy labels occur, suboptimal data segments might be incorrectly treated as if they were generated by the optimal policy, potentially making the model more sensitive to noise and degrading its learning performance. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses and work in the rebuttal, in addition to answering my questions! It still seems like the authors do not follow the exact procedure used in CPL for generating the data (namely, they do not minimize the Bellman error of the Q function used to estimate A on the rollout datasets). At a minimum, it would be good to state this explicitly, as the draft currently states that the same data generation procedure was used ("we follow CPL’s preference dataset generation procedure" in line 358). At best, it would be good to know if this could explain CPL's low performance versus other explanations. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the prompt and thoughtful feedback. 
To better reflect the differences you pointed out, we will revise the manuscript as follows: **Line 356–359 (Left column):** To evaluate performance on offline datasets generated from diverse policies, we aimed to follow CPL’s preference dataset generation procedure. However, there are two key differences in our implementation of the critic. First, we utilize raw rollout data without any trajectory truncation. Second, whereas CPL applies a specific technique to reduce TD-error by re-training the critic with all rollout data added to the replay buffer, we generated preference labels *without such retraining*. As a result, our labels may be noisier than those in CPL. Nevertheless, to ensure a fair comparison, all algorithms were trained using the same set of labels. For further details, please see Appendix E.5. **Line 386 (Left column):** We suspect that CPL’s lower performance on our dataset may partially result from the absence of this retraining technique to reduce TD-error. However, since all algorithms were trained using the same labels, we believe this performance gap is better explained by CPL’s sensitivity to label noise. This sensitivity likely stems from CPL’s implicit assumption that all training trajectories are generated by an optimal policy.
Summary: The paper proposes Policy-labeled Preference Learning (PPL) to mitigate what it calls “likelihood mismatch” in RLHF. The authors illustrate how policy labels (i.e., knowledge of which behavior policy generated which trajectory) can help disentangle suboptimal policy actions from stochasticity in the environment and thereby improve preference modeling. They further introduce contrastive KL regularization as a tool to align the learned policy with the policies associated with higher-preference trajectories. **Update after Rebuttal:** I'm keeping my positive score. Claims And Evidence: In my opinion, the main claims of the submission are well-supported. Methods And Evaluation Criteria: The authors perform experiments on six MetaWorld environments and test both homogeneous and heterogeneous offline datasets (homogeneous/heterogeneous here refers to the generating behavior policies). MetaWorld is a robotics benchmark, which is a reasonable choice in the context of RLHF. However, the authors begin the paper with a discussion around LLM fine-tuning, so there is a slight disconnect between how the paper positions itself in the introduction and the type of experiments it conducts. Theoretical Claims: I read through the proofs of Lemmas 3.2 and 3.3 and Theorem 3.4 and found no issues. Experimental Designs Or Analyses: I read through the experimental details and superficially skimmed through the additional details in the appendix. Question to the authors: Could you please clarify whether in the experiments of Section 4.2 you assume access to the policy labels? This is not fully clear to me from the text. Supplementary Material: The supplementary material consists of code, which I did *not* review. Relation To Broader Scientific Literature: The paper can be viewed as a refinement of contrastive preference learning (CPL) by explicitly taking into account the behavior policy and the "likelihood mismatch" that can arise. 
Essential References Not Discussed: As far as I am aware, related work has been appropriately discussed. Other Strengths And Weaknesses: ### Strengths 1. The authors highlight the issue of likelihood mismatch due to heterogeneous behavior policies. 2. Using the regret w.r.t. the behavior policy, in contrast to the optimal advantage as in CPL, appears like a natural approach to address the challenge of diverse generating policies. 3. The experiments support the claims that incorporating policy labels can improve performance substantially in certain cases. ### Weaknesses 1. One key assumption is that either we know the precise policy that generated each trajectory or we can assign a "pseudo-label". In many real-world applications (especially large, unstructured offline data), extracting reliable policy labels is nontrivial. The paper briefly uses deterministic pseudo-labels but, as far as I am aware, does not deeply analyze the accuracy needed or the possible mismatch when pseudo-labels are very noisy. 2. A minor weakness is that the role of coverage is not sufficiently discussed in the paper in my opinion. Likelihood mismatch seems to only be an issue when coverage is incomplete (or biased). Discussing this in more detail seems especially important around Figure 3. Other Comments Or Suggestions: In my opinion, "Is Preference Enough for RLHF?" is a slightly misleading title, as you are explicitly working under the premise that examples are generated using heterogeneous/diverse policies. You are not discussing the limitations of RLHF w.r.t. pairwise preferences more broadly. This part of the title could maybe be made less broad and more accurate. Questions For Authors: The experiments focus on comparing the performance of PPL only with CPL, P-IQL and SFT. For the case of heterogeneous data, PPL is only compared to CPL in the main text, with additional plots shown in the appendix. Is there a reason why you are not comparing with DPO and DPPO? 
Are there other offline RL methods that explicitly address similar issue to the likelihood mismatch (i.e., policy mismatch) that you could compare to? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their time and thorough evaluation of our paper. We have organized our responses to your comments below. If any of our responses fail to address the intent of your questions or if you have remaining concerns, please let us know. **1. Paper positioning relative to the introduction** To emphasize that the PPL algorithm learns policies without explicitly learning a reward function, we mentioned in the Introduction that we are adopting the Direct Preference Optimization framework. Although the DPO paper, which introduced this framework, has been studied in the LLM domain, we noted in line 25 (right column) that its deterministic transitions make it an unsuitable setting for demonstrating the advantages of PPL. **2. Label usage in experiments** In the experiments in Section 4.2, PPL assumed that policy labels are accessible. However, since extracting reliable policy labels is nontrivial, in Section 4.3 we designed and experimented with PPL-deterministic, which utilizes pseudo-labels. **3. Analysis under noisy labels** Thank you for your comment regarding noisy pseudo-labels. We are currently conducting experiments in which we add Gaussian noise $\mathcal{N}(0, \sigma^2)$ to the ground-truth labels. In the button-press environment, we observed that performance degrades as the noise level $\sigma$ increases. Specifically, at $\sigma$=0.1, the performance was lower than that of PPL but higher than that of PPL-deterministic. However, at $\sigma$=0.3, the performance dropped below that of PPL-deterministic. This indicates that if the pseudo-label is not sufficiently close to the policy label, its performance will be inferior to the proposed PPL-deterministic, thereby confirming that the pseudo-label used in PPL-deterministic is a reasonable alternative. **4. 
Likelihood mismatch and role of coverage** As described in Figure 3 and in the section on likelihood misalignment, the likelihood mismatch issue does not primarily arise from dataset coverage. Instead, it occurs when the algorithm assumes that trajectories are always generated by the optimal policy. As shown in Figure 3, suppose the black trajectory is generated by the optimal policy, while the red trajectory is produced by a suboptimal policy. Let us consider the case where all four trajectories—which ensure complete data coverage—are included in the dataset. Without the policy label, the estimated MDP appears as shown in the right panel of Figure 3, and it is evident that it still misinterprets the ground-truth MDP depicted in the left panel. **5. Misleading title** Thank you for your detailed feedback. However, we believe that the scenario considered in our work—where data is generated by heterogeneous/diverse policies—is more justified and encompasses a broader range of realistic situations than merely assuming a homogeneous dataset. Previous studies using offline datasets have not taken into account the policy that generated the dataset. In contrast, our paper highlights this point and proposes an algorithm that leverages policy information, thereby emphasizing that in RLHF settings, both preference and policy information should be considered. We consider the current title appropriate, but if you feel that this context is not sufficiently conveyed, we would greatly appreciate any further suggestions. **6. Comparing with DPO and DPPO** Since [Rafailov et al. 2024] presents an algorithm designed for bandit settings, it could not be directly applied to sequential tasks such as robotic manipulation, and therefore was not compared separately. In the case of DPPO, since CPL assumes a Gaussian policy with fixed standard deviation, its loss simplifies to an L2 loss between actions—identical to the implementation used in DPPO [An et al. 2023]. 
We confirmed this similarity based on the official CPL GitHub repository, so no separate comparison was performed. **7. Other offline likelihood (policy mismatch) literature** To the best of our knowledge, we have not found any offline RL algorithm that considers likelihood mismatch or policy mismatch. **References** * [Rafailov et al 2024] : Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization:Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36. * [An et al 2023] : An, Gaon, et al. "Direct preference-based policy optimization without reward modeling." Advances in Neural Information Processing Systems 36 (2023): 70247-70266.
Summary: The paper proposes a novel approach to preference-based RL, grounded in regret-based modeling of human preferences. Unlike prior work that also uses regret-based modeling, the paper explicitly labels the behavior policy. This, as argued by the authors, is important in order to resolve the likelihood mismatch issues when utilizing regret-based modeling of human preferences. The paper provides a formal analysis of their novel framework, which yields a new algorithmic approach to preference-based RL. Finally, extensive experiments are conducted to showcase the benefits of the approach. --- ## update after rebuttal I thank the authors for their response. I will keep my original score. Claims And Evidence: In my opinion, the set of results in this paper is quite convincing and it clearly demonstrates the utility of the proposed approach. The paper provides an in-depth analysis (both theoretical and empirical) of the considered problem setting. I also appreciate the intuitive examples in Section 3 which motivate the approach. Methods And Evaluation Criteria: The paper provides both a theoretical investigation of the problem setting and an empirical evaluation of the proposed approach. The test-bed is primarily based on simulation-based robotics tasks. It would be interesting to see whether the results generalize to other domains, but I don't think this is necessary given that the contributions are substantial. Theoretical Claims: I've taken a look at the theoretical claims, including the proofs in the appendix. The claims and the proofs seem to be correct. Experimental Designs Or Analyses: The experimental test-bed is rigorous and follows closely related prior work on preference-based RL. The experiments are extensive and test various aspects of the proposed approach. I believe that the experiments are well-designed and I didn't spot major issues with the experimental analysis. Supplementary Material: I've taken a look at the proofs. 
The appendix contains quite a few additional experimental results, which I skimmed through but didn't check carefully. Relation To Broader Scientific Literature: This work is broadly related to the literature on AI alignment, focusing more specifically on preference-based RL. The proposed approach could potentially be more broadly applicable, e.g., for training LLMs in multi-turn dialogues. However, the current results primarily focus on RL and manipulation policies in robotics. Essential References Not Discussed: The background and the main reference that this work extends are explained well. However, it would be great if the authors could include a separate related work section and provide an overview of the literature on RLHF and DPO. I can see several important references cited in the related work, including those that discuss MDP extensions of DPO (e.g., Rafailov et al. 2024a, Nika et al. 2024, etc.). However, these don't seem to be extensively discussed in the main paper. Other Strengths And Weaknesses: The paper is well-written, easy to follow and enjoyable to read. The contributions are non-trivial in my opinion; while the main idea behind the proposed approach is quite intuitive, to my knowledge it is conceptually novel. The theoretical analysis is also novel, although the proof techniques seem to be based on standard arguments. The experimental analysis is also thorough. Overall, I found the results quite interesting and convincing. Other Comments Or Suggestions: Suggestions: - I didn't initially understand how Eq. (3) was obtained from (2). I noticed the proof in B.1, but it seems that $Q_*^{\pi}$ is only introduced afterwards. Might be good to explain what $Q_*^{\pi}$ denotes beforehand. - Minor typo in Lemma 3.3: 'Let $\pi^*$ is' -> 'Let $\pi^*$ be' Questions For Authors: Please see my comments above. I don't have specific clarification questions at the moment. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their time and thorough evaluation of our paper. We have organized our responses to your comments below. If any of our responses fail to address the intent of your questions or if you have remaining concerns, please let us know. **1. Generalization to other domains** Since PPL is particularly beneficial for sequential and stochastic scenarios, we determined that a robotic manipulation task is more appropriate than a language domain with deterministic transitions, and thus conducted our experiments accordingly. If extended to the language domain, exploring the multi-turn LLM scenario you suggested would be an excellent direction for future work. **2. LLM literature reference** Recent research on RLHF has been particularly active in the LLM domain, and our PPL algorithm, which directly learns policies without explicitly learning a reward function, shares similarities with the Direct Preference Optimization framework. We briefly mention this connection in the main text. However, since our experiments focus on sequential RL tasks such as robotic manipulation, we decided not to deeply discuss related work specific to LLM tasks to avoid reader confusion. **3. Response to [Other Comments and Suggestions]** We noticed that the derivation of Equation (3) from Equation (2) is referenced as being in Appendix B.1, but that reference appears about one paragraph after the equations are presented. To reduce any potential confusion for readers, we will adjust the sentence placement. Additionally, the explanation for the subscript * is provided later than its initial usage. Since removing the subscript * does not change the original meaning of Equation (2), we will remove it from that equation. We will also correct the typo in Lemma 3.3 as you pointed out.
Summary: This paper proposes an algorithm called policy-labeled preference learning where the preferences are assumed to be formed by a regret-based metric and the preference data are labeled by the policy that generated the trajectories. The paper did a very good job explaining why the regret-based preference model may be preferable and how it is different from the advantage-based model in the literature. The results are very strong against the state-of-the-art baselines. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Overall, I find the paper to be difficult to argue against. Kudos to the authors. Methods And Evaluation Criteria: The methods and the evaluation criteria make sense. While reading the paper, I always had in mind that the proposed method would not work for the cases where there already exists an offline dataset without policy labels. However, the paper considered this case too and proposed a heuristic method to get around that issue. It also conducted experiments under that setting. Theoretical Claims: I did not check the proofs as they are in the Appendix, but the claims intuitively made sense to me. Experimental Designs Or Analyses: The experimental designs and analyses make sense. While there are a few cases where baselines perform better than PPL, in most cases PPL outperforms the others. Such consistent improvement is not very common in the RLHF literature, so I would say the results are very strong. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper builds upon the CPL algorithm. One thing that I am not sure about is: the paper cites "Learning Optimal Advantage from Preferences and Mistaking It for Reward" by Knox et al. which also considers a regret-based method. 
Does their method also suffer from the same problem as the CPL paper, i.e., do they also present the regret-based model as equivalent to the advantage-based model? If so, perhaps that should also be mentioned when this limitation of CPL is discussed in the paper. If this is not the case, a clarification of why the regret-based model of this paper is different/better is needed, as otherwise the novelty becomes thinner (though the method and the theoretical analyses are still novel and I would still argue for the acceptance of the paper). Essential References Not Discussed: N/A (though, just like many other recent RLHF papers, this one also mostly neglects the pre-InstructGPT preference-based reward learning literature other than the Christiano et al. 2017 paper) Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: There are a few typo-like issues: 1) In the equation before Section 3.2, the term inside the summation must be inside a parenthesis as otherwise the second term is outside of the summation. 2) In line 313 (right column), there seems to be an extra word that should be deleted. 3) In line 379 (left column), "implementation" is misspelled. Finally, it might be a good idea to divide Figure 4 into three separate figures so that the histograms are separately visible -- currently, their overlapping regions are difficult to distinguish, if not impossible. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your time and effort in reviewing our paper. We have organized our responses to your comments below. If any of our responses miss the intent of your questions or if there are remaining concerns, please let us know so we can address them. **1. Difference between PPL and cited literature** The regret-based preference model used in the CPL algorithm, which defines regret in terms of the optimal advantage function, was first introduced in [Knox et al. 2022]. Similarly, [Knox et al. 2024] interprets preferences using the same optimal advantage-based regret model, arguing that the Bradley-Terry model yields an approximation of the optimal advantage function rather than a true reward function. The regret-based preference models used in CPL, [Knox et al. 2022], and [Knox et al. 2024] are all essentially based on the optimal advantage, which is a different design from the regret model proposed in PPL. **2. Response to [Other Comments and Suggestions]** Thank you for pointing out the typos; we will incorporate the corrections into the revised version for better clarity. Regarding Figure 4, it was designed as a single plot to enable an at-a-glance comparison of the distributions across the three datasets. However, we recognize that overlapping colors make it difficult to distinguish the overlapping regions. Given the space constraints in the main text, we will either separate the plots or incorporate an envelope to improve boundary delineation. **References** * [Knox et al. 2022]: Knox, W. Bradley, et al. "Models of human preference for learning reward functions." arXiv preprint arXiv:2206.02231 (2022). * [Knox et al. 2024]: Knox, W. Bradley, et al. "Learning optimal advantage from preferences and mistaking it for reward." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024.
Robust Sparsification via Sensitivity
Accept (poster)
Summary: The paper proposes a general framework for constructing ε-coresets for robust optimization problems of the form $\min_{x \in \mathbb{R}^d} F(x) = \sum_{i=1}^n F_i(x)$, where the robust version $F^{(m)}(x)$ aggregates all but the $m$ largest values of $F_i(x)$. This formulation is motivated by the need to handle outliers in machine learning tasks such as regression, PCA, and clustering. The authors develop an algorithm that constructs an ε-coreset of size $O(mT/\varepsilon \cdot \log(mT/\varepsilon)) + S$, where $T$ is the total sensitivity and $S$ is the size of a vanilla ε-coreset, assuming the latter exists. The approach leverages sensitivity sampling and a novel sensitivity flattening technique, yielding scalable algorithms with near-tight bounds for problems like $\ell_2$-subspace embeddings. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: This paper is well-written, the proofs are clear, and the results seem sound to my knowledge. Experimental Designs Or Analyses: It proposes a general procedure that may extend a vanilla coreset to a coreset with outliers. The experimental part is clear. Supplementary Material: No. Relation To Broader Scientific Literature: It may be useful for analyzing large-scale scientific data in physics, biology, and chemistry. Essential References Not Discussed: Please see the "Other Strengths And Weaknesses" part. Other Strengths And Weaknesses: 1. Not very new problem. For example, the outlier problem as proposed in **Question 1.1** has been commonly studied, which can be traced back to [RL87]. 2. Lack of novelty; more closely related papers should be referenced. The paper [WGD21] also studies robust coresets for continuous-and-bounded learning, which **already includes applications like regression, PCA, and clustering, as covered in the submitted paper**. They share some similar technical ideas. [RL87] Peter J. Rousseeuw and Annick Leroy. Robust Regression and Outlier Detection. 
Wiley, 1987 [WGD21] "Robust and fully-dynamic coreset for continuous-and-bounded learning (with outliers) problems." *Advances in Neural Information Processing Systems* 34 (2021): 14319-14331. Other Comments Or Suggestions: N/A Questions For Authors: Please see "Other Strengths And Weaknesses" part. Code Of Conduct: Affirmed. Overall Recommendation: 3
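To make the trimmed objective in the summary above concrete, here is a minimal sketch of $F^{(m)}$ (illustrative code with our own naming, not taken from the submission):

```python
import numpy as np

def robust_objective(per_function_losses, m):
    # F^(m)(x): the sum of all but the m largest per-function losses
    # F_i(x), so the m worst-fitting points are discounted as outliers.
    losses = np.sort(np.asarray(per_function_losses, dtype=float))
    return float(losses[:len(losses) - m].sum())
```

With `m = 0` this reduces to the vanilla objective $F(x)$; increasing `m` trims progressively more of the largest losses.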
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. Below are our responses.

> Not very new problem. For example, the outlier problem as proposed in Question 1.1 has been commonly studied, which can be traced back to [RL87].

We agree that robust estimation [RL87] is classical work in the field. However, our work addresses the distinct and more recent challenge of constructing coresets for robust objectives. This specific problem of robust coreset construction is not addressed in [RL87], and the theory significantly lags behind that of standard (non-robust) coresets. Our paper makes novel contributions specifically to this direction.

> Lack of novelty; more closely related papers should be referenced. The paper [WGD21] also studies robust coresets for continuous-and-bounded learning, which already includes applications like regression, PCA, and clustering, as covered in the submitted paper. They share some similar technical ideas.

We thank the reviewer for pointing out [WGD21]. We will add a detailed discussion comparing our work in the revised manuscript. There are crucial differences: (1) The definition of a coreset in [WGD21] differs fundamentally from ours. Their coreset preserves the loss function within a ball, closer to the notion of a "weak coreset" in the literature. In contrast, a "strong coreset" preserves the loss function for all parameters, which is the standard, more powerful notion of a coreset in related work and is significantly harder to achieve in the robust setting. It also allows for arbitrary additional constraints in the optimization problem as it preserves the loss function for all parameters. (2) The approximation guarantee in [WGD21] requires relaxing $m$ (allowing more points to be removed than specified) or depends on an implicit problem-dependent parameter $\epsilon_0$. 
Our guarantee holds for the precise given $m$ without such a relaxation or hidden dependencies. (3) While both [WGD21] and our algorithm use two stages, the techniques differ substantially due to the different definitions of coresets. The first stage of [WGD21] identifies all local potential outliers (with respect to a fixed solution), while our first stage identifies all global "contributing functions" (regardless of the solution) through a novel process that combines uniform sampling with sensitivities. While both methods use a vanilla coreset in the second stage, we introduce an additional sensitivity-flattening step to satisfy the requirements of a strong coreset, which is absent in [WGD21]. Therefore, while related in appearance, [WGD21] tackles a different (weaker) coreset definition with distinct techniques. Our work provides the first strong robust coreset for this general setting, marking a significant theoretical advance. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response to my concerns. The rebuttal addressed part of my concerns. Also, given the positive grades from other reviewers, I would like to raise my score to weak accept.
Summary: This work studies the coreset for robust optimization problems, where the loss function is defined to allow the removal of the highest $m$ costs. Research on robust coresets is relatively limited compared to the vanilla version. For functions with total sensitivity $T$, using the vanilla coreset algorithm and sensitivity oracle, the authors introduce a meta-algorithm for constructing a robust coreset of additional size $O(Tm/\varepsilon\log (Tm/\varepsilon))$. Roughly speaking, the robust coreset consists of two parts. The first part is a set of contributing functions with unit weight, achieved through $O(m\log(Tm/\varepsilon))$ rounds of sampling and selection. The second part is a refined coreset of the remaining functions. The refinement is necessary because the weight of a function in the vanilla coreset may be too large, potentially violating the definition of a robust coreset. In addition to the meta-algorithm, the robust coreset offers a pathway to develop approximation algorithms for robust optimization problems. This paper presents improved algorithms for (robust) regression and PCA by demonstrating bounded total sensitivity. Claims And Evidence: Theoretical paper. All claims are supported by corresponding proofs. Methods And Evaluation Criteria: In the context of a theoretical paper, the primary evaluation criterion is the correctness of the main theorem. As for the experimental component, the results are meaningful and make sense in the field of coresets. Theoretical Claims: Yes. I have checked all the proofs in the main body of the paper. Experimental Designs Or Analyses: Yes. The work aligns with the consistent experimental style of coreset research. Supplementary Material: Reviewed the additional proofs but not checked. Relation To Broader Scientific Literature: Not sure. Essential References Not Discussed: [1] also explores the robust coreset within a broader context. 
And their robust coreset also consists of a vanilla coreset and a sampling-based component of size $O(m/\varepsilon)$ (can be improved with bounded doubling dimension). However, in their study, the coreset property only holds for local $x$, and it includes a relaxation of $m$, which aligns with the result in [2]. Regarding the generality of this paper, I am curious whether their work can be incorporated into your framework as a special case. --- [1] Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems. [2] Near-optimal Coresets for Robust Clustering. Other Strengths And Weaknesses: ## Strengths - The algorithm is concise and easy to implement, making it accessible for further development. The proof is clear and appears to be correct, based on my assessment. Its clarity enhances the understanding of the algorithm's validity and effectiveness. - The assumption on sensitivities is reasonable, as it holds for real-world downstream tasks such as PCA and regression. ## Weaknesses From a practical standpoint, the robust coreset is less critical. This is because robust algorithms for optimization problems with outliers often lack guaranteed approximation, leading to a reliance on heuristic methods for solutions. In such cases, compressing the data as a preprocessing step becomes less significant, particularly when the compression method is relatively complex and time-consuming. Other Comments Or Suggestions: - Line 098, “The total sensitivity of $\mathcal{F}$” should be “The sensitivity of $\mathcal{F}$” - Line 159, “A function $f$ is”, missing “$\in A$” - Line 217, $F$ in the Uniform() should be $A$ Questions For Authors: The effectiveness of the proposed algorithm depends on a bound of total sensitivity $T$ and a running time of the sensitivity oracle $t_1$. In other words, the algorithm is suitable for problems where (1) the total sensitivity is not large and (2) the time to compute sensitivities is limited. 
Is there a large class of problems satisfying such condition besides the cases presented? Code Of Conduct: Affirmed. Overall Recommendation: 4
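The two-stage meta-algorithm summarized in this review (stage 1: repeated sampling rounds to catch "contributing" functions kept at unit weight; stage 2: a refined sensitivity-sampled coreset of the rest) can be sketched in Python. Everything here is illustrative: `sensitivities` is a stand-in for the paper's sensitivity oracle, the thresholds and sample sizes are placeholders, and the weight cap only gestures at the sensitivity-flattening refinement, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensitivities(costs):
    # Stand-in for a sensitivity oracle: each function's share of the total
    # cost at one reference point (a real oracle takes a sup over all x).
    return costs / costs.sum()

def robust_coreset_sketch(costs, eps, rounds):
    n = len(costs)
    contributing = set()
    # Stage 1: repeated subsampling; keep any function whose sensitivity
    # within the sample clears an eps/4-style threshold (unit weight).
    for _ in range(rounds):
        idx = rng.choice(n, size=n // 2, replace=False)
        s = sensitivities(costs[idx])
        contributing.update(int(i) for i in idx[s >= eps / 4])
    rest = np.array(sorted(set(range(n)) - contributing))
    # Stage 2: vanilla sensitivity sampling on the remaining functions,
    # with a crude weight cap echoing the sensitivity-flattening step.
    s = sensitivities(costs[rest])
    k = min(len(rest), int(np.ceil(4 / eps)))
    picked = rng.choice(rest, size=k, replace=False, p=s / s.sum())
    w = 1.0 / (k * s[np.searchsorted(rest, picked)])
    w = np.minimum(w, len(rest) / k)
    return sorted(contributing), picked, w

costs = rng.exponential(size=200)
core, sampled, w = robust_coreset_sketch(costs, eps=0.5, rounds=8)
```

The point of the sketch is only the control flow: a solution-independent detection stage followed by a weighted compression of what remains.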
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. Below are our responses. [1] also explores the robust coreset within a broader context. And their robust coreset also consists of a vanilla coreset and a sampling-based component of size $O(m/\varepsilon)$ (can be improved with bounded doubling dimension). However, in their study, the coreset property only holds for local $x$. And it includes a relaxiation of $m$, which aligns with the result in [2]. Regrading the generality of this paper, I am curious whether their work can be incorporated into your framework as a special case. [1] Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems. [2] Near-optimal Coresets for Robust Clustering. We generally agree with the reviewer's comment on [1]. We note that the framework in [1] is based on assumptions that the loss functions satisfy the Lipschitz condition and the boundedness property, which are not assumed in our paper. Therefore, the two papers are not directly comparable. We also refer to our response to the second point of Reviewer YNPP for a detailed comparison. From a practical standpoint, the robust coreset is less critical. This is because robust algorithms for optimization problems with outliers often lack guaranteed approximation, leading to a reliance on heuristic methods for solutions. In such cases, compressing the data as a preprocessing step becomes less significant, particularly when the compression method is relatively complex and time-consuming. We understand the concern regarding practicality, especially when heuristics are common. However, our robust coreset offers a significant advantage: it enables the use of, or accelerates, algorithms on datasets previously too large. For example, as detailed in the appendix, applying FastLTS for trimmed least squares to the full dataset took 7.543 seconds. 
By using our coreset, sized at only approximately 2.7\% of the original data, the same FastLTS implementation achieved good accuracy in just 0.4 seconds. More generally, for computationally intensive algorithms with potentially prohibitive runtimes (e.g., scaling as $\binom{n}{m} = \exp(O(m\log\frac{n}{m}))$ in the worst case), we reduce the effective input size from $n$ to $O(md)$ (assuming $\epsilon$ is a constant), making these algorithms more feasible for larger-scale problems. The effectiveness of the proposed algorithm depends on a bound of total sensitivity T and a running time of sensitivity oracle t1. In other words, the algorithm is suitable for such problems that (1) total sensitivity is not large (2) The time of computing sensitive is limited. Is there a large class of problems satisfying such condition besides the cases presented? While current prominent applications of sensitivity sampling focus on regression, PCA, and clustering, the framework itself is general. We emphasize that approximate sensitivities, rather than exact ones, are sufficient for coreset construction, making this approach adaptable to a wider range of problems where exact sensitivities might be intractable. Just as efficient algorithms were developed for approximating leverage scores, we anticipate similar progress for sensitivity estimation in other problems, and hope our work encourages such research by demonstrating the utility of sensitivity in the robust setting. Line 098, “The total sensitivity of $\mathcal{F}$” should be “The sensitivity of $\mathcal{F}$” Line 159, “A function $f$ is” , missing “$\in A$" Line 217, $F$ in the Uniform() should be $A$ We agree with the reviewer on all these points. We will correct them in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the response. After reading your discussions with other reviewers, I agree with the distinct contributions and improvements this work presents compared to existing research. 
As for me, the primary significance lies in being the first to achieve a "global" robust coreset without requiring relaxation in the number of outliers. I think it is necessary to clarify this claim in the paper, if verified. Also, as pointed out by other reviewers, the current version lacks a thorough review of robust coresets, which are only briefly mentioned in the introduction section. Actually, research on robust coresets began very early. In addition to [1, 2], references [3] and [4] also explore robust coresets from certain perspectives. A more detailed comparison should be included in the related work section. [3] STOC'11 A unified framework for approximating and clustering data [4] FOCS'18 ε-Coresets for Clustering (with Outliers) in Doubling Metrics --- Reply to Comment 1.1.1: Comment: We thank the reviewer for pointing out additional related work. In the revised version, we will include a more detailed discussion comparing our results with existing literature. The two papers [3] and [4] mentioned by the reviewer provide bicriteria guarantees. In addition, the coreset in [5] has an exponential size, which is significantly worse than the standard vanilla coreset sizes. [5] Dan Feldman, Leonard J. Schulman. Data reduction for weighted and outlier-resistant clustering. SODA 2012.
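To make the practicality discussion in this thread concrete: the rebuttal's argument is that an expensive trimmed estimator can be run on a small subset instead of all $n$ points. The sketch below uses a naive trim-and-refit least-squares loop (not FastLTS), and the "coreset" is just a uniform subsample with a generous trimming budget, purely for illustration of the workflow, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def trimmed_lstsq(X, y, m, iters=10):
    # Naive trim-and-refit: fit, drop the m largest residuals, refit.
    keep = np.arange(len(y))
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(y - X @ beta)
        keep = np.argsort(resid)[: len(y) - m]
    return beta

n, d, m = 5000, 3, 50
X = rng.normal(size=(n, d))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=n)
y[:m] += 100.0                                  # plant m gross outliers

beta_full = trimmed_lstsq(X, y, m)              # runs on all n points
sub = rng.choice(n, size=400, replace=False)    # stand-in "coreset"
beta_sub = trimmed_lstsq(X[sub], y[sub], m=15)  # generous trimming budget
```

Both fits recover the planted coefficients; the subset run touches fewer than a tenth of the points, which is the kind of saving the rebuttal's FastLTS timing (7.543 s vs. 0.4 s) refers to.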
Summary: This paper studies a robust version of the $\epsilon$-coreset construction for a function class $\mathcal{F}$. Specifically, if we assume that there are $m$ outliers in $\mathcal{F}$, the goal is to construct a coreset such that it will always be an $\epsilon$-coreset even if we remove up to $m$ largest functions, valued with respect to any model $x$. This definition is strong in the sense that, it needs to hold for any model $x$, as the input to the functions. The construction is built upon vanilla coreset construction which refers to the non-robust counterpart, which includes a uniform sampling stage and a refining stage. The uniform sampling tries to collect most of the functions with large sensitivity; while the refining stage verifies the rest, and include any function that escapes the first stage. The paper also shows the results can be used in many problems such as robust linear regression and robust PCA. Claims And Evidence: The claims are supported with clear and convincing evidence. The claims are clearly stated, the algorithms are clearly presented, and the proofs are well organized and written. Methods And Evaluation Criteria: The method makes sense. As the paper studies the coreset problem by proposing a robust version to the vanilla $\epsilon$-coreset construction. The algorithm includes two stages: 1) uniform sampling to detect functions with large sensitivity; 2) constructing a vanilla coreset and refining it by adding more weight to functions with large sensitivity. Theoretical Claims: I checked most of the proofs in Section 4 (especially the key algorithm steps), and verified the application of these theorems in Section 5. All steps I checked are sound. Experimental Designs Or Analyses: I quickly scanned the experiments and found no issue. Supplementary Material: I didn't check the appendix as most of the important results are included in the main paper. 
Relation To Broader Scientific Literature: The result of this paper, can potentially be applied to many problems, such as regression, PCA, k-median problems, etc. As such, it can be used to build robust algorithms for many different machine learning problems. On the other hand, this means the proposed technique is very general. Hence, if we were to solve a specific problem (for example, robust linear regression has a rich literature of its own), it might not be comparable to existing robust algorithms. Essential References Not Discussed: See Questions for Authors. Other Strengths And Weaknesses: The paper is very solid and the proof is sound. In Theorem 5.1, the running time of the algorithm for robust regression, it is $d^{O(m)}e^{O(m/\epsilon)} + O(nd)$, meaning that it is exponential in $m$. This is a large running time when $m$ is large, and may prohibit realistic application of the algorithm to practical problems. Other Comments Or Suggestions: In the proof for Lemma 4.4, Line 212, given that the total sensitivity is bounded by $T$, why is it that there are at most $\frac{4T}{\epsilon}$ functions added into $D$ in each repetition? I didn't quite get this part. Why is this function unrelated to $m$? Questions For Authors: While this paper is compared to Simonov et al. 2019 for robust PCA, I wonder how do you compare it to more recent paper: Nearly-Linear Time and Streaming Algorithms for Outlier-Robust PCA, by I. Diakonikolas, D. Kane, A. Pensia, T. Pittas (ICML 2023). Specifically, the running time in this paper, and that of Simonov et al. 2019 is exponential in either $m$ or $d$, which is avoided in Diakonikolas et al. 2023. One reason might be that there seems no assumption of the $m$ outliers in this paper, while Diakonikolas et al. 2023 has distribution assumptions. Is this correct? Otherwise, the running time seems not reasonable. If possible, could you also comment on your current running time in terms of how optimal it is? 
Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
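For orientation, the objective and the "strong" coreset property discussed throughout these reviews can be written out explicitly. This formalization follows the reviews' wording (remove the $m$ largest costs; the guarantee must hold for every model $x$); the exact treatment of weights inside the trimmed minimum (here $w(O) \le m$) is my reading and may differ from the paper's precise definition.

```latex
% Trimmed (robust) objective: remove the m largest costs at x.
F^{(m)}(x) \;=\; \min_{O \subseteq \mathcal{F},\; |O| \le m}
\;\sum_{f \in \mathcal{F} \setminus O} f(x).

% Strong robust \epsilon-coreset: a weighted subset (S, w) such that,
% for every model x,
(1-\epsilon)\, F^{(m)}(x)
\;\le\; \min_{O \subseteq S,\; w(O) \le m}
\;\sum_{f \in S \setminus O} w(f)\, f(x)
\;\le\; (1+\epsilon)\, F^{(m)}(x).
```

The "strong vs. weak" distinction in the rebuttals is exactly the quantifier on $x$: a weak coreset only guarantees the sandwich near particular solutions, while the strong version must hold uniformly.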
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. Below are our responses. This is a large running time when m is large, and may prohibit realistic application of the algorithm to practical problems. We acknowledge the exponential dependence on $m$. However, many existing robust algorithms, including practical ones like FastLTS or theoretical approaches, involve computations with worst-case complexities like $\binom{n}{m} = \exp(O(m\log\frac{n}{m}))$. Using our coresets, the runtime would become $\exp(O(m\log d))$ (assuming that $\epsilon$ is a constant). In typical large-scale scenarios where $n\gg d$ and $n\gg m$, this is a significant improvement in the base of the exponent. In the proof for Lemma 4.4, Line 212, why is the number of functions added not dependent on m? The reviewer is correct to note the number of functions added in a single execution of Line 4 (Algorithm 3) does not explicitly depend on $m$. This line calls Algorithm 1, which adds functions whose sensitivity relative to the sampled set $B$ is at least $\epsilon/4$. By definition of total sensitivity, the sum of sensitivities is at most $T$, meaning at most $4T/\epsilon$ functions can satisfy this $\geq \epsilon/4$ condition. The threshold is $\epsilon/4$ to ensure that each contributing function (defined with $\epsilon/m$) is caught with sufficient probability ($1/(5m)$ as in Lemma 4.3), but the number of functions added in one execution of Algorithm 1 is bounded only in terms of $T$ and $\epsilon$. As explained above, the dependence on $m$ lies in the probability that each contributing function is captured, and this dependence enters the number of repetitions $R$ (Lines 2 and 3 of Algorithm 3). Comparison with the paper "Nearly-Linear Time and Streaming Algorithms for Outlier-Robust PCA, by I. Diakonikolas, D. Kane, A. Pensia, T. Pittas (ICML 2023)." Yes, the reviewer is correct. The crucial difference is that Diakonikolas et al. 
(2023) achieve nearly-linear time by leveraging assumptions about the data distribution. Our work provides guarantees in the worst-case setting without such distributional assumptions, which inherently makes the problem harder and often necessitates complexities exponential in parameters like $m$ or $d$. If possible, could you also comment on your current running time in terms of how optimal it is? Regarding optimality for robust PCA: Simonov et al. (2019) established that no polynomial-time algorithm exists under standard complexity assumptions, suggesting that exponential dependency on some parameters is likely necessary for worst-case guarantees. They provide a lower bound of $m^{\Omega(k)}$ and an upper bound of $n^{O(d^2)}$. Our coreset construction takes $d^{O(m)}$ time. While not directly comparable to their upper bound (better for small $m$, worse for large $m$), plugging our coreset (size $\approx md$ assuming a constant $\epsilon$) into their algorithm improves their runtime dependence on $n$, yielding roughly $(md)^{O(d^2)}$. This significantly reduces the base compared to $n^{O(d^2)}$ when $n\gg md$. While there remains a theoretical gap between our exponential runtime and known lower bounds, our approach offers a concrete improvement over existing worst-case algorithms by mitigating the dependence on $n$.
Summary: This manuscript shows that two simple conditions are sufficient for the existence of a small coreset for$F(m)$: $F(x)$ has a small vanilla coreset and has bounded total sensitivity. Then develops a general framework for constructing ε-coresets for several robust problems. Experiments on real-word datasets demonstrate that the coreset constructions are effective in approximating the loss function and considerably reduce the running time for robust regression problems while maintaining a good approximation of the objective function. Claims And Evidence: The claims and proofs in this manuscript are clear and explicit, and the writing style is organic unity, loose in form but focused in content, reaching the level of a professor. Methods And Evaluation Criteria: The method proposed in this manuscript is a new entry point for the construction of core sets and can well promote the development of related fields. Theoretical Claims: The theoretical derivation is logically rigorous, which is the author's greatest advantage and cannot be refuted. The proofs in the text and the appendix are rigorous and correct. As for practical applications, from the Impact Statement, it seems that this article does not care about. Experimental Designs Or Analyses: The experimental part of this article is relatively concise. I have two questions: In the experimental part, this scheme is only compared with uniform sampling. I understand that core sets are carefully designed preferential sampling. So, are the results of such sampling better than those of random sampling? The article assumes that the m to be removed is known (the form of the robust optimization problem defined in Section 1 uses m, and the input parameters of Algorithm 3 include m, m=10 given in your codes from). m is a parameter used to control the number of functions to be removed, which has not been effectively discussed. 
Supplementary Material: The proofs of Lemma 5.2 and 5.3 in the appendix are rigorous, but the relative error of the experiments in the appendix is slightly worse than that in the main text. Relation To Broader Scientific Literature: As stated in the manuscript: In the robust case, coresets with similar size and performance to the vanilla case were not known until recently in the context of clustering with outliers. Beyond clustering, no other robust coresets have been proposed. So, this is an extension of previous research. Essential References Not Discussed: The author is authoritative in the relevant field, so no other relevant literature is not cited. Other Strengths And Weaknesses: The paper is highly original, considering a wider range of robust core set selections and developing corresponding algorithms. It would be more conducive to promoting related research if the code was made public rather than limited to reviewers, but this is not necessary. The relevant arguments are clear and explicit. However, its importance and impact on the entire machine learning may be local and limited, especially when it comes to specific applications. Other Comments Or Suggestions: 1. Practicality needs to be considered. If the greatest value of an algorithm improvement comes from the article itself, wouldn’t that be a pity? 2. Publish the code within an appropriate scope so that more people can comment, improve and use it. Questions For Authors: The discussion about random sampling and m value in the experimental part are two important issues that affect my score. In addition, there are two more issues, but they are relatively unimportant compared to the above issues. 1、 With the robust formulation, has the context of facility been satisfactorily solved? 
What I mean is that we often take advantage of the loopholes in other people's theoretical research and continue to conduct theoretical research on the problem, but ignore the most real needs in the modeling process of this problem. Maybe this kind of theoretical research is not practical. 2、 Why was robust coresets first proposed under the condition of clustering? What are the essential differences between the PCA and regression promoted in this article and clustering? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the questions. Below are our responses. this scheme is only compared with uniform sampling. I understand that core sets are carefully designed preferential sampling. So, are the results of such sampling better than those of random sampling? We compared against uniform sampling as it is a standard and widely used baseline in the coreset literature. However, we acknowledge the reviewer's point regarding other sophisticated sampling methods. We will incorporate a comparison with leverage score sampling and a discussion in the revised manuscript. The article assumes that the m to be removed is known. m is a parameter used to control the number of functions to be removed, which has not been effectively discussed. Our formulation assumes a given $m$, consistent with the standard definition of problems like trimmed least squares and much of the existing robust coreset literature. Determining the value of $m$ is indeed a different and challenging problem, often tackled with heuristics that lack the strong guarantees our coreset framework enables. This falls outside the scope of this work, which is focused on guaranteed approximation for fixed $m$. relative error of the experiments in the appendix is slightly worse than that in the main text We agree with this point. The observed difference in relative error is dataset-dependent. Specifically, the Emission dataset (appendix) is nearly 1.8 times the size of the Energy dataset (main text). Since we compared performance using coresets of similar sizes for both, it is understandable that the relative error would be slightly higher when applied to the much larger Emission dataset. With the robust formulation, has the context of facility been satisfactorily solved? 
What I mean is that we often take advantage of the loopholes in other people's theoretical research and continue to conduct theoretical research on the problem, but ignore the most real needs in the modeling process of this problem. Maybe this kind of theoretical research is not practical. While constructing the coreset is a preprocessing step, it directly enables the application of robust algorithms to much larger datasets. For instance, established algorithms for trimmed least squares like FastLTS, while considered practically effective, have worst-case runtimes depending exponentially on $m$ and polynomially on $n$. Our coreset significantly reduces this dependency on $n$, replacing it with a much smaller $md$ (our coreset size is $O(md)$, assuming $\epsilon$ is a constant). This allows applying such robust methods (with their existing guarantees or practical performance) in large-scale settings where they were previously intractable. Why was robust coresets first proposed under the condition of clustering? What are the essential differences between the PCA and regression promoted in this article and clustering? We note that the concept of a coreset was first introduced by Har-Peled and Mazumdar in their seminal work as a tool for addressing clustering problems. Over the past two decades, coresets for clustering have received considerable attention, making it unsurprising that robust coresets were initially developed in this context. Our work provides a unifying framework for constructing robust strong coresets (guaranteeing approximation for all $x$) applicable to regression, PCA, and potentially clustering. To our knowledge, prior work on robust coresets for regression and PCA either did not exist or considered different, weaker definitions (e.g., local guarantees, like [WGD21], as discussed further in response to Reviewer YNPP) which do not provide the same level of worst-case theoretical guarantees as our strong coreset definition. 
[WGD21] Zixiu Wang, Yiwen Guo, and Hu Ding. Robust and fully-dynamic coreset for continuous-and-bounded learning (with outliers) problems. Advances in Neural Information Processing Systems 34 (2021): 14319-14331. --- Rebuttal Comment 1.1: Comment: Dear author, this is a field you are very familiar with, and your reply is self-consistent. I suggest you absorb the existing large language model to solve large-scale problems, then do some theoretical + engineering researches, applying theory to practice. But "hell is others", you can stick to doing your own research. Thank you.
null
null
null
null
null
null
INRFlow: Flow Matching for INRs in Ambient Space
Accept (poster)
Summary: Current flow matching (FM) methods are usually trained in two-stage paradigm, which sets obstacles for unifying models across data domains. To deal with this, this paper introduces INRFlow. In the proposed method, they estimate the map from coordinate to value via FM in a pointwise manner. To further model spatial dependency, they introduce latent variable via self-attention in the encoder and cross-attention in the decoder. They then applies the proposed INRFlow into image generation, image-to-3D point clound generation and protein generation. Claims And Evidence: The claims made in the submission is clear and convincing. Methods And Evaluation Criteria: The goal is to train a unifying model in ambient space, using domain agnostic architecture. The proposed methods can solve the problem well. The dataset used (FFHQ-256, LSUN-Church-256, ImageNet-128/256 etc.) are diverse and standard, which shows the generality of the proposed method. Theoretical Claims: There's no theory part in this paper. Experimental Designs Or Analyses: The experiement is extensive and sound. Supplementary Material: No. Relation To Broader Scientific Literature: The paper use INRs in the flow matching context with self- and cross-attention for latent variables, and the same strategies can be applied to other methods. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: the idea is simple and clear. Writing is easy to follow. Weakness: There may raise some concerns on novolties, as the proposed method is essentially combination of INR with flow matching, and using attention to handle latent variables for context information. Although the architecture is absolutely new and novel, the proposed method can solve problem well. Therefore, I would suggest borderline accept. Other Comments Or Suggestions: In 3.3 and 3.4, the paper use $x_{f_t}$ to denote coordinate at time t. 
However, the coordinate should be static, which serve as covariates for neural net approximating vector fields? If I'm correct, I would suggest use $x_f$ consistently for the entire paper (e.g. match to Figure 2) Questions For Authors: The paper is clearly written. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. There may raise some concerns on novolties, as the proposed method is essentially combination of INR with flow matching, and using attention to handle latent variables for context information. Although the architecture is absolutely new and novel, the proposed method can solve problem well. Therefore, I would suggest borderline accept. - We want to first thank the reviewer for acknowledging that our proposed INRFlow solves different problems with a new and novel architecture. However, we want to kindly point out that our work is more than a simple combination of INR and flow matching. Though they are previous works investigating combining INR with generative models [1, 2, 3]. Which required complex multi-stage training recipes [1,2] and were not able to scale to high resolution signals [2,3]. In fact, most of the experiments are conducted on low-resolution images like 32 $\times$ 32 and 64 $\times$ 64. Our novel architecture and training objective allows us generate images up to 2048 $\times$ 2048 resolution, which is a drastic change with respect to existing work. - We want to emphasize that it’s non-trivial to obtain these results at large-scale data regime that we have. Finally, our point-wise training objective allows for efficient training via sub-sampling dense domains which previous works did not explore and allow us to tackle high-resolution data. This also enables INRFlow to do inference at arbitrary resolution in inference time without additional training (Figure 4). [1] Du, Yilun, et al. "Learning signal-agnostic manifolds of neural fields." Advances in Neural Information Processing Systems 34 (2021): 8320-8331. [2] Dupont, Emilien, et al. "From data to functa: Your data point is a function and you can treat it like one." arXiv preprint arXiv:2201.12204 (2022). [3] Zhuang, Peiye, et al. "Diffusion probabilistic fields." The Eleventh International Conference on Learning Representations. 2023.
Summary: This paper presents INRFlow, a novel domain-agnostic generative model that operates in ambient space, eliminating the need for hand-crafted data compressors in different domains. The key innovation is a conditionally independent point-wise training objective, allowing INRFlow to model continuous coordinate-value mappings across diverse data types, including images, 3D point clouds, and protein structures. The authors claim the following contributions: - Proposing INRFlow, a flow-matching generative transformer that works on ambient space to enable single-stage generative modeling on different data domains. - Empirical results show that INRFlow achieves competitive performance on image and 3D point cloud generation compared with strong domain-specific baselines. - Allowing resolution changes at inference time. Claims And Evidence: See below. Methods And Evaluation Criteria: See below. Theoretical Claims: See below. Experimental Designs Or Analyses: See below. Supplementary Material: See below. Relation To Broader Scientific Literature: See below. Essential References Not Discussed: See below. Other Strengths And Weaknesses: #### Pros: - The proposed method, INRFlow, is novel, offering a simple yet effective approach that is domain-agnostic, meaning the same architecture can be applied to images, 3D point clouds, and protein structures, demonstrating its adaptability. - The experimental evaluation is comprehensive, covering multiple domains. INRFlow performs well, achieving comparable or superior results to domain-specific architectures. - Resolution-agnostic generation: INRFlow can produce outputs at arbitrary resolutions, providing more flexibility than traditional generative models. - Single-stage training: The method eliminates the complexity of two-stage training pipelines, making it easier to implement and extend. 
#### Cons: - The writing quality could be improved, particularly in the method section: - "At a high level, our encoder network takes a set of coordinate-value pairs and encodes them to learnable latents through cross-attention. These latents are then updated through several self-attention blocks to provide the final latents." – Figure 2 does not depict cross-attention in the encoder, which may confuse readers. The description should clarify that cross-attention occurs after the self-attention blocks. - "Firstly, our encoder utilizes spatial aware latents where each latent is assigned a “pseudo” coordinate. Coordinate-value pairs are assigned to latents based on their distances on coordinate space." – This explanation is unclear. If the authors mean that higher resolution creates pseudo labels, further details are needed. How are these pseudo labels assigned? Interpolation? KNN? A more detailed explanation is necessary. - An ablation study would strengthen the empirical evaluation by providing insights into key design choices. For example, investigating the impact of attended coordinates $L$, performance as a function of training sample size, and other architectural decisions would improve the paper’s clarity and robustness. Other Comments Or Suggestions: NA Questions For Authors: 1. In the flow matching framework, the objective is to map $\mathbb{R}^d \rightarrow \mathbb{R}^d$. How did you handle this in the image-to-point-cloud setup, where the mapping involves $\mathbb{R}^2 \rightarrow \mathbb{R}^3$? 2. Does INRFlow exhibit permutation equivariance? Specifically, if the input coordinates are permuted, does the output follow the same permutation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging INRFlow’s competitive experimental performance and flexibility in inference. We also appreciate your thoughtful comments, which help substantially improve the quality of our work. Please find a point-by-point response to your questions below. 1. How are spatial aware latents assigned? - The input coordinate-value pairs first cross attend to spatial aware latents (which are free parameters) and then self-attention is applied to the latents. INRFlow assigns each latent a pseudo coordinate, and each coordinate-value pair is assigned to its closest latent. For example, in image generation, we define the pseudo coordinates for latents to lie in a 2D grid by default. Thus pixels within a patch cross attend to one latent, which is located at the center of that patch. We notice that the current description can be unclear and will further clarify the spatial-aware latents definition in the final version of the paper. 2. An ablation study would strengthen the empirical evaluation by providing insights into key design choices. For example, investigating the impact of attended coordinates L, performance as a function of training sample size, and other architectural decisions would improve the paper’s clarity and robustness. - We have performed an additional ablation study on the selection of pseudo-coordinates for latents and the number of training samples on LSUN church 256. Results are shown in the table below (all models are trained for 200K iterations with batch size 128). We note the following and answer the reviewer's questions: - The process for assigning pseudo-coordinates impacts overall performance. By default, we let pseudo-coordinates lie on a 2D grid which covers the whole image. Changing that to randomly assigned pseudo-coordinates (drawn from a uniform distribution) decreases performance due to reduced image coverage, as shown by the difference between rows 1 and 2 of the table.
- Increasing the number of latents $L$ has a positive impact on performance, even when the pseudo-coordinates are defined randomly, as shown in rows 2 and 3. - Finally, we also show performance as a function of training sample size: when training INRFlow on a subset of the full dataset (e.g., 15k ($\sim$12.5%), 30k ($\sim$25%), and 60k ($\sim$50%) samples), the model achieves performance comparable to using the full dataset with only ~50% of the training data, with a roughly linear drop in performance below that. This indicates the model is capable of learning the distribution of the dataset with limited data samples and generating realistic samples.

| Model | pseudo coordinate | \# latents | \# Data | FID_clip | FID |
| ------- | ----------------- | ---------- | ------- | -------- | ----- |
| INRFlow | grid | 1024 | 126k | 7.32 | 9.74 |
| INRFlow | random | 1024 | 126k | 11.66 | 17.99 |
| INRFlow | random | 2048 | 126k | 8.95 | 11.86 |
| INRFlow | grid | 1024 | 60k | 7.62 | 10.6 |
| INRFlow | grid | 1024 | 30k | 7.68 | 12.14 |
| INRFlow | grid | 1024 | 15k | 12.17 | 25.52 |

3. In the flow matching framework, the objective is to map $\mathbb{R}^d \rightarrow \mathbb{R}^d$. How did you handle this in the image-to-point-cloud setup, where the mapping involves $\mathbb{R}^2 \rightarrow \mathbb{R}^3$? - In 3D point cloud generation, the mapping is defined as $\mathbb{R}^3 \rightarrow \mathbb{R}^3$, where the input and output share the same space. Conceptually, one can think of the signal space as a deformation of the input (i.e., a deformation from a Gaussian distribution in 3D to a particular object shape). Namely, given a set of points in 3D space, the model learns to predict the transformation of the point cloud in the same 3D space which leads to semantically meaningful objects, conditioned on an input image. 4. Does INRFlow exhibit permutation equivariance? Specifically, if the input coordinates are permuted, does the output follow the same permutation?
- Yes, INRFlow preserves permutation equivariance in decoding. The decoder in INRFlow is implemented as a cross-attention block and therefore inherits the permutation equivariance property. Namely, if one permutes the query coordinate-value pairs, the decoding results will be permuted accordingly. - In the encoder, INRFlow is permutation invariant. In particular, if one changes the order of input coordinate-value pairs in the encoder, the assignment to spatial-aware latents stays the same, which results in unchanged spatial-aware latents $z_{f_t}$. This guarantees that the learned $z_{f_t}$ properly models the mapping from coordinate space to signal space.
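The nearest-latent assignment and the permutation behavior described in this response can be illustrated with a small sketch (toy numpy code with made-up sizes such as a 4×4 image and four latents at patch centers; `assign_to_latents` is an illustrative helper, not the authors' implementation):

```python
import numpy as np

def assign_to_latents(coords, pseudo_coords):
    # each coordinate-value pair is assigned to (and cross-attends with)
    # the latent whose pseudo-coordinate is closest in coordinate space
    d = np.linalg.norm(coords[:, None, :] - pseudo_coords[None, :, :], axis=-1)
    return d.argmin(axis=1)

# toy example: 4x4 pixel grid in [0, 1]^2, four latents at patch centers
px = np.stack(np.meshgrid(np.arange(4), np.arange(4), indexing="ij"), -1).reshape(-1, 2) / 3.0
lat = np.array([[0.25, 0.25], [0.25, 0.75], [0.75, 0.25], [0.75, 0.75]])
a = assign_to_latents(px, lat)

# permuting the input pairs permutes the assignments identically, so the
# aggregated latents are unchanged (the encoder invariance described above)
perm = np.random.permutation(len(px))
assert np.array_equal(assign_to_latents(px[perm], lat), a[perm])
```

With grid pseudo-coordinates every latent receives the same number of pixels, which is the load-balancing argument the authors make for the grid over random placement.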
Summary: # Update The authors have adequately addressed my concerns, and I expect that they incorporate: 1. more explanation of how to consistently generate images at different resolutions, and 2. an explanation of the difference between their proposed method and the other function-space methods I mentioned, into the final version of the paper. I decided not to change the score as I have already given a 4. # Old Summary The paper presents a model architecture and a training algorithm for generative models that can generate data in diverse domains as long as data in these domains can be represented as sets of discrete samples of a function $f: \mathcal{X} \rightarrow \mathbb{R}^d$. These domains include images ($\mathcal{X} = \mathbb{R}^2$, $d = 3$), point clouds ($\mathcal{X} = \mathbb{R}^3$, $d = 3$), and protein structures ($\mathcal{X} =$ a set of descriptors of amino acids in a sequence, $d = 3$). More concretely, a data item is a finite set $f = \\{ (x,y) : x \in \mathcal{X}, y \in \mathbb{R}^d \\}$, which is a "function" in the sense that, if $(x,y_1)$ and $(x,y_2)$ are members of $f$, then $y_1 = y_2$. Another way to view a data item is to say that $f = \\{ (x_1, y_1), (x_2, y_2), \dotsc, (x_N, y_N) \\}$. We may now let $\mathbf{x} = (x_1, x_2, \dotsc, x_N)$ and $\mathbf{y} = (y_1, y_2, \dotsc, y_N)$ be tensors. A data item is thus equivalent to an ordered pair $(\mathbf{x}, \mathbf{y})$ of tensors whose first dimensions have the same size $N$. The idea is to create a flow matching model that represents a stochastic process that transforms the data distribution to a distribution of $(\mathbf{x}, \mathbf{y})$ where $\mathbf{y}$ is standard Gaussian noise. Let $\mathbf{y}\_t = (1-t) \mathbf{y} + t\xi$ where $\xi \sim \mathcal{N}(0,I)$.
We train a neural network $v\_\theta$ so that $v\_\theta(\mathbf{x}, \mathbf{y}_t, t)$ gives a velocity vector $\mathbf{v}$ on the trajectory that connects $\mathbf{y}_0$ (a well-formed data item) to $\mathbf{y}_1$ (a Gaussian noise tensor). This can be done by minimizing the following loss \begin{align*} \mathcal{L}\_{CICFM} = E\_{\substack{t \sim \mathcal{U}[0,1],\\\\ (\mathbf{x},\mathbf{y}) \sim p_{\mathrm{data}}, \\\\ \xi \sim \mathcal{N}(0,I) }} \Big[ \big\| v_\theta(\mathbf{x}, \mathbf{y}_t, t) - u_t(\mathbf{x}, \mathbf{y}_t | \xi) \big\|^2 \Big] \end{align*} where (again) $\mathbf{y}_t = (1-t)\mathbf{y} + t\xi$, and \begin{align*} u_t(\mathbf{x},\mathbf{y}_t | \xi) = \frac{\xi - \mathbf{y}_t}{1 - t}. \end{align*} To sample a data point, one starts with a noisy data item $(\mathbf{x}, \xi)$ where $\xi \sim \mathcal{N}(0,I)$. Then, one uses the trained network above to solve the differential equation $\partial \mathbf{y}\_t / \partial t = v_\theta(\mathbf{x}, \mathbf{y}_t, t)$ with the initial condition $\mathbf{y}_1 = \xi$ to find $\mathbf{y}_0$. The paper proposes a model architecture for $v_\theta$ so that the data size $N$ may be different among items in the dataset. Moreover, at test time, $N$ (and, to a certain extent, $\mathbf{x}$) can take on unseen values. This is done by dividing the network into two parts: the encoder and the decoder. The encoder takes $(\mathbf{x}, \mathbf{y}_t)$ as input and produces a latent code $\mathbf{z}$. The decoder, on the other hand, operates on each "row" $(x\_i,y\_{t,i})$ of the input $(\mathbf{x}, \mathbf{y_t})$ independently, and uses $\mathbf{z}$ to model correlations between rows. The paper claims that their method is effective in creating generative models for (1) images, (2) 3D point clouds, (3) 3D point clouds conditioned on images, and (4) protein structures. All the generative models for all domains have the same general architectures.
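The interpolation path and conditional target in the loss above can be sanity-checked numerically (a minimal numpy sketch under this review's notation, not code from the paper; `cicfm_pair` is a hypothetical helper name):

```python
import numpy as np

rng = np.random.default_rng(0)

def cicfm_pair(y, xi, t):
    # interpolation path: y_t = (1 - t) * y + t * xi
    y_t = (1.0 - t) * y + t * xi
    # conditional target velocity: u_t = (xi - y_t) / (1 - t)
    u_t = (xi - y_t) / (1.0 - t)
    return y_t, u_t

# toy data item: N coordinate-value pairs, each with d = 3 values
N, d = 16, 3
y = rng.standard_normal((N, d))    # clean values y_0
xi = rng.standard_normal((N, d))   # Gaussian noise y_1
y_t, u_t = cicfm_pair(y, xi, t=0.4)

# the target simplifies to xi - y, independent of t
assert np.allclose(u_t, xi - y)
```

The regression target being constant along the path is exactly the algebraic simplification this review carries out in the Theoretical Claims section.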
Due to the proposed architecture, a trained model can generate images with different resolutions and point clouds with different numbers of points from those found in the training dataset. Claims And Evidence: The paper makes several important claims. 1. A single architecture can be used to model data in different domains. 2. The paper's proposed architecture and training method yield effective generative models for the domains tested. 3. Trained models can generate data at resolutions different from those used in training time. I believe these claims are well substantiated. The variants of the same architecture are given in the Appendix, and the authors are able to reuse these architectures on different modalities. The scores that the trained models achieve are competitive with baselines. Figure 4 shows images with different resolutions and point clouds with different numbers of points. Methods And Evaluation Criteria: I believe that the datasets and the evaluation metrics used are reasonable. However, I think the paper is not clear on how it achieves "resolution agnostic generation." This is especially true of how it generates samples in Figure 4, where the images and meshes at different resolutions are similar to one another. The paper says "we simply fix the initial noise seed and increase the number of coordinate-value pairs that are evaluated." What is unclear is the process of "fixing the noise seed" as it can mean several things. It can either mean (a) fixing the seed of the random number generator or (b) fixing the noise signal $\mathbf{y}_1$ at a low resolution and then deriving from it $\mathbf{y}_1$ at other resolutions. I don't think Option (a) makes sense. For example, let's say we fix a random seed of 42. When we want to generate a (1D) image with 2 pixels, randomization might give us the following $(x,y)$ pairs. (0.0, 0.7), (0.5, 0.4) (Let's call this F.)
However, if we keep the same random seed and try to generate a 1D image with 4 pixels, then we might end up with the following $(x,y)$ pairs: (0.00, 0.7), (0.25, 0.4), (0.50, 0.9), (0.75, 0.3) (Let's call this G.) Here, we assume that fixing a random seed would give the same sequence of random numbers. The issue is that F and G would likely yield very different final images. A sequence that would yield an image similar to F would be an upsampled version of F, which G clearly is not. As a result, Option (b) seems to make more sense. However, the paper never explains how it upsamples a random noise vector to a higher resolution. As a result, it is unclear to me how the images in Figure 4 were generated, and this makes the result there not reproducible. Theoretical Claims: I think the proposed training algorithm is correct. However, I think Equation (5) \begin{align*} u_t(\mathbf{x}_f, \mathbf{y}_f | \epsilon) = (1 - t)\epsilon + t \mathbf{y}_f \end{align*} is wrong, and I surmise this is a typo. It should have been \begin{align*} u\_t(\mathbf{x}\_{f\_t}, \mathbf{y}\_{f\_t} | \epsilon) = \frac{\epsilon - \mathbf{y}\_{f\_t}}{1-t}. \end{align*} For reference, see Equation (20) in Lipman et al.'s flow matching paper [1]. Moreover, if we use the fact that $\mathbf{y}_{f_t} = (1-t)\mathbf{y}_f + t\epsilon$ as defined in Section 3.2 of the paper, then we have that \begin{align*} u\_t(\mathbf{x}\_{f\_t}, \mathbf{y}\_{f\_t} | \epsilon) = \frac{\epsilon - (1-t)\mathbf{y}\_f - t\epsilon }{1-t} = \frac{1-t}{1-t}(\epsilon - \mathbf{y}\_f) = \epsilon - \mathbf{y}_f \end{align*} This matches the target value $X_1 - X_0$ in Equation 1 of the rectified flow paper [2]. *Citation* * [1] Lipman et al. "Flow Matching for Generative Modeling." 2022. * [2] Liu et al. "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow." 2022 Experimental Designs Or Analyses: I do not find issues with experimental designs and analyses.
Supplementary Material: I skimmed the supplementary materials and found no glaring issues. Relation To Broader Scientific Literature: I think the paper should point out a significant difference between their approach and those taken by other "function space models" such as PolyINR, GEM, and GASP. For these approaches, it is possible to sample a random function $f$ and then evaluate the function multiple times. This is done by first sampling a latent code $z$, and then using $z$ to parameterize a neural network. For GEM and GASP, the latent codes are transformed into network parameters. For PolyINR, $z$ is turned into parameters of a generator network. Once this neural network is obtained, we can evaluate it to get the output at any input coordinates and as many input coordinates as we want. For example, if the function models an image, we can use it to generate images at $64 \times 64$, $128 \times 128$, $256 \times 256$, or any other resolutions that we want by simply feeding the function with an appropriate grid of 2D coordinates. However, I don't think INRFlow has the above capability. It is not possible to sample a random image function and then generate the same image at resolution $64 \times 64$, $128 \times 128$, and $256 \times 256$ afterwards. This is because, to generate images at different resolutions with INRFlow, one has to start from point samples of noisy images $\\{ (x,y) \\}$ at the specified resolutions. The crucial difference here is that, for INRFlow, the noise vector $y$ has to be supplied for each input coordinate $x$. However, the approaches discussed above do not need to specify $y$. To make sure that the images at different resolutions are consistent with one another, one must ensure that the noise $y$ for each resolution is properly correlated. I believe this is harder than what is done with PolyINR, GEM, and GASP (i.e., nothing).
Moreover, as pointed out earlier, the way to ensure consistency between different resolutions is not adequately explained in the paper. Essential References Not Discussed: I believe that the references are adequate. However, the paper may cite papers on diffusion autoencoders such as [1] because the INRFlow architecture is quite similar to that of a diffusion autoencoder. *Citation* * [1] Preechakul et al. "Diffusion Autoencoders: Toward a Meaningful and Decodable Representation." 2022. Other Strengths And Weaknesses: I think this paper is an interesting take on a unified architecture to model multiple types of signals. Other Comments Or Suggestions: N/A Questions For Authors: Please specify how the noise vectors are initialized to generate images and meshes in Figure 4 to resolve the clarity issue I pointed out in the "Methods and Evaluations" section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments which helped substantially improve the clarity of the submission. Please find a point-by-point response to your questions below. 1. Clarification of how upsampling in Figure 4 is conducted. - We agree with the reviewer that the description of these experiments/implementation caused confusion, and we will clarify our explanation in the final version of the paper. The resolution-free inference process is performed in the following manner. Let’s assume we have an INRFlow model trained at resolution $N=256$ for which we want to run inference at resolution $M=512$. We start by drawing $M^2$ pixel values from the Gaussian prior at timestep $t=1$. We then take $N^2$ pixels from these $M^2$ pixels (e.g., via simple grid sub-sampling) and feed this set of sub-sampled $N^2$ pixels to the encoder of INRFlow to compute spatial aware latents. Once these latents are computed, we feed the $M^2$ pixel coordinates and values to the decoder of INRFlow to compute cross-attention with the spatial-aware latents, which will give us $M^2$ pixel values for the next timestep. At the next timestep, we again repeat the process where we take $N^2$ pixels from the newly computed $M^2$ pixel values via sub-sampling to again feed through the encoder. In this setting, the spatially-aware latents act as the “neural network” parameters in GEM, GASP or PolyINR. However, instead of having these parameters explicitly parametrize a neural network or a generator, we tap into them via a cross-attention block in the decoder. To summarize: the resolution of the inputs that go into the encoder does not change at inference. We just let those inputs be a sub-sample of a higher resolution noisy image. Since the decoder of INRFlow can be evaluated continuously at any resolution, we can use it to generate an image at a higher resolution than the one seen by the encoder.
We will update our explanation in the paper and include a figure in the appendix with a visual explanation of the process (link to figure: https://docs.google.com/document/d/e/2PACX-1vRVsqNt103C0EFDMdvCnWF8_2XLUps3ztW4Z5Fyu-ZHMMIKrn-Rg3TR7aep3wMvnQB-1Yivb3HeTO45/pub) 2. Typo at Equation 5. - We thank the reviewer for pointing out the typo, which we agree is incorrect. We will fix this in the final version of the paper. 3. Missing reference: the paper may cite papers on diffusion autoencoders such as [1] because INRFlow architecture is quite similar to that of a diffusion autoencoder. - We thank the reviewer for pointing out this work. We will include and discuss it in the related work section. 4. I think the paper should point out a significant difference between their approach and those taken by other "function space models" such as PolyINR, GEM, and GASP. For these approaches, it is possible to sample a random function f and then evaluate the function multiple times. This is done by first sampling a latent code z, and then using z to parameterize a neural network. For GEM and GASP, the latent codes are transformed into network parameters. For PolyINR, z is turned into parameters of a generator network. Once this neural network is obtained, we can evaluate it to get the output at any input coordinates and as many input coordinates as we want. - The reviewer brings up a great point, which definitely needs clarification. We believe that the gap in our explanation was that one can interpret the spatial-aware latents computed from INRFlow’s encoder as the “latent codes that are transformed into network parameters” (as the reviewer explained). For INRFlow these latent codes are used to compute $k,v$ for the cross-attention block in the decoder, which takes in queries $q$ (i.e., coordinate-value pairs) at any resolution.
- The only thing that we need to do in order to obtain consistent outputs across different query resolutions is to keep the resolution of the encoder fixed, while the decoder can be queried at a different resolution. This is trivial to achieve by employing a simple sub-sampling operation (i.e., grid sub-sampling) and keeping the sub-sampling operation fixed during inference. We will update our explanation in the paper and include a figure in the appendix with a visual explanation of the process (link to figure: https://docs.google.com/document/d/e/2PACX-1vRVsqNt103C0EFDMdvCnWF8_2XLUps3ztW4Z5Fyu-ZHMMIKrn-Rg3TR7aep3wMvnQB-1Yivb3HeTO45/pub). This simple technique allows us to change the resolution at inference time without any other additional tricks regarding noise alignment at different resolutions and produces crisp and consistent examples at higher resolutions than the one used in training (see Fig 4 and Tab 5). [1] Preechakul et al. "Diffusion Autoencoders: Toward a Meaningful and Decodable Representation." 2022.
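The fixed grid sub-sampling described in this response can be sketched as follows (toy numpy code with assumed sizes $N=4$, $M=8$; `grid_subsample` is an illustrative name, not the authors' implementation):

```python
import numpy as np

def grid_subsample(noisy_hi, n):
    # take a fixed n x n grid sub-sample of an m x m noisy image; only this
    # sub-sample feeds the encoder, while the decoder queries all m x m pixels
    m = noisy_hi.shape[0]
    stride = m // n
    return noisy_hi[::stride, ::stride]

m, n = 8, 4                        # decoder resolution m, encoder resolution n
noisy = np.random.randn(m, m, 3)   # M^2 pixel values drawn from the prior
enc_in = grid_subsample(noisy, n)  # N^2 pixels for the encoder
assert enc_in.shape == (n, n, 3)
```

Because the same strided pattern is reused at every timestep, the encoder always sees a consistent view of the evolving higher-resolution image, which is what keeps outputs consistent across query resolutions.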
Summary: This paper proposes INRFlow, a novel domain-agnostic approach to learn flow matching in ambient space without the need for a pretrained domain-specific encoder. INRFlow has been evaluated on three tasks: image-to-image generation, image-to-3D point cloud generation, and protein folding. The effectiveness has been demonstrated. Strength: 1. The strength compared with SOTA baselines is highlighted. 2. The domain-agnostic flow matching is very attractive. Weakness and Questions: 1. The key novelty -- spatial-aware latents -- is not well explained in either the main text or the appendix. How do you select the pseudo coordinates? I can imagine the selection of these pseudo coordinates can significantly impact the latent code quality and the generation quality. Can you do some ablation studies on this? 2. While Figure 2 shows that the conditions can be both images and 3D point clouds, the experiments only focus on images as the condition. This causes some confusion. Is it possible to use 3D point clouds as conditions? 3. "We found that the improvements of spatial aware latents in 3D to not be as substantial as in the 2D image setting." The pseudo coordinates are in 2D or 3D? Is this caused by the selection of the pseudo coordinates? 4. Do you have some quantitative evaluation on the protein folding task? I am willing to improve my scores based on authors' answers to my question. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. The experimental designs make sense to me. Supplementary Material: Yes. See my questions above. Relation To Broader Scientific Literature: This work is highly related to the diffusion models, flow matching, and generative AI works. Essential References Not Discussed: No Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting the strength of INRFlow as a domain-agnostic flow matching model and for their thoughtful comments. Please find a point-by-point response to your questions below. 1. How do you select the pseudo coordinates?[...]Can you do some ablation studies on this? - In the image domain, we define pseudo coordinates to lie on a 2D grid, which results in pixels being grouped into patches of the same size. We add an ablation study on LSUN-church-256 to compare different pseudo-coordinates. We trained all models for 200K steps with batch size 128 and report results in the table below. INRFlow achieves the best performance when using the default grid pseudo coordinates. When using randomly sampled pseudo coordinates we observe a drop in performance. - We attribute this to the fact that when pseudo-coordinates are randomly sampled, each spatial-aware latent effectively does a different amount of work (since pixels only cross-attend to the nearest pseudo-coordinate). This unbalanced load across latents makes encoding less efficient. There are a few different ways to deal with this without necessarily relying on a grid: one is to cluster similar pseudo-coordinates to provide an equidistant distribution in 2D space, and another is to increase the number of spatial-aware latents so that each latent has to do less work. We empirically see that both of these options are effective. Ultimately, having pseudo-coordinates lie on a grid strikes a good balance of efficiency and effectiveness.

|pseudo coordinate|\# latents|FID-clip|FID|
|-|-|-|-|
|grid|1024|7.32|9.74|
|random|1024|11.66|17.99|
|KMeans++|1024|10.42|15.56|
|random|2048|8.95|11.86|

2. While Fig. 2 shows that conditions can be both images and 3D point clouds[...]. This causes some confusion. - In Fig 2 we tried to show that both images and 3D point clouds can be modeled by INRFlow in the same way (via spatial-aware latents).
We are aware of how this could lead to confusion and have edited the figure to make it clear (https://docs.google.com/document/d/e/2PACX-1vRVsqNt103C0EFDMdvCnWF8_2XLUps3ztW4Z5Fyu-ZHMMIKrn-Rg3TR7aep3wMvnQB-1Yivb3HeTO45/pub). 3. "Improvements of spatial-aware latents in 3D to not be as substantial as in the 2D image setting." [....] Is this caused by selection of the pseudo coordinates? - For point clouds, pseudo coordinates are defined in 3D space as opposed to in 2D for images. There are two main reasons we found for 3D point clouds not benefiting from spatial-aware latents: - Point clouds are a sparse representation of 3D geometry. If we define a fixed set of pseudo-coordinates as a 3D grid, a substantial number of latents will encode empty space, which is not efficient for encoding (similar to voxel representations). Therefore, we opt for vanilla PerceiverIO for 3D point cloud generation, obtaining great results without sacrificing efficiency. - In 3D, we learn a deformation field from a prior 3D point cloud distribution (e.g., a 3D Gaussian) towards a training set of shapes. Given that the geometry of the point cloud changes at different timesteps and for different samples, it is not trivial to define a fixed set of pseudo-coordinates to effectively cover the full geometry. We believe that timestep-aware and sample-aware pseudo-coordinates are an interesting direction for future work. 4. Quantitative evaluation on protein folding task - We have included a quantitative evaluation of the protein folding task. In particular, we randomly selected 512 proteins from the AFDB-SwissProt dataset and used them as a test set. We compare INRFlow with an open-source replication of AlphaFold3 [1] (i.e., Boltz-1 [2]), which is the SOTA approach for protein folding. We note that AlphaFold3 is extremely domain-specific, using complex and curated architectural designs for protein folding.
For example, it relies on multi-sequence alignment and template search on existing proteins. It also designs a triangle self-attention update for atomic coordinates. In contrast, INRFlow makes no assumptions about the data domain and effectively models different tasks under an equivalent architecture and training objective. - We report $C_\alpha$-LDDT and TM-score (higher is better), which are commonly used metrics to evaluate how predicted protein structures align with ground truth. Results indicate that INRFlow, which uses a domain-agnostic architecture, performs decently well on protein folding even when compared to SOTA models that require intricate domain expertise embedded in the architecture. Note that we have not optimized INRFlow for model size or other hyper-parameters in the protein folding experiment.

|Model|$C_\alpha$-LDDT|TM-score|
|-|-|-|
|AlphaFold3 (Boltz-1)|0.923|0.812|
|INRFlow|0.722|0.664|

[1] Abramson, J., et al. "Accurate structure prediction of biomolecular interactions with AlphaFold 3." Nature (2024). [2] Wohlwend, J., et al. "Boltz-1: Democratizing Biomolecular Interaction Modeling." bioRxiv (2024).
Cost-efficient Collaboration between On-device and Cloud Language Models
Accept (poster)
Summary: The paper presents a setting where a small model having access to local data collaborates with a state-of-the-art cloud-hosted LLM (without access to the data) to solve real tasks. To improve over an initial naive protocol (with back-and-forth chats between the two models), the paper introduces Minions, where the cloud model creates sub-tasks for the local model to execute. The paper presents a clear reduction in costs while maintaining high performance. Claims And Evidence: The core of the paper is focused on presenting processes that reduce cost while maintaining high performance (see for instance Figure 2). Overall the claims of cost reduction while maintaining high performance are convincing and well supported by evidence (e.g. Fig 5-6 and Table 1), and overall the paper is very well structured, making it a strong contribution to the conference. Methods And Evaluation Criteria: The method and evaluation criteria are robust (see Table 1), and especially the benchmark across different types of datasets (finance, health and scientific QA) gives a clear overview of the benefits of this approach.
Theoretical Claims: The paper doesn't rely on theoretical claims. Experimental Designs Or Analyses: The experimental design is the core strength of this paper: it is well planned and shows the advantage of Minions over the initial naive protocol and in comparison to the state-of-the-art performance of a frontier model (GPT-4o). Supplementary Material: I have checked a few details in the appendix regarding prompt conversations between Minions and the cloud model (Section F in the appendix), and they were very clearly presented. Relation To Broader Scientific Literature: The paper is well positioned in the scientific literature (however this is mostly presented in the appendix and not directly in the main content of the paper). Essential References Not Discussed: I haven't noticed anything missing. Other Strengths And Weaknesses: The main point which I would like the authors to discuss in more detail is a setting where the on-premise data is sensitive and should absolutely not be shared with the cloud model. How would you guarantee that the local LLM would not leak, even by mistake, any piece of information to the cloud model? Have you done any experiment on this specific topic? Other Comments Or Suggestions: I would improve Figure 2, which I think is very important for the narrative of the paper but found a bit hard to read, as many layers of information are embedded in it. Questions For Authors: Nothing else Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the detailed feedback! We include a Common Response, followed by an Individual Response. Please see the revised paper at this anonymous link: https://storage.googleapis.com/anonymous-files/minions.pdf ## Common Response We appreciate the positive feedback from all the reviewers: - Local-remote systems are a “compelling” \[xykf] and “underexplored direction of research” \[vVmz], making our submission a “strong contribution to the conference” \[5xc4]. - Our central claim – that such systems offer attractive cost-accuracy tradeoffs – is substantiated \[xykf], “well-supported by evidence” \[5xc4], and “backed by experiments” \[vVmz]. - The experimental setup is “thorough” \[vVmz], “well-planned” \[5xc4], and “comprehensive” \[xykf], and forms the “core strength of the paper” \[5xc4]. The reviewers’ feedback motivated the following updates: - **Latency \[vVmz,xykf]:** We added latency benchmarks on a single consumer GPU (_e.g._ RTX 4090) and found Minion/MinionS are only 1.44× and 2.89× slower than remote-only, while yielding 30× and \~5× cost savings (see §6.5) - **Adapting LocalLM \[vVmz]:** We show that Minion accuracy can be improved by supervised finetuning (SFT) of the minion (1B & 3B scales) on the target domain (§G.2). In §7, we highlight further opportunities for co-adaptation. - **Agentic Tool Use \[vVmz]:** Introduced a tool-augmented version of Minions, where the local model can use local tools, matching GPT4o-only accuracy (0.7) while cutting prefill cost by \~3.7× (see §E.5) - **Energy savings \[xykf]:** Added energy consumption analysis showing Minions consumes just 1/12th (1B LocalLM) and 1/6th (3B LocalLM) the energy of GPT-4o alone (see §E.4). - **Privacy implications \[xykf,5xc4]:** We highlight this important direction in §7.
While privacy merits a more careful treatment in future work, our preliminary results show that local LLM-based PII filtering reduces PII leakage from 22% to 4.5%.

---

## Individual Response

**Exploration on sensitive data.**

> _“The main point which I would like the authors to discuss in more detail is a setting where the data on premise is sensitive and should absolutely not be shared with the cloud model. How would you guarantee that the local LLM would not leak, even by mistake, any piece of information to the cloud model? Have you done any experiments on this specific topic?”_

Thank you for highlighting this important setting! While a comprehensive treatment of data privacy is beyond the scope of this paper, we agree that the topic merits a preliminary exploration. We thus present an initial analysis of data leakage mitigation within the Minions framework using contemporary privacy-preserving techniques, including controlled prompting [1,2] and PII filtering. Methodologically, we use a prompt-based filtering layer—a secondary LLM call applied to the local LM's output—to remove sensitive information. We evaluate this method on a QA dataset over a local filesystem of purchase receipts containing emails, addresses, phone numbers, credit card details, and more. With the privacy filter enabled, the PII leakage rate drops from 22% to 4.5%. While preliminary and imperfect, these results highlight the importance of further research in this area.

**_Clarity and Presentation._** As per your feedback, we have improved the clarity of Figure 2.

**\[1]** [**https://arxiv.org/abs/2410.17127**](https://arxiv.org/abs/2410.17127)
**\[2]** [**https://arxiv.org/pdf/2403.03129**](https://arxiv.org/pdf/2403.03129)

---

Rebuttal Comment 1.1: Comment: Thank you for this, I'm happy with your response, which addressed my main point.

---

Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our work and for highlighting the importance of settings with sensitive data.
Your suggestions improved our manuscript!
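As a concrete illustration of the filtering layer discussed in this thread, the sketch below redacts common PII patterns from a local LM's output before anything is forwarded to the cloud model. A regex scrubber stands in for the secondary LLM call described in the rebuttal; the patterns and the `filter_pii` name are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical stand-in for the prompt-based PII filtering layer: the real
# system uses a secondary LLM call; here simple regexes illustrate the idea.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
}

def filter_pii(local_lm_output: str) -> str:
    """Redact PII from the local LM's output before it leaves the device."""
    redacted = local_lm_output
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[{label}]", redacted)
    return redacted

msg = "Receipt for jane.doe@example.com, card 4111-1111-1111-1111, call 555-867-5309."
print(filter_pii(msg))
```

In a Minions-style pipeline this filter would wrap every outbound message from the local model, which is why the rebuttal reports leakage as a rate rather than a guarantee: the filter catches most, but not all, sensitive spans.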
Summary: This paper presents MINION and MINIONS, novel frameworks for cost-efficient collaboration between small on-device and cloud-based language models. MINION enables asymmetric collaborative communication between LocalLM (reading) and RemoteLM (reasoning), achieving a 30.4× cost reduction while recovering 87% of remote-only performance. However, it struggles with multi-step instructions and long contexts. To address this, MINIONS introduces task decomposition and parallelized subtasks in LocalLM, reducing costs by 5.7× while recovering 97.9% of remote-only performance. Experimental validation on multiple benchmarks highlights the trade-off between cost and accuracy.

Claims And Evidence: This paper asserts that the MINIONS protocol significantly reduces cloud inference costs while maintaining accuracy comparable to cloud-based models. Additionally, it claims that by adjusting the protocol’s hyperparameters, a flexible trade-off between cost and performance can be achieved. To support this claim, the authors conduct extensive evaluations across various domain benchmarks and different model sizes and types, reinforcing the validity of their argument. Notably, the analysis of performance recovery based on model size substantiates the claim regarding the cost-performance trade-off.

Methods And Evaluation Criteria: The proposed MINIONS framework is well-suited to the problem and highly plausible. The explicit role definition for both language models and the protocol designed to compensate for the limitations of small LMs are particularly compelling. However, the explanation of how the method operates in each iteration is located in the appendix, making it difficult to find and understand the exact workings of the loop. It would be beneficial to include a brief explanation in the main text or provide a hyperlink to the appendix. The evaluation criteria focus on accuracy and cost, both of which are highly appropriate for assessing the proposed protocol’s cost-performance trade-off.

Theoretical Claims: Rather than relying on theoretical claims, this study is primarily supported by experimental evidence, which is appropriate for the research context. However, a comparative analysis of communication latency between the two language models alongside real experimental data would have enhanced credibility.

Experimental Designs Or Analyses: The experimental design and analysis are conducted using appropriate methodologies. However, there is a lack of information on the experimental setup, such as the specific GPU used, which should be supplemented. Additionally, the comparative analysis of different hyperparameters effectively identifies the key factors influencing the protocol’s performance.

Supplementary Material: The supplementary materials include datasets, model specifications, cost models, and an extended discussion of related research, providing valuable support for the main results. The inclusion of example prompts and task decomposition strategies is particularly beneficial.

Relation To Broader Scientific Literature: This study builds upon prior research in multi-agent systems, retrieval-augmented generation (RAG), and cost-efficient LLM routing. Notably, it differentiates itself by addressing asymmetric roles between small on-device models and large cloud-based models, as well as exploring inter-model interactions. Citations to relevant literature are sufficient, and comparative experiments with RAG models effectively demonstrate the value of MINIONS.

Essential References Not Discussed: This paper leverages the appendix to cite all relevant studies comprehensively.

Other Strengths And Weaknesses:

Strengths
- The multi-round communication approach introduced in the framework allows for iterative improvements in performance, which is a promising direction.
- The paper provides an insightful cost analysis, demonstrating how task decomposition and parallelization contribute to cost reduction. The study also does well in quantifying the trade-offs between cost and accuracy across multiple benchmarks.
- The inclusion of various hyperparameter evaluations strengthens the reliability of the findings.
- Sharing prompts used in experiments, along with example responses, enhances reproducibility and improves clarity.

Weaknesses
- The size of the Local LM might be too large for practical on-device deployment in resource-constrained environments. Discussing the trade-offs between model size and performance in more detail would provide valuable insights into the feasibility of different approaches.
- While cost reductions are well analyzed, a more detailed discussion of latency optimization would be helpful. Measuring actual latency could provide a clearer picture of performance.
- The paper does not extensively discuss energy consumption or the impact on local resources (e.g., memory and computational overhead). A deeper analysis of these aspects would be valuable.
- The framework would benefit from a clearer pipeline structure, particularly in explaining how the loop functions, as described in Section D. The current explanation is somewhat unclear, making it difficult to fully understand the method and integrate it into existing workflows. Additionally, providing example data in the appendix to illustrate how the Remote LM applies its data chunking strategy would be helpful. The inconsistent use of terminology also makes the paper harder to follow on the first read. If these aspects were clarified and the protocol were made more explicit, I would be willing to reconsider my rating.

Other Comments Or Suggestions:
- It might be useful to explore privacy-aware chunk extraction to enhance secure collaboration between LocalLM and RemoteLM.
- Maintaining consistent terminology (e.g., ensuring "multi-step" and "multi-part" are clearly defined) would help avoid potential confusion.
- In line 246, clarifying whether "cloud model" refers to RemoteLM in this study would improve clarity.

Questions For Authors:
- Does the framework remain effective for small language models under 1B parameters?
- Has the impact of network latency on local-remote communication performance been analyzed?
- Are there potential security risks, such as adversarial attacks, in local-remote collaboration?
- Can this approach be extended to other multimodal inputs?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed feedback! We include a Common Response, followed by an Individual Response. Please see the revised paper at this anonymous link: https://storage.googleapis.com/anonymous-files/minions.pdf

## Common Response

See the Common Response above.

---

## Individual Response

**Latency Analysis**

> A more detailed discussion of latency optimization would be helpful.

The updated §6 includes comprehensive latency experiments. See §6, the Common Response, and the individual response to vVmz for more details.

**Feasibility**

> The size of Local LM might be too large for practical on-device deployment.

§6.2 now discusses the feasibility of running LLMs on modern laptops and workstations¹. Devices like the MacBook Pro support up to 600B and 200B models with quantization², and even the iPhone 15 Pro handles \~3B³. Our latency experiments on consumer-grade hardware show Minion and MinionS are only 1.44× and 2.89× slower than remote-only (see §6).

\[1] <https://ollama.com/library>
\[2] <https://www.apple.com/newsroom/2024/10>
\[3] <https://machinelearning.apple.com/research/introducing-apple-foundation-models>

**Model size tradeoffs**

> Discussing the trade-offs between model size and performance in more detail would provide valuable insights into the feasibility.

In our revised manuscript, §6.2 (Model Choice) and Figure 4 detail how performance and communication efficiency vary with local model size.

**Energy consumption analysis**

> The paper does not extensively discuss energy consumption.

Your point led to a new analysis, showing major energy savings by Minions (see §E.4 for the analysis). As we do not have access to the hardware running GPT-4o, we use energy consumption estimates from Epoch AI [1]. We benchmark local energy use for 1B & 3B models on an M1 Max and an A100 GPU. We compare to GPT-4o-only execution, and find 12× energy savings with the 1B LM and 6× with the 3B model on the A100, with similar gains on the M1 Max.
\[1] <https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use>

> Might be useful to explore privacy-aware chunk extraction to enhance secure collaboration.

We agree! While a full treatment of data privacy is beyond the scope of this paper, we report preliminary experiments on leakage mitigation within Minions (see Common Response and response to 5xc4).

**Clarity and Presentation**

- Experimental Setup: We added hardware details for local models (see §B).
- Protocol Description: We rewrote the Methods accordingly; added an example of remote chunking (see §4, §5, §G.1).
- Consistent Terminology:
  - The revision uses more consistent terminology for the names of the local and remote models.
  - Consolidated "multi-step" and "multi-part" language.

**Other Questions**

> Does this work with small models (<1B)?

Not well—performance improves significantly with local model sizes >=3B (see §6 + Fig. 4).

> Is network latency a bottleneck?

No, communication time is negligible (<0.002%) compared to inference (see §E.6).

> Are there security concerns?

Yes! There are works that study this extensively [1].

> Can this support multimodal inputs?

Yes, using a VLM as the local LM enables image-text processing (see §E.7).

\[1] <https://arxiv.org/abs/2310.06845>

---

Rebuttal Comment 1.1: Comment: The authors clearly addressed my concerns regarding latency, on-device feasibility, and energy consumption, with the revised version effectively highlighting the framework’s strengths—most notably, a 12× reduction in energy consumption. The revised version also provided a clearer understanding of the overall framework operation, which motivated an upward adjustment in the rating. I raised the score to 3.

---

Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our work and for the valuable feedback around cost experiments and presentation clarity, which has improved the manuscript!
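The latency discussion in this thread rests on a memory-bandwidth argument. A common back-of-the-envelope model (a generic sketch, not the paper's exact §C.2 framework) estimates per-token decode latency by assuming every generated token must stream all model weights through memory; the hardware numbers below are hypothetical.

```python
# Rough memory-bound decode model:
#   seconds_per_token ≈ (params * bytes_per_param) / memory_bandwidth
# This ignores compute, KV-cache traffic, and batching, so it is a lower bound.
def seconds_per_token(n_params: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
    return (n_params * bytes_per_param) / (bandwidth_gb_s * 1e9)

# Hypothetical setup: 3B local model, fp16 weights (2 bytes/param),
# an RTX-4090-class GPU with roughly 1000 GB/s of memory bandwidth.
local = seconds_per_token(3e9, 2, 1000)
print(f"{local * 1000:.1f} ms/token")  # ~6 ms per generated token
```

Comparing this local figure against cloud round-trip and queueing times is what lets one predict the overhead of a local-remote protocol from hardware specifications alone.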
Summary: The paper proposes an agentic pattern for collaborative modeling between a cloud-based large LM and a client-side small LM to reduce cloud inference costs. The authors propose two approaches:
1. MINION: A simple communication protocol where the small model summarizes and interacts with the cloud model. However, it struggles with long contexts and following multi-step instructions.
2. MINIONS: An improved systems approach that decomposes tasks into smaller chunks, enabling more efficient execution and parallelization.

The authors show that MINIONS recovers 97.9% of the accuracy of a cloud-only model while reducing costs by 5.7×. They also discuss ways to optimize the client LM, such as parallelizing subtasks to improve efficiency. The study evaluates a range of client models and finds that this approach is effective for models above 3B parameters, with performance improving as model size increases.

Claims And Evidence: Claims are reasonably well supported and backed by experiments.

Methods And Evaluation Criteria: The selected datasets cover a diverse set of domains in finance, healthcare, and science. It would have been desirable to include and discuss evaluation methods that more explicitly span tasks of varying complexity (e.g., simple Q&A retrieval tasks, more complex reasoning, ...). Also, given the cloud/client setup, agentic tasks such as automatically performing certain actions would have been relevant to include. Lastly, the main evaluation is cost and performance. One major reason for client-side inference is closeness to the user and providing a snappy experience. Besides a brief mention at the beginning, there is no evaluation of latency across the paper. I would expect that this approach would significantly increase latency and diminish the value of this approach.

Theoretical Claims: I did not check the correctness of theoretical claims.

Experimental Designs Or Analyses: The authors perform a comprehensive set of experiments. Their setup and analysis seem sound, but there is a definitive gap in evaluating latency, as pointed out above.

Supplementary Material: The paper has comprehensive supplementary material, discussing the method in more detail, additional references, and all prompts that have been used.

Relation To Broader Scientific Literature: The paper makes a contribution in the very crowded space of LLM agents. Its angle on splitting agents across cloud and client is a generally underexplored area and an interesting direction of research. The paper's main positioning seems to be reducing cloud inference cost. As such, a more detailed comparison with alternative approaches such as prompt compression and speculative decoding may be desirable. Alternatively, the authors should double down more on the benefits of the cloud/client setup, and what type of experiences this could enable.

Essential References Not Discussed: There is other work leveraging a smaller model combined with a big one. Consider citing the following:

Prompt compression using a smaller model (this will reduce cloud inference cost and potentially could also run on the client):
- Jiang et al. (2023). LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models. EMNLP 2023.
- Pan et al. (2024). LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression. ACL 2024.

Consider referencing for collaborative cloud/edge modeling (and related references):
- Hao et al. (2024). Hybrid SLM and LLM for Edge-Cloud Collaborative Inference. EdgeFM 2024.
- Xia et al. (2024). Hybrid Retrieval-Augmented Generation for Real-time Composition Assistance. EMNLP Industry Track 2024.

Other Strengths And Weaknesses:

Strengths:
- Interesting take on cloud/client models for agent scenarios.
- Thorough experimental study (despite the lack of broader benchmarks).

Weaknesses:
- Mainly experimental with little theoretical backing.
- The main ML aspect is prompt engineering and the combination of multiple agents. Adapting the small model to this interaction pattern (or at least a discussion of it) would be an interesting ML aspect and extension.

Other Comments Or Suggestions: n/a

Questions For Authors: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the detailed feedback! We include a Common Response, followed by an Individual Response. Please see the revised paper at this anonymous link: https://storage.googleapis.com/anonymous-files/minions.pdf

## Common Response

See the Common Response above.

---

## Individual Response

**Latency analysis**

> There is no evaluation of latency in the paper. I would expect that this approach would increase latency.

Your suggestion led to new latency experiments to complement the theoretical analysis in the original submission. We benchmarked Minion/MinionS on 2 consumer-grade GPUs commonly used in local workstations (e.g., RTX 4090, MSRP 1,599 USD on 03/26), finding they are only 1.44× and 2.89× slower than remote-only, while offering \~30× and \~5× cost savings. See Tab. 2 in the revised manuscript for more details.

We note that these empirical latency measurements depend on point-in-time factors like local hardware and cloud load. Thus, in §C.2 we provide a theoretical framework for estimating the latency overhead of any local-remote system given model and hardware specifications like memory bandwidth.

**Minion + agentic tool use.**

> Given the setup for cloud/client, agentic tasks such as automatically performing certain actions would have been relevant to include.

To support agentic tasks—where the model autonomously performs actions—we extend the Minions framework to enable local tool use (see §E.5). In this setup, the local LM executes actions guided by the remote LM. We evaluate this on filesystem queries with 5 tools and find that using Qwen2.5 locally with GPT-4o as the remote LM matches the performance of GPT-4o-only while using less than 28% of the remote tokens (see Tab. 12).

**Expansion of related works.** We now cite several of the works you highlighted, including speculative decoding, collaborative inference, and prompt compression.

**Adapting LocalLM**

> Adapting the small model to this interaction pattern would be an interesting ML aspect and extension.

See common response.
The revised manuscript now takes a step in this direction by finetuning small models on the target domain and demonstrating improvements in Minion accuracy (§G.2). The revised discussion (see “Local-remote model co-design”) spells out a number of extensions we are excited about, including multi-agent co-training.

**Stratifying by task complexity**

> It would have been desirable to include evaluations that more explicitly span tasks of varying complexity.

We agree and therefore stratify FinanceBench (FIN) and LongHealth (LH) problems by complexity. Surprisingly, we find that the Minions protocol outperforms the remote-only condition on harder problems. For example, on simple information-extraction tasks in FIN, Minions (with Qwen2.5) trails by 22.7 points, but on complex extraction & numerical reasoning tasks, it outperforms the remote-only by +4.6. The same holds for LH, where Minions (with Llama-8B) is −6.2 pts. on single-span questions but leads by +16.0 on multi-span synthesis. This trend holds across model sizes.

**Expanded discussion of cloud/client setup**

> Alternatively, the author should double down more on the benefits of cloud/client setup…

We’ve expanded §7 (Discussion) to better highlight the benefits of the cloud/client setup.
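The decompose-then-aggregate pattern discussed throughout this thread can be sketched in a few lines. The sketch below is only an illustration of the shape of such a protocol: `local_lm` and `remote_lm` are stubs standing in for a small on-device model and a cloud model, and the chunking and helper names are assumptions, not the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(document: str, size: int) -> list[str]:
    """Split a long document into fixed-size chunks."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def local_lm(subtask: str, chunk_text: str) -> str:
    # Stub: a real system would run a small on-device LM over this chunk.
    return chunk_text[:20]

def remote_lm(question: str, extracts: list[str]) -> str:
    # Stub: the cloud model sees only the short per-chunk extracts,
    # which is where the prefill-token (cost) savings come from.
    return " | ".join(extracts)

def minions_round(question: str, document: str, chunk_size: int = 100) -> str:
    """One round: fan subtasks out to the local model in parallel, aggregate remotely."""
    chunks = chunk(document, chunk_size)
    with ThreadPoolExecutor() as pool:
        extracts = list(pool.map(lambda c: local_lm(question, c), chunks))
    return remote_lm(question, extracts)
```

The key cost lever is visible in the stubs: the full document is read only locally, while the remote model receives a much shorter aggregated payload.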
MERGE$^3$: Efficient Evolutionary Merging on Consumer-grade GPUs
Accept (poster)
Summary: This paper performs model merging using multi-objective evolutionary search that yields Pareto-optimal solutions. The fitness function of the evolutionary algorithm requires one to evaluate a given model’s performance several times. The authors propose a performance estimator using item response theory (IRT) that can estimate the model’s true performance by only evaluating it on a subset of the data. The NSGA-II evolutionary algorithm is employed by Merge3, where each objective is the performance of the merged model on a particular task. They demonstrate that their method is capable of cross-lingual skill transfer. They report the efficacy of their performance predictor using the mean squared error between the predicted and the true scores.

Claims And Evidence: Yes, the claims are supported by evidence. But, as I pointed out in other sections, there is scope for improvement.

Methods And Evaluation Criteria: These make sense.
1. This paper demonstrated its efficacy by merging math models with Romanian, German, and Dutch language models respectively, and evaluated the merged model's performance on GSM8k in the corresponding language.
2. They evaluate Merge3's performance against the EvoMerge model using the same settings. Despite using 50 times less compute, this method yields a model that is only slightly worse than what EvoMerge resulted in.

However, while evaluating performance estimators, it is important to evaluate the Spearman correlation between the true ranking and the ranking of the models based on the estimator. The accuracies of the models might be very close to each other. It is important to demonstrate that the estimator is able to discriminate between them and rank them correctly. This is a common practice in Neural Architecture Search [1], Hyperparameter Optimization, etc.

[1] How Powerful are Performance Predictors in Neural Architecture Search? White et al.

Theoretical Claims: Yes, I checked their proofs on the performance estimators being $\epsilon$-stable and $\epsilon$-consistent, MP-IRT being asymptotically consistent, and also preserving near-optimality.

Experimental Designs Or Analyses: Please include AdaMerging and EMR-Merging [1] as baselines.

[1] EMR-Merging: Tuning-Free High-Performance Model Merging, Huang et al.

Supplementary Material: Yes, I read the entire supplementary material. They described the details of the evolutionary algorithm and their library. They also provide some clustering-based alternatives to random sampling.

Relation To Broader Scientific Literature: Yes, this paper provides an evolutionary-algorithm-based model merging method that reduces the computational burden by using only a subset of the evaluation data.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths
1. This work proposes a more efficient way to perform evolutionary search and speeds up EvoMerge by 50 times.

Weaknesses
1. Of late, papers such as AdaMerging, EMR-Merging, etc. are merging 8 models, each trained on a particular task. Would this method scale to perform well in that setting? It would require a lot more compute as the number of models increases.
2. The IRT-based estimators are heavily borrowed from the tinyBenchmarks paper, with a slight adaptation to estimate only the interpolation coefficients owing to the nature of model merging.
3. What about low-resource languages where several models are not available to initialize the population?

Other Comments Or Suggestions: You could spend a paragraph explaining Algorithm 1. Also, Algorithm 1 should be in the main paper and not the appendix.

Questions For Authors:
1. You have demonstrated on cross-lingual tasks. Can your method outperform the baselines on tasks involving the same language?
2. Why are other methods performing so poorly on these tasks? Would the surgery [2] model alleviate the representation bias in this case?
3. In Algorithm 1, it is not clear how the population is initialized. What is the ad-hoc genetic algorithm?

[2] Representation Surgery for Multi-Task Model Merging, Yang et al.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
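The IRT-based performance estimation that this review summarizes can be illustrated with a toy 2PL (two-parameter logistic) sketch: fit a model's latent ability θ from its responses on a small item subset, then predict its accuracy on the full benchmark. All item parameters and numbers below are synthetic, and this grid-search MLE is only a simplified stand-in for the paper's mpIRT/gmpIRT estimators.

```python
import math
import random

random.seed(0)
n_items = 500
# Synthetic 2PL item bank: (discrimination a_j, difficulty b_j) per item.
items = [(random.uniform(0.5, 2.0), random.gauss(0.0, 1.0)) for _ in range(n_items)]

def p_correct(theta, a, b):
    """2PL probability of a correct response: sigmoid(a * (theta - b))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Simulate a model of true ability 0.8 answering every item once.
true_theta = 0.8
responses = [random.random() < p_correct(true_theta, a, b) for a, b in items]

# Fit theta by maximum likelihood on a 50-item subset via grid search.
subset = random.sample(range(n_items), 50)

def log_lik(theta):
    ll = 0.0
    for j in subset:
        a, b = items[j]
        p = p_correct(theta, a, b)
        ll += math.log(p if responses[j] else 1 - p)
    return ll

grid = [-3 + 0.01 * k for k in range(601)]
theta_hat = max(grid, key=log_lik)

# Predict full-benchmark accuracy from the fitted ability.
est_acc = sum(p_correct(theta_hat, a, b) for a, b in items) / n_items
true_acc = sum(responses) / n_items
print(theta_hat, est_acc, true_acc)
```

The review's point about Spearman correlation amounts to asking whether `est_acc` preserves the *ordering* of candidate models, not just their absolute scores, since evolutionary selection only needs the ranking.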
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive review. We appreciate your insights and will do our best to address your concerns within the space constraints of the rebuttal.

### Methods and evaluation criteria

We report Spearman rank correlations (higher is better) using the Figure 3 setup. Due to space limits, we show averages across datasets and sample sizes (10–100); full breakdowns will appear in the paper. Results show that GMP-IRT consistently outperforms GP-IRT in ranking accuracy.

| | **n=10** | **n=20** | **n=30** | **n=50** | **n=100** |
| --- | --- | --- | --- | --- | --- |
| **gmpirt** | **0.54** | **0.68** | **0.73** | **0.83** | **0.84** |
| **gpirt** | 0.51 | 0.58 | 0.69 | 0.71 | 0.77 |

| | **arc** | **gsm8k** | **hellaswag** | **truthful** | **winogrande** |
| --- | --- | --- | --- | --- | --- |
| **gmpirt** | **0.84** | **0.64** | **0.57** | **0.93** | **0.63** |
| **gpirt** | 0.77 | 0.59 | 0.48 | 0.92 | 0.51 |

### Experimental design and analyses

**Ada-Merging and EMR Baselines:** Thank you for highlighting these baselines—we agree they are important. Our current setup relies on MergeKit, which doesn’t yet support them. Implementing and optimizing them (e.g., for quantization) is beyond the rebuttal scope, but we plan to add support and include them in the camera-ready or a follow-up.

### Weaknesses

**W1 — scaling the number of models:** Our approach scales well with the number of endpoint models because its computational cost depends on the population size (25 in our experiments) and the number of evolutionary iterations—both independent of how many endpoints are merged. We do note that the search space grows linearly with the number of endpoints, so more sophisticated initialization or mutation strategies might be needed to ensure efficient convergence.

**W2 — novelty of the estimators:** While inspired by tinyBenchmarks, our estimators (mpIRT and gmPIRT) are tailored for merging.
By leveraging the known abilities of endpoint models and assuming linearity, we obtain more efficient *and* more accurate ability estimation—crucial for guiding evolutionary search.

**W3 — Low-Resource Settings & Initialization:** MERGE³ uses the same endpoint models as standard merging methods. The initial population is generated by sampling interpolation coefficients—*not* from additional models. This makes our method equally applicable in low-resource settings, and we’ll clarify this distinction in the paper.

### Comments and questions

**C1 — Algorithm 1 explanation and position:** We agree Algorithm 1 is central and will move it to the main paper in the camera-ready using the extra page. We'll also add a brief explanation to clarify its key steps.

**Q1 — In-Language Merging:** While our primary focus was cross-lingual transfer, we also investigated merging models **within a single language**—Italian in this case. We merged a math-in-Italian model ([MetaMath-Mistral-7B + Mistral-Ita-7B](https://huggingface.co/DeepMount00/Mistral-Ita-7b)) with a code model with Italian capabilities ([CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)), then evaluated code generation performance on the [MBPP dataset](https://huggingface.co/datasets/google-research-datasets/mbpp) (zero-shot pass@1) and math accuracy in Italian. As shown below, the merged model not only achieves higher code accuracy but also preserves math performance:

| Model | Code Accuracy | Math Accuracy |
| --- | --- | --- |
| Merged Model | **0.218** | **0.596** |
| Math Model | 0.212 | 0.552 |
| CodeNinja | 0.200 | 0.192 |

These findings demonstrate that MERGE³ can integrate multiple task-specific abilities within the same language, and we plan to include this experiment in the appendix of the revised paper.
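The interpolation that defines a candidate in this search can be sketched concretely: a merged model is a convex combination of endpoint checkpoints. Weights are plain Python lists here; real merging operates on state-dict tensors (e.g. via MergeKit), so the `merge` helper and toy checkpoints below are purely illustrative.

```python
def merge(endpoints: list[dict], coeffs: list[float]) -> dict:
    """Linearly interpolate endpoint checkpoints: merged = sum_i c_i * W_i."""
    assert len(endpoints) == len(coeffs)
    assert abs(sum(coeffs) - 1.0) < 1e-8  # convex combination
    merged = {}
    for name in endpoints[0]:
        merged[name] = [
            sum(c * ep[name][i] for c, ep in zip(coeffs, endpoints))
            for i in range(len(endpoints[0][name]))
        ]
    return merged

# Toy two-parameter "checkpoints" standing in for full model state dicts.
math_model = {"layer.w": [1.0, 2.0]}
lang_model = {"layer.w": [3.0, 4.0]}
print(merge([math_model, lang_model], [0.25, 0.75]))  # {'layer.w': [2.5, 3.5]}
```

Under this view, the evolutionary search optimizes only the coefficient vector, which is why the population can be initialized by sampling coefficients rather than by collecting additional models.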
**Q2 — Baseline failures:** We appreciate the reference to [2], which highlights representation misalignment as a major factor in the poor performance of standard merging methods. Our baselines typically do not address this issue, so a technique like representation surgery could help reduce bias. While our work focuses on *efficiently* merging models via IRT-based estimators, we view representational alignment as a promising, complementary direction. We will cite [2] in the final version and consider leveraging such approaches to further enhance MERGE³.

**Q3 — Initialization and "Ad-hoc Genetic Algorithm":** We randomly initialize the population by creating interpolations of the same endpoint models used by our baselines—no additional models are needed. By “ad-hoc genetic algorithm,” we mean standard evolutionary operators tailored for merging. Specifically, we use **Simulated Binary Crossover** to recombine parents and **Polynomial Mutation** to introduce small perturbations. We will revise Algorithm 1 and the main text to make these steps clearer.

We thank you again for your valuable feedback. We remain available for any further questions or clarifications.

---

Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions. I would like to keep my score. The paper needs to be clearly written, detailing Algorithm 1, including aspects such as how the population is initialized. As the number of models to be merged increases, as the authors also pointed out, the search space increases. The authors need to demonstrate that their search space and algorithm are capable of yielding a well-performing model. It is also essential to include baselines such as Representation Surgery and AdaMerging and demonstrate that evolutionary search is indeed necessary despite its increased computational expense.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their response.
While we appreciate their engagement, we are disheartened by the outcome, especially given the substantial effort we made to directly address all concerns within the very limited rebuttal window. We believe the raised points were either already addressed or are slated to be fully resolved in the final version of the paper. > The paper **needs to be clearly written**, **detailing algorithm 1** including aspects such as **how the population is initialized**. > We addressed this directly in our rebuttal. **Algorithm 1 has been moved to the main paper with expanded explanations and added clarity.** Due to ICML policy, uploading an updated manuscript during the rebuttal phase was not permitted. Additionally, our supplementary material already includes a code implementation with detailed information about population initialization. We are confident this concern will be fully resolved in the camera-ready version. > As the number of models to be merged increases, as the authors also pointed out, **the search space increases**. The authors need to demonstrate that their **search space and algorithm are capable of yielding a well-performing model**. > We respectfully believe this is not a limitation of our approach. In Section 6.3, we merge **four** independently fine-tuned 7B-scale models into a single multilingual model using only a **consumer GPU**—a setting substantially larger and more challenging than those tackled by prior merging methods. In fact, while EMR-Merging reports *fully* combining 6 language models, each is only GPT-2 Small (124M)—over **70× smaller** per model. We also clarify that in evolutionary merging, the **search space grows with the number of merging hyperparameters**, not with model size. For a merge of *n* models, the number of hyperparameters increases linearly, but remains small (e.g., 4 models → 3 coefficients for TIES). 
The computational bottleneck is not the size of this space, but the **cost of evaluating each candidate model**, which our method drastically reduces via IRT and dataset subsampling. We will expand on this explanation in the final version. > It is also essential to include baselines such as **Representation Surgery and AdaMerging** and **demonstrate that evolutionary search is indeed necessary** despite its increased computational expense. > We respectfully **disagree that these baselines are essential in the context of generative large-scale LLM merging**. Both Representation Surgery and AdaMerging were designed for computer vision tasks, and their use in NLP is limited to small models such as BERT or GPT-2 Small. These methods are not implemented in MergeKit (the de facto LLM merging library), nor are they used in the **Hugging Face Open LLM Leaderboard**, which includes nearly 5,000 models—many of them merged using the baselines that we included. Moreover, adapting AdaMerging or Representation Surgery to our setting would **require substantial engineering** effort to scale them from 100M–200M parameter models to 7B+ and **is beyond the scope of this work, likely warranting a dedicated paper in its own right.** This was not feasible within the tight rebuttal window, especially given the other new experiments we included (e.g., Spearman correlation, in-language merging). We fully intend to explore these methods in future work or the camera-ready version. If these concerns played such a central role in the reviewer’s final decision, we believe it would have been appropriate to explicitly state their impact on the final score earlier in the discussion—rather than only a few hours before the discussion period closes and **4 days** **after the acknowledgment deadline had passed.** Given the limited timeframe, we feel we were unfairly penalized without a fair opportunity to respond. 
As noted previously, we plan to support and include these baselines in the camera-ready version or a follow-up submission, while all the other concerns have been addressed. In summary, we believe we have responded comprehensively and constructively to all points raised. We respectfully ask the reviewer to reconsider their score, as the remaining concerns do not appear to constitute grounds for rejection given the contributions and evidence provided. We remain committed to further improving the paper and incorporating all helpful feedback in the final version.
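The evolutionary operators the authors name in Q3 above (Simulated Binary Crossover to recombine parents and polynomial mutation for small perturbations of interpolation coefficients) can be sketched as follows. This is an illustrative implementation only, not the authors' code; the distribution indices (`eta`), the mutation probability, and the coefficient bounds [0, 1] are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbx_crossover(p1, p2, eta=15, low=0.0, high=1.0):
    """Simulated Binary Crossover over two parent coefficient vectors."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return np.clip(c1, low, high), np.clip(c2, low, high)

def polynomial_mutation(x, eta=20, low=0.0, high=1.0, p_mut=0.5):
    """Perturb each coefficient with probability p_mut, then clip to bounds."""
    u = rng.random(x.shape)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
    mask = rng.random(x.shape) < p_mut
    return np.clip(x + mask * delta * (high - low), low, high)

# Parents: interpolation coefficients for a 4-model merge (3 free coefficients).
p1 = np.array([0.2, 0.5, 0.3])
p2 = np.array([0.6, 0.1, 0.8])
c1, c2 = sbx_crossover(p1, p2)
child = polynomial_mutation(c1)
```

Libraries such as pymoo ship both operators, so in practice one would likely reuse those rather than hand-roll them.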
Summary: This paper proposes an efficient evolutionary model merging framework to achieve multilingual model merging and cross-language knowledge transfer, and conducts a large number of experiments and theoretical analysis to verify the effectiveness of the method. Claims And Evidence: YES Methods And Evaluation Criteria: YES Theoretical Claims: YES Experimental Designs Or Analyses: YES Supplementary Material: YES Relation To Broader Scientific Literature: This paper significantly improves the efficiency of previous evolutionary model merging[1]. [1] Evolutionary optimization of model merging recipes. Nature Machine Intelligence, 2025. Essential References Not Discussed: Clarify the more specific challenges/differences between [1] and this paper's method. [1] tinybenchmarks: evaluating llms with fewer examples. ICML, 2024. Other Strengths And Weaknesses: Strengths: - This paper proposes an efficient evolutionary model merging framework for multilingual model merging and cross-lingual knowledge transfer. - This paper provides theoretical guarantees to verify the effectiveness of the method. - This paper is clearly written and provides source code implementation. Weaknesses: - This paper seems to be just an application of [1] to model merging. It is not clear what challenges/difficulties there are in directly applying [1] to model merging, as this seems to be a straightforward application. The authors mention [1] in all subsections of this paper’s methods section, which seems to be a direct extension of [1] in the model merging setting. The authors need to clarify the connection more deeply. - This paper relies on a large validation set for data selection, however, in the standard model merging setting, such data does not seem to be available. 
- If validation set data is available, how does the performance compare to the approach of this paper if the available validation data is directly used as input to the test phase of the merged model through in-context learning? [1] tinybenchmarks: evaluating llms with fewer examples. ICML, 2024. Other Comments Or Suggestions: See weaknesses Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough evaluation of our work and for highlighting areas in need of clarification. We appreciate the opportunity to provide more detail on our key contributions relative to [1], our assumptions regarding the availability of a validation set, and our comparisons with in-context learning (ICL). Below, we address each of your points in turn. ### Weaknesses **W1** — **difference with tinyBenchmarks:** We thank the reviewer for pointing this out and agree that it is essential to clarify the difference from [1] (“tinyBenchmarks”). While [1] provides two estimators (pIRT and gpIRT) for efficient *evaluation* of large language models, our work proposes two *new* estimators—mpIRT and gmPIRT—***specifically tailored for model merging***. We demonstrate that plugging these new estimators into the *evolutionary* merging pipeline yields a *fifty-fold* reduction in computational costs, with negligible loss in accuracy. Concretely, mpIRT and gmPIRT incorporate the *additional prior* we have in the merging setting, namely ***access to all endpoint models’ abilities***. This lets us approximate the merged model’s ability by a *linear combination* of the endpoints’, rather than fitting a full IRT vector from scratch. Doing so drastically reduces the data and compute needed per iteration of evolutionary search, which is the main bottleneck. By contrast, directly applying [1] to merging *without* leveraging this extra prior would require re-fitting a merged model’s IRT parameters each time, significantly slowing down evolution. Thus, although we take inspiration from [1]’s general approach to *efficient performance estimation*, our paper addresses new challenges in model merging, such as evolving interpolation coefficients across multiple tasks/languages and making performance estimators *merging-aware*. 
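The linearity prior described above (the merged model's latent ability approximated as a linear combination of the endpoint models' abilities, then plugged into an IRT response model to predict per-item correctness) can be illustrated with a minimal sketch. All shapes, names, and the 2PL-style parameterization below are illustrative assumptions, not the exact mpIRT/gmPIRT estimators from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def merged_ability(endpoint_thetas, coeffs):
    """Linearity prior: the merged model's ability vector is approximated
    as a coefficient-weighted combination of pre-fitted endpoint abilities."""
    return np.einsum("j,jd->d", coeffs, endpoint_thetas)

def estimate_accuracy(theta, item_disc, item_diff):
    """2PL-style IRT: P(correct on item i) = sigmoid(a_i . theta - b_i);
    benchmark accuracy is estimated as the mean over items."""
    return sigmoid(item_disc @ theta - item_diff).mean()

rng = np.random.default_rng(0)
d, n_items = 16, 100                    # ability dim (16, per TinyBenchmarks) and item count
thetas = rng.normal(size=(2, d))        # two endpoint models, abilities fitted once
a = rng.normal(size=(n_items, d))       # item discrimination vectors
b = rng.normal(size=n_items)            # item difficulties
theta_m = merged_ability(thetas, np.array([0.3, 0.7]))
acc = estimate_accuracy(theta_m, a, b)  # cheap fitness proxy for evolutionary search
```

The point of the prior is visible here: no IRT fitting happens per candidate merge, only a weighted sum and a forward pass through the response model.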
**W2 — Need for a validation set:** We follow the evolutionary merging setting introduced in [2], which assumes a validation (or evaluation) set is available to measure candidate models’ fitness. This convention is also common in other merging works, such as those that use a held-out set to tune the scaling factor [3]. That said, we agree that exploring *fully unsupervised* proxies—e.g., perplexity or entropy on unlabeled data—would be an exciting direction, as it would relax the requirement for supervised validation data in merging. We plan to investigate such approaches in future work. [1] Polo, Felipe Maia, et al. "tinyBenchmarks: evaluating LLMs with fewer examples." *International Conference on Machine Learning*. PMLR, 2024. [2] Akiba, Takuya, et al. "Evolutionary optimization of model merging recipes." *Nature Machine Intelligence* (2025): 1-10. [3] Ilharco, Gabriel, et al. "Editing models with task arithmetic." *The Eleventh International Conference on Learning Representations*. **W3 — Comparison with In-Context Learning (ICL)** We appreciate the reviewer’s suggestion. To address it, we ran the proposed few-shot in-context learning approach on our multilingual experiments, providing 20 samples as context at inference time for two baselines (TIES-DARE and Task Arithmetic). As shown in the table below, Merge³ significantly outperforms these few-shot baselines on each language: | Method | DE | IT | NL | EN | | --- | --- | --- | --- | --- | | TIES-DARE Few-shot (20) | 0.227 | 0.226 | 0.227 | 0.226 | | Task Arithmetic Few-shot (20) | 0.427 | 0.406 | 0.491 | 0.566 | | Merge³ (gmpIRT-20) [Ours] | **0.720** | **0.690** | **0.690** | **0.790** | Furthermore, using ICL makes the context significantly longer and increases memory requirements, whereas our merged model has *no additional overhead at inference*. Once we merge the models offline, the resulting single network can be deployed with the same resource footprint as a standard model of that size. 
Thus, while few-shot ICL can help in certain scenarios, **model merging** provides a more permanent and resource-efficient solution. We are grateful for the time and attention you invested in reviewing our paper. Your feedback has been very helpful, and we believe that the clarifications provided address your concerns. We look forward to further refining our work in response to your comments.
Summary: This paper introduces MERGE3, a framework for efficient evolutionary model merging on consumer-grade GPUs. The method addresses computational bottlenecks in evolutionary merging by: (1) extracting a reduced dataset for evaluation, (2) estimating model abilities using Item Response Theory (IRT), and (3) evolving optimal merges via IRT-based performance estimators. The authors assert MERGE3 reduces fitness computation costs by 50× while preserving performance. Claims And Evidence: The paper makes several claims that are generally well-supported: 1. The 50× reduction in computational cost is demonstrated through calculations and empirical measurements. 2. Performance preservation is shown through comparisons with models evolved on full datasets. 3. Cross-lingual knowledge transfer effectiveness is demonstrated through experiments on multiple language pairs. The evidence is convincing, particularly the experimental results showing comparable performance to models requiring much greater computational resources. Methods And Evaluation Criteria: The methodology is coherent and properly described. The three-stage approach (Extract, Estimate, Evolve) is logically structured. Evaluation metrics and baselines are appropriate. The authors rigorously evaluate against standard merging techniques (TIES-DARE, SLERP, Task Arithmetic) and compare against state-of-the-art models like EvoLLM-JP-7B. Theoretical Claims: The theoretical foundation is solid. The authors provide formal guarantees for their performance estimators. Experimental Designs Or Analyses: The experiments are comprehensive but have some limitations: 1. Good coverage of cross-lingual transfer and multilingual model evolution 2. Appropriate baselines and metrics 3. 
Limited analysis of hyperparameter sensitivity. Ablation studies could be more extensive to isolate the contribution of each component. Hardware benchmarks are limited to a single GPU model (NVIDIA 4090). Supplementary Material: The supplementary material is thorough, including mathematical proofs, additional experimental results, detailed implementation specifics, and FLOPs calculations. The Mergenetic library described sounds valuable. Relation To Broader Scientific Literature: The paper properly positions itself within the model merging literature. Connections to IRT are well-established, and the authors appropriately attribute previous work. Essential References Not Discussed: The literature review is comprehensive, covering key works in model merging, evolutionary algorithms, and IRT. Other Strengths And Weaknesses: Strengths: 1. Addresses a practical limitation in state-of-the-art model merging 2. Novel theoretical guarantees for performance estimation 3. Open-source library implementation Weaknesses: 1. Limited discussion of potential negative transfer in cross-lingual merging 2. Could explore more dataset reduction strategies beyond random sampling Other Comments Or Suggestions: 1. Consider expanding the analysis to more GPU configurations Questions For Authors: How does your approach extend to scenarios with more than two endpoint models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thoughtful and detailed review. We are glad you find our framework coherent, our theoretical underpinnings solid, and our empirical results convincing. Below, we address your specific points and questions. **Limited analysis of hyperparameters:** The main hyperparameter in our pipeline is the interpolation coefficient **c** (Equation 5), for which we followed the default from TinyBenchmarks [1] without tuning. Following the reviewer’s suggestion, we ran a hyperparameter sweep over **c**. The results (included in the revised paper) show that while our original methods—**MP-IRT** and **GMP-IRT**—already outperformed competitors, the tuned **GMP-IRT*** further improves performance | Dataset | GMP-IRT* | GMP-IRT | GP-IRT* | GP-IRT | MP-IRT | | --- | --- | --- | --- | --- | --- | | ARC | **0.035** | 0.040 | 0.046 | 0.049 | 0.048 | | WINOGRANDE | **0.018** | 0.031 | 0.032 | 0.037 | 0.036 | | GSM8k | **0.057** | 0.057 | 0.074 | 0.064 | 0.062 | | HELLASWAG | **0.046** | 0.056 | 0.077 | 0.071 | 0.047 | | TRUTHFULQA | **0.040** | 0.045 | 0.062 | 0.055 | 0.044 | For the **evolutionary run**, we used 175 individuals, split into 25 subpopulations over 7 iterations—chosen to ensure all experiments completed in under 24 hours. While not extensively tuned, this setup balanced runtime and performance well. **Ablation studies:** Because evolutionary search is inherently noisy and computationally expensive to run many times end-to-end, the most feasible way to measure each module’s impact is to isolate it and evaluate it independently. In Figure 3, we focus on performance estimators, while Figure 4 examines ability estimators. Additional ablation results are included in Appendix C.2 and C.3. We agree these analyses are crucial, and we will add a concise summary of them in the main text to more clearly highlight each component’s contribution. 
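For context on the interpolation coefficient **c** discussed above: in TinyBenchmarks-style estimators, c blends the accuracy observed on the reduced sample with the IRT-based prediction. The following is only a hedged sketch of that blending idea; the exact gpIRT weighting in TinyBenchmarks is more involved.

```python
def blended_estimate(observed_acc, irt_acc, c):
    """Blend the accuracy observed on the reduced sample with the IRT-model
    prediction; c controls how much weight the observed accuracy receives.
    (Sketch only; the actual gpIRT estimator uses a more involved weighting.)"""
    return c * observed_acc + (1 - c) * irt_acc

est = blended_estimate(0.60, 0.70, 0.5)   # midway between the two estimates
```

A sweep over c, as in the table above, then amounts to evaluating this blend at several values and picking the one with the lowest estimation error.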
**Benchmarks limited to one GPU.** We will include multi-GPU benchmarks in the revised paper to better illustrate MERGE³’s accessibility. Below is an example using Mistral-7B on GSM8K-RO (10 examples, 4-bit models, SLERP merging): | **GPU** | **Eval Time** | **Merge Time** | | --- | --- | --- | | 3090 24GB | 65s | 135s | | 4090 24GB | 45s | 160s | | V100 32GB | 80s | 220s | These times show MERGE³ is practical even on older GPUs. We will add more results in the final version, including how runtimes scale with GPU class and batch size. ### Weaknesses **W1 — negative transfer:** In our updated experiments, we study **negative transfer** in cross-lingual merging using the DE, NL, RO GSM8K dataset and compare the **Negative Transfer Rate (NTR)** of MERGE3 against SLERP, TIES, TA. Negative transfer is observed when the merged model fails to answer a question correctly despite at least one base model having answered it correctly. Namely, `NTR = (# of negatively transferred questions) / (# of questions at least one base model got right)` | Language | MERGE3 (↓ better) | SLERP | TIES | TA | | --- | --- | --- | --- | --- | | Dutch | **0.38** | 0.95 | 0.96 | 0.96 | | Romanian | **0.52** | 0.87 | 0.87 | 0.88 | | German | **0.35** | 0.80 | 0.68 | 0.69 | We see that MERGE3 substantially reduces negative transfer compared to standard interpolation methods. We will include full details and derivations in the appendix for transparency. **W2 — additional data reduction strategies**: We agree that exploring non-random sampling strategies is an intriguing direction, and we experimented with two additional methods—an IRT-based clustering approach (as in [1]) and a “Representation Clustering” technique that uses concatenated embeddings from our endpoint models and applies PCA plus k-means. In both cases, we observed no clear performance benefits relative to simpler random sampling, especially after considering the added complexity and compute overhead. 
Thus, we ultimately opted for random sampling, but we will include these findings (presently in Appendix C.1) more prominently in the final version, highlighting why more sophisticated methods did not yield sufficient gains to justify their complexity. ### Questions **Q1 — extending to more than two endpoints**: Our evolutionary framework naturally extends to merging more than two endpoint models. The only change is an increase in the dimensionality of the search space, as we optimize more interpolation coefficients. The rest of the pipeline remains unchanged. We maintain a fixed population size (e.g., 25) and apply standard evolutionary operations. We demonstrated this in practice by merging three models for Japanese math and four models across IT, EN, NL, and DE in the multilingual setting—both without any changes to the architecture or training procedure. Thank you again for your thoughtful feedback. We're happy to provide further clarification if needed.
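The Negative Transfer Rate defined in W1 above can be computed in a few lines. The toy answers below are hypothetical, chosen only to exercise the definition.

```python
def negative_transfer_rate(merged_correct, base_correct):
    """NTR = (# negatively transferred questions) / (# questions at least one
    base model got right), per the definition in W1. `base_correct` holds one
    boolean list per base model; `merged_correct` is the merged model's list."""
    solvable = [any(col) for col in zip(*base_correct)]
    missed = sum(1 for s, m in zip(solvable, merged_correct) if s and not m)
    return missed / sum(solvable)

# Toy example: 5 questions, 2 base models (all answers hypothetical).
base = [[True, False, True, False, True],
        [False, True, True, False, False]]
merged = [True, True, False, False, True]
ntr = negative_transfer_rate(merged, base)   # 4 solvable, 1 missed -> 0.25
```

A lower NTR means the merged model rarely loses a question that at least one of its parents could answer.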
Summary: The authors present a framework for efficient evolutionary merging of language models for creating models with strong multi-task and/or cross-lingual task performance from a library of existing fine-tuned models without additional fine-tuning. In MERGE$^3$, the critical efficiency benefit comes from Extracting (in this case, randomly sampling) a much smaller sample of examples for different tasks from full datasets. Latent ability vectors are then iteratively Estimated for each model on different tasks via Item Response Theory, and these IRT-based estimators are used to inform optimal merges for each iteration. The authors conclude that MERGE$^3$ results in models that are competitive with an evolutionary merging algorithm that relies on full evaluation at each step. Claims And Evidence: The paper's claims are generally supported when they are concrete and specific. To me, however, it feels like a stretch to state that the algorithm "reduc\[es\] fitness computation costs 50x **while preserving performance**". In my opinion, it would be better to state objectively the % of accuracy performance preserved at the fraction of compute used. Additionally, the concrete example and general description (at least VRAM capacity) of the consumer GPU used should be included earlier than in the Appendix. Methods And Evaluation Criteria: Overall yes, though some aspects of presentation could be improved for fairness and clarity. I would have liked to see wall-clock training time results presented in the main body -- presumably there might be some amount of overhead from loading in models and data such that naive estimation from FLOP counts may be insufficient. Theoretical Claims: Seems correct, but I have some issues with clarity (see below) Experimental Designs Or Analyses: Seems sound and valid. 
Supplementary Material: No, just appendix Relation To Broader Scientific Literature: Main related work seems to be the Akiba et al Evolutionary Model Merging paper, where the key contribution over that paper is to use only a much smaller subset of dataset examples to estimate latent model ability. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. Reported efficiency gains are significant and do indeed enable an approximation of a method that is prohibitively expensive if performed naively 2. Interesting use of item response theory in the context of model merging 3. Proposed framework is supported by both empirical results and theoretical grounding Weaknesses: 1. See Questions 2-4. I have some uncertainty about the true wall-clock compute savings that result from using MERGE$^3$. I am thinking of e.g. potential overhead from repeated loading of different models into VRAM. Though these numbers are dependent on the specific GPU, model, and amount of data, I would feel better about accepting the "50x" figure with some kind of breakdown of specific amount of time needed for each framework step (and for each end-to-end iteration of Evolve) 2. Some clarity issues throughout, especially in the Section 5 theoretical analysis. See Comments 5-6 below 3. Missing baseline, see Q6 Other Comments Or Suggestions: 1. Consider updating the title to have "Evolutionary **Model** Merging", and specifically mentioning language models in the abstract 2. In the intro, somewhere between "In this paper" and "Our approach", I would have liked to see it explicitly stated that fitness computation is a key bottleneck in the standard evolutionary merging approach that is being compared to. Though it is clear looking back, it could help with clarity on a first read 3. 
"Consumer GPU" can mean many different things -- I think it would improve the specificity of the authors' claims to state explicitly the kind of GPU that was used and what kind of VRAM it has, earlier in the main body. Currently it is only implied that the specific device that benchmarking was performed on was a 4090, and the GPU's capacity is not stated until the appendix. 4. I would have liked to see something like Table 7 in the main body 5. I would strongly prefer avoidance of using the same $i$ to index variables repeatedly. It sometimes refers to examples in $\mathcal{D}$ (e.g. under Eq.1), but it sometimes seems to refer to models in the pool (e.g. in Eq. 3). Is Assumption 1 meant to imply that the dimension of the latent ability vectors must equal $|\mathcal{D}|$? 6. Please define "endpoint model" before using it, and spell out MP-IRT and GMP-IRT the first time they appear 7. In general, aligning axes/scales across figures in the same group would help with clarity. For Figure 4 in particular, going from 0 to 1.0 in the same increments would be helpful Questions For Authors: 1. Did the authors try any merging in data flow space as Akiba et al did? 2. What exactly is meant by "Estimated total time" in Table 7? Does this only account for Evolve runs (not Estimating?) 3. Relatedly, is my understanding of Estimate vs Evolve correct? I understand Estimate as an initial, more complete computation of estimated ability in the initial candidate pool of models. Estimate is only performed once. Evolve, then, also covers iterative updating of performance estimator parameters via repeated inference on the *reduced* set of examples from the Extract step -- is this correct? 4. Can the authors provide a discussion of scaling of compute requirements as datasets, set of tasks, and/or models grow, especially wrt Estimate vs Evolve? e.g. is there some order of magnitude of dataset size at which the compute requirements of Estimate could be expected to dominate Evolve? 5. 
What is the expected dimensionality of the latent ability vectors? 6. In Table 1, authors provide a baseline of translated ARC performance on models only fine-tuned for the language. Are there baselines for translated ARC performance on models fine-tuned only for ARC? Additionally, I understand that this is not always practical, but in this case it seems possible to provide a baseline of translated ARC performance on models fine-tuned on the translated ARC datasets, along with information about compute requirements for the same. This would help us understand whether MERGE$^3$ is expected to pareto-dominate even a direct fine-tuning approach when sufficient training data is available Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed and thoughtful review. Your feedback helped us identify key areas for clarification. Below, we respond to each point. **Claims And Evidence** - In the revision, we’ll replace “while preserving performance” with: 50× compute reduction with ~86% accuracy retained (e.g., Japanese GSM8K). The 50× figure reflects general FLOP savings; the 86% retention is task-specific and may vary. We’ll clarify this distinction to avoid overstatement. - We will clarify the GPU used for the experiments in the main manuscript, and specify minimum VRAM requirements. We used an RTX 4090, with Batch Size 8, Quantization 4bit, Model size 7B. **Methods And Evaluation Criteria** - Please refer to W1. ### Weaknesses - **W1 — true wall clock time**: We agree that FLOPs don’t fully capture runtime efficiency. We’ll include wall-clock breakdowns of Estimate and Evolve steps, as well as end-to-end timing per MERGE³ iteration. To clarify: both MERGE³ and EvoMerge load the same models, so load time is similar; the gain comes from evaluating fewer examples, which significantly reduces total runtime. We’ll support this with empirical data. - **W2 — clarity issues:** We now use *i* for data examples and *j* for models to remove ambiguity. “Endpoint model” is defined at first use, and MP-IRT/GMP-IRT are spelled out with brief explanations. Assumption 1 does not imply a latent space of dimension |D|, but that a merged model’s ability is a linear combination of endpoint abilities. We’ll clarify its role further in the revised paper. - **W3 — finetuning baseline**: Unfortunately, full fine-tuning of a 7B model on ARC isn’t feasible within the rebuttal. We also note that this experiment has substantially different requirements from our budget-friendly MERGE3 framework, which is specifically designed for consumer-grade hardware and operates using only a small number of datapoints (20). 
That said, we agree this comparison would be valuable and plan to include it in the final version. ### Comments **C1 — changing title:** we’ve updated the abstract to explicitly mention that our method targets **language models**, which we agree improves clarity. Regarding the title, we’ll aim to revise it to include “evolutionary *model* merging” for the camera-ready version, subject to the venue’s guidelines on title changes. **C2 — emphasizing fitness as a key bottleneck**: We revised the introduction to highlight that **fitness evaluation is the main bottleneck** in evolutionary merging. **C3 — GPU specification**: Please see the “claims and evidence” section. **C4 — wall-time**: Please see the answer to W1. **C5 and C6 —** **Clarifying notation:** Thanks for pointing out this lack of clarity. Please refer to the answer to W2. **C7 — Aligning axes in figures:** we will align the axes and scales across grouped figures so that all plots use consistent increments in the revised version of the paper. ### Questions **Q1 — Merging in Data Flow Space (DFS)**: We tried DFS merging as in Akiba et al., but found no consistent gains. Their Table 1 shows DFS often underperforms parameter space (PS) merging, despite adding 3B parameters—nearly half the base model. Given our focus on efficiency for consumer GPUs, this overhead was impractical, so we prioritized size-preserving strategies. **Q2 — Estimated total time**: It is a measure based on up to 12 hours runs for each method on a single NVIDIA 4090. This is an end to end measure of the entire pipeline: including model loading, estimation of abilities, and evolution of models. **Q3 — understanding of Estimate vs Evolve**: Yes, that’s correct. **Estimate** is run once to compute endpoint abilities using the full dataset. **Evolve** then runs iteratively, using only the reduced dataset and our estimators to evaluate new merged models efficiently. 
**Q4 — scaling of compute requirements**: We agree this is an important consideration and will include a discussion in the paper. In MERGE³, **Estimate** is run once per endpoint model on the full dataset, while **Evolve** runs repeatedly on a reduced subset. In practice, **Evolve dominates compute** because: 1. **Correctness labels are often public** (e.g., Open LLM leaderboard), making Estimate nearly free. 2. **Estimate is one-time**, whereas Evolve runs over many generations and a full population. When correctness must be computed manually, a rough rule of thumb is: **if** M × N > P × K (where M = endpoint models, N = full dataset size, P = population, K = reduced subset), then Estimate might dominate. Otherwise, Evolve is the primary cost. **Q5 — dimensionality of the latent ability vectors**: The dimensionality is a hyperparameter; we set it to **16**, following TinyBenchmarks [1]. **Q6 — see W1** We thank you again for your valuable feedback. We remain available for any further questions or clarifications. --- Rebuttal Comment 1.1: Comment: Thank you, I appreciate the detailed response. I would be happy to see this paper be accepted. Looking forward to see future iterations --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful and constructive feedback. We’ve done our best to address your concerns in the rebuttal and are planning additional improvements in the final version. If you feel your concerns have been resolved, we’d be grateful if you would consider updating your score.
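The Q4 rule of thumb (Estimate dominates when M × N > P × K) is simple arithmetic; a hedged sketch follows, with the number of generations folded into the Evolve side as an additional assumption and all numbers hypothetical.

```python
def estimate_dominates(m_endpoints, n_full, p_population, k_reduced, generations=1):
    """Rule of thumb from Q4: the one-time Estimate step (m_endpoints models,
    each evaluated on the full n_full-example dataset) dominates total compute
    when it exceeds the Evolve inference budget (population * reduced subset,
    here multiplied by the generation count as an extra assumption)."""
    return m_endpoints * n_full > p_population * k_reduced * generations

# Hypothetical numbers: 4 endpoints, 1000-example benchmark,
# 25-model population, 20-example reduced set, 7 generations.
estimate_dominates(4, 1000, 25, 20, 7)   # 4000 > 3500 -> True
```

In the common case where endpoint correctness labels are already public (e.g., from a leaderboard), the left-hand side is effectively free and Evolve always dominates.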
STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings
Accept (poster)
Summary: Interesting topic and good experimental design choices. However, the work lacks empirical evidence that watermarking is what makes the method strong. Claims And Evidence: Good: LLM rephrasing enables reliable statistical tests for dataset membership. Bad: The fact that it works better with green/red watermarking is not proven. It seems like adding a distortion (and so making the reformulation worse perplexity-wise) increases memorization and thus the strength of the test. However, the authors do not test simply rephrasing with higher temperature and without watermarking, which could be a simpler solution. If it works, watermarking is not necessary. Methods And Evaluation Criteria: Yes, the benchmarks and models evaluated make sense. As stated as a limitation, however, it would have been better to have some pretraining experiments; within the compute budget used in this work, the authors could have run them. The main limitation I see is that keeping x copies of a benchmark means they could also leak. Also, you could just keep your benchmark private the whole time; that way you are sure that no LLM is contaminated! For protection of other types of texts, the authors' method makes more sense, though. Theoretical Claims: No theoretical claim. Experimental Designs Or Analyses: Yes, see "Claims And Evidence" section. The main limitation is that the emphasis is mostly on protecting benchmarks, while the method makes more sense for other types of texts: if one is okay with keeping some versions of the benchmark private, then couldn't they keep the original benchmark private? Supplementary Material: Yes, the templates look good and it's nice that they are included. Relation To Broader Scientific Literature: It compares to two good baselines, and the related work is pretty complete. Essential References Not Discussed: It lacks related work on radioactivity of watermarks (see https://arxiv.org/abs/2402.14904), i.e. how to find traces of watermarks in the model. 
FYI, the same authors have (apparently after the ICML deadline) posted a paper on how to use that to protect benchmarks specifically, without the need for multiple reformulations. Other Strengths And Weaknesses: - Very nicely written - Important line of work, and the idea of rephrasing is nice in order to have good calibration and enable dataset inference - Good experimental design choices, except IMO too much emphasis on benchmark protection Other Comments Or Suggestions: see above Questions For Authors: - discuss the literature on radioactive watermarks - add an experiment where, instead of watermarking with a green/red list during reformulation, you just use a higher temperature. In other words, add a scatter plot with p-value (y-axis) and perplexity of the reformulation (x-axis), with and without watermarking but at different temperatures. Perplexity is probably the key factor here - Again about perplexity: what makes the method work is the distortion. For benchmarks, the authors show that it's okay because it preserves the models' accuracy. However, for other texts, people may not want to publish online a reformulated version of their text with flaws. I sense that the authors want to pass the message that there are no flaws most of the time: the previous experiment (p-value as a function of perplexity) would highlight this point. Otherwise, this should be more clearly stated as a limitation. Code Of Conduct: Affirmed. Overall Recommendation: 3
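For context on the green/red-list mechanism this review keeps referring to: a KGW-style scheme pseudorandomly splits the vocabulary into a "green" and a "red" set at each step, keyed on a secret key and the previous token, and biases sampling toward green tokens. The sketch below is illustrative only — the vocabulary size, hash construction, and `delta` value are my assumptions, not the paper's implementation.

```python
import hashlib
import math
import random

def green_list(prev_token, key, vocab_size, gamma=0.5):
    # Key-seeded pseudorandom split: a gamma fraction of token ids is
    # "green" for this (key, previous-token) context.
    seed = int.from_bytes(
        hashlib.sha256(f"{key}:{prev_token}".encode()).digest()[:8], "big")
    ids = list(range(vocab_size))
    random.Random(seed).shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def watermarked_sample(logits, prev_token, key, delta=2.0, temperature=1.0):
    # Add delta to every green-token logit, then softmax-sample.
    green = green_list(prev_token, key, len(logits))
    biased = [(l + delta if i in green else l) / temperature
              for i, l in enumerate(logits)]
    zmax = max(biased)
    weights = [math.exp(b - zmax) for b in biased]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]
```

Using a distinct key per rephrased copy gives each copy its own green sets, which is what makes the public and private versions statistically distinguishable later.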
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work. We respond to the raised concerns below. ### Re: Experimental Designs > E1: Main limitation is that the emphasis is mostly on protecting benchmarks, while the method makes more sense for other types of texts We would like to clarify that our study is not limited to protecting benchmarks. Our focus is on the broader problem of dataset inference. We conduct preliminary experiments on detecting benchmark contamination due to its importance and the strict requirements for ensuring the quality of the rephrased versions. Importantly, in Section 5, we demonstrate STAMP can successfully detect membership of diverse real-world content **including blog articles and research abstracts.** > E2: if one is okay with keeping some versions of the benchmark private, then couldn't they keep the original benchmark private? We agree with the reviewer that private benchmarking presents one solution to the contamination problem. However, as highlighted by previous work [1], private benchmarking raises concerns about transparency. We believe our work contributes to an ongoing discussion on ensuring trustworthy evaluations and provides a method to detect contamination for public benchmarks. ### Essential References > R1: Radioactive Watermarks (also Q1) In [2], the authors demonstrate that it's possible to detect if an LLM is **fine-tuned** on the outputs of another watermarked LLM, by detecting the watermark signal in the outputs. A follow-up work [3] (posted after the ICML deadline) extends this framework to detect benchmark contamination through watermarked rephrasing, similar to our approach. We find that their approach has limited applicability since it requires the rephrasing LLM and the contaminated LLM to share the same tokenizer. Moreover, we hypothesise that their approach would require stronger watermarks and higher repetition in training data. We conduct preliminary experiments to verify our hypothesis.
Our results show that while [3] can detect contamination with higher repetitions and stronger watermarks, STAMP significantly outperforms this approach across all settings (lower p-value is better, with p < 0.05 indicating contamination and ~0 denoting vanishingly small p-values).

| Watermark Strength | Repetition | STAMP p-value | [3] p-value |
|---|---|---|---|
| 2.0 | 1 | 6.8e-28 | 0.65 |
| 2.0 | 4 | ~0 | 1.1e-01 |
| 4.0 | 4 | ~0 | 4.7e-3 |

### Sampling with a higher temperature We thank the reviewer for suggesting an interesting experiment (Q2). We use this section to discuss the reviewer's concern about whether watermarking is necessary or whether we could instead just use a higher temperature. We would like to note that while one intuition behind our use of watermarking is indeed increased memorisation (due to distortions, as pointed out by the reviewer), we also highlight another important intuition: using different keys (as hash keys) for different rephrases embeds distinct watermarking signals, **reducing token overlap between versions** and increasing perplexity divergence specifically in cases of contamination (since the contaminated model would overfit the tokens in the public rephrase). Our position is that while sampling with a high temperature is important, the use of green/red list watermarking is complementary and helps improve the sensitivity of our test. We performed some preliminary experiments by sampling with a temperature of $1.2$ and found our initial results to be negative. We compute the std. dev. of ppl across rephrasings of each sample and present the 95th percentile value below. Our results show that at higher T ($T \geq 1.2$) the model often generates gibberish, which renders our test ineffective due to large and frequent outliers. While we believe that, in principle, we could calibrate T better, our initial results show that increasing T beyond a point might not be optimal and watermarking can be complementary to higher temperatures.
| Temp | Watermark Strength | 95th percentile std(ppl) |
|---|---|---|
| 1 | 0.0 | 40 |
| 1 | 2.0 | 48 |
| 1.2 | 0.0 | 3863 |

### Questions: Q1 and Q2 are addressed above. > Q3: Flaws due to the distortion This is indeed a valid concern. To address it, we performed a human study in Section 5, where we indeed found that 6 out of 24 authors indicated their abstracts could use **minor edits**, suggesting the rephrases can have flaws. However, **the majority of authors found the rephrasings to be acceptable.** We believe this will be less of a concern going forward, as general model capabilities (including paraphrasing) continue to improve. Regardless, we will note this as a limitation in our draft. --- [1] Bansel et al. Peeking Behind Closed Doors: Risks of LLM Evaluation by Private Data Curators. ICLR 25 blog post [2] Sander et al. Watermarking Makes Language Models Radioactive. NeurIPS 24 [3] Sander et al. Detecting Benchmark Contamination Through Watermarking. Preprint
Summary: The paper proposes a framework, called STAMP, for detecting dataset membership (inferring whether a dataset was included in the pretraining dataset of an LLM). The framework consists of generating multiple watermarked rephrases of the content, with a distinct watermark embedded in each rephrasing. One version is released publicly, the others kept private. When a model is then released, they compute the model perplexity on both the public version and the private versions. Using a statistical test to compare model likelihoods, they can make an informed inference. They specifically use the KGW watermarking scheme, steering generations towards a green subset of the vocabulary. They test their approach by further pretraining Pythia-1B on contaminated training data. They find their method to work very well, better than other approaches and able to identify even small contamination. Claims And Evidence: Yes. The only unfortunate thing is the connection to real-world pretraining data sizes. Methods And Evaluation Criteria: Yes. Theoretical Claims: NA Experimental Designs Or Analyses: Yes. Their main experiment on training Pythia-1B on a deliberately contaminated dataset makes a lot of sense. All the ablations are well thought through and add valuable insights. Supplementary Material: No. Relation To Broader Scientific Literature: They consider very relevant pieces of work to position themselves. - They convincingly show that Zhang et al.'s approach suffers from a distribution shift, making the contamination detection flawed in practice. - They cite both Wei et al. and Meeus et al., who use unique sequences to mark content. They argue that these techniques impair machine readability, indexing and retrieval - making it impractical for content creators. For benchmarks, they argue that these techniques might alter their utility. These arguments are fairly convincing.
Essential References Not Discussed: I would not per se say essential, but the following pieces of work could contribute to a potentially better positioning of the work: 1. A paper showing that it is feasible to detect whether models are trained on watermarked content, which could be a justification for why watermarking is chosen as a technology to do this. Sander, T., Fernandez, P., Durmus, A., Douze, M., & Furon, T. (2024). Watermarking makes language models radioactive. Advances in Neural Information Processing Systems, 37, 21079-21113. 2. The BIG-bench benchmark actually includes a 'canary' string which could enable detection as well. I don't think it's particularly effective compared to STAMP, but it does feel like something relevant in the related work. Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., ... & Wang, G. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. 3. A position paper arguing that MIAs cannot prove that an LLM was trained on certain data. Zhang, J., Das, D., Kamath, G., & Tramèr, F. (2024). Membership inference attacks cannot prove that a model was trained on your data. arXiv preprint arXiv:2409.19798. Other Strengths And Weaknesses: Strengths: - The paper is very well written, the method is clearly novel and works well. Overall a great piece. - The analysis that a bag-of-words classifier can distinguish between human-generated text and LLM-based rephrases is interesting and very useful. Authors might find it useful to relate this to similar mistakes made in the field of membership inference attacks [1,2]. - The paper includes the important ablation studies that I as a reviewer thought about when reading, and executes them very carefully.
Specifically: (i) The use of watermarked rephrases instead of regular rephrases, (ii) Maintaining the utility of the benchmark after the watermarked rephrases and (iii) When only a part of the benchmark is used for training. Weaknesses: - I find the experimental setup to be quite convincing, but the entire analysis does not compellingly show if this would scale to real-world pretrained models. Figure 3 shows that the method becomes less effective with pretraining data size, while the scale of 7B tokens does not come close to the scale of today's pretraining datasets. However, they still show that this is better than any other method available today in their setup, so this is probably ok. - The paper could improve if the authors considered different watermarking schemes. [1] Das, D., Zhang, J., & Tramèr, F. (2024). Blind baselines beat membership inference attacks for foundation models. arXiv preprint arXiv:2406.16201. [2] Meeus, M., Shilov, I., Jain, S., Faysse, M., Rei, M., & de Montjoye, Y. A. (2024). SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How to Fix It). arXiv preprint arXiv:2406.17975. Other Comments Or Suggestions: - Another limitation of methods used by Wei et al. and Meeus et al. is the possibility that these unique sequences or copyright traps are removed by training data preprocessing techniques (either by perplexity filtering or by deduplication) - which STAMP is not prone to. It might be good to also mention that as a justification. - Currently you have two different citations of Meeus et al. (copyright traps for LLMs), and it's not clear whether this is referring to two different papers or consistently to the same. Questions For Authors: - I appreciate the ablation done for STAMP using rephrases that are watermarked versus not watermarked. From Table 1, it seems that the non-watermarked versions of STAMP also work well. Are there any trade-offs to be made to use the watermarked version?
- For the scale of real-world models and benchmarks, what private-key count would you actually recommend? And how many data samples would you need for it to be meaningful? Could you apply any scaling laws to results such as in Figure 3? Code Of Conduct: Affirmed. Overall Recommendation: 5
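The core statistical machinery this review summarizes — comparing the model's perplexity on the public rephrasing against the private ones with a paired test — can be sketched in a few lines. This is my simplification, not the authors' code: I use a normal approximation to the paired t statistic (reasonable for a few hundred pairs) and a toy interface over per-sample perplexities.

```python
import math
from statistics import NormalDist, mean, stdev

def paired_membership_test(ppl_public, ppl_private):
    """One-sided paired test: is perplexity on the public (released)
    rephrasing systematically LOWER than on the private, never-released
    rephrasings of the same samples?

    ppl_public[i]  -- perplexity of sample i's public version
    ppl_private[i] -- mean perplexity over sample i's private versions
    Returns an approximate one-sided p-value.
    """
    diffs = [pub - priv for pub, priv in zip(ppl_public, ppl_private)]
    t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
    # Normal approximation to the t-distribution CDF:
    return NormalDist().cdf(t)  # small p => public copy was memorized
```

For a clean (uncontaminated) model the public/private perplexity differences are symmetric noise and the p-value hovers around 0.5; a contaminated model depresses perplexity only on the public copy, driving the p-value toward zero.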
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and are happy to see that the reviewer enjoyed our writing and found our method novel. We respond to the reviewer’s comments and questions below. ### Re: Weaknesses > W1: Scaling to real-world pretrained models. We acknowledge the reviewer's concern about our detector scaling to real-world models. However, as the reviewer points out, our method is better than any other method available today. We believe that with larger LLMs and some amount of repetition (factors known to increase memorization [3]) our method will be effective at the scale of real-world datasets and will, hopefully, show better scaling properties compared to other approaches. We would like to refer the reviewer to our response to Reviewer ojw8, where we show that for the same $\frac{\text{tokens}}{\text{parameters}}$ ratio, larger models yield stronger detection results. ### Re: Questions > Q1: I appreciate the ablation done for STAMP... Are there any trade-offs to be made to use the watermarked version? We agree that the version of STAMP without watermarked variants also works well, but we demonstrate that watermarking increases the statistical strength of our test. This suggests better scaling trends with larger pretraining datasets. In principle, using a stronger watermark signal may increase detectability but may hurt the quality and utility of the paraphrased text. > Q2: For the scale of real-world models and benchmarks >> (2a): what private-key count would you actually recommend? We believe our analysis in Figure 2 (right) is largely independent of the scale of models and benchmarks. We find that 5-6 keys should suffice, as at this point the empirical average of each test sample closely approximates the true average, a factor we believe is independent of the scale of models and benchmarks. >> (2b): And how many data samples would you need for it to be meaningful?
Our test provides statistically significant results with as few as a few hundred test pairs (500-1000). Importantly, as noted in our response to Reviewer aozb, our framework allows a single copyrighted sample to generate multiple test pairs. >> (2c): Could you apply any scaling laws to results such as in Figure 3? Thanks for the suggestion. We conduct additional analyses to apply scaling laws to our results. Using the data points from Figure 3, we fit the power law from [3] (a linear relationship between $\log(\text{p-value})$ and $\log(D)$). We obtain a good fit (e.g. $r^2$ (goodness of fit) of 0.9 for trivia_qa) with the resulting curves predicting that our method will obtain statistically significant results (p<0.05) up to $\approx 10$B tokens. ### Re: Essential References: We agree with the reviewer that while these missing references are not essential, discussing them would help us better position our work. > R1: Watermarking makes language models radioactive We refer the reviewer to our rebuttal to Reviewer oVrP for a detailed discussion on radioactive watermarks including additional experiments. We will include these in the next version of our draft. > R2: `Canary` in the big-bench benchmark We believe that while these canaries were originally designed for a different purpose, in principle, they could serve a similar detection function as Wei et al.'s random sequence insertion method. Though potentially effective in some scenarios, such an approach would face the same limitations we highlight for Wei et al.'s work. > R3: Membership inference attacks cannot prove that a model was trained on your data. The position paper [4] argues that existing MIAs that use data collected a posteriori for calibration are statistically unsound, a position that aligns with our analysis of Zhang et al.'s approach, where we highlight that it suffers from a distribution shift.
More importantly, we believe our framework meets the criteria of a sound training data proof argued in the position paper. Our test rejects the null hypothesis when the test statistic on the public dataset is unusual compared to private datasets which a priori were equally likely to have been used for training. > C2: Two different citations of Meeus et al. (copyright traps for LLMs). Thanks for pointing this out! These refer to the same paper and we will fix it in the draft. --- Once again, we thank you for the constructive feedback. Working on the questions has helped us improve the quality of our analysis. Please let us know if we can address any further concerns. [3] Kaplan et al. Scaling Laws for Neural Language Models [4] Zhang et al. Membership inference attacks cannot prove that a model was trained on your data. SaTML 25 [5] Carlini et al. Quantifying Memorization Across Neural Language Models. ICLR 23
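The scaling-law fit described in (2c) — a linear relationship between $\log(\text{p-value})$ and $\log(D)$ — is just least squares in log-log space, extrapolated to find where the predicted p-value crosses the significance threshold. A pure-Python sketch; the helper names and the synthetic data points in the usage are illustrative, not the authors' Figure 3 values.

```python
import math

def fit_loglog(D_tokens, p_values):
    # Least-squares fit of log10(p) = a * log10(D) + b.
    xs = [math.log10(d) for d in D_tokens]
    ys = [math.log10(p) for p in p_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def tokens_at_significance(a, b, alpha=0.05):
    # Extrapolate the fitted line to the D where predicted p = alpha;
    # for a > 0 (detection weakens with more data), the test stays
    # significant for all pretraining sizes up to this D.
    return 10 ** ((math.log10(alpha) - b) / a)
```

For example, two synthetic points lying exactly on the line $\log_{10} p = 3 \log_{10} D - 30$ recover slope 3 and intercept -30, and the extrapolated crossing of p = 0.05 lands a bit under 4B tokens.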
Summary: The authors propose a method for dataset membership inference based on generating one public paraphrase of specific content and several private ones, then using a perplexity-based statistical test for detecting whether the dataset was part of the training set. ### Update after rebuttal: Thank you for addressing my concerns. I updated my score Claims And Evidence: The claims appear to be supported by enough evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria seem sound to me. Theoretical Claims: N/A Experimental Designs Or Analyses: One potential issue is that the authors consider only continual pretraining. The concern is that the samples seen toward the end of training are more likely to be "fresher in the LLM's memory," so the results for regular pretraining (which is a more realistic scenario) might differ significantly. Supplementary Material: The authors included additional experiments and details in the Appendix. I find the results on partial contamination particularly interesting and insightful (Figure 4). Relation To Broader Scientific Literature: The proposed method improves over the considered baselines. However, I am not fully familiar with the dataset membership inference literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I noticed a few weaknesses that I hope the authors can clarify or address. If the goal is to detect copyrighted samples, it seems like a strong assumption to presume that the "defender" has a relatively large dataset of copyrighted samples. It could be the case that they only have a few samples, which might render the method ineffective. On the other hand, if the goal is to detect benchmarks in the model's pretraining, I don't think it's realistic to assume that someone could know in advance that a benchmark might be used for a model, then ensure it is paraphrased everywhere it appears on the internet, so they can rely on dataset membership detection later.
Additionally, if all the benchmarks from the model's pretraining were "paraphrased," this could negatively affect performance, as the model might start learning the "watermarks" introduced by the paraphrases. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We respond to the reviewer’s comments and questions below. ### Re: Weaknesses > W1: If the goal is to detect copyrighted samples, it seems like a strong assumption to presume that the "defender" has a relatively large dataset of copyrighted samples. It could be the case that they only have a few samples, which might render the method ineffective. We would like to clarify a potential misunderstanding about the sample complexity of our detection method. We find that our test yields statistically significant results with as few as **400 pairs** (Figure 3). Importantly, a single copyrighted sample can generate multiple test pairs. For instance, in our blog post case study (section 5.2), we perform dataset inference using **44 posts** by performing dataset inference on a collection of paragraphs from the blog, each paragraph forming a separate test pair. This approach allows our method to work effectively even with limited copyrighted samples, giving content creators flexibility in defining what constitutes a pair for the statistical test. > W2: On the other hand, if the goal is to detect benchmarks in the model's pretraining, I don't think it's realistic to assume that someone could know in advance that a benchmark might be used for a model, then ensure it is paraphrased everywhere it appears on the internet, so they can rely on dataset membership detection later. We would like to clarify that our approach is not meant to protect existing benchmarks but we offer a solution for future dataset releases. As per our approach, the **onus is on the benchmark creators to watermark their own benchmarks before releasing them online.** The creator generates multiple paraphrases, each with a unique watermark key. One version is released publicly for model evaluation, while others remain private, as explained in section 4.1 of our draft.
To detect contamination, benchmark creators can apply our statistical test **on any target model** to obtain evidence if the **target model's** training data included their benchmark. We highlight this requirement in Section 7 as a limitation, noting that **data must be carefully prepared before public release** to enable detection. This constraint is not unique to our approach but is shared by existing works [1,2] and we believe is fundamental for a sound statistical test [3]. > W3: Additionally, if all the benchmarks from the model's pretraining were "paraphrased" this could negatively affect performance, as the model might start learning the "watermarks" introduced by the paraphrases. We would like to point out that an important feature of our work is that it allows benchmark creators to _use different private hash keys to watermark the public version of each document in their collection_. Given that each dataset only constitutes a small fraction of overall training corpora, and with prior work [4] demonstrating that even training on data with just two different keys doesn't result in models learning individual watermarks, it is unlikely that the model would learn the watermark. Hopefully, this assuages your concern about potential watermark learning. Additionally, we run an experiment (in Section 4.2, under false-positive analysis) to detect membership of held-out samples that are watermarked with the same key as the contaminated samples, and find that our approach (correctly) does not detect them as part of the training corpus. **This experiment provides further evidence that models do not learn the watermarks introduced by the paraphrasers.** ### Re: Experimental Design > One potential issue is that the authors consider only continual pretraining... results for regular pretraining... might differ significantly. We acknowledge the reviewer's concern about the potential recency bias in our setup.
While previous research [5] has shown that training order has little effect on memorization, the training dynamics of memorization in LLMs remain an active area of research. While such factors can influence memorization, our contribution is in demonstrating that, for any given level of memorization, our test provides greater sensitivity than any other existing approach and is robust against false positives. --- Once again, we thank you for the insightful questions. We hope this response has addressed your concerns and would request you to kindly reconsider your overall assessment. Please let us know if we can address any further concerns. [1] Wei et al. Proving membership in LLM pretraining data via data watermarks. ACL 24 [2] Oren et al. Proving Test Set Contamination in Black-Box Language Models. ICLR 24 [3] Zhang et al. Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data. SaTML 25 [4] Gu et al. On the Learnability of Watermarks for Language Models. ICLR 24 [5] Biderman et al. Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling. ICML 2023
Summary: This paper presents STAMP, a framework that helps content creators detect whether their content (e.g., benchmark test sets, blog articles, research abstracts) has been used without authorization in the pretraining of large language models. The key idea is to release watermarked rephrasings of the content, which embed subtle signals via a known watermarking scheme. By generating multiple watermarked variants for each piece of content (one “public” version is released online, the rest are kept private), one can perform a paired statistical test on the target model’s perplexities. If a model’s likelihood of producing the “public” (known) watermark version is systematically higher than for private/unreleased variants, it strongly indicates that the model was trained on that public data. Crucially, the authors show that this approach outperforms prior membership-inference or contamination-detection methods, even when each dataset appears only once (at a minuscule proportion of the training tokens). They also confirm that STAMP’s watermarked versions do not degrade the usability (accuracy or ranking) of the benchmark or text. Claims And Evidence: 1. Claim: The authors can reliably detect dataset membership for text that was used once in a massive training set. Evidence: In controlled experiments on 1B-parameter “Pythia” LLM variants, with four benchmarks contaminated at <0.001% of tokens, STAMP still achieves low p-values (e.g. 10^-4 to 10^-6). 2. Claim: Watermarking-based rephrasings enhance memorization signals, thereby boosting detection sensitivity. Evidence: They compare watermarked vs. non-watermarked rephrasings and show that the watermarked versions yield significantly stronger results (two orders of magnitude in p-values). 3. Claim: STAMP preserves the “utility” of a dataset (e.g., a benchmark’s function as a measure of model performance). Evidence: They evaluate standard LLMs on both the original and watermarked benchmark copies. 
The absolute accuracy remains nearly the same, and crucially, the relative ranking of models is unchanged, demonstrating that the test’s transformations do not distort the difficulty or nature of the underlying tasks. 4. Claim: STAMP effectively avoids false positives. Evidence: When applying STAMP to models that never saw the watermarked dataset, the resulting p-values show no spurious membership detection. Similarly, watermarked hold-out sets from the same domain also do not register as “in the training data,” indicating the method is capturing genuine membership. Methods And Evaluation Criteria: Yes, the authors evaluated the effectiveness of STAMP by continuing pretraining of Pythia-1B on deliberately contaminated pretraining data. The evaluation process includes: p-values from the paired t-test, measuring how convincingly the model “prefers” the public watermarked text; AUROC for membership inference attacks (used as comparative baselines); and effect on utility: they check whether watermarked datasets still produce valid measures of model performance (preserving the rank order and approximate accuracy). Theoretical Claims: No new theoretical results are proposed in this paper. The paper mainly focuses on application and experimental results. Experimental Designs Or Analyses: The authors conduct two main sets of experiments: 1. Benchmarks: Inject 4 standard test sets (TriviaQA, ARC, MMLU, GSM8K) once into a ~6.7B token corpus for a 1B-parameter model (Pythia-1B). Even though each dataset is <0.001% of the training data, STAMP reliably detects contamination (p<1e-4 to 1e-6). 2. Case Studies: Paper abstracts (EMNLP 2024) and AI Snake Oil blog posts. They show that STAMP can also detect these real-world sets in the model’s training data. Supplementary Material: There are no supplementary materials submitted. The appendix includes related works, more experimental results and more details about the experimental setup.
Relation To Broader Scientific Literature: Data contamination is an important problem that ties into recent concerns of test-set contamination (particularly for LLMs) and how one can detect or mitigate it. There have been many recent studies focusing on this direction. Essential References Not Discussed: 1. Important baselines not discussed or compared: there have been many membership inference methods for contamination detection [1,2,3]. The authors only focused on the curated experiments in the paper; there are other available benchmarks in these papers, and the authors did not evaluate these methods on the data and models proposed in this paper. 2. There are other data contamination papers that conducted similar experiments; though from different perspectives, the methodology is quite similar [4,5]. Much recent literature is missing, beyond what is mentioned above. [1] Shi, Weijia, et al. "Detecting pretraining data from large language models." arXiv preprint arXiv:2310.16789 (2023). [2] Zhang, Jingyang, et al. "Min-k%++: Improved baseline for detecting pre-training data from large language models." arXiv preprint arXiv:2404.02936 (2024). [3] Zhang, Weichao, et al. "Pretraining data detection for large language models: A divergence-based calibration method." arXiv preprint arXiv:2409.14781 (2024). [4] Jiang, Minhao, et al. "Investigating data contamination for pre-training language models." arXiv preprint arXiv:2401.06059 (2024). [5] Yang, Shuo, et al. "Rethinking benchmark and contamination for language models with rephrased samples." arXiv preprint arXiv:2311.04850 (2023). Other Strengths And Weaknesses: Strengths: 1. The paper is easy to follow. 2. Applicable to real text of various lengths (from short question sets to multi-paragraph blog posts). 3. Achieves robust detection even with only a single copy of each example in the training set. Weaknesses: 1.
Relies on “grey-box” access, meaning you can query token probabilities from the suspect model (some commercial APIs may not allow direct logit or perplexity queries). As mentioned above, the experiments did not include comparisons with other white-box or grey-box methods, which raises real questions about the applicability of the proposed method. Other Comments Or Suggestions: 1. It might be helpful to test the method on even larger, more highly capable LLMs. The paper suggests it should generalize well, since bigger models memorize more. 2. How does the proposed method deal with the rephrased or intended contamination as mentioned in [1,2]? [1] Jiang, Minhao, et al. "Investigating data contamination for pre-training language models." arXiv preprint arXiv:2401.06059 (2024). [2] Yang, Shuo, et al. "Rethinking benchmark and contamination for language models with rephrased samples." arXiv preprint arXiv:2311.04850 (2023). Questions For Authors: See above Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
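As a reference point for the baselines this review cites: the Min-K% Prob score of Shi et al. [1] averages the log-probabilities of a sample's k% least likely tokens, on the intuition that a memorized sample has no surprisingly hard tokens. A minimal sketch (my own interface; the thresholding/AUROC evaluation around it is separate):

```python
def min_k_score(token_logprobs, k=0.2):
    # Mean log-prob of the k fraction of lowest-probability tokens;
    # higher (less negative) scores suggest membership, since even the
    # sample's "hardest" tokens look easy to the model.
    n = max(1, int(len(token_logprobs) * k))
    return sum(sorted(token_logprobs)[:n]) / n
```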
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments and feedback. We are happy to see that they appreciate the robust detectability that our method offers, and find the paper easy to follow. We discuss their concerns below: ### Re: Important baselines not discussed (ER1) Thanks for sharing these baselines. We would like to clarify that in our current submission, we already compare our approach against *min-k* [1] (along with other popular MIAs) in Section 4.2 (under baselines) and Table 7, and find our approach to outperform these baselines. Based on your suggestions, we conduct new experiments to benchmark additional MIAs [2,3] and share the detection performance (AUROC) under two settings of non-members: `same documents`, where we use different rephrasings of the same test samples, and `different documents`, where we use a held-out set of test samples. The AUROC scores $\approx 0.5$ show that such baselines are ineffective in determining membership.

| Dataset | **Min-k++** [2] (same documents) | **DC-PDD** [3] (same documents) | **Min-k++** [2] (different documents) | **DC-PDD** [3] (different documents) |
|---|---|---|---|---|
| TriviaQA | 0.50 | 0.52 | 0.44 | 0.58 |
| ARC-C | 0.49 | 0.51 | 0.45 | 0.52 |
| MMLU | 0.49 | 0.52 | 0.45 | 0.52 |
| GSM8k | 0.50 | 0.52 | 0.48 | 0.52 |

Overall our findings corroborate recent studies [6,7] that highlight the failure of such heuristic-based MIAs. Additionally, as highlighted by previous work [8], such heuristic MIAs do not provide a sound statistical proof of membership. Given these additional experiments, we hope that your major concern is addressed. ### Re: Other data contamination papers (ER2) We respectfully disagree with the assertion that our methodology is "quite similar" to the cited works. [5] primarily studies how inclusion of simple variations of test data (including rephrasings) can artificially inflate benchmark performance. Similarly, [4,5] highlight limitations of existing white-box decontamination approaches.
In contrast, our approach utilizes _watermarked rephrasings_ as a component of our proposed framework, which provides a principled statistical framework for dataset inference. We will update the draft to discuss and contrast our work with these related papers [4,5].

### Re: Weakness: “grey-box” access

We acknowledge your concern about our work requiring grey-box access to model probabilities (also discussed as a limitation in Section 7 of our draft). However, the majority of prior work on dataset inference assumes grey-box access, similar to our work. In the current landscape, where there is a lack of successful _black-box MIAs that provide a statistical proof_, alternative approaches can be used: probability extraction attacks [9] or trusted third parties such as legal arbiters. Importantly, as the field develops better black-box metrics, our statistical framework may be adapted to perform paired tests on those metrics instead of loss values.

### Re: Other Comments

> It might be helpful to test the method on even larger, more highly capable LLMs. The paper suggests it should generalize well, since bigger models memorize more.

We agree that testing on larger LLMs would be valuable. Existing research [10] shows that larger models memorize training data more aggressively. We present preliminary results comparing models of different sizes, maintaining a proportional token-to-parameter ratio:

| parameters ($P$) | tokens ($T$) | ARC-C | GSM8K | TriviaQA | MMLU |
|---|---|---|---|---|---|
| 410m | 400m | -16.5 | -28.5 | -16.7 | -8.4 |
| 1000m | 1000m | -21.7 | -39.8 | -27.8 | -16.8 |

The results ($\log(\text{p-value})$, where lower is better and anything below -3 is statistically significant) suggest that for a fixed $\frac{T}{P}$, our method performs better on larger models, supporting our hypothesis about improved detection with model scale.

> Dealing with rephrased or intentional contamination?

Thanks for the insightful question!
To clarify, our work focuses specifically on verbatim contamination, as our primary goal is to detect membership of a dataset with statistical guarantees. We will update Section 7 to explicitly discuss this point and clarify the scope of our study.

---

Once again, we thank the reviewer for the constructive feedback. We hope our response addresses your concerns effectively and kindly request that you reconsider your assessment of our work. Please let us know if we can address any other concerns.

[6] Duan et al. Do membership inference attacks work on large language models? COLM 24
[7] Maini et al. LLM Dataset Inference: Did you train on my dataset? NeurIPS 24
[8] Zhang et al. Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data. SaTML 25
[9] Morris et al. Language Model Inversion. ICLR 24
[10] Carlini et al. Quantifying memorization across neural language models. ICLR 23
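The paired statistical test on loss values that the rebuttal describes (suspect-member documents vs. their watermarked rephrasings, with $\log(\text{p-value}) < -3$ as the significance threshold) can be sketched roughly as follows. This is a hedged illustration only, not the authors' implementation: the function name is invented, and a one-sided paired z-test with a normal approximation stands in for whatever test the paper actually uses.

```python
import math
from statistics import mean, stdev

def log10_pvalue(member_losses, nonmember_losses):
    """One-sided paired z-test (normal approximation): are losses on the
    suspected member documents systematically lower than on their paired
    non-member rephrasings?  Returns log10 of the p-value; under the
    rebuttal's convention, values below -3 count as significant."""
    diffs = [nm - m for m, nm in zip(member_losses, nonmember_losses)]
    z = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
    p = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided upper-tail p-value
    return math.log10(max(p, 1e-300))      # guard against underflow to 0
```

For small sample sizes, an exact paired t-test (e.g. `scipy.stats.ttest_rel`) would be more appropriate than the normal approximation used here.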
OmniAudio: Generating Spatial Audio from 360-Degree Video
Accept (poster)
Summary: This work created Sphere360, a real-world dataset for realistic 3D audio reproduction. An efficient semi-automated pipeline for collecting and cleaning paired video-audio data is established. The challenges of the created task are clearly described. The demos are interesting. Code and datasets will be made publicly available.

Claims And Evidence: Yes. The dual-branch framework, which utilizes panoramic and FoV video inputs, is verified on the created Sphere360 dataset. The claimed contributions are well supported.

Methods And Evaluation Criteria: Yes. The created Sphere-Bench provides a credible benchmark.

Theoretical Claims: Yes. The usage of the equirectangular representation and the extraction of the FoV video follow standard projections.

Experimental Designs Or Analyses: Yes. The experimental results and comprehensive analyses provide evidence of the effectiveness of the proposed solution.

Supplementary Material: Yes. The distributions of the dataset are useful for better understanding the benchmark.

Relation To Broader Scientific Literature: Yes. The method is relevant to autonomous driving and augmented reality.

Essential References Not Discussed: No. The related work analysis is comprehensive.

Other Strengths And Weaknesses:
1. If possible, please consider conducting experiments with different FoV setups for comparison, to help identify the most suitable configuration of the framework depicted in Fig. 2.
2. Computational complexity results such as FLOPs/MACs, the number of parameters, and the training/inference time of the proposed method could be presented to help assess the efficiency of the proposed solution.
3. It would be nice to discuss some limitations of the presented work and point out future research directions.
4. In the related work section, it would be nice to also discuss research on panoramic video processing, such as panoramic semantic segmentation and panoramic generation.
Most of the concerns have been addressed in the rebuttal. The reviewer would like to maintain the positive rating.

Other Comments Or Suggestions: If possible, please consider conducting a user study to compare the quality of the generated spatial audio. The term "FOV video inputs" could be revised to "perspective video"; FOV means field of view, which can also describe panoramic data with a large FOV.

Questions For Authors: Would you consider combining your proposed method with more recent flow-matching strategies to help better understand the effectiveness of your solution?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
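As background for the "standard projections" the review mentions, the core of extracting a perspective (FOV) view from an equirectangular frame is mapping each viewing direction to longitude/latitude pixel coordinates. A minimal sketch under one common convention (sign and origin conventions vary between libraries; the function name is illustrative, not from the paper):

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a unit direction vector to equirectangular pixel coordinates.

    Longitude spans [-pi, pi] across the width and latitude
    [-pi/2, pi/2] across the height (one common convention; others
    differ in sign or origin).
    """
    lon = math.atan2(x, z)                    # yaw; (0, 0, 1) maps to center
    lat = math.asin(max(-1.0, min(1.0, y)))   # pitch, clamped for safety
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```

A perspective cut is then produced by casting one such ray per output pixel of the desired FOV and sampling the equirectangular image at the returned `(u, v)`.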
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our demo page and our dataset, and all your valuable feedback. We plan to open-source the codebase in April 2025 to facilitate community-driven improvements in this direction, and welcome the reviewer's specific recommendations on cutting-edge techniques worthy of prioritized exploration. We hope our response fully addresses your concerns and questions.

## Open-Source Sphere360 Dataset

Please check our **Response to Reviewer nf2N** under **Open-Source Sphere360 Dataset**.

## Updated Table 2 with Model Size, Inference Time, and TFLOPS

We sincerely apologize for the typos in Table 2 in the submission. Please check our **Response to Reviewer Xvna** under **Claims that AudioSpace achieves SOTA performance**. As shown in the updated Table 2, **AudioSpace indeed achieves state-of-the-art performance on both the in-domain Sphere360 and the out-of-domain YT360-Test.** We also added the number of parameters, inference time, and TFLOPS to the comparison. Although AudioSpace has slightly more parameters than MMAudio+AS, **it achieves notably faster inference than all baselines.**

## Extended V2A Validation

We further compared AudioSpace with baselines on the perspective video-to-stereo-audio (V2A) generation task on VGGSound. Under the same multimodal setting as MMAudio (i.e., + Text modality), **AudioSpace+Text notably outperforms SOTA MMAudio and sets new SOTA performance on the traditional V2A task**, in addition to the new 360V2SA task.

| Model | Params | FD | KL |
|---|---|---|---|
| Diff-Foley | 859M | 324.88 | 3.03 |
| Seeing-and-Hearing | 415M | 261.60 | 2.30 |
| Frieren | 159M | 80.69 | 2.83 |
| MMAudio | 1.0B | 43.26 | 1.56 |
| AudioSpace (ours) | 1.2B | 34.56 | 1.64 |
| **+ Text modality** | 1.2B | **33.24** | **1.40** |

## More Ablation Study on Dual-Branch Design

Thank you for the insightful suggestion for more FOV settings.
We attach more dual-branch experimental results in Response to Reviewer nf2N under [Combining Equirectangular Projection with Multiple FOV Cuts]. **They further demonstrate that our "Frontal + ERP" dual-branch design outperforms other dual-branch variants.** We will add these experimental results and analyses to the revised paper.

## Limitation and Future Work

Thank you for the kind reminder. Please refer to Appendix F for discussions of limitations and future work.

## Related Works on Panoramic Video Processing

We greatly appreciate your guidance in strengthening the contextualization of our work. In the revised paper, we will add discussions on advancements in panoramic video processing into Section 2 Related Work.

## User Study

In this work, we employ human evaluation based on the Mean Opinion Score (MOS) to quantitatively assess both spatial audio quality (MOS-SQ) and video-audio alignment faithfulness (MOS-AF), as detailed in Section 5.1. Additionally, we present a comprehensive case study in Section 5.3 to comparatively analyze the generated outputs of AudioSpace against baseline methods. Supplementary qualitative results are included in Appendix H. We will further expand it with additional case studies in the revised paper. To enhance reproducibility and community accessibility, we are planning to deploy an online interactive demo platform via Gradio and Hugging Face Spaces for real-time generation and evaluation.

## Usage of Terminology

Thank you for this suggestion. We agree that "perspective video" more accurately conveys the conventional narrow-FOV nature of the inputs compared to panoramic formats. In the revised paper, we will replace all instances of "FOV video" with "perspective video", and add a footnote in Section 3.1 clarifying that "perspective video" denotes standard rectilinear projections with ≤120° horizontal FOV, distinct from 360° equirectangular formats.
## The Integration of More Recent Flow-Matching Strategies

We sincerely appreciate the reviewer's constructive suggestion regarding flow-matching strategies. The proposed coarse-to-fine self-supervised pre-training and dual-branch design are agnostic to the flow-matching strategy. Our current framework achieves state-of-the-art performance while maintaining the fastest inference speed in both the 360V2SA and V2A tasks. While our experiments demonstrate that the existing configuration optimally balances accuracy and efficiency for the target scenarios, we fully agree that deeper integration of advanced flow-matching strategies could unlock further theoretical insights. We plan to open-source the codebase in April 2025 to facilitate community-driven improvements in this direction, and welcome the reviewer's specific recommendations on cutting-edge techniques worthy of prioritized exploration.

---

Rebuttal Comment 1.1: Comment: The reviewer would like to thank the authors for their rebuttal and responses. Many of the concerns have been addressed. The usage of terminology and more ablations on the dual-branch design and validations should be updated in the final version. The reviewer would like to maintain the positive rating of weak accept.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate your time, expertise, and constructive engagement throughout the review process. We are deeply grateful for your recognition of our efforts to address the concerns raised earlier. Your valuable feedback has been instrumental in significantly enhancing the quality of our work. Thank you very much for confirming that **"Many of the concerns have been addressed."** We will update the usage of terminology, the additional ablations on the dual-branch design and validations, and the other mentioned revisions in the revised paper. To further facilitate open research and community engagement, and to enhance the quality of our paper, we have completed the following works:

1. **Inference Code Release** We have open-sourced the inference code via the anonymous repository: https://anonymous.4open.science/r/Audiospace-1348/. Due to the model size (>10GB) and anonymity constraints, we are currently exploring feasible methods to share the pre-trained model weights. We will update the repository promptly once we resolve the issue of anonymously releasing large models. Additionally, we are actively organizing the training code and plan to release it by the end of April 2025.
2. **Enhanced Dataset Documentation** The Sphere360 dataset repository (https://anonymous.4open.science/r/Sphere360-CF51/) has been updated with enhanced documentation, including clearer usage guidelines and dataset structure descriptions.
3. **Manuscript Revisions** All suggestions and experimental results discussed during the rebuttal phase will be carefully incorporated into the revised manuscript to ensure a comprehensive presentation of our contributions.

**We are more than happy to provide further responses to any additional questions or concerns you might have before the author response deadline.** We respectfully ask if you would consider increasing your original Overall Recommendation of 3: Weak Accept, based on our responses and efforts to address all concerns and questions and enhance the paper, as we aim to fully address all feedback provided. Should any further clarifications or adjustments be needed, please feel free to share your concerns—we are fully committed to addressing them promptly.
Summary: This paper addresses a novel task called 360V2SA, which involves generating First-order Ambisonics (FOA) spatial audio from 360-degree videos. To tackle this challenge, the authors introduce a new dataset called Sphere360, containing more than 100k clips of real-world 360-degree videos paired with their FOA audio. They also propose AudioSpace, a dual-branch flow-matching framework that combines both panoramic and field-of-view (FOV) video representations. The model is first pre-trained in a self-supervised manner using both spatial (FOA) and non-spatial audio, and is then fine-tuned to generate high-quality spatial audio. Experimental results demonstrate that AudioSpace outperforms several baseline models.

## Update after rebuttal

After the discussion, I raised my rating since the authors addressed my concerns.

Claims And Evidence: The claims made by the authors are largely supported by the experiments on Sphere360 and YT360. They demonstrate meaningful improvements in various spatial audio metrics (like DoA errors) as well as in perceptual studies (MOS scores). The large scale of their new dataset and the multiple comparisons to baselines also strengthen their evidence.

Methods And Evaluation Criteria: The approach focuses on a flow-matching generative model combined with a two-stage training pipeline. They evaluate the generated audio via both objective metrics (e.g., FD, KL divergence, DoA angle errors) and subjective ratings (MOS-SQ for spatial audio quality and MOS-AF for video alignment). These metrics are highly relevant for audio generation tasks. The evaluation on both Sphere360 and an out-of-distribution set (YT360) is also a good indicator of generalization.

Theoretical Claims: I did not find major issues with the theoretical discussion -- it is mostly referencing known generative modeling frameworks.

Experimental Designs Or Analyses: The experiments are well-structured, with a thorough comparison to baselines.
The results consistently favor the proposed approach. However, I think more analysis would help clarify a few points:

1. The dual-branch approach is compared against single-branch variants (Section 5 and Table 4), but I would like more clarity on whether combining equirectangular with multiple FOV “cuts” (instead of just one) provides additional improvement.
2. The ratio between non-spatial data and FOA data for pre-training could be further detailed, helping readers understand how general audio tasks complement the specialized FOA domain.

Supplementary Material: I have reviewed all the content in the Appendix. I think a bit more detail on how future researchers can access and use the dataset would strengthen the transparency of this work.

Relation To Broader Scientific Literature: This paper advances video-to-audio generation by focusing on FOA audio from 360-degree sources, which has a meaningful place in multi-modal generative modeling.

Essential References Not Discussed: I think the references cited are comprehensive.

Other Strengths And Weaknesses: I like this paper's motivation and demo; here are some additional weaknesses:

1. I think adding more detail about the variety of ambient scenes or specific events in Sphere360 in the Appendix would help demonstrate coverage.
2. The authors briefly mention the dual-branch system (Section 4.3), but I found the exact architectural interplay between the local and global features a bit missing. A more detailed figure of how these features combine could help.
3. The authors should consider discussing the inference latency of this model, as it is an important factor for AR/VR applications.

Other Comments Or Suggestions: None.

Questions For Authors: See the content in the above sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for recognizing our motivation and demo, as well as acknowledging our experimental validations. We hope our response addresses all your concerns and questions.

## Updated Table 2 and Inference Latency

We sincerely apologize for the typos in Table 2 in the submission. Please check our **Response to Reviewer Xvna** under **Claims that AudioSpace achieves SOTA performance**. As shown in the updated Table 2, **AudioSpace indeed achieves state-of-the-art performance on both the in-domain Sphere360 and the out-of-domain YT360-Test.** We also added the number of parameters and inference time to the comparison. Although AudioSpace has slightly more parameters than SOTA MMAudio+AS, **it achieves notably faster inference than all baselines.**

## Combining Equirectangular Projection with Multiple FOV Cuts

We are grateful for your suggestion regarding the dual-branch approach and its comparison with dual-branch variants. Our dual-branch design employs a 120° frontal view to concentrate on prominent sound sources while integrating panoramic features as contextual conditioning to overcome the limitations of visual information beyond the central view. To elucidate the impact of combining equirectangular projection with multiple FOV cuts, we have assessed three FOV-cut strategies that replace the local FOV video: hexadirectional 360° cuts (ERP+6 cuts), which capture views from the front, back, left, right, up, and down directions; quadrant cuts (ERP+4 cuts), which capture views from the front, left, right, and back directions; and bipolar cuts (ERP+2 cuts), which capture views from the front and back directions.
| Dual-Branch Method | FD | KL | $\Delta_{abs}\theta$ | $\Delta_{abs}\phi$ | $\Delta_{Angular}$ |
|---|---|---|---|---|---|
| ERP+Front | **88.30** | **1.58** | **1.36** | 0.52 | 1.28 |
| ERP+2 FOV Cuts | 95.84 | 1.77 | 1.37 | 0.55 | 1.30 |
| ERP+4 FOV Cuts | 92.89 | 1.65 | 1.36 | **0.51** | 1.27 |
| ERP+6 FOV Cuts | 90.16 | 1.59 | 1.37 | 0.52 | **1.26** |

The results show that **our dual-branch strategy still achieves the best audio quality**, reflected in FD and KL, and **achieves similar spatial metrics compared to multiple FOV views**.

## Ratio between Non-Spatial Data and FOA Data for Pre-Training

The pre-training corpus consists of approximately 2M samples from general non-spatial audio datasets (detailed in Section 5.1) and 100K samples from the specialized Sphere360 FOA dataset, yielding a ratio of 20:1 between non-spatial and spatial audio data. Table 3 shows that coarse-to-fine pre-training substantially improves FD and KL compared with pre-training on non-spatial data only or FOA data only, confirming that general audio tasks effectively complement the specialized FOA domain.

## Open-Source Sphere360 Dataset

We fully agree that transparency and reproducibility are critical for the research community.

1. Open-Source Release: The Sphere360 dataset with metadata (including all `youtube_id` identifiers and timestamps) and the semi-automated construction pipeline are released at https://anonymous.4open.science/r/Sphere360-CF51/.
2. Legal-Compliant Data Access: Due to YouTube's Terms of Service restrictions, we cannot directly redistribute the original videos. Instead, we provide a version-controlled crawling script that reconstructs the raw dataset using the provided `youtube_id` list. This framework ensures that researchers can fully reconstruct our dataset while complying with content distribution policies. We will refine the documentation based on community feedback to ease the procedure.
## More Diverse Audio Events in the Appendix

Thank you for the valuable suggestion. We will expand the Appendix by incorporating additional examples of diverse audio events to better demonstrate the coverage and diversity of our dataset.

## More Detailed Figure for Dual-Branch Architecture

Thank you for highlighting this critical architectural detail. **We will revise Figure 10 to explicitly illustrate the dual-branch fusion process** with the following additions:

1. Local Feature Pathway (FOV-Audio Alignment)
   * Visual FOV patches are processed through a linear layer to align their dimensionality with the audio latents.
   * The adapted FOV features are directly combined with the audio latents via element-wise addition before entering the Diffusion Transformer, ensuring pixel-level spatial correspondence.
2. Global Context Pathway (360° Scene Guidance)
   * The full 360° video features are condensed into a global descriptor using max-pooling, capturing holistic scene semantics.
   * This global descriptor is fused with the diffusion timestep embedding through element-wise addition, providing consistent scene-level conditioning across all transformer layers.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns and including the additional experiments. From my perspective, the paper has made enough contributions, and I will raise my rating to Accept unless Reviewer Xvna identifies further major concerns (the initial concerns could be addressed after reading the rebuttal). Please include all new experiments in the revised version.

---

Reply to Comment 1.1.1: Comment: Thank you sincerely for your time, expertise, and constructive engagement throughout the review process. We are deeply grateful for your recognition of our efforts to address the concerns raised earlier, and for your decision to improve the overall recommendation of our paper. Your valuable feedback has been instrumental in significantly enhancing the quality of our work.
## Updates for Reproducibility and Quality

To further facilitate open research, community engagement, and enhance the quality of our paper, we have implemented three key improvements:

1. **Inference Code Release** We have open-sourced the inference code via the anonymous repository: https://anonymous.4open.science/r/Audiospace-1348/. Due to the model size (>10GB) and anonymity constraints, we are currently exploring feasible methods to share the pre-trained weights. We will update the repository promptly once resolved. Additionally, we are actively organizing the training code and plan to release it by the end of this month.
2. **Enhanced Dataset Documentation** The Sphere360 dataset repository (https://anonymous.4open.science/r/Sphere360-CF51/) has been updated with enhanced documentation, including clearer usage guidelines and dataset structure descriptions.
3. **Manuscript Revisions** All suggestions and experimental results discussed during the rebuttal phase will be carefully incorporated into the revised manuscript to ensure a comprehensive presentation of our contributions.

Should any further clarifications or adjustments be needed, please feel free to share your concerns—we are fully committed to addressing them promptly.
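The dual-branch fusion the rebuttal describes (a linear projection of FOV patches added element-wise to the audio latents, and max-pooled panoramic features added to the diffusion timestep embedding) can be sketched in plain Python. All names, shapes, and the toy `linear` layer below are illustrative assumptions for exposition, not the released implementation:

```python
def linear(x, weights, bias):
    """Toy dense layer: y = Wx + b (illustrative stand-in for the real one)."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def fuse_dual_branch(fov_patch, audio_latent, pano_feats, t_embed, W, b):
    """Sketch of the two conditioning pathways described in the rebuttal.

    Local pathway:  project the FOV patch to the audio-latent dimension,
                    then add it element-wise to the audio latent.
    Global pathway: max-pool the panoramic patch features into a single
                    descriptor, then add it to the timestep embedding.
    """
    local = [a + p for a, p in zip(audio_latent, linear(fov_patch, W, b))]
    pooled = [max(col) for col in zip(*pano_feats)]  # max over patches, per dim
    global_cond = [t + g for t, g in zip(t_embed, pooled)]
    return local, global_cond
```

In a real model both pathways would operate on batched tensors inside the Diffusion Transformer; the list arithmetic here only mirrors the element-wise additions the authors describe.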
Summary: This paper addresses the interesting problem of generating spatial audio from panoramic videos. The authors first propose a real-world dataset, Sphere360, of 360 videos and their spatial audio. They also propose an effective training strategy combining coarse-to-fine pre-training and dual-branch video encoding for spatial-aware generation. The proposed method, AudioSpace, achieves state-of-the-art performance on Sphere360-Bench.

Claims And Evidence:
* The high-quality demo videos show great performance in generating spatial audio from the 360 video.
* The quantitative results, e.g., Tab. 2, demonstrate significant performance improvement of the proposed method.

Methods And Evaluation Criteria: The evaluation is reasonable. The proposed method has two main stages: (1) a coarse-to-fine self-supervised flow-matching pre-training to alleviate the issue of data scarcity using both unlabeled spatial and non-spatial audio, and (2) fine-tuning the diffusion transformer by efficiently integrating the panoramic video representation.

Theoretical Claims: There are no proofs of theoretical claims.

Experimental Designs Or Analyses: The experiments and ablation study are well-designed. I want to see more visual comparisons for the ablation study.

Supplementary Material: Yes, the dataset collection pipeline and the details of the evaluation.

Relation To Broader Scientific Literature: Related to some research areas of computer graphics.

Essential References Not Discussed: None

Other Strengths And Weaknesses: Weaknesses:
* As the methodology consists of two training phases, are there relevant experiments analyzing the respective roles of these two phases and the relationship between them?
* I noticed that the demo videos are all single scenes; what would happen if the given 360 video had scene transitions? The authors should also discuss what the length limit of a 360 video should be.
Other Comments Or Suggestions: The motivation and quality of this paper are very good, and I would increase my score if the authors answered my questions adequately in the rebuttal.

Questions For Authors: Refer to "Other Strengths And Weaknesses".

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the strong motivation and exceptional quality of our paper. We hope our response thoroughly addresses your concerns and questions.

## Updated Table 2 and Inference Latency

We sincerely apologize for the typos in Table 2 in the submission. Please check our **Response to Reviewer Xvna** under **Claims that AudioSpace achieves SOTA performance**. As shown in the updated Table 2, **AudioSpace indeed achieves state-of-the-art performance on both the in-domain Sphere360 and the out-of-domain YT360-Test.** We also added the number of parameters and inference time to the comparison. Although AudioSpace has slightly more parameters than MMAudio+AS, **it achieves notably faster inference than all baselines.**

## Roles of Training Stages

AudioSpace includes a coarse-to-fine self-supervised pre-training stage (**Stage 1**) and a spatial-aware supervised fine-tuning stage (**Stage 2**).

### 1. Roles of Training Stages

**Stage 1**: Utilizes large-scale unlabeled audio datasets (~2M samples) to build foundational audio understanding capabilities, crucial for 360-degree video-to-spatial-audio generation. It focuses on **General Audio Distribution Learning** through masked audio context modeling, enhancing audio feature representation and temporal coherence.

**Stage 2**: Tailors the model for spatial audio generation from 360-degree video. Key aspects of Stage 2 are:

* **Modality Alignment**: Integrates video data for precise spatial audio generation.
* **Task-Specific Optimization**: Refines the model for the 360V2SA goal, balancing audio quality and spatial accuracy.

### 2. Validation on the 360V2SA Task

| Configuration | FD | KL |
|---|---|---|
| Stage 2 only (full data) | 104.57 | 1.83 |
| Stage 1+2 (full data) | **88.30** | **1.58** |
| Stage 1+2 (80% data) | 89.86 | 1.60 |
| Stage 1+2 (60% data) | 93.11 | 1.80 |
| Stage 1+2 (40% data) | 105.26 | 1.88 |

The table shows that after Stage 1, using 40% of the Stage 2 data matches the Stage 2 full-data baseline, 60% outperforms it, and 100% substantially exceeds it, in both FD and KL: Stage 1 leads to reduced reliance on labeled data and increased robustness.

### 3. Further Validation on the V2A Task

We validate the effects of the training stages on the traditional V2A task on Sphere360. Stage 1 reduces FD by 16.3% relative and KL by 19.2% relative, confirming its effectiveness in improving audio quality.

| Stage | FD | KL |
|---|---|---|
| Stage 2 | 41.30 | 2.03 |
| Stage 1+2 | **34.56** | **1.64** |

The two stages complement each other. Stage 1 builds a strong audio base, while Stage 2 adds modality-specific constraints, preventing overfitting to limited video-audio pairs. The tables highlight the necessity of both stages. Without pre-training, the model struggles with coherent audio, even with video guidance.

## Video Length Constraints

Our training uses fixed 10-second clips and rotary positional encoding rather than absolute positional embedding, allowing flexible inference lengths. However, the 10-second limit of the video datasets restricts the generated audio to under 10 seconds. The self-supervised pre-training helps with audio coherence, but reliability beyond 10 seconds is uncertain, due to potential degradation in modality alignment and feature consistency. Future research is needed to develop extended temporal modeling for long-form audio synthesis.

## Scene Transition Cases

Thank you for raising this interesting and insightful question. Our training and demo data mainly use fixed-camera 360° videos for immersive experiences.
To test AudioSpace’s robustness to scene transitions, we used 50 single-scene videos, paired them into 25 groups, and spliced 5-second segments into 10-second clips with abrupt transitions. **We added interactive spatial audio for these clips on the demo page - [Scene Transition Cases]**. Results show that in some examples, our model can generate transitions that are natural and seamless. It even integrates the content from both segments into a coherent whole—for instance, when the two segments showcase performances by different musical instruments, the synthesis can merge them into a balanced duet at times. However, because our training data predominantly consists of single-scene videos, the model occasionally emphasizes one segment over the other, sometimes overlooking the acoustic nuances of the less dominant scene.

## More Visual Comparisons for Ablation Studies

Due to the time limit of the first rebuttal phase, we will add more visual comparisons for the ablation study in the final response.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' responses, and I will keep my acceptance.

---

Reply to Comment 1.1.1: Comment: Thank you sincerely for your time, expertise, and constructive engagement throughout the review process. We deeply appreciate your recognition of our work, as well as your valuable feedback that has significantly improved the quality of this work.

## More Visual Ablation Comparisons

As acknowledged during the initial review phase, time constraints limited our inclusion of detailed ablation visualizations. In direct response to your feedback, we have now comprehensively updated the **[Demo Page - Ablation Study]** with systematic comparisons. These visualizations explicitly demonstrate the contributions of individual components in our framework, aligning with your suggestions for methodological transparency.
## Updates for Reproducibility and Quality

To further facilitate open research, community engagement, and enhance the quality of our paper, we have implemented three key improvements:

1. **Inference Code Release** We have open-sourced the inference code via the anonymous repository: https://anonymous.4open.science/r/Audiospace-1348/. Due to the model size (>10GB) and anonymity constraints, we are currently exploring feasible methods to share the pre-trained weights. We will update the repository promptly once resolved. Additionally, we are actively organizing the training code and plan to release it by the end of this month.
2. **Enhanced Dataset Documentation** The Sphere360 dataset repository (https://anonymous.4open.science/r/Sphere360-CF51/) has been updated with enhanced documentation, including clearer usage guidelines and dataset structure descriptions.
3. **Manuscript Revisions** All suggestions and experimental results discussed during the rebuttal phase will be carefully incorporated into the revised manuscript to ensure a comprehensive presentation of our contributions.

Should any further clarifications or adjustments be needed, please feel free to share your concerns—we are fully committed to addressing them promptly.
Summary: This paper proposes the task of generating spatial audio from 360-degree videos. To support this task, the authors construct a dataset named Sphere360, comprising curated real-world 360-degree videos collected from YouTube. Leveraging this dataset, the authors introduce the AudioSpace model, which employs self-supervised pretraining with both spatial and non-spatial datasets, along with a dual-branch architecture utilizing panoramic and field-of-view (FOV) video inputs. Given the preliminary nature of this task, several baseline models are established for comparison. Experimental results indicate that AudioSpace achieves promising outcomes in both objective and subjective evaluations. Claims And Evidence: - The claim that AudioSpace achieves state-of-the-art (SOTA) performance is questionable. The authors frequently emphasize their final model's results in bold throughout Tables 2, 3, and 4, even in cases of tied scores or superior performance by other models. This could mislead readers into perceiving AudioSpace as consistently superior across all metrics. - The introduced 360V2SA dataset represents a valuable contribution to the community, facilitating further research into 360-degree video-guided spatial audio generation. Methods And Evaluation Criteria: - In Section 3 (Data Cleaning), the authors state that videos with an audio-visual similarity score below 1 are discarded. Clarification is needed regarding this threshold. Specifically, what is the range of audio-visual similarity scores? If this score refers to cosine similarity, the threshold value provided may be incorrect or misleading. - During 360-degree video-guided fine-tuning, authors mention max-pooling 360 features to serve as a global condition. However, extracting only a single vector from the image encoder might not sufficiently represent motion information (e.g., a car moving from right to left). Additional explanation or justification of this method is necessary. 
- Are the outputs depicted in Figure 2(b) fed directly into the decoder of the spatial VAE? The inference of each component requires clearer explanation. - The selection method for the field of view (FOV) input in the visual encoder is not explained. Additionally, how should the FOV be determined during test time? Clarification on FOV selection criteria is necessary. Theoretical Claims: No theoretical claims are presented. Experimental Designs Or Analyses: - The comparison setup may not be entirely fair, as existing methods are trained exclusively on Sphere360, whereas AudioSpace undergoes initial training on FreeSound/AudioSet/VGGSound before Sphere360. Conducting comparisons with identical training datasets across all methods would better highlight the effectiveness of the proposed model. Additionally, the authors note in Table 3 that coarse training significantly contributes to performance; thus, applying similar training strategies to existing methods would clarify the specific impact of the proposed model design. - In Table 2, specifically for $\Delta_{abs}\phi$ in YT360-Test, AudioSpace is incorrectly marked in bold despite performing worse than ViSAGe. Similar inconsistencies appear in Table 3, where numbers ($\Delta_{angular}$) are incorrectly highlighted or tied results ($\Delta_{abs}\theta$) are unnecessarily emphasized for AudioSpace. Such inconsistencies may create misleading interpretations, suggesting the authors intentionally emphasize favorable results. Similar issues appear in Table 4. Supplementary Material: - The reviewer has examined the supplementary material. Relevant concerns and questions are detailed in other sections. Relation To Broader Scientific Literature: No explicit relation to broader scientific literature is identified. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See other sections. 
Other Comments Or Suggestions: - Line 158 references Table 8, which does not contain information about audio event distribution as suggested. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive feedback and detailed suggestions. We hope our response below fully resolves your concerns and questions.
## Claims that AudioSpace achieves SOTA performance
We sincerely apologize for the two typos in the Table 2 main results and for the typos and boldfacing errors in Tables 3 and 4. These were completely unintentional oversights on our part. **As shown in the updated Table 2 below, AudioSpace indeed achieves state-of-the-art performance on both the in-domain Sphere360 and the out-of-domain YT360-Test.**
### 1. Open-Source Commitment
Codebase and model weights will be open-sourced by April 2025. The Sphere360 dataset is already released at https://anonymous.4open.science/r/Sphere360-CF51/.
### 2. Updated Table 2 (Bold = Best, Highlight = fixed numbers)
Fixed typos in YT360-Test results. AudioSpace achieves the **best objective and subjective results and the fastest inference speed**:

|Model|FD|KL|$\Delta_{abs}\theta$|$\Delta_{abs}\phi$|$\Delta_{Angular}$|Params|Infer Time(s)|TFlops|
|---|---|---|---|---|---|---|---|---|
|Diff-Foley+AS|361.65|2.22|/|/|/|0.94B|2.40|9.11|
|MMAudio+AS|190.40|`1.71`|/|/|/|1.03B|3.01|7.18|
|ViSAGe(FOV)|199.09|1.86|2.21|0.88|1.99|0.36B|22.37|0.68|
|ViSAGe(360)|225.52|1.95|2.18|0.86|1.98|0.36B|22.37|0.68|
|AudioSpace|**92.57**|**1.64**|**1.27**|**`0.53`**|**1.27**|1.22B|**0.92**|21.15|

### 3. Extended V2A Validation
AudioSpace also excels in video-to-audio generation. Please see **Response to Reviewer cQFE** for results.
### 4. Fixed Table 3 and Table 4
AudioSpace results (row 1 in both tables) are correct and best in FD, KL, and most spatial metrics. Fixed Tables 3 and 4 are on the demo page. **Coarse-to-fine pre-training and the dual-branch design are superior to the alternatives**.
## Fairness of Experimental Comparisons
### 1. Fair Comparison Protocol
All baselines in Table 2 follow their original pre-training and are fine-tuned on Sphere360 with the same setup.
|Model|Training Data (Type/Scale)|Pretraining Strategy| |---|---|---| |ViSAGe|VGGSound/YT-360 (~300k pairs)|Video-guided init.| |MMAudio|VGGSound/WavCaps/AudioCaps (~1.1M pairs)|Text+video init.| |AudioSpace|FreeSound/AudioSet/VGGSound/Sphere360 (~2M audio samples)|**Audio-centric pretraining**| **Uniform pretraining is unsuitable as baselines need multimodal pairs (text/video+audio)**, while AudioSpace focuses on **audio diversity** through domain-specific pretraining. ### 2. Architectural Superiority: Table 3 **w/o PT** (trained from scratch on Sphere360) and Table 2 show structural advantages: **Without pre-training, AudioSpace surpasses all pretrained-finetuned baselines on Sphere360 in all objective metrics.** ## Threshold for audio-visual similarity score for data cleaning The threshold of 1 corresponds to the original cosine similarity score scaled by 10 for easier processing, which does not impact comparative results or conclusions. The threshold was chosen based on video-audio alignment control and data analysis. ## 360˚ Video Representation We appreciate the question about motion representation in ERP-formatted videos. Our dual-branch design addresses ERP limitations as follows: ### 1. Geometric Distortion in ERP ERP causes planar distortions and boundary discontinuities, conflicting with CLIP encoders’ planar continuity assumptions, leading to feature misalignment. Table 4 shows single-branch ERP-only model suffers from severe feature distortion (FD: 97.83 vs. our 88.30) and semantic inconsistency (KL: 1.87 vs. our 1.58), indicating naive planar encoding fails in spherical contexts. ### 2. Dual-Branch Design with Frontal Perspective ERP distorts global spatial relationships, but motion trajectories remain coherent within FOV. Our temporal modeling FoV branch isolates motion modeling to distortion-free frontal regions, ensuring stable motion encoding, as shown by Y-channel intensity variations in Fig.13. ### 3. 
Limitations and Future Directions
Objects entirely outside the frontal viewport may challenge our current approach, though they represent <5% of the training data in typical 360° videos. Future work will explore geometry-aware feature rectification to address ERP distortions at the polar regions.
## Clarification of Inference
During inference, the frontal-view segment of the panoramic video is used for local conditioning, and the full panoramic video for global conditioning. Both features guide the diffusion transformer's ODE solver (Euler method, CFG scale=5). The audio latents are decoded by the pre-trained spatial VAE decoder to produce the final spatial audio outputs.
## FOV selection criteria
During training and testing, we use the frontal-view large FOV (120°) as the default FOV video. This choice aligns with real-world conditions, where front-facing viewpoints capture the main sound sources.
## Typos
Thank you for your reminder. Line 158 should refer to Figure 8, not Table 8, for the audio event distribution. We will make all the fixes in the revised paper.
---
Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Most of my concerns are resolved and I will update my rating to weak accept.
---
Reply to Comment 1.1.1: Comment: Thank you sincerely for your time, expertise, and constructive engagement throughout the review process. We are deeply grateful for your recognition of our efforts to address the concerns raised earlier, and for your decision to improve the overall recommendation of our paper. Your valuable feedback has been instrumental in significantly enhancing the quality of our work.
## Updates for Reproducibility and Quality:
To further facilitate open research, community engagement, and enhance the quality of our paper, we have implemented three key improvements:
1. **Inference Code Release** We have open-sourced the inference code via the anonymous repository: https://anonymous.4open.science/r/Audiospace-1348/.
Due to the model size (>10GB) and anonymity constraints, we are currently exploring feasible methods to share the pre-trained weights. We will update the repository promptly once resolved. Additionally, we are actively organizing the training code and plan to release it by the end of this month. 2. **Enhanced Dataset Documentation** The Sphere360 dataset repository (https://anonymous.4open.science/r/Sphere360-CF51/) has been updated with enhanced documentation, including clearer usage guidelines and dataset structure descriptions. 3. **Manuscript Revisions** All suggestions and experimental results discussed during the rebuttal phase will be carefully incorporated into the revised manuscript to ensure a comprehensive presentation of our contributions. Should any further clarifications or adjustments be needed, please feel free to share your concerns—we are fully committed to addressing them promptly.
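As a further illustration of the data-cleaning rule discussed earlier in this thread (cosine similarity between audio and visual embeddings, scaled by 10, with clips scoring below 1 discarded), here is a minimal sketch; the toy embeddings are stand-ins for our actual audio/visual encoder outputs:

```python
import numpy as np

def av_similarity(audio_emb, video_emb):
    """Cosine similarity between audio and video embeddings, scaled by 10."""
    a = audio_emb / np.linalg.norm(audio_emb)
    v = video_emb / np.linalg.norm(video_emb)
    return 10.0 * float(a @ v)

def keep_clip(audio_emb, video_emb, threshold=1.0):
    """Discard clips whose scaled similarity falls below the threshold."""
    return av_similarity(audio_emb, video_emb) >= threshold

# toy embeddings: a well-aligned pair vs. an orthogonal (mismatched) pair
aligned = keep_clip(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
mismatch = keep_clip(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(aligned, mismatch)  # True False
```

Because cosine similarity lies in [-1, 1], the scaled score lies in [-10, 10], so a threshold of 1 corresponds to an unscaled cosine similarity of 0.1.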
Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning
Accept (poster)
Summary: This paper proposes a few-shot learning framework based on a spiking neural network (SNN), combining a self-feature extraction module and a cross-feature comparison module to optimize feature representation and reduce energy consumption. The paper enhances the generalization and noise resistance of the model by combining the time-efficient training loss (TET Loss) and the InfoNCE loss. Experimental results show that the framework significantly improves classification performance on the neuromorphic dataset N-Omniglot, and achieves performance comparable to that of an artificial neural network (ANN) on the static datasets CUB and miniImageNet, while maintaining low energy consumption. The main contribution of the paper is to propose a new SNN framework that can effectively extract spatiotemporal features in few-shot learning, and to verify its superiority through experiments. Claims And Evidence: The claims of the paper are fully supported by the experimental results. The authors conducted extensive experiments on multiple datasets to demonstrate the effectiveness of the proposed method. In particular, on the N-Omniglot dataset, the model achieved an accuracy of 98.9% in the 5-way 5-shot task, significantly outperforming existing SNN methods. In addition, the paper also provides detailed ablation experiments to verify the effectiveness of the self-feature extraction module and the cross-feature comparison module. The experimental design and result analysis are reasonable, and the data support the main claims of the paper. Methods And Evaluation Criteria: The method proposed in the paper is reasonable for few-shot learning tasks, especially for the spatiotemporal feature extraction problem of SNNs. The authors used multiple standard datasets (such as N-Omniglot, CUB and miniImageNet) for evaluation. These datasets are widely used in the field of few-shot learning and can effectively verify the performance of the model.
In addition, the paper also demonstrates the energy-consumption advantages of SNNs through explicit energy estimates, further proving the practicality of the method. Theoretical Claims: Not applicable. This paper does not involve complex theoretical proofs. Experimental Designs Or Analyses: The experimental design is generally reasonable. The authors verified the performance of the model on multiple datasets and conducted ablation experiments to demonstrate the contribution of each module. However, the experimental part lacks robustness testing of the model on different kinds of datasets, such as event-based neuromorphic datasets, which is an important consideration in practical applications. Supplementary Material: No supplementary material is provided. Relation To Broader Scientific Literature: The work in this paper focuses on few-shot learning with SNNs. The innovation of this paper is to design an SNN with hybrid feature-extraction modules, which further improves the performance of SNNs in few-shot learning. Essential References Not Discussed: This paper cites a large amount of relevant literature, but could include more recent methods. Other Strengths And Weaknesses: The main advantage of this paper is that it proposes a new SNN framework that can effectively extract spatiotemporal features in few-shot learning tasks, and verifies its superior performance and energy efficiency through experiments. However, the experimental part lacks a robustness analysis of the model on different datasets, such as event-based neuromorphic datasets, which is an important consideration in practical applications. Other Comments Or Suggestions: 1. Could the network structure of the SNNs be improved to further enhance performance? 2. The experiments show that the time step T affects model performance on the neuromorphic dataset N-Omniglot, but the impact of the time step T on the static datasets was not shown.
Questions For Authors: 1. Are there plans to further test the robustness of the model under different dataset noise levels? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful suggestions. **Q1:The experimental part lacks the robustness analysis of the model under different datasets such as event-based neuromorphic datasets.** R1:We are very grateful for this valuable suggestion. We supplemented the performance of the model on different datasets and different noise levels to further verify the generalization performance and robustness of the model. The experimental results are shown as follows: | Dataset | Noise | Loss | Acc | |------------|-------|---------|------| | **N-Omniglot** | 0.0 | CE | 93.8 | | | | InfoNCE | 94.3 | | | 0.4 | CE | 84.2 | | | | InfoNCE | 86.5 | | | 0.8 | CE | 75.6 | | | | InfoNCE | 79.4 | | **CUB** | 0.0 | CE | 74.1 | | | | InfoNCE | 77.4 | | | 0.4 | CE | 59.8 | | | | InfoNCE | 65.7 | | | 0.8 | CE | 36.5 | | | | InfoNCE | 43.6 | **Q2:Whether the network structure of SNNs could be improved to further enhance the performance?** R2:We sincerely appreciate the reviewer's constructive suggestion. Indeed, we agree that the network architecture of spiking neural networks (SNNs) holds significant potential for further performance enhancement. While our current model has achieved state-of-the-art performance on the neuromorphic N-Omniglot dataset, we recognize several promising directions for future architectural improvements. These include exploring spiking residual connections with learnable skip weights to address the vanishing gradient problem in deep SNNs while maintaining event-driven sparsity, as well as developing sparse event-driven attention modules and spike-optimized Transformer structures that operate exclusively on active spikes. Such innovations could not only improve model performance but also further reduce energy consumption, thereby advancing the development of efficient and biologically plausible neuromorphic systems. We plan to thoroughly investigate these directions in our future work. 
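For reference, here is a minimal numpy sketch of the InfoNCE objective used in the noise-robustness comparison above; batch size, temperature, and embedding dimensions are illustrative, and the paper's exact formulation may differ in details:

```python
import numpy as np

def info_nce(queries, keys, temperature=0.1):
    """InfoNCE loss: each query's positive key is the one at the same index;
    all other keys in the batch act as negatives."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = (q @ k.T) / temperature             # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # -log p(positive), batch-averaged

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
noisy = emb + 0.05 * rng.normal(size=emb.shape)          # slightly perturbed positives
loss_matched = info_nce(emb, noisy)                      # low loss
loss_random = info_nce(emb, rng.normal(size=(8, 16)))    # unrelated keys, higher loss
print(loss_matched < loss_random)  # True
```

Pulling matched pairs together while pushing apart in-batch negatives is what gives the InfoNCE-trained model its edge over plain CE under input noise in the table above.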
**Q3: The effect of the time step T on the results on the static datasets is not shown.**
R3: We are very grateful for your valuable suggestions. We have added experiments on the static datasets CUB and *mini*ImageNet with time steps of 2, 4, and 8 to show the effect of the time step T on static datasets.

| Dataset | T | acc |
|---------------|---|-------------|
| **CUB** | 2 | 71.43±0.48 |
| | 4 | 73.27±0.47 |
| | 8 | 76.42±0.46 |
| ***mini*ImageNet** | 2 | 60.26±0.45 |
| | 4 | 60.43±0.46 |
| | 8 | 60.75±0.57 |

---
Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. All my concerns have been resolved, and the experimental results with different noise levels and time steps further enhance the quality of the paper.
Summary: This paper proposes a few-shot learning framework based on spiking neural networks (SNNs), which combines a self-feature extraction module and a cross-feature comparison module to significantly improve classification performance and reduce energy consumption. This method is innovative in using SNNs for efficient few-shot learning, and experimental results show that it outperforms existing methods on multiple datasets. Claims And Evidence: The high efficiency and low energy consumption of the proposed SSCF model in few-shot learning are supported by detailed experimental results, especially its excellent performance on datasets such as N-Omniglot, CUB-200-2011, and miniImageNet, showing significant performance improvement and energy-consumption reduction. Methods And Evaluation Criteria: The SSCF model and its evaluation criteria are reasonable and effective for the few-shot learning problem. By combining the self-feature extraction module and the cross-feature comparison module, the model significantly improves its feature representation capability, which is particularly suitable for efficient learning in resource-constrained environments. Experiments on both dynamic and static benchmark datasets demonstrate its superior performance across different scenarios. At the same time, the training strategy combining TET Loss and InfoNCE Loss further enhances the generalization ability and robustness of the model, giving the method high practical value for few-shot learning problems in real applications. Theoretical Claims: This paper has no theoretical proof. Experimental Designs Or Analyses: The experimental design and analysis in this paper are reasonable and effective as a whole. However, the selection of the optimal λ coefficient lacks sufficient experimental basis, which weakens the rigor of parts of the experimental design.
It is recommended to further supplement relevant analysis to improve the credibility of the experiment. Supplementary Material: No supplementary material is provided for this paper. Relation To Broader Scientific Literature: The key contribution of this paper is to propose an SNN framework that combines self-feature extraction and cross-feature comparison, which significantly improves the classification performance and energy efficiency in few-shot learning tasks. This method innovates based on existing research, especially compared with traditional ANN methods, showing stronger generalization ability and lower energy consumption. It provides new directions and ideas for future research, especially in efficient learning in resource-constrained environments. Essential References Not Discussed: The key contribution of this paper is mainly focused on the comparison with existing few-shot learning methods, but some important related works are not mentioned, such as [1][2]. [1] Zhan Q, Wang B, Jiang A, et al. A two-stage spiking meta-learning method for few-shot classification[J]. Knowledge-Based Systems, 2024, 284: 111220. [2] Yu X, Fang Y, Liu Z, et al. Few-shot learning on graphs: from meta-learning to pre-training and prompting[J]. arXiv preprint arXiv:2402.01440, 2024. Other Strengths And Weaknesses: Strengths: 1. This paper introduces a new SNN-based framework that integrates SFE and CFC modules to enhance feature representation and classification accuracy. Ablation experiments further verify the importance of these modules in improving performance. The combination of TET loss and InfoNCE loss enhances temporal dynamics and feature discrimination, making the framework more robust to noisy data. In addition, this paper has a clear structure. 2. The method proposed in the paper shows impressive results on different datasets and is even comparable to SOTA ANN methods. Weakness: 1. 
The choice of the optimal λ coefficient lacks a detailed experimental basis or theoretical explanation. 2. There are not enough visualizations in the article to better intuitively feel the effectiveness of the model. 3. The experimental part lacks a comparison with the latest SNN meta-learning methods. Other Comments Or Suggestions: 1. Experiments should be set up at different time steps on static datasets to further verify the effectiveness of the model. 2. The proposed method should be compared with the latest SNN meta-learning methods, such as [1]. 3. There is an incorrect question mark in the first sentence of section 3.1, and the author should check the whole text carefully to avoid spelling errors. [1] Zhan Q, Wang B, Jiang A, et al. A two-stage spiking meta-learning method for few-shot classification[J]. Knowledge-Based Systems, 2024, 284: 111220. Questions For Authors: 1. Could the proposed model be applied to larger datasets? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful suggestions.
**Q1: The choice of the optimal λ coefficient lacks a detailed experimental basis or theoretical explanation.**
R1: We appreciate this insightful feedback. We supplement the detailed experimental basis from three aspects:
1. Empirical evidence (existing results): Table 4 already shows that λ=0.7 yields peak accuracy on CUB/*mini*ImageNet.
2. Extended ablation studies will demonstrate that λ>0.5 consistently outperforms pure contrastive learning (λ=0) by +6.2% accuracy, and that λ<0.9 avoids overfitting to temporal features (test loss ↓15%).
3. Biological plausibility: λ≈0.7 aligns with hippocampal learning studies: 70% local plasticity (TET) + 30% global modulation (InfoNCE).

**Q2: There are not enough visualizations in the article to better intuitively feel the effectiveness of the model.**
R2: We sincerely appreciate the reviewer's valuable suggestion regarding the need for more visualizations to better demonstrate our model's effectiveness. We acknowledge that the lack of visual elements in our original submission may have reduced the paper's intuitive appeal. In response to this constructive feedback, we have enhanced our revision by incorporating spike-based visualizations that specifically illustrate the activity patterns within our model's encoder component. These visualizations clearly demonstrate how increasing time steps lead to progressive activation of features through spike emissions, effectively capturing richer representations. While we regret that we cannot share these visualization results during the rebuttal phase, we are confident they will significantly improve readers' understanding of our model's dynamic feature extraction capabilities in the final version. This addition will make the model's effectiveness more tangible and visually apparent to the research community.
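To make the role of λ concrete, below is a minimal sketch of a λ-weighted objective of the form L = λ·L_TET + (1−λ)·L_InfoNCE, with the TET term implemented as the cross-entropy averaged over timesteps; shapes and helper names are illustrative, and the paper defines the exact losses:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def tet_loss(logits_per_step, labels):
    """TET-style loss: average the cross-entropy over all T timesteps
    instead of classifying only a time-averaged output."""
    return np.mean([cross_entropy(l, labels) for l in logits_per_step])

def combined_loss(logits_per_step, labels, loss_nce, lam=0.7):
    """lam weights the temporal (TET) term against the contrastive term."""
    return lam * tet_loss(logits_per_step, labels) + (1 - lam) * loss_nce

# toy data: T timesteps of (batch, classes) logits, plus a precomputed InfoNCE value
rng = np.random.default_rng(0)
T, B, C = 4, 8, 5
logits = [rng.normal(size=(B, C)) for _ in range(T)]
labels = rng.integers(0, C, size=B)
print(round(combined_loss(logits, labels, loss_nce=2.0), 3))
```

Setting lam=1 recovers the pure TET objective and lam=0 the pure contrastive one, which is exactly the axis swept in the λ ablation.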
**Q3: Experiments should be set up at different time steps on static datasets to further verify the effectiveness of the model.**
R3: We are very grateful for your valuable suggestions. We have added experiments on the static datasets CUB and *mini*ImageNet with time steps of 2, 4, and 8 to show the effect of the time step T on static datasets.

| Dataset | T | acc |
|---------------|---|-------------|
| **CUB** | 2 | 71.43±0.48 |
| | 4 | 73.27±0.47 |
| | 8 | 76.42±0.46 |
| ***mini*ImageNet** | 2 | 60.26±0.45 |
| | 4 | 60.43±0.46 |
| | 8 | 60.75±0.57 |

**Q4: Essential References Not Discussed.**
R4: In our revision, we will incorporate a dedicated discussion comparing two key approaches: (1) Zhan et al.'s two-stage spiking meta-learning method [1], which introduces a novel bio-inspired framework combining spike-timing-dependent plasticity with meta-learning for few-shot classification, and (2) Yu et al.'s graph-based few-shot learning approach [2], which proposes a unified framework bridging meta-learning, pre-training and prompting techniques for graph-structured data. While Zhan's work focuses on biologically plausible SNN architectures and Yu's emphasizes graph neural networks, our method distinguishes itself by developing an event-driven, temporally-aware architecture that achieves superior computational efficiency while maintaining competitive accuracy. These comparisons will help better position our contributions within the broader landscape of few-shot learning research. **Q5: Could the proposed model be applied to larger datasets?** R5: We sincerely appreciate this forward-looking question, which aligns perfectly with our research vision. Indeed, our architecture is designed to be extensible to larger datasets, as demonstrated by its robust performance on medium-scale benchmarks like N-Omniglot (1,623 classes) and miniImageNet (100 classes). The model's strong backbone and efficient spike-based processing make it particularly suitable for scaling up.
In our ongoing and future work, we plan to systematically evaluate the model's performance on larger-scale datasets such as tieredImageNet, which will further validate its scalability and generalization capabilities while maintaining computational efficiency. This extension represents a natural and important direction for our research. We have also carefully proofread the manuscript and corrected the spelling and citation errors. Thank you again for your valuable suggestions. **Reference** [1] A two-stage spiking meta-learning method for few-shot classification[J]. Knowledge-Based Systems, 2024. \ [2] Few-shot learning on graphs: from meta-learning to pre-training and prompting, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. My concerns have been addressed. I have read other reviews and would like to raise my score. I hope you can incorporate my suggestions in the final version and conduct further experiments in the future.
Summary: The paper proposes a few-shot learning framework based on SNNs, which combines a self-feature extractor module and a cross-feature contrastive module to refine feature representation and reduce power consumption. Claims And Evidence: Please refer to Other Strengths and Weaknesses. Methods And Evaluation Criteria: It makes sense in general Theoretical Claims: There is a lack of theoretical analysis. Experimental Designs Or Analyses: Most experiments were checked. Please refer to Other Strengths and Weaknesses. Supplementary Material: No Supplementary Material Relation To Broader Scientific Literature: The paper proposes a few-shot learning framework based on SNNs. In its experimental settings, the method proposed in the paper has achieved competitive results (May not be the SOTA). Essential References Not Discussed: The methods from the past two years are not limited to the following two papers. Comparing recent works with the results of this paper under the same experimental settings and assumptions is recommended. [1] Brain-Inspired Meta-Learning for Few-Shot Bearing Fault Diagnosis, TNNLS 2024 [2] An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain, AAAI 2023 Other Strengths And Weaknesses: 1. The paper mainly states how to design the method, but the innovation in the SNN architecture is limited. The framework combines previous works in general, and there is a lack of design motivation and theoretical analysis. 2. Regarding the comparison of energy efficiency results(Table 7), for the SNN, only the result of a certain FPGA from four years ago (80 GOPS/W) is presented. If the latest data of devices like the TPU are considered, generally, they can reach 4 TOPS/W. Calculated in this way, the energy consumption advantage of the SNN may not be that significant. It is recommended to conduct the comparison using the latest energy consumption models 3. 
Continuing from the previous point: given the insufficient evidence in the energy-efficiency experiments, note also that the proposed method has not achieved state-of-the-art results on the CUB dataset. 4. There is a lack of comparison with methods from recent years on the N-Omniglot dataset. The methods from the past two years are not limited to the following two papers. Comparing recent works with the results of this paper under the same experimental settings and assumptions is recommended. [1] Brain-Inspired Meta-Learning for Few-Shot Bearing Fault Diagnosis, TNNLS 2024 [2] An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain, AAAI 2023 Other Comments Or Suggestions: There are some typos, e.g., line 121. Questions For Authors: Please refer to Other Strengths and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the insightful feedback, which has helped us clarify our contributions and identify areas for improvement. Below, we address each concern point-by-point: **Q1: Design motivation and theoretical analysis.** R1: Our paper introduces a novel framework, SSCF (Spiking Self-Cross Feature Network), which advances SNN-based few-shot learning through two key innovations: (1) a Self-Feature Extractor that captures intra-class temporal correlations, overcoming SNNs' spatiotemporal feature extraction limitations, and (2) a Cross-Feature Contrastive module employing 4D convolutions and joint attention to optimize support-query alignment. Unlike existing SNN-FSL works focusing on spike-time encoding, our framework uniquely integrates self-correlation analysis with cross-attention mechanisms, offering more comprehensive feature learning while maintaining SNNs' energy efficiency advantages. **Q2: About the latest energy-consumption model comparison.** R2: Thank you for highlighting this important aspect. Due to hardware constraints in our lab, we unfortunately do not have access to a TPU yet. The experiments in our paper are conducted on an NVIDIA 4060 Ti, with a performance of 22 TFLOPS. That is why we employ the energy consumption estimation based on the value 1.732mJ, which is consistent with [1] Efficient Event-based Semantic Segmentation with Spike-driven Lightweight Transformer-based Networks, 2024. Here, we further supplement a new energy consumption estimate by referring to the recent energy model in [2] Spiking-physformer: camera-based remote photoplethysmography with parallel spike-driven transformer, 2025.
SOPs = *fr* × T × FLOPs
Power(ANN) = 4.6 pJ × FLOPs
Power(SNN) = 0.9 pJ × SOPs

| Method | SOPs | FLOPs | Energy |
|------------|--------|--------|----------|
| ReNet (ANN) | - | 4.84G | 22.264 mJ |
| SSCF (ours) | 1.39G | 0.13G | 1.849 mJ |

**Q3: Insufficient evidence for the energy efficiency experiment; CUB results below SOTA.**
R3: Thanks for your suggestion. The significant energy reduction stems from SNNs' event-driven, sparse processing, aligning with neuromorphic principles. While performance is slightly below SOTA (-3.22% in 5-way 1-shot), our method bridges SNNs and ANNs, offering a low-power alternative with minimal accuracy loss, which is a key contribution for resource-constrained applications. Since SNNs have strong spatiotemporal dynamics thanks to the spike-based neuron computation model, they have advantages in processing event-based neuromorphic datasets; our experiments verify that the proposed SNN few-shot learning method retains and even strengthens these advantages. On event datasets (e.g., N-Omniglot), our model achieves 84.4% (20-way 1-shot), a +20.3% improvement over [3], leveraging SNNs' spatiotemporal dynamics for event data. Moreover, on *mini*ImageNet, it outperforms [4] by +7.3%, demonstrating competitive performance. We therefore emphasize that our model achieves competitive performance across these datasets.
**Q4: Comparison with methods from recent years on the N-Omniglot dataset.**
R4: First of all, thank you very much for the two references you provided. [5] (Knowledge Transfer for SNNs, AAAI 2024) explores static-to-event domain adaptation, which is orthogonal to our work; their experiments are conducted on event-based datasets, and our performance on N-Omniglot surpasses theirs. [6] (Brain-Inspired Meta-Learning, TNNLS 2024) focuses on bearing fault diagnosis; since the bearing-failure data used in that paper is not public, it is difficult to compare with it under the same experimental setting.
Instead, we add a discussion of this study in the related work section.

| Dataset | Method | Backbone | Task | Acc |
|------------|---------------------|----------|--------------|------|
| **N-Omniglot** | plain | SCNN | 20-way 1-shot | 63.4 |
| | Knowledge-Transfer [2] | SCNN | | 64.1 |
| | SSCF (ours) | SCNN | | 67.8 |
| | SSCF (ours) | VGGSNN | | 84.4 |

We have carefully revised and proofread the spelling and citation errors. Thank you again for your valuable suggestions.
**Reference**
[1] Efficient Event-based Semantic Segmentation with Spike-driven Lightweight Transformer-based Networks, 2024.
[2] Spiking-physformer: camera-based remote photoplethysmography with parallel spike-driven transformer. Neural Networks, 2025.
[3] An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain, AAAI 2024.
[4] A two-stage spiking meta-learning method for few-shot classification. Knowledge-Based Systems, 2024.
[5] An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain, AAAI 2024.
[6] Brain-Inspired Meta-Learning for Few-Shot Bearing Fault Diagnosis, TNNLS 2024.
---
Rebuttal Comment 1.1: Comment: Thank you for the responses, which have addressed most of my concerns. I have re-rated the paper to "weak accept".
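As a sanity check, the energy figures in the R2 answer above can be reproduced with a short calculation. This is an illustrative sketch, not the authors' code; it assumes the SSCF total combines spike-op energy (0.9 pJ/SOP) with the energy of the remaining dense FLOPs (4.6 pJ/FLOP), which is consistent with the reported 1.849 mJ.

```python
# Per-operation energy constants quoted in the rebuttal.
E_MAC_ANN = 4.6e-12  # joules per FLOP (ANN multiply-accumulate)
E_AC_SNN = 0.9e-12   # joules per synaptic operation (SNN accumulate)

def ann_energy_mJ(flops):
    """ANN energy: every FLOP costs a full MAC."""
    return E_MAC_ANN * flops * 1e3

def snn_energy_mJ(sops, flops):
    """SNN energy: sparse spike ops plus the remaining dense FLOPs.
    (Assumption: the table's SSCF entry sums both terms.)"""
    return (E_AC_SNN * sops + E_MAC_ANN * flops) * 1e3

print(ann_energy_mJ(4.84e9))          # matches the table's 22.264 mJ for ReNet (ANN)
print(snn_energy_mJ(1.39e9, 0.13e9))  # matches the table's 1.849 mJ for SSCF
```

With the table's operation counts, both entries are recovered exactly, so the roughly 12x energy gap follows directly from the per-operation constants and sparsity.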
Summary: This paper focuses on leveraging spiking neural networks (SNNs) for few-shot learning (FSL) to enhance generalization ability and energy efficiency. The proposed method combines a self-feature extractor module and a cross-feature contrastive module to refine feature representation and reduce power consumption. Experimental results show that the proposed method improves classification performance with low power consumption.
Claims And Evidence: The majority of the claims presented in the submission are backed by clear evidence, though certain aspects are problematic:
(1) Generalization ability. The paper claims that the model shows strong generalization capabilities; however, the paper only evaluates it on three datasets, leaving it unclear how effectively the model would generalize to other, more diverse datasets, such as tieredImageNet, CIFAR, and Meta-Dataset.
(2) Relevance to FSL. The proposed method does not address the challenges faced in few-shot learning, and its relevance to few-shot learning tasks is limited. It seems this is a general method not specific to FSL.
Methods And Evaluation Criteria: The benchmark is designed for few-shot learning, but the proposed method has little relevance to few-shot learning.
Theoretical Claims: The theoretical claims about the model and losses are correct. However, the proposed method is not novel; it merely employs two simple modules on an SNN. While these modules enhance performance, there is, firstly, no reasonable theoretical analysis explaining why they are effective for few-shot tasks. Secondly, these modules bear a strong resemblance to self-attention and cross-attention mechanisms, although applied within a convolutional network framework.
Experimental Designs Or Analyses:
(1) Dataset. The selected datasets are commonly used in few-shot learning, but three datasets alone are insufficient to validate the generalization capability and robustness of the proposed method.
(2) Performance Evaluation. Although comparisons are made with some SNN-FSL and SOTA few-shot learning methods, the methods being compared are outdated, rendering the comparison meaningless. There is no comparison with the latest and highest-performing methods. Even when compared, the proposed method does not surpass SOTA.
(3) Ablation Studies. The ablation experiments validate the effectiveness of the two modules, but only two datasets are tested, with the third dataset remaining unverified.
Supplementary Material: This paper has no supplementary material.
Relation To Broader Scientific Literature:
(1) Relevance to Few-Shot Learning. This paper aims to address the challenges faced in few-shot learning and serves as a supplement to few-shot learning research. However, the paper lacks an analysis of why the proposed method can tackle the challenges of few-shot learning, and its connection to few-shot learning is not clearly demonstrated.
(2) Relevance to Spiking Neural Networks. This paper highlights that SNNs exhibit potential in few-shot learning due to their event-driven nature and low energy consumption. Existing research, such as Efficient Online Few-Shot Learning and the SOEL system, has utilized SNNs for few-shot learning, addressing some issues but still facing challenges like capturing features effectively and performing cross-class comparisons. Building on these studies, this paper proposes an SNN framework that integrates a self-feature extractor module and a cross-feature comparison module to improve feature representation and reduce energy consumption, thereby enhancing the performance of SNNs in few-shot learning. This work serves as a supplement to existing SNN-based few-shot learning research. However, the proposed method does not resolve the issues mentioned in the paper, such as the lack of cross-class feature comparison and classification.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Weaknesses:
(1) The method proposed in the paper is not novel; it simply adds two simple attention-like modules to an SNN.
(2) The proposed method does not address the challenges faced by existing SNN-FSL approaches mentioned in the paper, such as the struggle to effectively capture features from the complex spatiotemporal characteristics of inputs and the lack of cross-class feature comparison and classification.
(3) The proposed method does not address the issue of data scarcity faced in few-shot learning.
(4) The methods being compared are too outdated. It is necessary to include comparisons with the latest SOTA methods.
(5) The paper claims that the proposed method improves generalization, yet the experiments lack evidence to substantiate this claim.
(6) The method diagrams are extremely blurry and contain significant amounts of blank space.
(7) There are spelling errors and blank citations (Line 018, Line 121).
Other Comments Or Suggestions:
(1) Please provide additional analyses and experiments to support your claims.
(2) Please correct the spelling and formatting errors.
Questions For Authors: Please refer to the comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the insightful feedback, which has helped us clarify our contributions and identify areas for improvement.
**Q1: How does the method tackle the challenges of SNN few-shot learning? Does it simply add two simple attention-like modules?**
R1: We appreciate this critique and clarify that our key innovations go beyond simple "attention-like" modules and transcend conventional attention mechanisms: the Self-Feature Extractor enhances intra-class representation through relational feature integration ("observe yourself for what"), and the Cross-Feature Contrastive module dynamically models support-query relationships via joint attention maps. Together, they synergistically improve SNN few-shot learning through dual optimization of self-representation and cross-class comparison, not mere attention replication.
**Q2: Cross-class comparison?**
R2: Our method enables cross-class comparison via two specialized modules: a bottleneck-structured self-feature extractor capturing intra-class patterns while boosting inter-class discrimination, and a 4D-convolution based cross-feature comparator generating attention maps for explicit query-support relationship modeling. This SNN-compatible design combines event-driven LIF dynamics with advanced feature extraction, achieving 98.9% accuracy (5-way 5-shot on N-Omniglot) with clear feature separation (verified via t-SNE).
**Q3: Data scarcity.**
R3: Data scarcity drives few-shot learning research, motivating our development of SNN-based solutions that leverage meta-learning and metric learning to extract meaningful patterns from limited samples while maintaining generalization capability.
Our systematic validation demonstrates significant improvements through comprehensive benchmarking against state-of-the-art SNN few-shot methods (see the R4 results) and the synergistic operation of our novel self-feature extraction and cross-feature comparison modules, which collectively enhance few-shot learning performance in SNNs while preserving their energy-efficient characteristics.
**Q4: Comparison to other recent SNN few-shot learning methods.**
R4: Thanks for your suggestion. We have added some of the latest SNN few-shot learning methods for comparison; the performance comparison is shown as follows:

| Dataset | Method | Backbone | Task | Acc |
|------------|---------------------|----------|--------------|------|
| **N-Omniglot** | plain [1] | SCNN | 20-way 1-shot | 63.4 |
| | Knowledge-Transfer [2] | SCNN | | 64.1 |
| | SSCF (ours) | SCNN | | 67.8 |
| | SSCF (ours) | VGGSNN | | 84.4 |

| Dataset | Method | Backbone | Task | Acc |
|------------------|-------------|------------------------|--------------|------|
| *mini*ImageNet | OWOML [3] | SNN-ResNet-12 | 5-way 1-shot | 45.2 |
| | CESM [4] | SNN-WideResNet-28-10 | | 51.8 |
| | MESM [4] | SNN-WideResNet-28-10 | | 53.6 |
| | SSCF (ours) | VGGSNN | | 60.9 |

**Q5: The generalization ability of the proposed model.**
R5: Thanks for your suggestion. Here we validate the performance of our model on the N-Omniglot, CUB, and *mini*ImageNet datasets, which are commonly used validation settings, as in [5-7]. These datasets are recognized benchmarks in the FSL field, covering neuromorphic data, fine-grained classification, and general object classification scenarios, respectively. Meanwhile, to further verify the generalization performance and robustness of the model, we supplement the model's performance under different datasets and different noise levels as Table 1 in our response to Reviewer 4. Thanks for your feedback.
We have re-uploaded high-resolution images in the revision and have carefully revised and proofread spelling and citation errors. We are sorry that we cannot upload figures during the rebuttal phase, but we have added more method details to the method diagram to make it clearer. Thank you again for your valuable suggestions.
**Reference**
[1] N-Omniglot, a large-scale neuromorphic dataset for spatio-temporal sparse few-shot learning, 2022.
[2] An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain, AAAI 2024.
[3] Fast on-device adaptation for spiking neural networks via online-within-online meta-learning, IEEE 2021.
[4] A two-stage spiking meta-learning method for few-shot classification. Knowledge-Based Systems, 2024.
[5] Spatial-aware Metric Network via Patch-wise Feature Alignment for Few-shot Learning. IEEE 2025.
[6] Hard-Positive Prototypical Networks for Few-shot Classification, IEEE 2025.
[7] EMNet: A Novel Few-Shot Image Classification Model with Enhanced Self-Correlation Attention and Multi-Branch Joint Module, 2025.
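For context on the event-driven LIF dynamics referenced in R2, here is a minimal discrete-time leaky integrate-and-fire neuron update. This is an illustrative sketch: the decay factor and threshold below are made-up values, not the paper's hyperparameters.

```python
def lif_step(v, current, decay=0.5, threshold=1.0):
    """One discrete-time LIF update: leak, integrate, fire, hard reset.
    decay and threshold are illustrative, not the paper's settings."""
    v = decay * v + current            # leaky integration of input current
    spike = 1 if v >= threshold else 0 # fire when membrane potential crosses threshold
    if spike:
        v = 0.0                        # hard reset after firing
    return v, spike

# Drive one neuron with a constant input and collect its spike train.
v, spikes = 0.0, []
for _ in range(8):
    v, s = lif_step(v, current=0.6)
    spikes.append(s)
print(spikes)  # -> [0, 0, 1, 0, 0, 1, 0, 0]
```

The binary spike train is what makes SNN computation event-driven: energy is spent only on accumulate operations at spike times, which underlies the energy comparison in the first rebuttal.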
Active Treatment Effect Estimation via Limited Samples
Accept (poster)
Summary: The paper proposes a new active learning strategy for experimental design and an accompanying ATE estimator, derived from the literature on estimators with finite-sample guarantees. This is especially relevant to cases where experimental sampling must be constrained due to cost or other concerns. Additionally, the paper briefly discusses an extension to scenarios where SUTVA is violated.
Claims And Evidence: The key contributions stated in the introduction are all supported by clear evidence.
Methods And Evaluation Criteria: The proposed methods make sense for the problem.
Theoretical Claims: I only briefly reviewed the proofs, and do not have any immediate concerns about correctness. The proof of Lemma 4.4 borrows notation from Ghadiri et al. without defining it or referencing that paper, making it hard to follow without previously reading the other paper. In particular, $2y = t + Z \bigodot \mu$. Also, it does not appear that $\mu$ is formally defined in the main body of the text, but it is presumably consistent with the definition from Ghadiri et al.
Experimental Designs Or Analyses: For the primary proposed algorithm, the experimental section was sound and the baselines were consistent with other papers in this area. However, there is no experimental section for the SUTVA-relaxed CGAS method. This does not impact the primary argument of the paper, but perhaps hints that the Section 6 material is better saved for an expanded sequel with more analysis.
One suggestion: The difference in sample complexity between the proposed RWAS method and the previously proposed RAHT method is primarily a function of the feature space size $d$, so I suspect further experiments investigating empirical performance differences as $d$ varies may be interesting. Further reinforcing this, in Table 2, the Twins dataset shows the largest difference between RWAS and RAHT and also happens to have the largest feature space.
Supplementary Material: I briefly reviewed the appendix, which is primarily comprised of proofs. I have no immediate concerns.
Relation To Broader Scientific Literature: The paper adapts the proposed method from Ghadiri et al. to include a more efficient sampling strategy, derived from the more theoretical work of Chen and Price. The proposed estimator shows improved sample efficiency both theoretically and empirically. Within the much broader literature, experiments with limited sample sizes are quite prevalent due to a number of reasons, including high costs or ethical concerns. The proposed methodology seems like a potentially valuable solution to this common scenario.
Essential References Not Discussed: Nothing notable is omitted to my knowledge.
Other Strengths And Weaknesses: The paper does a good job of presenting the problem and discussing the solution. It was relatively easy to follow their arguments and evaluation. There are some minor notational issues, and while Section 6 appears to be quite valuable, it does not appear to have the space it deserves, including empirical validation.
Other Comments Or Suggestions: I think there may be notational issues with the index $j$ in Definition 4.1. Initially we extract the sample with index $r(i) = j$, so presumably $X_{r(i)} = X_j$, which is used to create $P_{ij}$. In a following sentence where $A$ is defined, $j$ now denotes the $j$th coordinate of $X_i$, and we confusingly have $X_{r(i),j}$ despite previously defining $r(i) = j$ a few sentences prior.
In the Section 6 header, 'assumption' is misspelled as assupmtion.
Questions For Authors: In Table 1, the sample complexity for Addanki et al. is stated as $O(d \log(d) + d/\epsilon)$, which is the sample complexity for leverage sampling. However, only the proposed ITE estimator in that paper relies on leverage sampling.
Their proposed ATE estimator instead recursively partitions the data using the GSW algorithm from Harshaw et al., so presumably the sample complexity would be different. Am I mistaken?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your advice! Here is our response, point by point:
> **Q1: The proof of Lemma 4.4 borrows notation from Ghadiri et al. without defining it or referencing that paper, making it hard to follow without previously reading the other paper. In particular, $2 y=t+Z \bigodot \mu$. Also, it does not appear that $\mu$ is formally defined in the main body of the text, but is presumably consistent with the definition from Ghadiri et al.**

Thanks for the careful review and wonderful comment! Here $\mu$ denotes the vector whose $i$-th entry is $Y_i(0)+Y_i(1)$, and $\bigodot$ denotes the Hadamard product. We agree that, even though we had previously stated that these local notations were adopted from Ghadiri et al., it is important to make this explicit again in the main text or appendix. We have revised the manuscript accordingly with our utmost care, and it will be presented with full rigor and clarity in the final camera-ready version.
> **Q2: Add network-interference experiments.**

Thanks for your advice. First, we have conducted large-scale experiments across a range of values for both $d$ and $n$, as shown in `RW kg3L`. Second, we have fully implemented your suggestion and carried out additional experiments under settings with network interference. Specifically, we followed both the simulation setup and the real-world dataset configuration used in the work by Lu et al. [1], replicating their settings in full. We refer the reader to the experimental results at https://anonymous.4open.science/r/Causal-Active-Sample-B0C5/README.md.
> **Q3: Experiments focusing on $d$.**

Refer to `RW kg3L`.
> **Q4: Notation issue.**

Thanks for your comment! We will fix this notational issue by choosing another symbol to denote the row of the vector or matrix, to avoid potential confusion.
>**Q5: The proposed ATE estimator in Addanki et al.
instead recursively partitions the data using the GSW algorithm from Harshaw et al., so presumably the sample complexity would be different.**

Thanks for your comment. The sample complexity for Addanki et al. is $O(d\log(d)+d/\epsilon)$ for ITE estimation; for ATE estimation, in their excellent paper, the complexity can be controlled as follows: "ATE estimation. For ATE estimation we give a randomized algorithm that selects at most $s$ individuals for treatment/control assignment and obtains an error of $\widetilde{O}\left(\sigma / \sqrt{s}+\left(\left\|\boldsymbol{\beta}^1\right\|+\left\|\boldsymbol{\beta}^0\right\|\right) / s\right)$." We will revise this in the final version. Thanks for your wonderful comment!
----
[1] Adjusting auxiliary variables under approximate neighborhood interference, X Lu, Y Wang, Z Zhang, arXiv preprint arXiv:2411.19789.
Summary: Experimental design for estimating treatment effects does not generally have strong finite-sample guarantees, especially as the dimensionality of the covariates grows. Recent works implement experimental design based on leverage scores. This work proposes an alternative approach called IRD, which helps achieve a sample complexity for the estimation error that is linear in the covariate dimensionality. The method is validated with a variety of standard semi-synthetic experiments.
**Update after rebuttal**: after considering the additional results provided, I have decided to increase my score.
Claims And Evidence: The contribution is clear and the method is sufficiently demonstrated on a variety of standard benchmarks. It would be helpful to see results that specifically highlight the finite-sample guarantee as a function of covariate dimensionality. Without this, there is little intuition on the supposed performance benefits.
Methods And Evaluation Criteria: The core novelties of this method, like the adaptation of IRD, are never fully described. Section 4.1 gives some intuition and a broad overview of the goals for the proposed method, but the steps in Algorithms 1 and 2 are not justified in detail. In my view, the authors need to put more effort into clearly explaining the motivation behind the new algorithm, beyond referencing recent works that they are building upon.
Theoretical Claims: I did not check the lengthy appendices that contain all of the proofs. However, I took a brief look and there are no obvious errors.
Experimental Designs Or Analyses: The paper presents numerous standard benchmarks in treatment-effect estimation like IHDP and the Boston dataset, for different subsample sizes.
Supplementary Material: I took a quick look over the proofs that comprise most of the supplementary material.
Relation To Broader Scientific Literature: The contributions are clearly stated in relation to recent works like those of Ghadiri et al. and Harshaw et al.
Essential References Not Discussed: References are sufficient.
Other Strengths And Weaknesses: The problem of active sampling for treatment-effect estimation with high-dimensional covariates is clearly significant. The solution appears to have clear benefits over other recent works. It would be very helpful to better describe the method so that readers can understand the key contributions.
Other Comments Or Suggestions: Some of the language is inappropriate for a venue like ICML. For instance, in the beginning of Section 4, the authors use exaggerated adjectives like "outstanding" and "superior" without reference to specific claims or objectives.
Questions For Authors:
1. Specifically what role do partitioning and subsampling play in the proposed method?
2. Does this method easily extend to multiple treatments?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your review and comments! Here are our responses to all of your questions.
> **Q1: It would be helpful to see results as a function of covariate dimensionality.**

Thanks for your suggestion! We provide the following supplementary experiments on the performance differences as $d$ varies, with $n=1000$ samples. The advantage of our RWAS algorithm over the traditional CRA/GSW methods is especially apparent when the sample dimension $d$ is large, as a result of the optimal sample complexity of the active sampling process.

| d | HT | Hajek | CRA | GSW | RAHT | SC | 4-Vectors | RWAS (Ours) |
|-------|------------|------------|-------------|-------------|-------------|-------------|-------------|---------------|
| 10 | 1.98 (0.78) | 1.27 (0.53) | 0.98 (0.40) | **0.93 (0.38)** | 1.08 (0.47) | 1.13 (0.65) | 1.15 (0.53) | 1.08 (0.46) |
| 20 | 3.67 (0.88) | 2.88 (0.57) | **2.10 (0.49)** | 2.28 (0.70) | 2.41 (0.59) | 2.50 (0.73) | 2.46 (0.62) | 2.19 (0.55) |
| 50 | 1.14 (0.74) | 1.03 (0.50) | 1.02 (0.39) | 0.80 (0.64) | 0.80 (0.56) | 0.80 (0.68) | 0.92 (0.61) | **0.71 (0.47)** |
| 100 | 1.82 (0.95) | 1.62 (0.86) | 2.22 (0.53) | 1.78 (0.87) | 1.53 (0.83) | 1.73 (0.90) | 1.80 (0.85) | **1.51 (0.82)** |

> **Q2: Further description of the core novelties of this method, such as the adaptation of IRD.**

Thanks for your concern. Please see the response to `RW PucS`.
> **Q3: Revision of wording such as "outstanding".**

Thank you for this helpful note. In response, we have carefully revised the wording at the beginning of Section 4 to remove exaggerated adjectives such as "outstanding" and "superior". We now use more neutral, objective language and anchor our claims directly to specific empirical results and theoretical guarantees. This revision aims to improve clarity and expression throughout the manuscript. We sincerely appreciate your feedback on this matter.
>**Q4: How to clarify the role of partitioning & subsampling?**

Thanks for your concern!
The overall goal of partitioning and subsampling is to obtain efficient causal effect estimators at limited sample-collection and computation cost, with the two techniques focusing on different components.
* Subsampling finds the most representative samples in terms of the covariate distribution, by selecting and up-weighting "influential" directions in the data.
* Partitioning seeks well-balanced assignments and provides unbiased effect estimators with controllable expected error.

In Algorithm 1, partitioning and subsampling are deployed on two disjoint subsets of the full sample: one applies GSW with regression adjustment, and the other applies IRD. Through weighting and averaging, we obtain an unbiased estimate with a total sample size of $O(d)$. The key idea is to improve the MSE through well-balanced assignments and representative samples. As stated in line 65, two types of techniques are applied to achieve these goals: partitioning and subsampling. The IRD algorithm selects the representative samples to ensure a controllable estimation error, while the GSW design achieves balanced assignments by partitioning the samples into treatment and control groups. This motivates the following line of reasoning:
* We can use $O(1)$ samples for the GSW regression-adjusted estimator.
* However, learning the regression coefficient requires $O(d)$ samples.
* Therefore, we divide the data into two disjoint sets, one applying GSW with regression adjustment and the other applying IRD, using the $O(d)$ IRD samples to learn the regression coefficient for the first set.
* Each of the two sets yields an estimator, and after weighting and averaging, we obtain an unbiased estimate with a total sample size of $O(d)$.

Note that using either set alone as an estimator may be biased, because we actively select the individuals whose covariates $X$ have good representational properties for the estimation.
This group is not an independent random sample drawn from the full population.
> **Q5: Does this method easily extend to multiple treatments?**

Thanks for your question. It is not difficult to propose a solution for a multiple-treatment extension. For example, an estimator of the treatment effect between two treatment levels $Z=z_1$ and $Z=z_2$ can be constructed through our method by restricting treatment allocation to $\{z_1,z_2\}$, as long as the positivity assumption $P(Z_i=z)>0$ holds for the targeted levels. On the other hand, it may be possible to compare different treatment pairs through one sampling process, or to combine multiple processes to further increase efficiency, which requires further research. This might be highly non-trivial due to the martingale-based proof underlying the GSW design.
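To make the Horvitz-Thompson building block in the Q4 answer concrete, the toy check below enumerates all Bernoulli(1/2) assignments for four hypothetical units and confirms that the HT estimator is exactly unbiased for the ATE. The potential outcomes are illustrative values, not data from the paper's experiments.

```python
from itertools import product

# Illustrative potential outcomes (Y_i(0), Y_i(1)) for n = 4 units.
Y0 = [1.0, 2.0, 0.5, 3.0]
Y1 = [2.0, 2.5, 1.5, 5.0]
n = len(Y0)
true_ate = sum(y1 - y0 for y0, y1 in zip(Y0, Y1)) / n

def ht(z):
    """Horvitz-Thompson ATE estimator under Bernoulli(1/2) assignment:
    inverse-propensity weighting with p_i = 1/2 for every unit."""
    return sum(zi * y1 / 0.5 - (1 - zi) * y0 / 0.5
               for zi, y0, y1 in zip(z, Y0, Y1)) / n

# Average over all 2^n equally likely assignments: exact unbiasedness.
mean_ht = sum(ht(z) for z in product([0, 1], repeat=n)) / 2 ** n
print(mean_ht, true_ate)  # both equal 1.125
```

Unbiasedness holds by design; the regression adjustment and balanced GSW assignment described above serve only to shrink the estimator's variance, not to remove bias.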
Summary: The authors considered the problem of estimating the causal effect in an active sampling framework. In particular, they proposed a method called RWAS which attains a sample complexity of $O(d/\epsilon)$ to achieve an $\epsilon$-approximation error, where $d$ is the number of covariates. Moreover, they also provided a lower bound, showing the optimality of the proposed method up to a constant. They performed experiments on synthetic and real datasets showing that the proposed method has better performance compared to baseline methods.
### Update after rebuttal
I checked the proof of the lower bound given in the rebuttal and it looks correct to me. I suggest adding these explanations to the paper. I adjusted my score. However, I still believe that the paper is not well written. Therefore, I cannot recommend a clear acceptance.
Claims And Evidence: The paper is a theoretical work and proofs are provided for the theorems and lemmas in the paper. However, the paper is not well written and it is hard to follow some parts of it. For instance, some terms or definitions are not defined. Moreover, it is often assumed that the reader is completely knowledgeable about all aspects of the problem. The authors should give more explanation and context for the proposed method.
Methods And Evaluation Criteria: The main metric for comparing the methods is the number of sample queries needed to achieve an $\epsilon$-approximation error, which is a reasonable evaluation criterion.
Theoretical Claims: Unfortunately, I only had time to check the proof of Lemma 4.2.
Experimental Designs Or Analyses: I did not check the code, but I read the experiment section. Based on the text, it seems that the experimental results are sound and the authors also provided some explanations for the plots.
Supplementary Material: I just checked Appendix A regarding the proof of Lemma 4.2.
Relation To Broader Scientific Literature: Identifying causal effects is one of the main goals in many areas of empirical science. One of the main approaches to causal effect identification is to perform experiments. This paper considers active sampling in experiment design in order to provide order-optimal methods for estimating the causal effect.
Essential References Not Discussed: I am not an expert in the specific area of active sampling for causal inference. Therefore, I am not sure whether any important reference is missing.
Other Strengths And Weaknesses:
Strengths:
- The proposed method provides an order-optimal method for active sampling to estimate the causal effect. For this purpose, the authors also give a lower bound on the number of queries any algorithm needs to achieve an $\epsilon$-approximation error.
- The experimental results showed that the proposed method has better performance compared to previous work on most real datasets.
Weakness:
- Some parts of the paper are not well written and some notations or terms are used without being defined, which makes it hard to read the paper or validate the proofs.
Other Comments Or Suggestions: I suggest revising the whole paper and making sure that all the terms are defined. Moreover, please provide a general description of the proposed method first and then describe the details of each part. In the current version, for instance, it is hard to understand how Algorithm 1 works, such as what the purpose of the GSW design is. As another example, the term HT is used before stating that it refers to the Horvitz-Thompson estimator.
Questions For Authors:
- What are the technical novelties compared to Chen & Price (2019)? It seems that the results are an adaptation of the results in Chen & Price (2019) to the current setting. The explanation of the proposed method in lines 134-142 is vague.
For instance, it is not clear what is meant by "This approach serves as a refinement of Chen & Price (2019), in which we transfer the strategy (Definition 5 in Chen & Price (2019)) to the design-based setting."
- Please explain Algorithm 1 line by line and explain the intuition behind each part.
- Lines 149-157 are not written well. For instance, "We say it is a “good $\epsilon$-reweighting sampling strategy” if (i) Defining a matrix". Moreover, what is the second condition (ii) here?
- Regarding the statement of Theorem 5.1, first please fix "$\forall$ any" to "For any". Moreover, the statement is a little odd: it says that for any algorithm whose output $\hat{\tau}$ satisfies $|\hat{\tau}-\tau|\leq 0.1$ with probability 0.75, the number of sample queries is at least $2.86d/\epsilon$, but $|\hat{\tau}-\tau|\leq 0.1$ does not depend on $\epsilon$; it is only required to be less than 0.1.
- In line 287, what is the notation "*"? In line, what is the notation $\tilde{\mathcal{N}}(i)$?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable questions! We address your questions point by point in the following.

> **Q1: However, the authors should provide more explanation or context about the proposed method.**

1. **Insightful motivation**. **Motivating example**. Suppose $X$ is $n \times 2$, and we want a subset to approximate $X^\top X$ for regression. Random sampling may cluster points along a single direction, leading to rank deficiency or poor conditioning. In contrast, Algorithm 2 adapts sampling probabilities to promote diversity: it increases the chance of selecting points in underrepresented directions by updating $B_i$. This ensures that new samples help span missing dimensions. After a few rounds, the selected subset captures both primary axes and preserves the geometry. To avoid over-representing outliers, constraints on $\alpha_i$ limit how much influence any point can have. As a result, the spectrum of the reweighted matrix $A^\top A$ remains close to that of the full dataset (the identity matrix after normalisation). We avoid redundancy while preserving rare but critical directions, achieving near-optimal regression performance with far fewer, well-chosen samples.
2. **Detailed interpretation of main algorithm**

| **Step** | **Description** |
|---|---|
| Line 1 | Assign the Bernoulli trial to the cases selected by IRD. |
| Lines 2–4 | The IRD Algorithm (Alg. 2) is deployed to select a subset $S$ representative of the covariate distribution of the full sample and to derive the estimate $\hat{\beta}^{act}$ of the adjustment parameter. |
| Line 5 | Compute the Horvitz-Thompson estimator on $S$ following the Bernoulli trial assigned in line 1. |
| Lines 6–7 | Randomly sample a subset from the complement of $S$ and deploy the GSW design to derive a balanced assignment. Adjust the HT estimator to improve efficiency. |
| Line 8 | Weight the causal effect estimators on the representative set $S$ and the set $\bar{S}_{m'}$ to yield the combined estimator using well-balanced assignments and well-representative samples. |

For another line of motivation, kindly refer to `RW RneH`, **Q4**.

3. **Additional comparison**
* **Technical novelty (i)**: First, we refine the previous result in Chen & Price (2019) to adapt to a finite-sample setting. It resorts to a refinement of the parameter in the $\epsilon$-reweighting strategy, compared to that of Chen & Price (2019), Definition 2.1. Intuitively, consider the OLS estimator regressing $Y$ on $X$ for arbitrary weights $w_i$. We have $E[w_i (\hat{Y}_i^{ols} - Y_i)] = 0$, so the expectation of the noise term vanishes. However, in finite samples, such a sum may deviate from zero, necessitating further refinement of the weighting parameters.
* **Technical novelty (ii)**: It brings significant challenges since, compared with the setting in Chen & Price (2019), each individual's outcome is not fixed as $Y_i$; instead, the outcome is randomly chosen as $Y_i(0)$ or $Y_i(1)$, jumping between the real world and the counterfactual world.

> **Q2: Technical novelties compared to Chen & Price (2019)?**

Kindly see **Q1-Additional comparison**.

> **Q3: Explain Algorithm 1.**

Kindly see **Q1-Detailed interpretation of main algorithm**.

> **Q4: The writing of lines 149-157, and the meaning of Condition (ii).**

The second condition (ii) is intended to impose an upper bound on the total weighting factors $\alpha_i$. Specifically, this condition ensures that the cumulative weights are controlled, preventing any single sample or a small group of samples from excessively dominating the overall estimator. To clarify the mathematical intuition, we provide a rewritten version of the paragraph. Briefly, it includes (i) a spectral approximation condition, (ii) a bounded cumulative weight condition, and (iii) a single-point influence condition.
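To make these three conditions concrete, here is a minimal numerical sketch of the corresponding diagnostics. The functional forms below are an illustrative reading of the conditions, not the precise definitions in the paper:

```python
import numpy as np

def reweighting_diagnostics(X, alpha):
    """Compute the quantities behind the three conditions above.

    X     : (n, d) covariates, normalized so that X^T X = I_d
            (the "identity matrix after normalisation" convention).
    alpha : (n,) nonnegative per-sample weights (0 => not selected).

    Returns (spectral_err, total_weight, max_influence):
      (i)   spectral_err  = ||sum_i alpha_i x_i x_i^T - I||_2
            (spectral approximation error),
      (ii)  total_weight  = sum_i alpha_i (to be bounded above),
      (iii) max_influence = max_i alpha_i ||x_i||^2
            (limits any single point's leverage).
    """
    n, d = X.shape
    M = (alpha[:, None] * X).T @ X              # sum_i alpha_i x_i x_i^T
    spectral_err = np.linalg.norm(M - np.eye(d), 2)
    total_weight = alpha.sum()
    max_influence = np.max(alpha * np.einsum("ij,ij->i", X, X))
    return spectral_err, total_weight, max_influence

# Example: orthonormalized covariates with unit weights reproduce the
# identity exactly, so the spectral error is (numerically) zero.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 3)))   # Q^T Q = I_3
err, tot, infl = reweighting_diagnostics(Q, np.ones(100))
```

A good $\epsilon$-reweighting strategy would then require `err` $\leq \epsilon$ together with upper bounds on `tot` and `infl`, with constants we leave unspecified here.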
Due to space limitations, we kindly refer you to the anonymous link: https://anonymous.4open.science/r/Causal-Active-Sample-B0C5/README.md.

> **Q5: The parameter analysis of the lower bound.**

Overall, setting it as either $|\hat{\tau}-\tau|\leq \epsilon$ or $|\hat{\tau}-\tau|\leq 0.1$ is correct (as long as $\epsilon < 0.1$). We sincerely appreciate your suggestion—explicitly including the $\epsilon$ parameter in the estimation error bound improves clarity and helps readers better understand the overall process and how the Shannon-Hartley upper bound applies to the estimation error.

> **Q6: Notation meaning.**

“∗” denotes a “null option”; we also refer the reviewer to Algorithm 4 in the appendix. The definition of the notation $\mathcal{N}_b^{\pi}(i)$ is deferred to the above anonymous link.

***

We are eager to hear your feedback. We’d deeply appreciate it if you could let us know whether your concerns have been addressed.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. It is now somewhat clearer to me what the contributions of the current work are. However, I still believe that the paper is not well-written — at least, I personally find it difficult to follow. This might be due to the fact that I am not a complete expert in this specific area of causal inference, and some concepts that the authors assume to be known could benefit from clearer explanations. To fairly assess this paper, I suggest relying more on the other reviewers who appear to be more familiar with this area. Regarding the responses, I still have one question about the lower bound. I think the statement in the paper, as well as the explanation in the rebuttal, is not quite correct. If I choose a very small $\epsilon < 0.1$, then an estimator that merely guarantees $|\tau - \hat{\tau}| < 0.1$ would still require $\Omega(d/\epsilon)$ samples, which is quite counterintuitive.
I still believe that the condition $|\tau - \hat{\tau}| < 0.1$ should be replaced with $|\tau - \hat{\tau}| < \epsilon$, and that the authors are actually proving this bound for any $\epsilon \in (0, 0.1)$.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your prompt response!

____

Such a counterintuitive case of **data generation** does exist—where the estimation error is only required to be controlled within $0.1$ with some probability, yet achieving this still objectively requires a number of samples of order at least $d/\epsilon$.

**Intuition** Even under this mild condition on the estimation error, one can still construct "hard" instances whose estimation remains difficult, requiring many samples.

**Example** In our Appendix E ("The proof of the lower bound"), we provide a counterintuitive construction of data generation under the super-population perspective. We consider the following construction: $\tau\left({X}_i\right)=L\left({X}_i\right)+\mu$. Here $L(\cdot)$ is a pre-fixed function, selected from the linear family $\mathcal{L}$ satisfying $\parallel L \parallel_D=1$. The noise $\mu$ is important: it is i.i.d. Gaussian noise satisfying $\mu \sim N\left(0, \frac{1}{\varepsilon}\right)$. $D$ is a uniform distribution on the $d$-dimensional Euclidean space, and we consider ${X}_i$ sampled from $D$. We set $\tau(X_i)$ as the corresponding ITE value $\tau_i$, and then we can naturally provide a feasible construction of $Y_i(1), Y_i(0)$.

**Why this example makes sense.** We follow the technique in active sampling.

**Step 1** **First, there exists a subset $\mathscr{L}=\{L_1, L_2,\ldots,L_s\}$ which satisfies $s \geq 2^{1.919d}$ and in which each pair of functions is separated by a nontrivial "distance".** Specifically, $\exists$ a subset $\{L_1, L_2,\ldots, L_s\}=: \mathscr{L} \subseteq \mathcal{L}$ with $s \geq 2^{0.7d}$, $\parallel L_i\parallel_D = 1$, $\parallel L_i \parallel_{\infty} \leq 1$. Moreover, $\parallel L_i-L_j \parallel_D \geq 0.2$.
> This construction is achieved via a recursive greedy algorithm. As illustrated in Appendix E (page 17), we first restrict $\mathcal{L}$ to all functions mapping to ${\{ \pm 1 \} }^d$. On this basis, we recursively pick a legitimate function from it, and each step guarantees that we remove the functions within distance $0.2$ of the selected one. This process will output at least $2^{1.919d}$ functions according to our Lemma E.1, Appendix 4.

**Step 2** **Second, we lower-bound the mutual information between the randomly chosen generating function $L_j$ and any algorithm's output.**
> Let $I(L_j; \hat{\tau})$ denote the mutual information between a uniformly randomly chosen function $L_j \in \mathscr{L}$ and the algorithm's output $\hat{\tau}$, given $s$ observations $(X_1,\ldots), (X_2,\ldots), \ldots, (X_s, \ldots)$ generated from $\tau\left({X}_i\right)=L_j\left({X}_i\right)+\mu$ with $\mu \sim \mathcal{N}(0,\frac{1}{\epsilon})$. By Fano's inequality, we get $I(L_j; \hat{\tau}) = H(L_j) - H(L_j \mid \hat{\tau}) \geq 0.4\log(s) \geq 1.43$ (where $H(\cdot)$ denotes entropy).

**Step 3** **Third, we can also get an upper bound on the above mutual information.**
> By the Shannon–Hartley theorem, we obtain $I(L_j; \hat{\tau}) \leq \frac{s \epsilon}{2}$.

Combining **Steps 1–3** yields the expected result: $1.43 \leq I(L_j; \hat{\tau}) \leq \frac{s \epsilon}{2}$ implies $s = \Omega(2.86d /\epsilon)$, namely, a requirement of $\Omega(d/\epsilon)$ samples.

Inspired by your insightful consideration, we have polished the statement of our theorem (under the superpopulation perspective); it will be presented in the camera-ready version:

> Theorem 5.1. (Lower bound). For any fixed dimension $d$ and any sufficiently small $\epsilon \in(0,0.1)$, there exists a feasible set of $\{\boldsymbol{Y}^1, \boldsymbol{Y}^0\}$ such that any algorithm whose output $\hat{\tau}$ satisfies $|\hat{\tau}-\tau| \leq 0.1$ with probability $0.75$ needs at least $2.86 d / \varepsilon$ sample queries. Specifically, such a data generation process can be chosen with i.i.d. Gaussian noise $\mu \sim N(0, \frac{1}{\epsilon})$ and $\tau\left({X}_i\right)=L\left({X}_i\right)+\mu$, where $X_i$ is sampled from some distribution $D$ and $\parallel L \parallel_D = 1$.

**Insight** This demonstrates that the sample complexity of our method is optimal and cannot be improved: even under a relaxed requirement on the estimation error, there still exist challenging instances for which achieving accurate estimation requires a complexity of order $d / \epsilon$.

____

References

T. Cover, J. Thomas (1991). Elements of Information Theory. pp. 38–42. ISBN 978-0-471-06259-2.

Hartley, R. V. L. (July 1928). "Transmission of Information". Bell System Technical Journal. 7 (3): 535–563.

____

Your suggestions have already been incorporated into the revised version to improve its clarity and accessibility for a broader research audience. Moreover, it would mean a great deal if our efforts to address your concerns could be reflected in your evaluation. We would be deeply grateful if you would consider revisiting your initial score in light of our discussion and joint effort.
Summary: This paper developed a finite-sample estimator with sample complexity analysis for causal effect estimation. The paper demonstrated the near-optimality of the sample size, and further extended the framework to social networks. Numerical experiments with simulated and real-world data supported the effectiveness of the proposed estimator. ### update after rebuttal: I have read through the authors' response and will be keeping my original scores. Claims And Evidence: Overall the proposed framework is well-described and the main claims in the paper are well-supported. The main theoretical results include the ATE estimation error upper bound and a matching lower bound under a superpopulation perspective. The proof sketch provided for the upper bound is clearly presented. Methods And Evaluation Criteria: The upper bound was established via two main steps: the IRD algorithm adapted from Chen & Price (2019)'s design-based setting results, and using the obtained coefficients from IRD to adjust the finite-sample estimator. The lower bound shows that this upper bound is near-optimal. Based on the experiments with synthetic and real-world data, the proposed estimator did not substantially outperform all baselines, but achieved comparable or better results in most of the scenarios. Theoretical Claims: To the best of my knowledge, the upper bound proof is correct. Experimental Designs Or Analyses: Based on the presented content in the main paper, the experimental design appears to be sound. Supplementary Material: I was able to review the overall proof of the upper bound in rough detail; it appears to be correct to the best of my knowledge. Relation To Broader Scientific Literature: The broader literature was well-addressed. Essential References Not Discussed: I did not notice missing essential references. Other Strengths And Weaknesses: The paper was well-organized, and claims and proof sketches were clearly stated.
Other Comments Or Suggestions: Minor typo: section 4, 1st paragraph, Iterative Reweighting in Design-based "Set- ting". Questions For Authors: In the experiments, there are several settings where CRA / GSW actually outperformed the proposed estimator. Can you provide more intuition or justification as to when the proposed estimators tend to perform better or worse? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Acknowledgement and General Response to `RW kg3L`, `RW PucS`, `RW RneH`, `RW fVjN`** ###

Thrilled to receive such a positive reception from the dear reviewers! Also, sincere thanks to all reviewers for their insightful suggestions! In this general response, we carefully synthesized the reviewers’ suggestions to facilitate the AC’s assessment and, more importantly, to use them as a blueprint to guarantee the quality of the camera-ready version. We have addressed these points:

1. **Detailed and intuitive explanation of motivation, methods and algorithms** (`RW PucS`, `RW RneH`). In our rebuttal and our revised version, we provide _**(i) insightful motivation and intuition, (ii) additional comparison with previous literature and (iii) detailed interpretation of the main algorithm**_, especially for audiences who are unfamiliar with causality or active sampling.
2. **Experiments** (`RW kg3L`, `RW fVjN`). We provide additional performance comparisons with respect to the dimension $d$, the size $n$, the network interference, etc., to check in which circumstances the method is more advantageous. These empirical results back up our theories and intuitions.
3. **Amend some misleading statements and notations** (`RW kg3L`, `RW PucS`, `RW fVjN`). Thanks for the meticulousness. Our improved version has made a few isolated definitions more transparent and intuitive.

If you have any further questions, please don’t hesitate to contact us directly. We will respond as promptly as possible and do our utmost to address every concern thoroughly. Let’s enjoy this fruitful discussion together.

----

### **For `RW kg3L`: Thanks, and please kindly find below our concise and clear rebuttal addressing your concerns.** ###

Truly appreciate your strong endorsement! Here is our response:

> **Q1: The synthetic experimental design appears to be sound. In the experiments, there are several settings where CRA / GSW actually outperformed the proposed estimator. Can you provide more intuition or justification as to why the proposed estimators tend to perform better or worse?**

Thanks for your advice! More intuitions are as follows:

1. **The advantage of our RWAS algorithm over traditional CRA/GSW methods is especially large when the sample dimension $d$ is large**. This is because we have proven, through bounding arguments, that the sample complexity of RWAS is relatively optimal during the active sampling process that achieves a $(1+\epsilon)$ relative fitting error, which is $O(d/\epsilon)$. As a result, when the sample dimension $d$ is relatively large, our error is smaller than that of other methods.
2. **The data structure (i.e., whether there exists a subset with efficient global representational capability) plays an important role in comparing the performance of these algorithms**. Compared to the GSW algorithm, our key differences are that
* We adopt the IRD active sampling strategy (yielding a subset whose $X^\top X$ distribution closely mirrors the overall distribution), thus obtaining regression coefficients from $X$ to the individual treatment effect ($\mathrm{ITE} = Y(1)-Y(0)$); and
* We apply regression adjustment to GSW itself (leveraging the regression coefficients from the former step).
Intuitively, suppose the covariate distribution of this subset closely approximates the overall distribution. In that case, (i) our regression-adjusted GSW will be more efficient, and (ii) our IRD will produce a smaller fitting error for estimating the overall ATE.
3. **Motivated by your inspiration, we provide the following supplementary experiments** on the performance differences as $d$ varies, which align with the above intuition.

| d | HT | Hajek | CRA | GSW | RAHT | SC | 4-Vectors | RWAS (Ours) |
|-------|------------|------------|-------------|-------------|-------------|-------------|-------------|---------------|
| 10 | 1.98 (0.78) | 1.27 (0.53) | 0.98 (0.40) | **0.93 (0.38)** | 1.08 (0.47) | 1.13 (0.65) | 1.15 (0.53) | 1.08 (0.46) |
| 20 | 3.67 (0.88) | 2.88 (0.57) | **2.10 (0.49)** | 2.28 (0.70) | 2.41 (0.59) | 2.50 (0.73) | 2.46 (0.62) | 2.19 (0.55) |
| 50 | 1.14 (0.74) | 1.03 (0.50) | 1.02 (0.39) | 0.80 (0.64) | 0.80 (0.56) | 0.80 (0.68) | 0.92 (0.61) | **0.71 (0.47)** |
| 100 | 1.82 (0.95) | 1.62 (0.86) | 2.22 (0.53) | 1.78 (0.87) | 1.53 (0.83) | 1.73 (0.90) | 1.80 (0.85) | **1.51 (0.82)** |

The results indicate that our RWAS method dominates the previous methods, especially when $d$ is large, which is relevant in practice. The code is available at https://anonymous.4open.science/r/Causal-Active-Sample-B0C5/README.md.

> **Q2: Minor typo.**

Sincerely, thanks for your careful review! We have revised this typo from "Set-ting" to "Setting".
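As a supplement for readers unfamiliar with the HT estimator referenced throughout this thread (e.g., Algorithm 1, line 5), here is a minimal sketch of the textbook Horvitz–Thompson ATE estimator under a Bernoulli design; this is the standard estimator, not our full RWAS pipeline:

```python
import numpy as np

def horvitz_thompson(Y1, Y0, p=0.5, seed=None):
    """ATE estimate via Horvitz-Thompson weighting under a Bernoulli(p) design."""
    rng = np.random.default_rng(seed)
    Z = rng.random(len(Y1)) < p           # independent Bernoulli(p) assignment
    Y_obs = np.where(Z, Y1, Y0)           # only one potential outcome is observed
    # Inverse-probability weighting makes the estimator unbiased for
    # tau = mean(Y1 - Y0).
    return np.mean(Z * Y_obs / p - (~Z) * Y_obs / (1 - p))

# Toy check: a constant unit-level effect of 2 is recovered on average.
rng = np.random.default_rng(1)
Y0 = rng.normal(size=5000)
Y1 = Y0 + 2.0
est = np.mean([horvitz_thompson(Y1, Y0, seed=s) for s in range(200)])
```

The regression adjustment in lines 6–7 of Algorithm 1 then reduces the variance of this baseline estimator.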
NeuronTune: Towards Self-Guided Spurious Bias Mitigation
Accept (poster)
Summary: The paper proposes NeuronTune, a method for mitigating spurious correlations in neural networks by intervening directly in the latent space at the neuron (feature-dimension) level. The work addresses the common challenge that many “robustness” or “debiasing” approaches rely on knowing or inferring spurious attribute annotations (e.g., “group labels”), which can be expensive to obtain or sometimes even unknown a priori. NeuronTune provides a self-guided approach: it identifies “biased dimensions” whose activations correlate strongly with misclassifications and suppresses those dimensions through post-hoc last-layer retraining. Claims And Evidence: Overall, the claims are backed by both theoretical insights and strong empirical evidence. Methods And Evaluation Criteria: These criteria align with standard practices in the literature on spurious correlation mitigation. Theoretical Claims: Key theoretical elements include: 1. A formal data model that encapsulates both core and spurious features. 2. Proofs establishing the connection between median-activation differences and biased (spurious) neuron detection. 3. Analysis that shows last-layer retraining plus biased-neuron suppression reduces reliance on spurious features. The analysis is coherent for linearized settings and helps explain why high-activation, misclassification-associated neurons reveal spurious correlations. Experimental Designs Or Analyses: The methodology is generally rigorous. One natural trade-off is a mild drop in average accuracy for a bigger gain in worst-group performance. Supplementary Material: The paper’s appendix contains proofs, extended experiments, and deeper ablation studies. They confirm that focusing on biased-neuron detection plus last-layer retraining is both theoretically justified and effective. Relation To Broader Scientific Literature: The authors contextualize well with respect to spurious bias research and last-layer retraining. 
They might also elaborate on recent neuron-level debugging works such as “NeuroInspect: Interpretable Neuron-based Debugging Framework through Class-conditional Visualizations” [Ju et al., 2023] to compare or contrast how each approach manipulates internal neurons. Essential References Not Discussed: A potentially relevant reference is “NeuroInspect,” which also performs neuron-level interventions. Referencing it explicitly could strengthen the discussion on interpretable neuron-level debugging strategies. Other Strengths And Weaknesses: ## Strengths: - Self-contained, requiring no group-label supervision. - Effective across various architectures (ResNet, BERT) and data modalities (vision and NLP). - Theoretical rationale matches empirical success. ## Weaknesses: - Limited interpretability: Although the approach identifies “biased neurons,” the paper does not deeply explore how to interpret or label them. (This is not a critical flaw, but might be interesting for users to see “which spurious concepts are discovered.”) - Dependence on a separate identification set: The paper relies heavily on the notion that a distinct dataset (or at least a validation set) is available. In real scenarios, that might be limited or small. The ablations show that using training data to identify biased dimensions is less effective (Table 4). Other Comments Or Suggestions: - Iterative neuron identification: After carrying out one pass of NeuronTune, it might be worthwhile to re-run the biased-neuron detection on the newly tuned model. If certain features remain entangled or if new spurious signals emerge, a second iteration could potentially refine results. - Interpretation: It would be interesting to present visual or textual exemplars that highlight what the suppressed neurons correspond to semantically. Questions For Authors: 1. Iterative Tuning: Have you explored running multiple rounds of neuron identification and suppression? 
If so, did you notice any diminishing returns or additional complexity? If not, do you speculate that repeated rounds might identify new “biased” neurons or further improve performance? 2. Partial vs. Full Suppression: Did you consider only attenuating (rather than fully zeroing) the activations of biased dimensions? Could such a variant reduce potential losses in overall accuracy? 3. Identification Set Size/Composition: Are there heuristics or rules of thumb for how large or diverse the identification set (used to detect biased neurons) should be? For instance, if only limited validation data is available, how does that affect the stability of the median-based criterion? 4. Interpretability: Have you tried mapping the suppressed neurons to specific “concepts” or features (e.g., through visualization)? If so, does this reveal anything interesting about the nature of spurious correlations the model relies upon? 5. Extension to Other Tasks: The paper focuses on classification tasks with known spurious correlations. Could the NeuronTune concept extend to multi-label or structured prediction tasks? If so, would that require any modification to the methodology? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review and feedback on our submission. We provide our responses to your questions below.

## Essential References Not Discussed

- **Comparison with NeuroInspect**: NeuroInspect identifies neurons responsible for mistakes from the counterfactual explanation perspective and minimizes the influence of identified neurons via regularization. In contrast, our approach identifies biased neurons by probing their activation patterns and directly manipulates identified neurons by suppressing their contributions to the final predictions.\
NeuroInspect emphasizes interpretable neuron-level debugging, allowing practitioners to further pinpoint the causes of the errors. Our work aims at a self-guided spurious bias detection and mitigation framework without human intervention.\
We see potential in applying the interpretation techniques to our method in future work. We will cite NeuroInspect and add the above discussion to our revised manuscript.

## Weaknesses

1. **Limited interpretability**: We hope to clarify that our work does not focus on interpretation. Instead, we focus on a self-guided framework without human intervention. By definition, biased neurons may encode a certain portion of core features, making the interpretation of neurons nontrivial. Nevertheless, we provide visualizations in Figures 3-6 in the Appendix to analyze which spurious concepts are discovered. Please refer to our response to Question 4 for details.
2. **Dependence on a separate identification set**: One of our major contributions is removing the dependence on group labels in the identification set. This offers great flexibility and freedom for practitioners to select identification data based on their needs to detect and mitigate targeted spurious biases. Hence, obtaining such data is relatively straightforward. In our experiments, we used the validation set for consistency in performance comparison, but in real-world scenarios, any sufficiently representative data can serve this purpose.

## Questions:

1. **Iterative Tuning**: Yes, we performed multiple rounds of neuron identification and suppression (L256-259, right column). Diminishing returns will be observed after a certain number of rounds. In general, running multiple rounds yields better performance than a single round, as new biased neurons will be identified.
2. **Partial vs. Full Suppression**: Although partial suppression reduces the loss in overall accuracy, its effect is similar to no suppression, as models can continue to adjust their weights on nonzero biased activations, leading to severe spurious bias. For example, on the CelebA dataset, when we multiplied the activations of biased dimensions by the masking values in the table below, we see that only when the masking value is zero (full suppression) did the model achieve improved WGA.

|Masking value|0|0.2|0.4|0.6|0.8|1.0|
|---|---|---|---|---|---|---|
|WGA|87.3±0.4|71.5±1.5|72.2±1.2|72.9±1.5|73.1±1.5|73.0±1.2|
|Acc.|90.3±0.5|93.8±0.2|93.8±0.3|93.8±0.2|93.8±0.2|93.9±0.2|

3. **Identification Set Size/Composition**: The identification set should consist of diverse samples that are not memorized by the model. We find that when we use fewer than 30\% of the validation samples, the obtained worst-group accuracy is low with high variance, whereas the average accuracy remains high with low variance. When we use around 30\% of the validation set, the number of minority group samples (e.g., waterbirds with land backgrounds in Waterbirds or males with blonde hair in CelebA) is approximately 50. Therefore, to ensure the effectiveness of our method, each data group should ideally contain more than 50 samples. In a group-label-free setting, it is suggested to include many diverse samples whenever possible.
4.
**Interpretability**: Yes, we have shown images that highly activate the identified biased neurons along with the heatmaps highlighting regions that contribute to the neuron activations in Figures 3-6 in Appendix. The identified biased neurons mainly represent spurious features. For example, we observe that biased neurons mainly represent blue/green backgrounds for landbirds in Figure 5(b) and represent forest backgrounds for waterbirds in Figure 6(b). Note that since a neuron may represent a mixture of spurious and core features, the visualizations sometimes may not be directly interpretable. 5. **Extension to Other Tasks**: Yes, NeuronTune can be extended to multi-label prediction tasks. Instead of considering all identified neurons for different labels as biased, as shown in Eq. (6), we can consider biased neurons as label-specific and mask neurons based on the labels of input samples during retraining. This is because in the multi-label setting, a biased neuron for one label may be the core contributor to another label, which is different from the assumption in a standard classification setting (L272-274). Please let us know if you have further questions.
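To make the suppress-then-retrain recipe discussed above concrete, here is a minimal self-contained sketch. The identification rule (flagging the dimensions with the largest median-activation gap between misclassified and correctly classified identification samples) and all names are simplifications for illustration, not the exact criterion or implementation in the paper:

```python
import numpy as np

def identify_biased_dims(feats, correct, k):
    """Flag the k dimensions whose median activation is highest on
    misclassified identification samples relative to correct ones
    (a simplified stand-in for the median-based criterion)."""
    gap = np.median(feats[~correct], axis=0) - np.median(feats[correct], axis=0)
    return np.argsort(gap)[-k:]

def retrain_last_layer(feats, labels, biased_dims, lr=0.1, steps=500):
    """Retrain a logistic-regression last layer with the biased
    dimensions fully suppressed (masked to zero)."""
    X = feats.copy()
    X[:, biased_dims] = 0.0                  # full suppression (mask = 0)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        prob = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (prob - labels) / len(labels)
    return w

# Toy illustration: dimension 2 fires much more strongly on misclassified
# samples, so it is flagged and masked; the retrained head then relies on
# the core dimension 0 instead.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 3))
correct = rng.random(200) < 0.7
feats[~correct, 2] += 3.0                    # spurious dim drives mistakes
biased = identify_biased_dims(feats, correct, k=1)
labels = (feats[:, 0] > 0).astype(float)     # core dim defines the label
w = retrain_last_layer(feats, labels, biased)
```

Because the masked dimensions are exactly zero, their retrained weights stay at zero, which is why full suppression behaves differently from the partial masking values in the table.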
Summary: The authors propose to improve OOD generalization by identifying neurons that contribute significantly to misclassification on the validation set, and then retraining the output layer while setting those neurons to zero. Relative to ERM, this simple idea trades off in-distribution accuracy for significant improvement on worst-group accuracy across many image and text classification tasks. Claims And Evidence: * The authors' main claim is that their algorithm is robust to "spurious correlations," which are defined as a non-causal relationship. * Their approach is motivated using a DGP with causal/anticausal features, originally proposed in the Invariant Causal Prediction (Peters et al., 2015) paper. * Their experimental results show that their algorithm is able to effectively trade-off in-distribution performance for out-of-distribution performance in the presence of subpopulation shift. * I believe the connection between the claims and evidence is currently weak, and would be improved by reframing the algorithm as a way to mitigate subpopulation shift. Methods And Evaluation Criteria: * Yes, the proposed methods and evaluation make sense for the problem of OOD generalization. Theoretical Claims: N/A Experimental Designs Or Analyses: * The experimental design and analysis are sound. * I liked the fact that the authors are very explicit about how they perform model selection using in-distribution data. This is an extremely important point that is often omitted in this research area. Supplementary Material: N/A Relation To Broader Scientific Literature: The authors primarily contextualize their work in relation to last-layer retraining with labeled spurious correlations, and domain generalization (labeled groups). 
I think this is fine, but I believe the work more strongly relates to subpopulation shift (see "Other strengths and weaknesses.") Essential References Not Discussed: N/A Other Strengths And Weaknesses: The proposed idea is elegant, and meaningfully beats ERM on a wide variety of very difficult benchmarks. However, I think the current causality-based presentation obfuscates the simplicity of the idea, and does not help the reader gain intuition on why it should improve OOD generalization. This is how I understand your method. When you train ERM on datasets with imbalanced subpopulations, it maximizes average-case accuracy and performs very poorly on rare subpopulations. E.g. on waterbirds, it focuses on being correct on waterbirds on water, and sacrifices its ability to classify waterbirds on land. Then, when you look at the misclassifications on the validation set, the features that you identify are precisely the ones that make you misclassify waterbirds on land. You zero these features out, and retrain the output layer to assign higher weights to the features that would allow you to correctly classify waterbirds on land, i.e. bird features. By doing this, you slightly sacrifice your ability to classify waterbirds on water, but significantly improve your ability to classify waterbirds on land. Hence, your algorithm improves on worst-group accuracy at the expense of average-case accuracy. The same conclusion can be reached using CivilComments, which is categorized as a subpopulation shift dataset within WILDS. I think this makes much more sense than the causality-based argument which hinges on unintuitive technical assumptions about the data generating process. I am going to recommend acceptance as-is, because I understand that my interpretation may be overfit to my own personal intuitions about this research area. However, I believe this paper would be stronger with this reframing. 
Other Comments Or Suggestions: N/A Questions For Authors: Let me know what you think about the subpopulation shift interpretation that I proposed in "other strengths and weaknesses." Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and for recognizing both the simplicity and effectiveness of our approach. We appreciate your valuable feedback and are glad that our efforts to explain the model selection strategies were helpful. ## Regarding the subpopulation shift interpretation We appreciate your thorough understanding of our work and your insightful interpretation from the perspective of subpopulation shifts, i.e., changes in the proportion of certain subpopulations between training and testing. We would also like to clarify the distinction between mitigating **spurious bias** and **subpopulation shifts**. - Mitigating subpopulation shifts often relies on reducing spurious bias, such as decreasing the strong correlation between waterbirds and water to improve generalization for waterbirds on land. Tables 1 and 2 demonstrate NeuronTune’s effectiveness in handling subpopulation shifts on the Waterbirds, CelebA, MultiNLI, and CivilComments datasets. - In contrast, mitigating spurious bias has broader implications beyond subpopulation shifts. For example, Table 3 highlights NeuronTune’s ability to enhance robustness on the ImageNet-A dataset, which consists of samples that are challenging for a pre-trained model but do not conform to well-defined subpopulation shifts, such as images with unique pixels. This evaluation scenario represents a broader and more complex problem than subpopulation shifts. **Therefore, by framing our method as targeting spurious bias, we address a wider scope of challenges beyond subpopulation shifts.** With proper examples, our current presentation aligns well with your proposed subpopulation shift interpretation. We detail the correspondence between your interpretation and our presentation below. > When you train ERM on datasets with imbalanced subpopulations, it maximizes average-case accuracy and performs very poorly on rare subpopulations. E.g. 
on waterbirds, it focuses on being correct on waterbirds on water, and sacrifices its ability to classify waterbirds on land. Consider that $a$ in Eq. (2) controls subpopulations in data, e.g., when $a=1$, it may represent a group of waterbirds on water, and when $a=0$, it may represent a group of waterbirds on land. The probability $p$ controls the severity of imbalance in subpopulations. When $p$ is close to one (L159 left), the data is severely imbalanced in subpopulations. After training with ERM, the model minimizes the training loss, i.e., maximizes average-case accuracy, but obtains a large nonzero weight on the spurious feature (Lemma 1) and is away from the optimal model (Corollary 1). For example, the model may focus on being correct on waterbirds on water and sacrifice its ability to classify waterbirds on land. > Then, when you look at the misclassifications on the validation set, the features that you identify are precisely the ones that make you misclassify waterbirds on land. You zero these features out, and retrain the output layer to assign higher weights to the features that would allow you to correctly classify waterbirds on land, i.e., bird features. Our principle of neuron identification (Proposition 4.1) states that when a spurious correlation breaks, the neurons that still positively contribute to mispredictions will be selected. For example, neurons that lead to misclassification on waterbirds on land will be identified. After retraining, Theorem 4.3 indicates that weights on core features, such as bird features, will be increased (L200-201, right). > By doing this, you slightly sacrifice your ability to classify waterbirds on water, but significantly improve your ability to classify waterbirds on land. Our findings (L196-201, right) show that our method increases the weights on core features while keeping the weights on spurious features unchanged. 
This suggests that our approach makes a slight trade-off in average-case accuracy to achieve improved worst-group accuracy. For example, our method may slightly reduce the model’s ability to classify waterbirds on water due to a relative decrease in reliance on the water feature, while significantly enhancing its ability to classify waterbirds on land. Thanks again for your thoughtful comments. We will incorporate concrete examples to provide readers with better intuition and a clearer understanding of our approach. Please let us know if you have further questions.
Summary: The work proposes a bias-unaware post-hoc model debiasing method. The approach is based on the observation that, in a setting where a high majority of samples, but not all, contain a biased attribute, the neurons in the penultimate layer that are affected by the spurious attribute exhibit a different behavior between correctly classified and misclassified samples. The work theoretically motivates this in a linear regression setting, and defines the difference between medians of classified and misclassified samples as an indicator for neurons that are sensitive to spurious attributes. The debiasing is conducted by setting neurons in the penultimate layer above a certain distance threshold to zero, and retraining the final classification layer. A toy example serves to further highlight the efficacy of this approach. Empirical experiments on four popular benchmark datasets show worst-group-accuracy values of the proposed method competitive with approaches from prior work which leverage bias labels in the validation set for model selection. ## update after rebuttal I thank the authors for their clarifications. I am very happy to see the increased number of trials. I am convinced of the quality, novelty and significance of this work, and thus choose to revise my initial recommendation to "Accept". Claims And Evidence: The work claims their proposed debiasing approach, which does not require bias annotations, significantly mitigates spurious biases, i.e., is competitive with previous bias mitigation approaches that do require bias annotations. This claim is supported by empirical evidence, comparing the worst-group-accuracy of various baselines. There seems to be a small issue given the high error, likely caused by the small number of conducted trials. However, as the work does not explicitly claim state-of-the-art, but only competitiveness, this is only a minor issue. 
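The debiasing recipe summarized above (flag penultimate-layer neurons whose activation medians differ between correctly and incorrectly classified validation samples, zero them out, retrain the final layer) can be illustrated on synthetic data. The sketch below is my own toy reconstruction, not the authors' code: the per-class median gap, the threshold `tau = 0.5`, and the two-neuron data generator are all illustrative simplifications of the method described in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, p):
    # Label y; a "core" neuron tracks y noisily, while a "spurious" neuron
    # tracks a background attribute that agrees with y with probability p.
    y = rng.integers(0, 2, n)
    aligned = rng.random(n) < p
    core = y + rng.normal(0, 1.0, n)
    spur = np.where(aligned, y, 1 - y) + rng.normal(0, 0.05, n)
    return np.column_stack([core, spur]), y, aligned

def median_gap(H, y, correct):
    # Per-neuron gap between activation medians of misclassified and
    # correctly classified samples, computed per class (a simplification
    # of the paper's indicator), then maximized over classes.
    gaps = [np.abs(np.median(H[(y == c) & ~correct], axis=0)
                   - np.median(H[(y == c) & correct], axis=0)) for c in (0, 1)]
    return np.max(gaps, axis=0)

H_tr, y_tr, _ = make_data(4000, p=0.95)    # heavily biased training set
H_val, y_val, _ = make_data(4000, p=0.95)  # identification set

base = LogisticRegression(max_iter=1000).fit(H_tr, y_tr)
correct = base.predict(H_val) == y_val

keep = median_gap(H_val, y_val, correct) <= 0.5      # suppress high-gap neurons
tuned = LogisticRegression(max_iter=1000).fit(H_tr * keep, y_tr)

# Worst-group accuracy on a balanced test set, with groups defined by
# whether the spurious attribute is bias-aligned or bias-conflicting.
H_te, y_te, aligned_te = make_data(4000, p=0.5)
def wga(model, H):
    return min((model.predict(H[g]) == y_te[g]).mean()
               for g in (aligned_te, ~aligned_te))

base_wga, tuned_wga = wga(base, H_te), wga(tuned, H_te * keep)
print("kept neurons:", keep, "WGA:", round(base_wga, 2), "->", round(tuned_wga, 2))
```

On this toy setup the spurious neuron is suppressed and worst-group accuracy improves, at a small cost in average accuracy, which is the trade-off the method is designed around.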
Methods And Evaluation Criteria: The work conducts empirical analysis on common benchmark datasets from the model debiasing literature. Here, the worst-group performance is used as an evaluation criterion, which is a common metric. The work also uses the *accuracy gap* to evaluate the gain between average and worst-group-performance. While I understand the motivation behind this metric, I do not think that this is very useful, as models with very low worst-group-performance obtain inflated values, which still require the comparison to average and worst-group-performance. The work also proposes a synthetic dataset to visually motivate its approach; the visualization does indeed provide a good intuition for the issue. Theoretical Claims: The work makes some theoretical claims on a linear regression model in order to motivate its method. Here, Proposition 4.1, given separated spurious and core features in a linear model, demonstrates the negative contribution of spurious neurons. Given these negative contributions, Theorem 4.2 shows that there is then some distance between the expected activations of correctly classified and misclassified samples relying on this negative contribution of Proposition 4.1. Theorem 4.3 then assumes that the neurons detecting core features are aligned with the true core direction approximately to the same degree as the neurons detecting spurious features are aligned to the true spurious direction, with which the proposed approach is shown to reduce the distance between the unbiased and the biased model. I did not rigorously check Theorems 4.2 and 4.3. Experimental Designs Or Analyses: The experimental design of the results discussed in Tables 1-3 is based on a straightforward comparison on previous benchmarks. The only potential issue here could be the low number of trials (three). Supplementary Material: I briefly reviewed the proofs for Proposition 4.1 and Theorems 4.2 and 4.3. 
Relation To Broader Scientific Literature: The work discusses most relevant works, making the common separation between bias-aware, bias-unaware, and semi-bias-aware mitigation approaches. The idea that bias can be identified through the misclassification of bias-conflicting samples has been identified in prior work (Kim et al., 2022a; Liu et al., 2021). Reweighting or retraining the final layer for bias mitigation has also been leveraged in previous work (Liu et al., 2021; Qiu et al., 2023). The novelty in this work is the contribution of individual neurons to this misclassification, and the subsequent, targeted removal thereof. Essential References Not Discussed: This work bases its bias detection on the idea that biased models misclassify bias-conflicting samples. Two prior works that leverage the same idea are cited and compared empirically (Kim et al., 2022a; Liu et al., 2021). However, the work does not make this connection clear. Other Strengths And Weaknesses: ### Strengths - Unsupervised bias mitigation, not relying on known biases, is an interesting and important issue - The linear regression example motivates the approach well. I really like Figure 2, except for the issue of the decision boundary described below. - The theoretical analysis provides a good foundation for the efficacy of the model. - The work is written clearly, and seems polished well. ### Weaknesses - The empirical results have high error due to a low number of trials, and do not show very clear improvements. - The framework relies on the assumption that an overwhelming majority of the training set contains biased samples in order for the model to react to bias-conflicting samples. Other Comments Or Suggestions: - In Figure 2b: The median for Dimension 2 red seems off. 
- l.255 left: linear combination spurious and core in embedding typically holds in embedding space, "combination is nonlinear" should be specified more clearly (i.e., in input) - l.307 left: absolution -> absolute - l.791: so that suppress[ing] them improves the ... Questions For Authors: 1. The error values in Table 2 are quite high. Could this be caused by the low number of trials? Would at least e.g. five trials be possible? Did you conduct a Wilcoxon-rank-sum test? 2. The decision boundaries are different between training and testing in Figure 2a and 2c. Is this caused by errors in the contour lines? Should they not be exactly the same? 3. l.379 left states "Our method achives highest WGA across the datasets" for Table 1 and Table 2. However, this does not seem to be fully the case. E.g., on MultiNLI, AFR is just overall better, yet NeuronTune is in bold-face. 4. When using the validation set to find neurons that are sensitive to spurious behavior, is the training set used to tune the last layer? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback on our submission. Here are our responses to address your questions. ## Evaluation Criteria - **Usefulness of Accuracy Gap**: We hope to explicitly provide readers with a direct view on the numeric difference between worst-group accuracy and average accuracy, making it easier to assess how consistently a model performs across different groups of data. When models with very low worst-group accuracy exhibit inflated accuracy gaps, such as the ERM models, this indeed highlights the severe robustness issue of the models. In this paper, we report the average accuracy, worst-group accuracy, and the accuracy gap between them to provide a comprehensive evaluation of each model's overall performance. ## Essential References Not Discussed - **Connection to the two related works**: Thanks to the reviewer for giving a clear comparison between our work and the works proposed by Kim et al., 2022a and Liu et al., 2021. Indeed, our work builds on the idea of these works by extending the focus from identifying bias-conflicting samples to detecting biased neurons within a model, enabling direct intervention in the model’s decision-making process. We will add the above discussion to our revised manuscript to better connect our method to these two works. ## Weaknesses 1. **Regarding number of trials and clear improvements**: Following your suggestion, we ran our method for five additional trials. Overall, the accuracies remained stable with reduced variance. Our previous results in the paper and the new results below clearly demonstrate the advantage of our method in the unsupervised bias mitigation setting, where no group labels are available. 
| | NeuronTune | | NeuronTune$^\dagger$ | |
|---|---|---|---|---|
| | WGA | Acc. | WGA | Acc. |
| Waterbirds | 92.2±0.3 | 94.4±0.2 | 92.5±0.9 | 94.5±0.3 |
| CelebA | 83.1±1.1 | 92.0±0.5 | 87.3±0.4 | 90.3±0.5 |
| MultiNLI | 72.1±0.1 | 81.1±0.6 | 72.5±0.3 | 80.3±0.6 |
| CivilComments | 82.4±0.2 | 89.2±0.1 | 82.7±0.4 | 89.4±0.2 |

2. **Regarding the assumption on the training set**: We would like to clarify that our method does not rely on any assumptions about the training set. Since our approach uses an independent identification set for bias detection, it can be applied to models trained on any dataset. Notably, if the training set contains fewer biased samples, the resulting model is less susceptible to spurious bias. In such cases, NeuronTune will identify fewer biased neurons and the model will receive less intervention for bias mitigation. ## Other Comments Or Suggestions We will make revisions at the suggested locations and thoroughly review the rest of our paper. ## Questions 1. **The error values in Table 2 are quite high. Could this be caused by the low number of trials? Would at least e.g. five trials be possible? Did you conduct a Wilcoxon-rank-sum test?** We hope to clarify that the variances of our method are relatively low compared to other methods. After running five additional trials, we observed reduced variance. Please refer to the table in Weakness 1 for the results. Our method demonstrates clear improvements in the unsupervised bias mitigation setting. Since we compare against the best-reported performance of baseline methods, a Wilcoxon rank-sum test is not directly applicable in this context. 2. **The decision boundaries are different between training and testing in Figure 2a and 2c**: This discrepancy arises because we used two separate K-nearest neighbor classifiers, each fitted to the predictions on the training and test sets separately, to determine the decision boundaries. 
To ensure consistency, we have now used a shared K-nearest neighbor classifier fitted on both training and test data to determine decision boundaries. The revised figure can be accessed [here](https://anonymous.4open.science/r/NeuronTune-6CFC/synthetic_example.png). 3. **Regarding the statement in l.379 left**: We hope to clarify that in Tables 1 and 2, we group methods that are directly comparable and boldface the best result within each group of methods (L345 and L398). For example, on MultiNLI (Table 2), AFR belongs to the methods that require group labels, while NeuronTune does not require group labels. NeuronTune achieves the highest WGA within methods that do not require group labels and even remains competitive against methods that require group labels. We will revise L379 to further clarify this in the paper. 4. **When using the validation set to find neurons that are sensitive to spurious behavior, is the training set used to tune the last layer?** Yes, for "NeuronTune," the validation set is used only to identify biased neurons, while the training set is reused to tune the last layer. "NeuronTune$^\dagger$" is a variant where half of the validation set is used for detection and the other half for tuning. Please let us know if you have further questions.
Validating Mechanistic Interpretations: An Axiomatic Approach
Accept (poster)
Summary: They propose a framework to quantify the effectiveness of a total decomposition of a model into sequential interpretable components. They find that this framework is able to validate (by showing a high probability in each of the equations of their axioms) explanations they create for the model components of a transformer trained to classify satisfiable 2-SAT formulas. They also use their axioms to validate a previous interpretation (from [Nanda et al. 2023 https://openreview.net/forum?id=9XFSbDPmdW]) for a transformer performing modular addition. ## update after rebuttal I find that my opinion of the approach remains after the rebuttal period, and retain my initial score. Claims And Evidence: I find that they successfully demonstrate their method in toy examples. However, there are a few points that would be required to analyse their claims and methods in my opinion: - Their axioms at face value require a full decomposition of a network into interpretable components. Since it is exceedingly unlikely that full model interpretations will exist for practical problems, one needs to interpret their axioms for sub-networks in practice; they go over how to re-interpret their axioms for these cases in Appendix B.1. I would argue that to sufficiently prove the utility of their framework, some practical example of existing foundation models should be experimentally analysed using their axioms as interpreted in B.1. - Further, their method is demonstrated, but not evaluated relative to anything else. For example one could design experiments where a model has a ground truth interpretation, and then show their method evaluates the correct interpretation in a better way than a false but plausible interpretation relative to some baselines. Methods And Evaluation Criteria: Their toy examples are appropriate, but more work and experiments would be required to evaluate their framework in practical foundation models. 
Also, their methods should be evaluated in their identification of good interpretations versus other baselines. Theoretical Claims: Checked the arguments of appendix D and found no issues. Experimental Designs Or Analyses: Yes, I checked over the details of the experiments in section 4 and 5 and found no issues. Supplementary Material: Reviewed Appendix sections B,D, in detail and looked briefly at the rest. Relation To Broader Scientific Literature: The broader context of this work is in the addition of their framework to the set of methods for evaluating the degree of validity of human-interpretations for model components. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: Weaknesses exist in terms of utility of their approach versus other methods for evaluating explainability of interpretation of model components. Their interpretability analysis on the case study in section 4 is well done. Other Comments Or Suggestions: - I’m not sure if axiom is the right or best terminology for the inequalities called as such. A better description would be to treat those as “degrees of explainability”. I.E. the probability in the respective “axiom” quantifies the degree of explainability of the interpretation across prefix equivalence, prefix replaceability, etc … - Typo 118 right: “functions need to be individually instantiated every mechanistic interpretation.” - Since the computational graph is not just a sequence, how does the residual work with indexing and in all the axioms when switching from human and model components with alpha/gamma? Perhaps a sentence stating how to handle residual connections in general in the main text. - Are the axioms for some fixed epsilon? Maybe denote them as Axiom i (\varepsilon) to be explicit. 
- Be explicit when stating the axioms that the equality in the probability for each axiom is appropriate for the axioms when it does not take place in a high dimensional vector space, but rather the discrete (human-interpretable) output space or the human interpretable space. This is mentioned later in a sentence at line 389 right when discretizing is necessary to evaluate the interpretation in section 5. The discussion following this in line 418 right highlights a general weakness of these axioms when the equality is evaluated in a high dimensional vector space, since discretization may be necessary to apply their axioms to a given model interpretation, as was the case for section 5. “[discretization] limits the ability of Axioms 1 and 2 to validate the internal behavior.” Questions For Authors: - Why do these axioms in particular capture explainability well in a more appropriate way than some others? For example we could have formulated a single axiom where for any nondecreasing sequence i_n of indexes in [len(d)], we have the \varepsilon-replaceability of all components between i_{2k} , i_{2k+1}. - Why the non-homogeneous architecture in experiments, where the second layer has more attention heads? - What is the loss function for training in experiments of section 4? Is it the usual next token prediction, or binary cross entropy on the sequence classification? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your helpful comments, we respond in detail below. ## Q1. More experiments are required to evaluate the framework by analyzing practical foundation models using the axioms from B.1 We sketch how the circuit for IOI in GPT-2 [1] may be expressed and analyzed in our framework in the response to Q1 of reviewer LJQi. Evaluating circuits in our framework is straightforward given a fully-specified mechanistic interpretation. However, in practice, we observe that popular mechanistic interpretations such as the IOI circuit leave many key details unspecified. Circuit analyses of this type can be strengthened by evaluating using our framework as it forces the analysts to spell out all the details; we hope that such studies will be conducted in future work. [1] Wang et al. Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small. 2022. ## Q2. Framework should be evaluated in comparison to baseline evaluation frameworks We agree that evaluation of this type would be valuable, however, we note that we are the first to propose an axiomatic framework for the evaluation of mechanistic interpretations, presenting a natural and principled set of criteria; we believe that this already constitutes a valuable contribution to the community. We hope that our work inspires further conversation in this direction, and leave such an analysis to future work. ## Q3. How do the axioms handle residual connections? We can express residual connections via concrete functions which return the residual stream as part of a tuple of values. See the analysis of the 2-SAT model for an example: the second concrete component returns both the residual stream and the hidden layer of the MLP. Appendix B.1 discusses a variation of Axioms 1 through 4 which operate on nonlinear graphs, and in this case no change is necessary. 
In general, we may treat the residual connection as computing the identity function in the abstract model; we discuss this in more detail in the response to Q1 for reviewer LJQi. ## Q4. Are the axioms for some fixed $\epsilon$? Yes, that is correct. In definition 3.1, we define an interpretation as $\epsilon$-accurate when Axioms 1 through 4 are satisfied with that fixed value of $\epsilon$; we will clarify this. ## Q5. Axioms in high-dimensional vector spaces We do state that our axioms require that the abstract states be discrete (lines 128-130), however we will improve clarity. No such restriction applies to the concrete model. While lines 410-420 illustrate an important tradeoff of discretization techniques, this is not a weakness of our axioms. Noting that our equivalence class formulation for discretization generalizes comparison up to a tolerance, any evaluation approach which considers equivalence of intermediate values (essential to have any hope at resolving the extensional equivalence problems of the type discussed in [2]) will observe the same behavior. Also see response to Q2 of reviewer zQST. [2] Scheurer et al. Practical pitfalls of causal scrubbing. AI Alignment Forum. 2023. ## Q6. Why do the axioms capture explainability better than other approaches? Firstly, our axioms define clear standards to evaluate. Approaches such as causal abstraction are overly broad, making it unclear to the analyst how their interpretation should be evaluated. If we view the equalities in our axioms as tests under causal interventions, we propose a specific and natural set of interventions which ensure that both internal and output behaviors of the interpretation and the concrete model match, and moreover that the corresponding abstract and concrete components are interchangeable. 
Secondly, we were motivated to formulate our axioms in a manner which captures validation techniques already used informally in the mechanistic interpretation community to evaluate interpretations. Finally, our axioms improve upon existing approaches by considering all of internal equivalence, output equivalence, and compositionality. We discuss the importance of compositional evaluation in appendices D and E. As shown by [2], evaluating only on outputs does not ensure that the intermediate values agree. We discuss these differences in more detail in section 3. We should note that we do not claim that our axioms cannot be improved upon; rather, our work represents a first step at formalizing validation of mechanistic interpretations. We hope that work in this direction continues and that our work inspires further improvements to evaluation techniques. Regarding the specific approach mentioned, as described above, techniques which neglect the equivalence of internal values invite the problem of only establishing extensional equivalence [2]. ## Q7. Why the non-homogeneous architecture? Our goal was to identify the simplest architecture with high accuracy, which we observed with this architecture. ## Q8. What is the loss function? We use a next-token prediction objective. --- Rebuttal Comment 1.1: Comment: >... we observe that popular mechanistic interpretations such as the IOI circuit leave many key details unspecified. Circuit analyses of this type can be strengthened by evaluating using our framework as it forces the analysts to spell out all the details … I don’t think that your axiomatization helped, or was required, to see that components were underspecified in the incomplete interpretation in this example. I do not think computing epsilon values for an interpretation in a vacuum has power in showing the validity/utility of your approach to “demonstrate the applicability of these axioms for validating mechanistic interpretations”. 
Furthermore, to properly contrast between two interpretation methods (eg like the 2-sat experiment aimed to do), one needs to check that a deeper causal analysis agrees with which of the two interpretations has lower epsilon values, otherwise there is the tautology of evaluating the axioms in a vacuum. In the 2-sat experiment, the epsilon values of different axioms even disagree: In the second block hidden layer, some axioms rank “decision tree” better, and some prefer “disjunction only”. As someone who is explicitly looking for ways to compare the quality of different interpretations for the same model component, I am not convinced that I should use your approach, without empirical justification of the distinguishing power of these axioms. --- Reply to Comment 1.1.1: Comment: > I don’t think that your axiomatization helped, or was required, to see that components were underspecified in the incomplete interpretation in this example. We do not argue that our axiomatization is necessary to identify that components are underspecified. The authors of [1] specify as such in their appendix. Our point is that as our axioms require fully-specified interpretations, evaluation with them would avoid such issues in the first place. > I do not think computing epsilon values for an interpretation in a vacuum has power in showing the validity/utility of your approach to “demonstrate the applicability of these axioms for validating mechanistic interpretations.” We should note that $\epsilon$ describes the probability that the behavior of the concrete and abstract models differ, and is meaningful in itself. We may view $\epsilon$ as a form of error, in the sense that $1 - \epsilon$ is the accuracy with which the abstract model or intervened concrete model predicts the behavior of the original concrete model. In our experiments, we observe values of $\epsilon$ in the full range of 0 to 1. 
We observe values of up to 1 for bad interpretations, as well as values of 0 for interpretations which perfectly match the behavior of the model, indicating that our axioms do indeed separate valid from invalid hypotheses. Moreover, by defining natural and desirable properties which any valid mechanistic interpretation should satisfy, our axioms not only formalize the extant informal practices for validating mechanistic interpretations used by the community but also highlight the weaknesses of the existing practices. For instance, our 2-SAT analysis in Appendix E illustrates the importance of compositional evaluation as imposed by the prefix axioms—a validation step that existing mechanistic interpretability analyses lack. > Furthermore, to properly contrast between two interpretation methods (eg like the 2-sat experiment aimed to do), one needs to check that a deeper causal analysis agrees with which of the two interpretations has lower epsilon values, otherwise there is the tautology of evaluating the axioms in a vacuum. > … I am not convinced that I should use your approach, without empirical justification of the distinguishing power of these axioms. We should note that our replaceability axioms *already express* such a causal analysis, of a form similar to those used in existing mechanistic interpretability works. In particular, they explicitly test the effect of intervention on the concrete state as a function of the abstract state. While we agree with the reviewer that a thorough evaluation comparing the distinguishing power of the axioms with that of other approaches would be useful, we emphasize that our work is the first of its kind and can spur further efforts at formalizing mechanistic interpretability. We believe such a formalization is valuable in its own right. > In the 2-sat experiment, the epsilon values of different axioms even disagree: In the second block hidden layer, some axioms rank “decision tree” better, and some prefer “disjunction only”. 
This is not a flaw of our axioms, and is in fact a desirable property. This is why an interpretation is $\epsilon$-accurate only when it is $\epsilon$-accurate in terms of *all* axioms. If all axioms agreed, they would be testing equivalent properties, and only one would be necessary.
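For concreteness, the $\epsilon$ discussed above can be read as an empirical disagreement rate. The following is an illustrative sketch, not the paper's code, and it covers only the output-equivalence check (the prefix/replaceability axioms would additionally intervene on intermediate states in the same spirit); `concrete`, `abstract`, and `alpha` are stand-ins for a concrete model, its proposed abstract interpretation, and the abstraction function.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_epsilon(concrete, abstract, alpha, inputs):
    # Monte Carlo estimate of P[alpha(concrete(x)) != abstract(x)]: the
    # probability that the discretized output of the concrete model
    # disagrees with the human-interpretable abstract model.
    disagree = sum(alpha(concrete(x)) != abstract(x) for x in inputs)
    return disagree / len(inputs)

# Toy instance: the "concrete" model computes (a + b) mod 7 through a noisy
# continuous circuit; the interpretation claims it is exact mod-7 addition.
def concrete(x):
    return (x[0] + x[1]) % 7 + rng.normal(0, 0.2)   # continuous, imperfect

def abstract(x):
    return (x[0] + x[1]) % 7

alpha = lambda v: int(round(v)) % 7   # discretize to the nearest residue

inputs = [tuple(rng.integers(0, 7, 2)) for _ in range(2000)]
eps = estimate_epsilon(concrete, abstract, alpha, inputs)
print("estimated epsilon:", eps)
```

For a noiseless concrete model the estimate is exactly 0, and for a deliberately wrong interpretation it approaches 1, matching the 0-to-1 range of observed $\epsilon$ values reported above.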
Summary: This paper introduces a set of axioms aimed at formalizing mechanistic interpretability for neural networks, inspired by abstract interpretation concepts in program analysis. The authors define mechanistic interpretations as human-interpretable programs that approximately replicate the computations of neural networks. To validate such interpretations, they propose six axioms that focus on input-output equivalence and component-level fidelity. The paper applies these axioms in two case studies: a previously known transformer-based modular arithmetic model and a novel transformer trained to solve the 2-SAT problem. Empirical analysis confirms the axioms can quantitatively validate mechanistic interpretations, contributing a structured evaluation framework. Claims And Evidence: The main claims of formalizing and empirically validating mechanistic interpretations through axioms are reasonably supported. However, critical claims about the general applicability and scalability of the proposed axiomatic framework beyond simplistic algorithmic tasks lack clear and convincing evidence. While results on toy examples (2-SAT, modular addition) are encouraging, the absence of larger or practically relevant neural network benchmarks diminishes the robustness of broader claims. Methods And Evaluation Criteria: The proposed axioms and evaluation methodology make conceptual sense for assessing mechanistic interpretations on small-scale, algorithmically transparent models. However, the chosen tasks—modular arithmetic and 2-SAT problems—are relatively trivial compared to practical neural network applications, limiting the generalizability and real-world relevance of the proposed evaluation criteria. Theoretical Claims: The theoretical claims are clearly presented, and definitions of the axioms appear sound. I reviewed the definitions and the axiomatic formulations carefully. While the theoretical framing seems correct, no significant issues emerged. 
However, the theory primarily covers simple computational models, and extending it theoretically or empirically to more complex and realistic architectures would enhance its value significantly. Experimental Designs Or Analyses: The experimental validation provided is sound but limited. The analyses (attention patterns, decision tree extraction for neuron activations) for the chosen examples are adequate, though they only apply to very simple Transformer architectures trained on straightforward tasks. The robustness of these axioms when applied to larger and more complex datasets, or real-world tasks, is unclear. More diverse experimental setups would substantially strengthen the paper. Supplementary Material: I reviewed parts of the supplementary material, including proofs and additional experiments provided in Appendices A, B, C, and G. The supplementary material is comprehensive, clear, and supports the main claims well. Relation To Broader Scientific Literature: The paper situates itself effectively within existing literature, clearly distinguishing its contribution from related approaches such as causal abstraction and causal scrubbing. It successfully positions its axiomatic framework as complementary and extending prior works by emphasizing compositional evaluation. Essential References Not Discussed: The paper neglects discussion of broader interpretability frameworks beyond the program-analysis-inspired perspective. Moreover, broader causal interpretation literature could have further contextualized and strengthened the motivation behind formal axiomatic approaches. - Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks." International conference on machine learning. PMLR, 2017. - Mueller, Aaron, et al. "The quest for the right mediator: A history, survey, and theoretical grounding of causal interpretability." arXiv preprint arXiv:2408.01416 (2024). 
Other Strengths And Weaknesses: The paper's strength lies in providing clear formalization and axioms for validating mechanistic interpretations, filling an important gap in the literature. The theoretical clarity and detailed empirical evaluation on chosen tasks are commendable. However, the primary weaknesses include: - Limited practical relevance due to simplistic and overly specialized choice of tasks (2-SAT and modular arithmetic). - Lack of demonstration or discussion on scalability to real-world, complex neural networks and tasks. - The approach might become less meaningful or too restrictive for networks with significantly richer internal structures or continuous-valued intermediate states. Other Comments Or Suggestions: None. Questions For Authors: - How does the proposed axiomatic approach scale to larger, real-world models, particularly when intermediate states are high-dimensional continuous vectors rather than discrete symbols? - Could you provide concrete guidance on determining appropriate abstraction functions \alpha and concretization functions \gamma in realistic settings with large neural networks? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments; we respond in detail below. ## Q1. Case studies are too simple; it is unclear whether the approach is applicable to larger models We note that our framework is already compatible with larger models and more complex architectures, and we emphasize that our main contribution is our evaluation framework (i.e., axioms) for validating mechanistic interpretations and not techniques to derive mechanistic interpretations. Hence, any increased difficulty of deriving mechanistic interpretations on larger models does not affect the applicability of our approach. See our response to Q1 by reviewer LJQi for more details. ## Q2. How does the proposed axiomatic approach scale to larger, real-world models, particularly when intermediate states are high-dimensional continuous vectors rather than discrete symbols? As noted above, our approach is compatible with larger models. We note that the intermediate states of the concrete model are, in all cases (including for the 2-SAT and modular arithmetic models), high-dimensional continuous-valued vectors. Our requirement that the intermediate states be discrete is only for the abstract model. In particular, we require this as we agree that mechanistic interpretations cannot hope to achieve equality of high-dimensional continuous vectors in Axioms 1 and 2. However, we emphasize that our requirement is natural for useful mechanistic interpretations, in particular, as any interpretations which operate on high-dimensional continuous vectors will not generally be human-interpretable and hence will be of limited utility. ## Q3. Essential references not discussed Thank you for the references; we will incorporate them into the paper. We should note that while [1] proposes an axiomatic framework for interpretability, it considers input interpretability only. 
[2] is primarily a survey of the architectural components interpreted and techniques such as probing for identifying the representations of abstract values, and reflects the class of concrete programs which $\lambda_T$ must represent. [1] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks." International conference on machine learning. PMLR, 2017. [2] Mueller, Aaron, et al. "The quest for the right mediator: A history, survey, and theoretical grounding of causal interpretability." arXiv preprint arXiv:2408.01416 (2024).
Summary: This paper is a first effort to draw a parallel between mechanistic interpretability and abstract interpretation from programming language theory. This is a natural analogy, and it's very exciting to see contact between these two areas. The authors introduce four axioms for how an abstract interpretation of a neural network should operate. Then, they present two case studies. In the first, they train a two-layer transformer on SAT solving and then develop a mechanistic interpretation by looking at attention heads and MLP activations. In the second, they reevaluate the modular arithmetic algorithm presented in Nanda (2022) under their framework. Both analyses are successful. Claims And Evidence: There are some claims about connections to causal scrubbing/abstraction that I would be curious to hear more about. What are the issues that come from only adhering to some of your axioms, and what are some precise examples where this approach differs from causal abstraction? Maybe you could add something about this to your thorough appendix! Methods And Evaluation Criteria: I would say that the weakest part of the paper is that you don't include baseline evaluations that help contextualize your error terms. Sure, some of the error terms look really low, but without context new metrics are very difficult to understand! What about the larger error you see in the MLP for the SAT solving mechanistic interpretation? Is there some evaluation metric that is more similar to interchange intervention accuracy from causal abstraction stuff? Getting an impression of how much of the model behavior your mechanistic interpretation matches would be great! Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design seems very good to me. Easy to understand and well-written. Supplementary Material: The appendix looks amazing! Plenty of interesting extra details and figures; it was enjoyable to scroll through. 
Relation To Broader Scientific Literature: Mechanistic interpretability is in need of more theoretical work that grounds out the field in existing theoretical frameworks. Prior work looks to formal theories of causality, but this work begins to build a bridge to programming language theory, which is a rich and well-studied field with formal foundations that mechanistic interpretability could use! Essential References Not Discussed: You should check out this paper: https://arxiv.org/pdf/2301.04709, which goes much further into the causal abstraction theory and connection to mechanistic interpretability. In particular, I would be curious what you make of Remark 35 given that you claim in this paper that causal abstraction lacks the concretization and abstraction operations. I think you would find it deepens the connections to this work greatly! Other Strengths And Weaknesses: This paper is an original take on mechanistic interpretability and brings over exciting new tools and ideas from programming language theory. The experiments are simple, but they clearly demonstrate the relevant concepts. The biggest weakness of this paper is that it's evaluated on toy models, but I would say the primary contribution is the new framework and concepts that they are introducing to mechanistic interpretability. Other Comments Or Suggestions: N/A Questions For Authors: Could you give an intervention-based definition for the axioms you defined? Or maybe just explain whether/how interventions are being performed in your experiments on transformers? I think that interventions are being performed for each of the axiom verifications, but it would be cool to have it spelled out. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your detailed comments, and for your strong support for our work! We respond in detail below. ## Q1. What happens when only some of the axioms are adhered to? In appendices D and E, we include a discussion of what happens when we consider component axioms (Axioms 2 and 4) alone; in this case, we do not consider compositionality of the interpretation. Likewise, considering replaceability alone invites the problem of the interpretation being extensionally equivalent (equivalent in terms of input-output behavior) to the model under analysis but lacking intensional equivalence (equivalence in terms of internal structure) [1]. [1] Scheurer et al. Practical pitfalls of causal scrubbing. AI Alignment Forum. 2023. ## Q2. How should the $\epsilon$ values be contextualized? We may view the $\epsilon$ as a form of error, in the sense that $1 - \epsilon$ is the accuracy with which the abstract model or intervened concrete model predicts the behavior of the original concrete model. In our experiments, we observe values of $\epsilon$ in the full range of 0 to 1. We observe values of up to 1 for bad interpretations, as well as values of 0 for interpretations which perfectly match the behavior of the model (the only reason the paper does not report any $\epsilon$ values of 0 is because we compute Clopper-Pearson confidence intervals). In this sense, $\epsilon$ describes the probability that the behavior of the concrete and abstract models differ, and is meaningful in itself. We should add that $\epsilon$ is, in addition, useful for relative comparisons, identifying which interpretation better matches the underlying behavior of the model. ## Q3. Relationship of our framework with causal abstraction and causal interventions We will make a note about Remark 35 of [2], thank you for the suggestion! However, it is important to note that $\tau^{-1}$ may not always be a feasible choice for concretization. 
In particular, $\tau$ will not be invertible in general, and hence this necessitates application of set semantics, which may be infeasible in the concrete domain, and hence for replaceability. In addition, we should note that it is not necessarily the case that abstract interpretations align individual high-level variables with individual low-level variables. While our axioms do not exactly follow the structure of interchange intervention (in particular, the interventions are not derived by substituting intermediate states from different inputs), they may be regarded as performing a specific class of causal interventions, which are soft interventions of the type described in [2]. The replaceability axioms may be viewed as interventions on the original concrete model. If we consider component replaceability and $x_i$ is the input to the $i^{\text{th}}$ concrete component, the intervention performed in component replaceability replaces the constraint $x_{i+1} = d_t[i](x_i)$ with the constraint $x_{i+1} = (\gamma_i \circ d_h[i] \circ \alpha_{i-1})(x_i)$. The equivalence axioms may be defined as analogous soft interventions on the corresponding abstracted prefixes of the concrete model. From that perspective, our axioms present a principled choice for a standardized set of causal interventions, which we believe is very valuable to the mechanistic interpretability community. In particular, while frameworks such as that of [2] are very broad, that breadth means that there is no clear standard choice for evaluation; hence, causal abstraction analyses continue to emphasize interchange intervention accuracy, which does not directly evaluate the equivalence of internal representations. [2] Geiger et al. Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability. 2024. ## Q4. The framework is evaluated only on toy models. We do not intend the case studies presented to serve as an evaluation of our framework, but as an illustration of how it may be applied. 
Our framework can indeed be used to validate interpretations of larger models; see the response to Q1 of reviewer LJQi for more details. As the reviewer notes, our primary contribution is the new framework for evaluation of mechanistic interpretations and not particular techniques to derive them.
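To make the $\epsilon$ estimates discussed in Q2 above concrete: a one-sided Clopper-Pearson upper confidence bound on the disagreement probability between the concrete and abstract models can be computed by bisection on the exact binomial CDF. The sketch below is an illustration only; the function names and the 95% one-sided setup are assumptions, not the paper's implementation:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson_upper(k: int, n: int, alpha: float = 0.05) -> float:
    """One-sided Clopper-Pearson upper bound on the disagreement rate:
    the smallest p with P(X <= k; n, p) <= alpha, found by bisection
    (the binomial CDF is decreasing in p)."""
    if k == n:
        return 1.0
    lo, hi = k / n, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Even with zero observed disagreements over 1000 test inputs, the
# reported bound on epsilon stays strictly positive:
print(round(clopper_pearson_upper(0, 1000), 4))  # 0.003
```

Note how a perfect empirical match (zero observed disagreements) still yields a strictly positive bound, shrinking roughly as $3/n$ with the number of validation inputs, consistent with the rebuttal's remark that no $\epsilon$ value of exactly 0 is reported.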
Summary: The paper introduces a formal framework for assessing mechanistic interpretations of neural networks. The authors propose a set of axioms inspired by abstract interpretation from program analysis to systematically evaluate whether a given mechanistic interpretation accurately captures a model’s internal computations. The proposed framework emphasizes compositionality, requiring that each step in the interpretation aligns with the corresponding computations in the network. The axioms are validated through case studies on Transformer-based models trained to solve the modular addition and the 2-SAT problem. Experimental results demonstrate that the axioms provide a structured way to evaluate interpretability quantitatively. Claims And Evidence: The major claims made by the paper are: 1. Mechanistic interpretability can be rigorously defined through a set of axioms that ensure both input-output fidelity and internal component alignment. 2. Valid interpretations should respect compositionality, meaning that replacing parts of a model with their interpreted counterparts should minimally affect outputs. 3. The proposed axioms can be empirically validated using statistical tests. The claims are supported through experimental validation. The case studies show that the axioms can characterize valid mechanistic interpretations. Methods And Evaluation Criteria: The evaluation methods used in the paper are reasonable and align with the goal of assessing mechanistic interpretability. The authors apply their framework to two models, including an existing modular addition Transformer and a novel 2-SAT-solving Transformer. These evaluation strategies provide the empirical foundation for the proposed axiomatic approach. Theoretical Claims: The paper argues that mechanistic interpretations can be formally validated using an axiomatic approach. The theoretical framework is sound, and the derivations appear correct. 
Experimental Designs Or Analyses: The experimental setup is well-structured and supports the claims: 1. The 2-SAT model’s interpretation is carefully analyzed using attention patterns, decision trees, and abstraction-concretization mappings. 2. The modular addition model’s interpretation is validated through discretization techniques and statistical checks. 3. The impact of different abstraction choices (e.g., disjunction-only vs. full decision tree interpretations) is explored, demonstrating trade-offs in interpretability fidelity. Supplementary Material: Yes, the paper includes extensive appendices detailing additional analyses and experimental results. Relation To Broader Scientific Literature: The work extends prior research on mechanistic interpretability by providing a formal framework. Essential References Not Discussed: The paper discusses the most relevant work in mechanistic interpretability. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and presents a clear, formal framework for mechanistic interpretability. 2. The proposed axioms provide a structured way to evaluate interpretations, filling a gap in prior work. 3. The experimental validation is thorough, with strong empirical support for the claims. Weaknesses: 1. The method primarily evaluates small-scale models trained on synthetic tasks (e.g., modular arithmetic and 2-SAT). It remains unclear how well this approach generalizes to larger, practical models. 2. Following 1, the axiomatic framework may require significant manual effort when applied to real-world models. Other Comments Or Suggestions: N/A. Questions For Authors: 1. How sensitive are the axioms to the choice of abstraction and concretization functions? 2. Are the proposed axioms sufficient to validate mechanistic interpretations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments; we respond in detail below. ## Q1. It is unclear how well this approach generalizes to larger models We emphasize that our key contribution is our framework (i.e., axioms) for evaluation of mechanistic interpretations and not particular techniques to derive them. Hence, while deriving/finding mechanistic interpretations is more difficult on larger models, it does not impact the applicability of our approach. As an illustration, consider the application of our framework to IOI [1]. The IOI circuit for GPT-2, at a high level, identifies the indirect object by identifying all names in a sentence, filtering out any duplicated names, and selecting the remaining name. The analysis in [1] is at the level of attention heads, and consists of five broad categories of heads: 1. **Previous token heads**, which copy information from the prior token 2. **Duplicate token heads**, which identify whether there exists any prior duplicate of the current token 3. **Induction heads**, which serve the same function as duplicate token heads, mediated via the previous token heads 4. **S-inhibition heads**, which output a signal suppressing attention by name mover heads to duplicated names 5. **Name mover heads**, which copy names except those suppressed by the S-inhibition heads for output Each of these attention heads computes a well-defined human-interpretable function and is representable in our framework; each may be modeled as an independent node in the computational graph. Each depends on the residual stream flowing into the corresponding layer; the subsequent residual stream is then a function of the preceding residual stream and all interpreted attention heads. As only attention heads are interpreted, the remainder of a block is an identity function in the abstract model. In this way, we can construct isomorphic concrete and abstract computational graphs for arbitrary circuits. 
Appendix B.1 describes how the resulting computational graphs may be linearized, allowing the application of Axioms 1 through 4, and how to extend our axioms to operate natively on nonlinear computational graphs. Moreover, our framework provides a principled approach for evaluating a number of key questions left by [1] for future work. In particular, the authors of [1] hypothesize that S-inhibition heads output relative, and not absolute, positions of duplicated names, but do not clearly demonstrate that this is the case. Applying our axioms to the corresponding abstract models would permit the analyst to identify the better hypothesis. However, we again emphasize that our focus is on the evaluation framework, and not methods for deriving mechanistic interpretations. Thus, while the analysis of GPT-2 Small and other larger models is compatible with our axioms, conducting such an analysis and deriving mechanistic interpretations is out of scope for our work. [1] Wang et al. Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small. 2022. ## Q2. Significant manual effort may be necessary We agree that significant effort is necessary for the analyst to derive a mechanistic interpretation, but it should be noted that mechanistic interpretability is in general a time-consuming and highly manual process. While automated approaches to deriving interpretations are an interesting direction of research in their own right, our work is about evaluation of derived mechanistic interpretations and not about techniques to derive an interpretation. ## Q3. How sensitive are the axioms to the choice of abstraction and concretization functions? The axioms may be sensitive to this choice, but we should note that these are essential components of the interpretation; interpretations which claim that the same abstract function is computed over different representations are not the same, and hence their evaluations should not be either. 
It is the analyst's responsibility to choose these functions appropriately, and our axioms are useful for identifying which choice more closely matches the behavior of the model. ## Q4. Are the proposed axioms sufficient to validate mechanistic interpretations? Yes, our axioms are sufficient, presenting a strong set of criteria to ensure that the behavior of the concrete model and the claimed mechanistic interpretation are interchangeable and behave in the same way. We recognize that stronger axioms are possible, and, in particular, we propose two additional axioms, Axioms 5 and 6, in the appendix. These strengthen the evaluation, but are difficult to operationalize, and we believe that Axioms 1 through 4, as presented in the main body, already constitute an effective evaluation valuable to the mechanistic interpretability community. Our work represents a first step in the direction of formally validating mechanistic interpretations. We hope that our work inspires a conversation in the community and leads to further improvements to evaluation techniques.
Simplicity Bias and Optimization Threshold in Two-Layer ReLU Networks
Accept (poster)
Summary: This is a theoretical paper seeking to explain the phenomenon that in some situations overparameterized models, once the number of noisy training samples is large, fail to interpolate the training data and instead converge to a solution that minimizes the test loss. This was observed with in-context learning and with diffusion models. In this paper, the architecture is overparameterized two-layer ReLU networks, and the focus is on training them from a small initialization by gradient flow on noisy data which is labelled by a linear teacher and which satisfies some further simplifying assumptions. The main result confirms the motivating phenomenon, and its proof provides further insights, pinpointing an early-phase alignment of neurons as the principal cause. Another contribution is a set of concentration bounds for the extremal vectors that drive the alignment process, and the assumptions here are less restrictive than in the main result. The paper also reports and discusses numerical experiments in setups related to but extending the setting of the main theoretical result. Claims And Evidence: Full proofs of the theoretical results, which are the paper's main contributions, are provided in the appendix. In addition, the key proofs are sketched in the main text. For the numerical experiments, sufficient details are provided so that it should not be difficult to reproduce them. Methods And Evaluation Criteria: The experiments explore several settings that go beyond the assumptions for the theoretical results, and those include considering the GeLU activation and the Adam optimizer. Although it could have been interesting to investigate real datasets, diffusion models, and in-context learning in order to link the theoretical results more firmly to practical situations, I think the level of experimental exploration that the authors chose to perform makes sense for this theoretical work, which seems to be the first in this direction. 
Theoretical Claims: Yes, and I am convinced that they are correct. Experimental Designs Or Analyses: Yes, as far as possible. Supplementary Material: I read the appendix of the paper. Relation To Broader Scientific Literature: The paper builds on Boursier and Flammarion 2024, and on a number of recent works in the literature that investigate the training dynamics of one-hidden-layer ReLU and other homogeneous neural networks from a small initialization. However, as far as I know, the link between neuron alignment in early training and generalisation at the expense of interpolation is for the first time established rigorously in this work. Moreover, the theoretical results in this paper are not obvious and do not follow easily from the previous works. Essential References Not Discussed: I am not aware of any. Other Strengths And Weaknesses: The paper is well written and the main result is accompanied by a detailed discussion. The theoretical results are non-trivial, and their proofs are provided in the appendix. The concentration bounds result may indeed be useful for future work. The experimental results are interesting, and their discussion is informative. The gap between the theoretical setting in this work and the empirically observed phenomenon with in-context learning and diffusion models is large, and it is not yet clear whether and how the properties of the training dynamics identified here are related to what actually happens in those practical situations. (E.g., it might be the case that the stability issues studied by Qiao et al. in NeurIPS 2024 are at play there to a greater extent than the neuron alignment in early training.) Other Comments Or Suggestions: In the References, check whether papers cited as on arXiv have been published, and consider providing a clickable link for each item. Questions For Authors: I do not have any at this point. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful feedback. > The gap between the theoretical setting in this work and the empirically observed phenomenon with in-context learning and diffusion modes is large, and it is not yet clear whether and how the properties of the training dynamics identified here are related to what actually happens in those practical situations. (E.g. it might be that case that the stability issues studied by Qiao et al. in NeurIPS 2024 are at play there to a greater extent than the neuron alignment in early training.) We agree that it remains unclear whether empirical observations in more complex architectures are better explained by our theoretical results or those of Qiao et al. (2024). Evaluating this in real-world settings would require extensive experiments beyond the scope of our primarily theoretical contribution, as also noted by the reviewer. That said, Section A.5 in the appendix suggests that in our toy model—while not necessarily representative of more complex architectures—non-convergence appears to be driven by early alignment rather than stability issues. We view our work as an alternative perspective to that of Qiao et al. (2024) in explaining non-convergence. In practice, it is likely that both phenomena contribute to the observed behavior in real-world settings. > In the References, check whether papers cited as on arXiv have been published, and consider providing a clickable link for each item. We will update the references accordingly.
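As a companion to the experimental discussion above, here is a minimal sketch of the kind of toy setup described in the reviews: an overparameterized two-layer ReLU network trained from small initialization on noisy linear-teacher data. All dimensions, the learning rate, and the step count are illustrative choices, not the paper's settings, and plain gradient descent stands in for gradient flow:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 10, 100, 50                       # input dim, samples, hidden width
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)            # unit-norm linear teacher
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)   # noisy teacher labels

# Small initialization (the feature-learning regime the reviews describe)
W = 1e-3 * rng.standard_normal((m, d))      # hidden-layer weights
a = 1e-3 * rng.standard_normal(m)           # output weights

def loss(W, a):
    return 0.5 * np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2)

lr, steps = 0.02, 8000
init_loss = loss(W, a)
for _ in range(steps):                      # plain gradient descent on 0.5*MSE
    h = np.maximum(X @ W.T, 0.0)            # (n, m) ReLU activations
    err = h @ a - y                         # (n,) residuals
    grad_a = h.T @ err / n
    grad_W = ((err[:, None] * (h > 0)) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

# Directional alignment: cosine between grown neurons and the teacher
norms = np.linalg.norm(W, axis=1)
active = norms > 0.1 * norms.max()
mean_cos = float(np.abs(W[active] @ w_star / norms[active]).mean())
print(f"loss {init_loss:.3f} -> {loss(W, a):.3f}, "
      f"mean |cos| of active neurons = {mean_cos:.2f}")
```

Printing the mean absolute cosine between the grown neurons and the teacher direction gives a crude read on the directional alignment the reviews discuss; sweeping the sample size $n$ would be the natural next step to probe the optimization threshold.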
Summary: - In the context of two-layer ReLU networks, the paper theoretically explores the issue where trained models get stuck in spurious local minima of the training loss as the number of training samples exceeds a certain threshold. - It is demonstrated that networks might converge towards simpler solutions rather than interpolating the training data, a phenomenon termed directional alignment. - It is pointed out that this type of simplicity bias is indeed beneficial and enhances the generalization of trained models. Claims And Evidence: The theoretical claims are supported by mathematical proofs. Methods And Evaluation Criteria: An experiment with a toy model is presented to demonstrate the theoretical result. Theoretical Claims: I did not check the correctness of the proofs due to the time limit. Experimental Designs Or Analyses: The experiment on the training of overparametrized two-layer neural networks shows that the behavior of the training and test losses is indeed consistent with the desired regimes of generalization. Supplementary Material: No, due to the limited time. Relation To Broader Scientific Literature: The paper advances the previous work on the early alignment phenomenon by further indicating that this bias is beneficial to the generalization capability of the model. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: - The paper makes original contributions on top of (Boursier and Flammarion, 2024)'s work. - The paper seems technically sound to me, even though I did not carefully check the proofs of the theoretical claims. Other Comments Or Suggestions: Maybe the theorems and propositions could be rephrased in plainer, less mathematically rigorous language for clarity. Questions For Authors: I don't have other questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful feedback. > the theorems and propositions could be rephrased in a more accessible and less mathematically rigorous manner to improve clarity We will enhance the clarity of our results in the revised version. Specifically, we will include an informal statement of Theorem 4.1 at the end of the introduction. However, we do not find it feasible to do the same for Section 3, as it requires prior knowledge of extremal vectors. More precisely, we will make the following revisions regarding the clarity and organization of the paper: - **Clarification of key results**: We will include an informal statement of Theorem 4.1 at the end of the introduction. - **Emphasis on the goal of Section 3**: This section aims to provide a simple characterization of extremal vectors (up to $\sqrt{\frac{d}{n}}$ terms) when the number of training samples is large. This serves two purposes: (1) forming the first step in the proof of Theorem 4.1 and (2) offering a general framework that may be useful in future research, as noted by Reviewer TN6U. - **Additional clarification of key results**: We will add an explanation of how Proposition 3.1 relates to and complements Theorem 3.1. - **Discussion of main implications**: We will briefly summarize the key implications of Theorem 4.1 at the beginning of the discussion section. - **Expanded discussion in the appendix**: We will elaborate on our work’s connections to related literature, particularly regarding the double descent phenomenon. Additionally, we will compare the NTK/lazy training regime with the feature learning/mean-field regime. We appreciate the reviewers' valuable suggestions and look forward to refining our work accordingly.
Summary: The paper theoretically studies how overparameterized networks converge to simpler generalizing solutions (as opposed to interpolating the training data) when there are sufficiently many training samples. They do this by studying early alignment, where networks align their weights to the directions of the data early in training before changing in magnitude. They show that, as the number of training samples increases past an optimization threshold, early alignment is driven by the true loss and concentrates on a few key directions, yielding a simplicity bias which they show persists throughout training. Claims And Evidence: The claims made appear clear and well-supported. Methods And Evaluation Criteria: The methods seem fine. Theoretical Claims: I did not check the correctness of any proofs. Experimental Designs Or Analyses: The experimental design seems well-done and there are additional experiments included in the supplementary to support the theory in the main text. Supplementary Material: I reviewed the additional experimental results in the supplementary, which seem fine. Relation To Broader Scientific Literature: The paper shows theoretically how the number of training samples impacts the early alignment phase, leading to a simplicity bias, and shows that this persists through training. This helps elucidate how the number of training samples, parameters, and data dimensionality relate to generalizing versus overfitting (data-interpolating) solutions and the corresponding underlying mechanism. Essential References Not Discussed: Atanasov et al. (2021) seems particularly relevant to this work since they also study early alignment and how it relates to feature learning. Other Strengths And Weaknesses: Strengths: The paper is well-motivated, well-written, and studies an important topic in deep learning theory. The theory appears sound and the experiments complement the theory well and support its generality. 
Weaknesses: The paper is quite dense and it’s difficult to follow the main points at times. The results and their applications could be emphasized more in a discussion that highlights their significance. Other Comments Or Suggestions: It might be helpful to readers to have some high-level explanations of the theoretical parts as well as some visuals to help with the intuition behind the theory. There could also be more emphasis and discussion on the simplicity bias. I suggest bringing another one of the simulations from the supplementary into the main text for more experimental support of the theory. It could also be interesting to link this to double descent and “data-double descent” (Henighan et al., 2023) in a discussion. In general I suggest having a more extensive discussion relating the results of this work to existing literature. Questions For Authors: 1. Other work studying early alignment has done so in the context of feature learning and the neural tangent kernel (Jacot et al. 2019). Does data interpolation correspond to the “lazy” learning regime, or would you consider this to be something separate? Can you discuss the connection of this work to feature learning? 2. The authors state “Loss of omnidirectionality is specific to the (leaky) ReLU activation and does not hold for smooth activations.” Do other (smooth) activations exhibit early alignment and do they learn generalizing solutions, or are they constrained to data interpolation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful feedback. > Atanasov et al. (2021) seems particularly relevant since they also study early alignment and how it relates to feature learning. Thank you for pointing out this reference. We will include it in the revised version, as it provides valuable insights into the early alignment phenomenon. > It might be helpful to readers to have some high-level explanations of the theoretical parts as well as some visuals to help with the intuition behind the theory. There could also be more emphasis and discussion on the simplicity bias. I suggest bringing another one of the simulations from the supplementary into the main text for more experimental support of the theory. It could also be interesting to link this to double descent and “data-double descent” (Henighan et al., 2023) in a discussion. In general I suggest having a more extensive discussion relating the results of this work to existing literature. We appreciate these suggestions. Our discussion section already connects our results to the literature on convergence to global minima, implicit/simplicity bias, and benign & tempered overfitting. We thank the reviewer for pointing out Henighan et al. (2023), which we will discuss in the revised version. Our setting does not exhibit a data double descent phenomenon, as the observed test loss consistently decreases with the number of samples, whereas double descent is characterized by an intermediate bump in the loss curve. However, the toy experiments in Henighan et al. (2023) illustrate a similar phenomenon: for a sufficiently large number of training points, the training loss remains high while the model learns optimal features. It remains unclear though whether this high training loss stems from an underparameterized regime (i.e., the model lacks sufficient capacity to memorize the data) or if optimization fails to reach the ERM in their setup. 
Given the breadth of related topics, we initially omitted certain discussions (e.g., NTK and double descent) that we felt were less central to our study. However, based on the reviewer’s feedback, we will add a section in the appendix relating our work to these topics. Additionally, we will move Figure 3 (unless another figure is preferred by the reviewers) from the appendix to the main text, utilizing the allowed extra page. > Other work studying early alignment has done so in the context of feature learning and the neural tangent kernel (Jacot et al. 2019). Does data interpolation correspond to the “lazy” learning regime, or would you consider this to be something separate? Can you discuss the connection of this work to feature learning? There might be some confusion here. We distinguish between feature learning and the NTK/lazy regime, as they involve fundamentally different training dynamics (see Chizat et al., *On Lazy Training in Differentiable Programming* for an in-depth discussion). Our study specifically focuses on the feature learning regime with small initialization, as indicated by our initialization choice (Equation (2)), where both the inner and outer layers scale as $\frac{1}{\sqrt{m}}$. In contrast, in the NTK/lazy regime (corresponding to large initialization scales), theory predicts that interpolation should occur at convergence, which is contrary to our main result. However, empirically demonstrating this interpolation in our toy model (with large $n$) is computationally challenging, as it would require an extremely large number of parameters. We will clarify this distinction and add a discussion on NTK vs. feature learning regimes in the appendix. > Do other (smooth) activations exhibit early alignment and do they learn generalizing solutions, or are they constrained to data interpolation? Our experiments in the appendix include results with the smooth GeLU activation, where we observe similar early alignment behavior. 
While theory predicts that with an infinite number of neurons, interpolation should eventually occur, it remains unclear how many neurons are required in practice. Our experiments suggest that this number is quite large.
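The early-alignment dynamics discussed in this thread can be illustrated with a minimal toy simulation. This is a hedged sketch under assumed settings (numpy, Gaussian inputs, a linear teacher $y = \beta \cdot x$, small balanced initialization), not the paper's exact model: under small initialization, neuron directions first rotate toward the teacher direction while weight norms stay small, and norms only grow afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 15, 500, 50                 # input dim, samples, hidden neurons (toy choices)
beta = np.zeros(d); beta[0] = 1.0     # teacher direction (assumed linear target)

X = rng.standard_normal((n, d))
y = X @ beta

scale = 1e-3                          # small initialization => feature-learning regime
W = scale * rng.standard_normal((m, d))   # inner-layer weights
a = scale * rng.standard_normal(m)        # outer-layer weights

def max_alignment(W, beta):
    # best cosine similarity between any neuron direction and the teacher direction
    cos = (W @ beta) / (np.linalg.norm(W, axis=1) * np.linalg.norm(beta))
    return float(cos.max())

align0, norm0 = max_alignment(W, beta), np.linalg.norm(W)

lr = 0.05
for _ in range(600):                  # full-batch gradient descent on (1/2n)||f(X) - y||^2
    H = np.maximum(X @ W.T, 0.0)      # hidden ReLU activations, shape (n, m)
    r = H @ a - y                     # residuals
    mask = (X @ W.T > 0).astype(float)
    grad_a = (H.T @ r) / n
    grad_W = ((mask * np.outer(r, a)).T @ X) / n
    a -= lr * grad_a
    W -= lr * grad_W

align1, norm1 = max_alignment(W, beta), np.linalg.norm(W)
```

In this sketch `align1` ends up well above `align0` (the best neuron has rotated toward $\beta$), and `norm1` is far larger than `norm0`, reflecting alignment preceding norm growth.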
Summary: This paper studies the simplicity bias in regression to a two-layer ReLU network using gradient flow. The author shows that despite overparameterization, the network may converge toward simpler solutions rather than merely interpolating the data, leading to a drastic improvement in test loss. Claims And Evidence: I have a question: In lines 72 and 73, you claim that learning useful features stops before full interpolation in modern architectures (even with prolonged training). I assume this statement refers to diffusion models and large language models. Is there any reference to support this claim? Methods And Evaluation Criteria: Yes, although two-layer ReLU neural networks and gradient flow are oversimplified models used to make the problem tractable, they are accepted by the machine learning theory community. Theoretical Claims: No, but the results seem reasonable to me. Experimental Designs Or Analyses: Yes, I checked the experiment details, and they make sense to me. Supplementary Material: No. Relation To Broader Scientific Literature: This work contributes to the growing body of research on the implicit bias of overparameterized models, particularly in gradient-based optimization of neural networks. It builds on prior studies analyzing the simplicity bias in two-layer ReLU networks and extends these ideas by examining early training dynamics. However, clearer connections to empirical findings in large-scale deep learning, such as diffusion models and large language models, would strengthen its contribution. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Theoretical contributions provide insights into the simplicity bias of gradient-based optimization in overparameterized models. 2. The analysis of early training dynamics is well-motivated and connects to the broader study of generalization in deep learning. Weaknesses: 1. 
The connection to real-world large-scale models, such as diffusion models and LLMs, remains somewhat abstract and could benefit from empirical validation or discussion of practical implications. 2. The organization of the paper is somewhat unclear. Other Comments Or Suggestions: Please unify the notation in Theorem 3.1, as the mixed use of $E_{X,y}$ and $E_{\mu}$ is confusing. Questions For Authors: 1. You showed that the empirical gradient approximates the expected gradient with an error rate of $O(\sqrt{\log n/n})$ in Theorem 3.1 and analyzed the critical direction/extremal vectors of the early training dynamics near some "good" expected gradient in Proposition 3.1. I am wondering about the relationship between these two results. Is Proposition 3.1 a consequence of Theorem 3.1? 2. I am also curious about the existence of the extremal vector in Equation (6), as it depends only on the distribution $\mu$. This suggests that the extremal vector exists only for sufficiently "good" $\mu$. A natural question is: for what choices of $\mu$ does the extremal vector in Equation (6) exist? 3. What is the relationship between Section 3 and Section 4? I understand that Section 4 considers a special case of Section 3, where the data distribution $\mu$ follows a linear model. Are the lemmas in Section 3 useful for proving the lemmas in Section 4? Additionally, what is the connection between the implications of Proposition 3.1 and Theorem 4.1? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful feedback. > In lines 72 and 73, is there any reference to support this claim? Yes, this phenomenon is extensively discussed by Raventos et al. (2024). In particular, Figure 4 and the discussion at the bottom of page 7 suggest that the model does not interpolate, even when trained indefinitely. For diffusion models, Kadkhodaie et al. (2023) indicate that models are trained until convergence, yet interpolation ceases beyond a certain number of training samples. However, their experimental details do not specify the exact duration of training. > The connection to real-world large-scale models remains abstract Assessing the causes of non-convergence in real-world setups—beyond the analysis in Section A.5 for our toy model—would require extensive experiments that warrant a separate study. As reviewer TN6U noted, our primary contribution is theoretical, making such empirical investigations beyond our current scope. That said, Raventos et al. (2024) provide strong empirical evidence supporting the relevance of our work to ICL. Their experiments suggest that after a certain number of task examples, the model stops reaching the ERM solution (modeled as a Nadaraya-Watson kernel with poor generalization) and instead converges to a spurious local minimum that generalizes well, aligning with the optimal estimator. Furthermore, their findings indicate that prolonged training does not improve performance, reinforcing our claim. A similar pattern is observed in Kadkhodaie et al. (2023) for diffusion models: the training loss remains low with few samples but increases beyond a certain threshold, while the test loss continues to improve. > The organization of the paper is somewhat unclear. 
In response to the different reviewers’ feedback, we will make the following revisions regarding the clarity and organization of the paper: - we will include an informal statement of Theorem 4.1 at the end of the introduction - emphasis on the goal of Section 3 (see more below) - we will add an explanation of how Proposition 3.1 relates to and complements Theorem 3.1 - we will summarize the key implications of Theorem 4.1 at the beginning of the discussion section - we will elaborate on our work’s connections to related literature, particularly regarding the double descent phenomenon. Additionally, we will compare the NTK/lazy training regime with the feature learning/mean-field regime. We appreciate the reviewers' valuable suggestions and look forward to refining our work accordingly. > the mixed use of $\mathbb{E}_ {X,y}$ and $\mathbb{E}_{\mu} $ is confusing. We appreciate this feedback. The notation will be clarified (and unified) in the revised version. > Is Proposition 3.1 a consequence of Theorem 3.1? As stated after Proposition 3.1, “Proposition 3.1 relies on the tail bound version of Theorem 3.1 and continuity arguments.” While Theorem 3.1 alone does not directly imply Proposition 3.1, its tail bound version plays a crucial role. This distinction is why we did not present Proposition 3.1 as a corollary. We will add further clarification in the revised version. > for what choices of $\mu$ does the extremal vector in Equation (6) exist? Extremal vectors exist for any distribution $\mu$ since they correspond to critical directions of the continuous function $G$ (defined on line 196). However, if $G$ is non-differentiable, the definition of extremal vectors should be adapted to account for subgradients. Equation (6) assumes differentiability, which holds for distributions $\mu$ that are continuous with respect to the Lebesgue measure (or, more specifically, when the marginal of $x$ is continuous). > What is the relationship between Section 3 and Section 4? 
The first step in the proof of Theorem 4.1 relies on the tail bound version of Theorem 3.1. Specifically, Section 4 requires a characterization of extremal vectors in the finite-data setting, which is facilitated by first studying their infinite-data counterparts (Equation (6)) and then applying the tail bound version of Theorem 3.1 to establish their proximity. While Proposition 3.1 is not used in Section 4, it supports the idea that early alignment behavior in the infinite-data case is not significantly different from the large but finite $n$ case. Theorem 3.1 bounds $|D_n-D|$, but additional assumptions are needed to directly conclude that the extremal vectors of $G_n$ are close to those of $G$. Proposition 3.1 addresses this gap. We believe that Theorem 3.1 (and its tail bound version) and Proposition 3.1 are broadly applicable and may be useful in future work, as suggested by Reviewer TN6U. The paper is structured accordingly: Section 3 presents a general theoretical result that can be used beyond this work, while Section 4 applies it to a specific setting. We will clarify this connection in the revised version.
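The concentration behind Theorem 3.1 (the empirical gradient tracking its expectation at a rate of roughly $\sqrt{\log n / n}$) can be illustrated with a generic Monte Carlo sketch. The integrand below is a stand-in chosen to have a closed-form expectation, not the paper's actual $G_n$: for $x \sim N(0, I)$ and a unit vector $w$, Stein's identity gives $\mathbb{E}[(\beta \cdot x)\, x\, \mathbf{1}\{w \cdot x > 0\}] = \beta / 2$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
w = np.ones(d) / np.sqrt(d)           # fixed unit direction (stand-in for a neuron weight)
beta = np.zeros(d); beta[0] = 1.0     # linear teacher direction

# Closed form via Stein's identity for standard Gaussian x and unit w.
truth = beta / 2.0

def empirical_grad(n):
    """Empirical version of E[(beta.x) x 1{w.x > 0}] from n i.i.d. samples."""
    X = rng.standard_normal((n, d))
    weights = (X @ beta) * (X @ w > 0)          # y_i * indicator, per sample
    return (X * weights[:, None]).mean(axis=0)

def mean_error(n, trials=200):
    """Average L2 deviation of the empirical mean from its expectation."""
    return float(np.mean([np.linalg.norm(empirical_grad(n) - truth)
                          for _ in range(trials)]))

err_small, err_large = mean_error(100), mean_error(10_000)
# The error should shrink by roughly sqrt(10_000 / 100) = 10 between the two sample sizes.
```

The factor of roughly 10 between `err_small` and `err_large` matches the $1/\sqrt{n}$ leading behavior; the $\sqrt{\log n}$ factor in the theorem comes from uniformity over directions, which this single-direction sketch does not capture.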
Persistent Topological Features in Large Language Models
Accept (poster)
Summary: The authors of this paper explore the applicability of Zigzag filtrations in the Persistent Homology framework for feature extraction in LLM analysis. They propose building filtration on top of simplicial complexes defined by kNN-neighborhoods instead of proximity-induced cliques more commonly used in the persistent homology framework. Authors use those filtrations to analyze the evolution of internal representations of a set of texts across different layers of LLM (a text is represented by its last token; thus, a set of texts is a point cloud). They introduce a metric of similarity between layers' topologies and identify layers that contribute little to the model performance. Finally, this paper covers a method of model pruning by removing blocks of the identified layers. ## update after rebuttal I think that with the proposed changes, this paper will be an interesting contribution to the research area, although some concerns remain regarding the advantages of the proposed method compared to the "usual" simplicial filtrations. I have raised my score to 'Accept'. Claims And Evidence: Claim of “Identification of Phases of Prompt processing” is too broad - prompts can’t be truly represented with only one token (as it is done in the paper). Methods And Evaluation Criteria: Seems reasonable. Theoretical Claims: In this paper no theorems or statements requiring mathematical proofs were made. Experimental Designs Or Analyses: 1) Why was used sliding window of 5, why no other sizes were reported? 2) Is there any difference (in terms of identified 4 phases) for encoder-only models (in the paper only decoder models are covered)? Supplementary Material: The code for reproduction of the described experiments was provided in anonymized repository. Relation To Broader Scientific Literature: Application of the already existing topological method to a new problem. 
Essential References Not Discussed: I didn’t find any critically important references that were not covered in the literature review. Other Strengths And Weaknesses: Strengths: S.1. Applications of zigzag filtrations had little coverage in the literature as of now. S.2. The paper is well-written and easy to follow. Weaknesses: W.1. The title feels a little too broad. Different methods of extraction of persistent features from the internal representations of LLMs was covered in lots of previous works, while this paper introduces only one type of new filtration (new features) and explores only one downstream task. Other Comments Or Suggestions: 1) Only one task of layer pruning was explored. Some other tasks, especially those focusing on topology of individual texts, would be much appreciated. Questions For Authors: 1) Have you performed your main experiments only with 0-, and 1-dimensional topological features? 2) From what dimensionality topology of $kNN$-complexes becomes trivial? 3) Could you please provide for Figures 4 plot of average length of intervals in (classic) persistence diagrams for each of the layers for comparison? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their useful feedback. They recognize the novel approach of using zigzag persistence in the context of interpretability of NNs and the clarity of presentation. Additionally, they raise a few points which we address element-wise below: > Claim of “Identification of Phases of Prompt processing” is too broad - prompts can’t be truly represented with only one token (as it is done in the paper). While we agree with the referee that an analysis of whole sentences would allow for finer-grained information about phases, we chose to simplify the analysis computationally by adopting the common practice in studies of the geometry of representations of looking at the last token only. > Why was used sliding window of 5, why no other sizes were reported? Figure 15 shows the analysis for different sliding windows, which we use to zoom in on a phase in which accuracy drops significantly. In other ranges, the size of the window would not change results significantly, if kept small enough. > Is there any difference (in terms of identified 4 phases) for encoder-only models (in the paper only decoder models are covered)? We believe that a difference might indeed be seen for encoder-only models. For those models, considering the last token as a proxy for the whole sentence would not be correct. Along these lines, previous studies of protein language models (see e.g. Valeriani et al. 2023) have considered averaging over tokens. We defer these studies to future work. > Only one task of layer pruning was explored. Some other tasks, especially those focusing on topology of individual texts, would be much appreciated. We thank the referee for suggesting exploring different downstream tasks. Indeed previous work has considered using geometric quantities (e.g. intrinsic dimension) to identify human text from AI-generated text (Tulchinski et al 2024). 
An analysis of individual sentences would imply using our algorithm on the tokens of a prompt, i.e. without reducing the whole prompt to a single token. Such analysis would be interesting, though quite different from the present work since the manifolds in the two cases would represent rather different dynamics. > Have you performed your main experiments only with 0-, and 1-dimensional topological features? We have performed our calculations for up to 3-dimensional holes. However, as mentioned in section 4.2, 0-, 2- and 3-dimensional holes have relatively low number counts, making it hard to draw solid conclusions from them. A brief discussion on which homology dimension contains relevant information is included in Appendix C. > From what dimensionality topology of kNN-complexes becomes trivial? We find non-zero counts up to 3-dimensional holes. This is specific to kNN complexes, which are typically more connected as compared to other complexes such as e.g. Vietoris-Rips complexes. > Could you please provide for Figures 4 plot of average length of intervals in (classic) persistence diagrams for each of the layers for comparison? We apologize to the referee, but we did not understand this question. Are they asking to perform persistent homology at each layer for comparison? --- Rebuttal Comment 1.1: Comment: Thank you for your responses. > We apologize to the referee, but we did not understand this question. Are they asking to perform persistent homology at each layer for comparison? I apologize for the unclear question formulation. I was asking about the average length of lifespan intervals for simplicial filtration (Vietoris-Rips) for each of the LLM layers. I was interested in its comparison to the number of intervals (from Zigzag filtration) that "pass" through each layer (Figure 4). For me, it would be very interesting to understand whether it is possible to achieve similar results using tools based on simplicial filtrations. 
I have read other reviews and responses to them. I think that with the proposed changes, this paper will be an interesting contribution to the research area, although some concerns remain regarding the advantages of the proposed method compared to the "usual" simplicial filtrations. I am willing to raise my score to 'Accept'.
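The first step of the pipeline discussed above — building per-layer kNN neighborhoods from a point cloud of prompt representations and comparing them across layers — can be sketched in plain numpy. This is a simplified stand-in, not the authors' zigzag algorithm: instead of tracking persistent homology classes, it just measures Jaccard overlap of kNN neighborhoods between two layers, which captures the same intuition that nearby layers preserve relative positions of prompts.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors (excluding self) for each point."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)      # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]

def neighborhood_overlap(layer_a, layer_b, k=5):
    """Mean Jaccard overlap of kNN neighborhoods between two layers' point clouds."""
    na, nb = knn_indices(layer_a, k), knn_indices(layer_b, k)
    scores = []
    for i in range(len(na)):
        a, b = set(na[i]), set(nb[i])
        scores.append(len(a & b) / len(a | b))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
n, dim = 100, 3                        # toy sizes; real activations are much higher-dimensional
layer0 = rng.standard_normal((n, dim))                    # stand-in for per-prompt activations
layer1 = layer0 + 0.01 * rng.standard_normal((n, dim))    # small perturbation of layer0
layer2 = rng.standard_normal((n, dim))                    # unrelated representation

sim_near = neighborhood_overlap(layer0, layer1)   # high: neighborhoods mostly preserved
sim_far = neighborhood_overlap(layer0, layer2)    # near chance level, roughly k^2/n
```

A small perturbation leaves `sim_near` high, while an unrelated point cloud gives `sim_far` close to the chance value; in the paper's setting the zigzag barcode plays the role of this similarity, with the added ability to track individual topological features across many layers at once.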
Summary: The paper tackles the problem of understanding how LLMs work by looking at how layers sequentially transform prompts. Unlike current art that only provides static views of internal representations, the paper uses zig-zag persistence across layers obtained from simplicial complexes built using kNN. Based on their empirical analysis, the authors identify 4 phases of LLM processing. Finally, the paper shows how we can leverage insights from the initial analysis to perform layer pruning, obtaining competitive results with SOTA. --- **Post-rebuttal update** I want to thank the authors for their efforts to address my concerns, including the additional plots/experiments. After reading the other reviews/responses, I decided to keep my initial score. Claims And Evidence: The paper claims a "fast and scalable pipeline to characterize Trasformer's layers". In particular, the paper establishes that prompts are transformed according to 4 phases: - Rapid arrangement of positions - Stable, long-lived relations among prompts - A transition phase where the model refines these relations - Another phase of new rearrangements I would rank the evidence for this claim as moderate. More specifically, the distinction between the phases is a bit fuzzy and hyper-parameter-dependent — for instance, the shape of the plots in Figure 3 changes significantly with alpha. Also, the paper only reports Birth's relative frequency plots for variations of Llama (do not consider other LLMs). Also, Figure 2 aims to show "that a large amount of 1-dimensional holes are short-lived and that long-lived features appear after the first half of the model". First, it is unclear if this is the case --- e.g., the number of tuples of persistence = 5 seems similar for l=5 and l=25 (or persistence = 10 for l=5 and l=20). Also, in Appendix D1, I expected to see the same plots for different datasets and models. 
However, the paper shows persistence differences for different model combinations, and it is unclear if it should lead to the same conclusion. Methods And Evaluation Criteria: In the first set of experiments, the paper mainly considers variants of Llama. Thus, it is hard to check if the findings apply to different models. The proposed evaluation criteria seem sensible, but since they are related to the proposed descriptor, they are not standard in the literature. Also, the paper's analysis only leverages the last token embeddings as a surrogate for the latent representation of the whole prompt. For the experiments on layer pruning, the paper considers two other LLMs (Mistral and Pythia) and three benchmarks. This is less than found in other papers on layer pruning like "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect". Finally, it seems the analysis relies on 1-dim topological descriptors despite the paper mentioning $p$-dim holes. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: Please see my comments about the experiments in "Methods and Evaluation Criterion" and "Claims and Evidence". Supplementary Material: I have skimmed over the supplementary material (all sections). Relation To Broader Scientific Literature: In Section 4.3, the paper briefly mentions how its findings (the identified four phases) can be related to previous literature/findings. For instance, phase 1 has been related to local contextualization (Lad et al, 2024) and increased dimensionality (Valeriani et al, 2023). Phase 2 may be related to the decreasing dimensionality found in (Cheng et al., 2024). Overall, I found the discussion in Section 4.3 shallow. It would be good if the authors could provide a summary of new insights and how it advances our understanding of LLMs. Essential References Not Discussed: Overall, the paper does a good job covering related literature. I have no additional recommendations. 
Other Strengths And Weaknesses: **Strengths** *Novelty*: As far as I know, this is the first paper to propose using zig-zag PH to track the evolution of internal representations of LLMs and NNs in general. *Relevance*: The paper tackles the very relevant problem of understanding how LLMs work. **Weaknesses**: *Presentation*: The authors could provide more details on the zig-zag algorithm in the Appendix. For instance, how does it decide which simplex dies when two simplices get merged in the subsequent layer? Does it matter? In addition, the paper introduces a variant of the PI vectorization scheme. However, there is little discussion about the variant. In particular, the specific form of Eq. (4) is not properly motivated. In addition, why is "smoothness as a function of layers" important for the application at hand? *Results on layer pruning*. Overall, the results regarding layer pruning are not impressive. Indeed, since the competitive methods seem simpler (angular distance and bi-score), I am not sure if one would prefer this approach. Other Comments Or Suggestions: Minor issues: - Line 117: the choice for (Gromov et al, 2024) and (Men et al, 2024) is not motivated. - Based on Fig. 1, the range of $b,d$ should be {0, \dots, 2(N - 1)}. - Is $l_i=i$ in all cases? If so, why use $l_i$? Questions For Authors: 1. Do the experiments only consider 1-dim holes? Why? 2. Could the proposed approach be used to analyze (or prune) other models (e.g., CNNs)? Would it apply to settings where the embedding dimension changes across layers? 3. Why do the proposed model and baselines only cut consecutive layers (see Table 2)? 4. What novel insights does this paper bring to the community? In other words, how does this paper contribute to advancing our understanding of LLMs that were not present in prior works? 5. How does the computational cost of the proposed method compare to those of (Gromov et al, 2024) and (Men et al, 2024)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback. They recognize the novelty of our approach to analyzing LLMs and its importance in enhancing understanding of these models. They also acknowledge a proper review of existing literature. As for feedback on points to improve, we reply element-wise below: > the shape of the plots in Figure 3 changes significantly with alpha The variability in alpha is a feature of the method since it allows us to explore different regimes (i.e. short-lived vs long-lived features). This is explained in the paragraphs where phases are described. As a side note, we show that this behavior is consistent across models in App. D. > the paper only reports Birth's relative frequency plots for variations of Llama (do not consider other LLMs). [...] Also, in Appendix D1, I expected to see the same plots for different datasets and models. We have computed these plots, but did not add them to the manuscript. They can be accessed here: https://anonymous.4open.science/r/conferenceProject-019A/src/plots/Rebuttal-plots.md > [...] it is unclear if this is the case --- e.g., the number of tuples of persistence = 5 seems similar for l=5 and l=25 (or persistence = 10 for l=5 and l=20). We agree that this statement is hard to confirm by eye in Fig. 2. We are open to rephrasing it as: “Figure 2 shows that features born after the first half of the model’s depth have a higher tendency to be long-lived with respect to features born earlier on.” > [..] This is less than found in other papers on layer pruning like "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect". We have run additional benchmarks, results included here: https://anonymous.4open.science/r/conferenceProject-019A/src/plots/Rebuttal-plots.md. > how does it decide which simplex dies when two simplices get merged in the subsequent layer? Does it matter? 
The topological feature that dies is uniquely determined by the decomposition of the zigzag module provided in Equations 11-12, removing any ambiguity in its identification. > [..] the specific form of Eq. (4) is not properly motivated. In addition, why is "smoothness as a function of layers" important for the application at hand? PIs are usually linked to a choice of kernel. In our case, the PD is already amenable to a density grid, the value of density being feature counts. Intersection layers are only necessary for defining the filtration (e.g. one could use the union equivalently). This is desirable since we are interested in studying the LLM layers statistically. The smoothness is related to smoothing formigrams (Prop. C.2) in https://arxiv.org/abs/1712.04064, where they smooth barcodes defined on X (all the layers, including intersections, in our case) to barcodes on S(X) (just the LLM layers). > Overall, the results regarding layer pruning are not impressive. Indeed, since the competitive methods seem simpler (angular distance and bi-score), I am not sure if one would prefer this approach. See a reply to a similar feedback in response to reviewer zXjm. Replies to minor issues: > the choice for (Gromov et al, 2024) and (Men et al, 2024) is not motivated. These methods were chosen as it is easy to compare with ours, since the algorithms require as an input the number of blocks to be removed, which is what our criterion based on inter-layer persistence outputs. > Based on Fig. 1, the range of $(b,d)$ should be {0, \dots, 2(N - 1)}. Correct, this is a typo caused by Python notation. In fact, there are 2N-1 snapshots. > Is $l_i = i$ in all cases? If so, why use $l_i$? The notation $\ell_i$ was used to be suggestive of the fact that the indices refer to layers. Replies to “Questions for Authors”: 1. As noted at the beginning of Sec. 4.2, experiments are performed for 0,1,2,3-dim holes, however only 1-dim holes have large number counts so that a statistical study is stable. 2. 
Yes, the method can be used to analyze models with variable embedding dimension (e.g. CNNs), this is mostly because our method is distance-based and we can expect neighborhoods to shift smoothly in CNNs (see e.g. https://arxiv.org/pdf/2007.03506) 3. Even though our method allows us to cut non-consecutive layers, our results are in line with the compared methods cutting consecutive layers. If tested on more specialized prompts, our algorithm might cut non-consecutive layers for a low enough threshold (see e.g. Java code in Fig. 9). 4. The key point of our work is to build an interpretable framework that allows studying the internal reps of NNs as a whole system, rather than collecting summaries of snapshots and then combining them a posteriori. This is important for interpreting different phases of prompt processing, as argued in our work. 5. The computational cost of our method is higher than other methods, as detailed in the reply to 4Kx7. While it is an important worry for scaling up our analysis, for this specific task, it is not an issue given pruning is performed only once.
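The block-selection step discussed in this exchange can be illustrated with a small sketch. This is a generic rule assumed for illustration, not the paper's inter-layer persistence criterion: given one similarity score per layer (high meaning the layer barely changes the representation, as in the angular-distance or bi-score baselines), pick the longest consecutive run of layers above a threshold as the block to prune.

```python
def prune_block(similarity, threshold):
    """Longest run of consecutive layers whose similarity score exceeds `threshold`.

    Returns (start, end) indices of the block to remove, or None if no layer
    qualifies. A generic selection rule for illustration only.
    """
    best, start = None, None
    for i, s in enumerate(similarity + [float("-inf")]):  # sentinel closes the last run
        if s > threshold:
            if start is None:
                start = i
        elif start is not None:
            if best is None or (i - start) > (best[1] - best[0] + 1):
                best = (start, i - 1)
            start = None
    return best

# Hypothetical per-layer similarity scores for an 8-layer model.
scores = [0.2, 0.4, 0.9, 0.95, 0.92, 0.5, 0.91, 0.3]
block = prune_block(scores, 0.8)   # -> (2, 4): layers 2-4 form the longest redundant run
```

Note that the rule only ever returns one consecutive block; supporting non-consecutive cuts, as the authors mention their method allows, would instead return every run above the threshold.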
Summary: The authors introduce the concept of zigzag persistence from topological data analysis to understand how features evolve through layers. The authors aim to offer a statistical perspective on how prompts are rearranged and their relative positions changed in the representation space, providing insights into the system's operation as an integrated whole.

Claims And Evidence: The idea of using a zigzag filtration to understand the internal representations of LLMs is novel, and the authors present the concept in a way that is accessible to readers.

Methods And Evaluation Criteria: This paper focuses more on explaining the idea than on evaluation. The authors evaluate their method on four smaller, open-source models.

Theoretical Claims: This paper is heavily dependent on the idea of zigzag persistence in TDA. There are no proofs.

Experimental Designs Or Analyses: The experimental results show that persistent topological features and their similarities remain consistent across various models, layers, and hyperparameter settings within the framework, indicating a level of universality in the topological structure of LLM representations.

Supplementary Material: NA.

Relation To Broader Scientific Literature: See my questions for authors.

Essential References Not Discussed: NA.

Other Strengths And Weaknesses: The strength is the novelty of this work; the authors apply this new idea to understand the consistency of features and their flow between different layers. The weakness is that the paper is a proof of concept, so the experimental results are still limited.

Other Comments Or Suggestions: NA

Questions For Authors: My biggest question is why the idea of zigzag persistence is important/interesting to the ML community. I'm not an expert in TDA, but this looks like a niche concept to me. The improvement in accuracy of the authors' method compared to the pruning methods looks marginal. Is it possible to report SE so the interpretation of the results can be more robust?
Also, the computational cost of the authors' method seems high.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Reply to zXjm

We thank the reviewer for their useful feedback. They highlight the novelty of using zigzag persistence from topological data analysis to understand feature evolution in LLMs, appreciating the accessible presentation. They raise a few concerns, which we address below:

> My biggest question is why the idea of Zigzag is important/interesting to the ML community. I'm not an expert in TDA, but this looks like a niche concept to me

Within the TDA community, there exist several works addressing time-varying point clouds. The reason for the relatively slow progress in this field has been mainly computational: as in multi-persistence, the complexity grows rapidly when adding filtration parameters, and real-world applications are harder to implement. Nevertheless, recent advances in fast algorithms have allowed these methods to be applied more widely. In the specific case of zigzag persistence, the fast zigzag algorithm of [1] allowed us to run the pipeline on large and high-dimensional datasets (see the reply to 4Kx7 for computational complexity). Consequently, these algorithms have become feasible to apply in the framework of interpreting dynamical changes in internal representations of neural networks. In short, we would argue that recent computational advances in analysing time-varying point clouds with TDA approaches have greatly increased their relevance for the broader ML community.

> The improvement in the accuracy of the authors' method compared to the pruning method looks marginal

As the reviewer remarks earlier, the work's main objective is to connect a relatively underexploited mathematical algorithm (zigzag) to LLM interpretability; the scope of the layer pruning analysis is to show that our method can be used on a downstream task with performance on par with state-of-the-art methods.

> Is it possible to report SE so the interpretation of the results can be more robust?
There is relatively little variability in the benchmarks used for layer pruning, as it comes from bootstrapping. Consequently, our computation reveals a standard deviation of ~1e-4. We note that whenever the accuracy score is the same for our work and the cited references, the results are exactly equal, i.e., the methods suggest pruning the same layers.

> Also the computational cost of the authors' method seems expensive.

We refer to our reply to 4Kx7 for the computational costs of the algorithm. As for the specific comparison to the layer pruning methods cited in our work, our algorithm is slower, but it contains strictly more information (beyond layer pruning), as it analyzes all layers at the same time rather than considering pairwise comparisons among layers.

[1] Dey, T. K. and Hou, T. Fast Computation of Zigzag Persistence. In Chechik, S., Navarro, G., Rotenberg, E., and Herman, G. (eds.), 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics
Summary: This work introduces a framework for applying the topological descriptor zigzag persistence to analyze the internal representations of large language models (LLMs). Experiments are conducted on Llama2-7B, Llama3-8B, Mistral 7B, and Pythia 6.9B, demonstrating the effectiveness of the proposed framework in understanding the internal dynamics of LLMs and its practical utility in tasks like layer pruning.

Claims And Evidence: The authors provide experiments, statistical descriptors, and visualizations to validate the proposed framework.

Methods And Evaluation Criteria: The proposed method does make sense.

Theoretical Claims: I just briefly checked the proofs; no significant issues found.

Experimental Designs Or Analyses: The effectiveness of the zigzag PD and the effective persistence image should be evaluated further.

Supplementary Material: No

Relation To Broader Scientific Literature: Relation to TDA and neural network interpretability for large language models, with a focus on introducing topological descriptors that capture the stability and evolution of prompt relations across layers.

Essential References Not Discussed: Dynamic persistence [1], related to zigzag persistence, should be added to the references and compared with. [1] Kim, Woojin, and Facundo Mémoli. "Spatiotemporal persistent homology for dynamic metric spaces." Discrete & Computational Geometry 66 (2021): 831-875.

Other Strengths And Weaknesses: Strengths: This work provides a novel approach combining zigzag persistence, a kind of topological descriptor, with LLM analysis for tasks such as model interpretability, and the results illustrate the efficacy of the proposed method. Weaknesses: 1. The integration of the proposed method into the filtration and the tracking of the evolution of internal representations are both desired, and deserve to be appended to further validate the proposed zigzag-persistence-based method. 2.
The comparison to related TDA approaches is insufficient.

Other Comments Or Suggestions: A runtime efficiency evaluation for the proposed method is missing; it would be quite desirable to know the efficiency of the proposed method.

Questions For Authors: How does the proposed method compare to dynamic persistence?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Reply to 4Kx7

We thank reviewer 4Kx7 for their useful feedback. While recognizing the novelty of our approach in combining zigzag persistence with LLM interpretability, the reviewer correctly points out that our work should have included a broader discussion of similar TDA approaches. We agree with this comment and recognize it would improve the effectiveness of our work. Here we reply to the direct points raised by the referee in this regard, and we would be open to including part of this discussion in an edited version of the manuscript.

> Dynamic Persistence [1], related to zigzag persistence, should be added in references and compared with.

The reference is already cited in the main text as "Kim and Memoli, 2021."

> The integration of the proposed method into filtration, and the tracking of the evolution of internal representations both are desired, which deserve to be appended for further validating the proposed zigzag persistence based method.

Given the large number of points considered in this analysis, we adopt a statistical approach to study the effect of layer transformations on internal representations, where the identity of individual points is not essential. The idea of tracking the zigzag representatives using [3] could certainly be investigated. Testing this pipeline on an example in a more controlled setting would be useful in validating the proposed zigzag framework, and it would in principle allow tracking the evolution of features across layers more closely. We defer these studies to future work.

> The comparison to related TDA approach is insufficient. How does the proposed method compare to dynamic persistence?

Here are a few points of comparison: 1. We choose the kNN filtration (whereas [1] uses Rips) since it is more suitable for high-dimensional data, especially LLM representations.
The stability of kNN filtrations is discussed in [4] in the context of persistent homology; we recognize that we missed this reference in the submitted manuscript and would be open to adding it in an edited version. 2. [1] varies both a time and a scale parameter, whereas we fix the scale. Their main summary statistic (the rank invariant) involves calculating a 6-dimensional summary statistic (4 across layers and 2 across scale), which can make the analysis hard both for computational reasons and for finding an informative summary statistic. Nevertheless, varying the scale parameter as well is definitely worth investigating in this context, and the techniques in [1] would be a starting point for implementing it. 3. The work [5] from the same authors is more closely related to ours, since the maximal group diagram and the persistence clustergram described in that paper are barcodes "annotated" with the representative topological features. In that work, they fix a scale, similarly to our case.

> The runtime efficiency evaluation for the proposed method is missing, which is quite desirable to know the efficiency performance of the proposed method.

A broad assessment of computational efficiency is present in the current version (Appendix B). We agree this is an important point to address, so here are more details: the theoretical complexity of FastZigzag is O($m^{\omega}$) with $\omega < 2.37286$ for a sequence of $m$ deletions and additions, an improvement over previous algorithms with cubic complexity, which allowed us to run our experiments on point clouds of 10k points over ~30 layers. More details on the algorithm are given in the paper by Dey and Hou [2]. As for the kNN graph, its creation with a greedy algorithm is O($n^2$), with $n$ the number of data points. In total, we have O($n^2 \cdot N_{layers}$) + O($m^{\omega}$). We would be open to adding this discussion in an edited version.

[1] Kim, Woojin, and Facundo Mémoli.
"Spatiotemporal persistent homology for dynamic metric spaces." Discrete & Computational Geometry 66 (2021): 831-875. [2] Dey, T. K. and Hou, T. Fast Computation of Zigzag Persistence. In Chechik, S., Navarro, G., Rotenberg, E., and Herman, G. (eds.), 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics [3] Dey, Tamal K., Tao Hou, and Dmitriy Morozov. "A Fast Algorithm for Computing Zigzag Representatives." Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics, 2025. [4] Le, Minh Quang, and Dane Taylor. "Persistent homology with k-nearest-neighbor filtrations reveals topological convergence of PageRank." Foundations of Data Science 7.2 (2025): 536-567. [5] Kim, Woojin, and Facundo Mémoli. "Extracting persistent clusters in dynamic data via möbius inversion." Discrete & Computational Geometry 71.4 (2024): 1276-1342.
A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment
Accept (poster)
Summary: This paper explores whether GPT models, designed for next-token prediction, implicitly learn a causal world model from which sequences are generated. The authors derive a causal interpretation of the GPT attention mechanism and suggest that GPT models can be used for zero-shot causal structure learning with a proposed confidence score. The study is conducted in a controlled environment using Othello and Chess games, where a GPT model trained on real-world games is tested on synthetic data sequences of random legal moves. Results indicate that the GPT model can often generate legal next moves for out-of-distribution sequences by capturing causal structures encoded in the attention mechanism, but fails to capture causal structures when it generates illegal moves. The introduction elaborates on GPT's unexpected capabilities beyond next-token prediction, proposing that these abilities might arise from implicit learning of causal structures during pre-training due to Occam's razor, which favors simpler, more compact solutions. The paper builds on recent methods for causal interpretation in transformer models, adapting them to GPT, and investigates the correlation between GPT-generated errors and the uncertainty in representing causal structures.

Claims And Evidence: Partly. Please see questions.

Methods And Evaluation Criteria: The proposed method is simple yet effective. However, I have a question about whether the results can only be obtained through causality or if they can also be achieved using correlation-based methods. Please see questions.

Theoretical Claims: No theoretical claims

Experimental Designs Or Analyses: I have checked the soundness of the experiments.

Supplementary Material: I have checked the appendix, but not the supplementary material.

Relation To Broader Scientific Literature: To the best of my knowledge, this is the first paper introducing causality to study the world model in a controlled environment.
If the authors can demonstrate that causality is necessary, I believe it will inspire follow-up research and have a significant impact in this area.

Essential References Not Discussed: No, to the best of my knowledge.

Other Strengths And Weaknesses:

**Strengths:** 1. This paper is the first to introduce causality into the controlled study of the world model. 2. It is interesting to see that attention-weight pruning can improve legal move generation, and that legal moves are correlated with confidence.

**Weaknesses:** 1. For the results discussed in Sections 5.2 and 5.3, I would like to know whether a causality-based method is necessary. I will improve the score if the authors can address this point. (Please refer to Questions 1 and 2) 2. The authors state that "they do not explain how the board game is encoded within the attention matrices and why the attention mechanism can represent the board state". But the authors do not answer these questions clearly themselves. (Please refer to Question 3)

Other Comments Or Suggestions: In Eq (9), the LHS is a matrix while the RHS is a value. The computation of $H_{ind}$ and $H_{dep}$ should be clearly defined in the main text.

Questions For Authors: It would be interesting to explore whether correlation-based methods can reproduce the results presented in Sections 5.2 and 5.3. Some simple correlation-based methods include:

1. **Attention as the "causal structure" $\mathcal{G}$**: 1. Let $G$ be the adjacency matrix of $\mathcal{G}$. Set $G_{ij} = 1$ if $A_{ij} > \frac{\alpha}{i}$. 2. Repeat the algorithm in Section 4.3 to compute the confidence. 3. Check whether the results in Sections 5.2 and 5.3 can be reproduced.

2. **Precision matrix as the "causal structure" $\mathcal{G}$**: 1. Compute the covariance matrix based on attention $C = [D^{-1}A][D^{-1}A]^T$. 2. Recover the precision matrix $\Theta$ using an existing library (e.g., [Ref. A]), and set $G_{ij} = 1$ if $\Theta_{ij} > \alpha$.
If $C$ is well-conditioned, one can also use $\Theta = C^{-1}$. 3. Repeat the algorithm in Section 4.3 to compute the confidence. 4. Check whether the results in Sections 5.2 and 5.3 can be reproduced.

3. The authors use the last-layer attention. Why is only the last layer useful? The authors should explain the functions of the previous layers' attentions and MLPs.

[Ref. A] https://github.com/choltz95/sparse-precision-matrix-estimation/tree/master

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your thorough review and for the clear and detailed suggestions. We believe these suggestions further highlight the significance of the causality-based approach presented in the paper compared to correlation-based approaches.

# Answer to questions

## Re Question 1 and 2

Firstly, under the assumption made in the paper that hidden confounders may be present, a correlation-based method may not be applicable. The method we presented in the paper employs a causal discovery algorithm to extract a graph from the attention matrix, accounting for the possible presence of latent confounders. As described below and empirically demonstrated, accuracy in generating legal tokens is more strongly associated with the causality-based approach presented in the paper than with the correlation-based methods, as measured by the confidence score (Section 4.3).

You suggested two correlation-based methods to derive an adjacency matrix. The first is thresholding the attention matrix, and the second is thresholding the precision matrix. The first method is related to testing marginal independence (empty conditioning set), which was examined in Section 5.2.1, Figure 3a. In the second method, the precision matrix represents pairwise independence conditioned on all other nodes (only the largest possible conditioning set size, $n-2$). In contrast to these methods, constraint-based causal discovery tests independence conditioned on a range of conditioning set sizes, from 0 to the maximal possible, as needed. In Section 5.2.1, Figure 3, we demonstrate that statistical significance results obtained using empty conditioning sets differ from those obtained using size 1 (Figures 3a-3c). The aggregation of results from CI tests required by causal discovery increases the number of statistically significant cases (Figure 3d) differentiating legal from illegal move generation.
Furthermore, given a graph $\mathbf{G}$, nodes' values are $\boldsymbol{X}=\boldsymbol{X}\mathbf{G}+\boldsymbol{U} \implies \boldsymbol{X}=(\mathbf{I}-\mathbf{G})^{-1}\boldsymbol{U}$ (Equation 4). Treating attention as $\mathbf{G}$, instead of $(\mathbf{I}-\mathbf{G})^{-1}$, imposes restrictions on the graph.

### Empirical Evaluation

For each game length we calculated confidence (Section 4.3). As in Figure 5, we binned confidence values and calculated mean accuracy. Results for Chess are in the tables below. It is evident that, except for a few cases, for both correlation-based methods there is no clear trend of accuracy with respect to confidence, in contrast to the clear trend for the causality-based approach (Figure 5).

**Attention thresholding**

|Len|Bin 1|Bin 2|Bin 3|Bin 4|Bin 5|Bin 6|Bin 7|
|-|-|-|-|-|-|-|-|
|10|0.994|0.992|0.987|0.993|0.996|0.933|0.933|
|15|0.998|0.987|0.977|0.967|0.978|0.988|1.000|
|20|0.982|0.974|0.975|0.978|0.979|0.967|0.973|
|25|0.889|0.960|0.964|0.957|0.969|0.986|1.000|
|30|0.867|0.954|0.942|0.950|0.953|0.950|0.963|
|40|0.667|0.980|0.970|0.943|0.938|0.928|0.938|

**Precision-matrix thresholding**

|Len|Bin 1|Bin 2|Bin 3|Bin 4|Bin 5|Bin 6|Bin 7|
|-|-|-|-|-|-|-|-|
|10|1.000|0.988|0.988|0.989|0.994|0.990|0.988|
|15|1.000|0.982|0.977|0.980|0.982|0.979|0.975|
|20|0.961|0.967|0.971|0.967|0.973|0.969|0.963|
|25|0.941|0.956|0.964|0.963|0.966|0.967|0.980|
|30|0.961|0.953|0.949|0.963|0.967|0.920|0.961|
|40|0.945|0.940|0.936|0.934|0.925|0.882|0.933|

## Re Question 3

A graph constructed using the last-layer attention represents a graph over the output tokens. The last layer's attention output represents tokens (the pre-training loss compares output tokens to input tokens), up to a non-linear MLP mapping; the MLP has no effect on inter-token relations and therefore does not influence causal structure learning. Earlier layers represent context (such as exogenous nodes in the SCM) for their following layer. See also the explanation following Equation 8.
For example, in Othello, intermediate layers may represent the board (as exhaustively tested and found by Li et al., 2023), which serves as context for the causal graph over game moves represented by the last layer. Furthermore, recent studies empirically show that the last layers have more weight in determining the next token, specifically in Othello GPT [1] and in LLMs in general [2,3,4].

[1] How GPT learns layer by layer. Du et al., 2025.
[2] Adaptive Large Language Models by Layerwise Attention Shortcuts. Verma et al., 2024.
[3] The Mechanics of Conceptual Interpretation in GPT Models: Interpretative Insights. Aljaafari et al., 2024.
[4] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? Jin et al., 2024.

# Re Other comments or suggestions:

* In Eq 9, both the RHS and LHS are matrices. A hat over index $i$ represents the omission of the $i$-th row/column, as implicitly mentioned in the sentence before the equation. We will describe this notation explicitly.
* We will clearly define the computation of the entropy values $H_\textrm{ind}$ and $H_\textrm{dep}$.

---

Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal. My concerns have been addressed and I have raised the score.
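The two correlation-based baselines evaluated in the rebuttal above can be sketched as follows. This is a minimal illustration of the graph-extraction step only: the function names, thresholds, ridge term, and the choice of $D$ as the row-sum normalizer are assumptions, and the confidence computation of Section 4.3 is not reproduced here.

```python
import numpy as np

def attention_threshold_graph(A, alpha=0.5):
    # Baseline 1 (reviewer's suggestion): G_ij = 1 if A_ij > alpha / i,
    # restricted here to earlier tokens j < i (the causal mask).
    n = A.shape[0]
    G = np.zeros_like(A, dtype=int)
    for i in range(1, n):  # row 0 has no earlier tokens to attend to
        G[i, :i] = (A[i, :i] > alpha / i).astype(int)
    return G

def precision_matrix_graph(A, alpha=0.1):
    # Baseline 2: covariance C = (D^-1 A)(D^-1 A)^T, then threshold the
    # precision matrix Theta = C^-1 (small ridge added for conditioning).
    D_inv_A = A / A.sum(axis=1, keepdims=True)  # assume D = row-sum scaling
    C = D_inv_A @ D_inv_A.T
    Theta = np.linalg.inv(C + 1e-6 * np.eye(len(C)))
    return (np.abs(Theta) > alpha).astype(int)

# Toy causal-masked attention: lower-triangular with rows summing to 1.
rng = np.random.default_rng(0)
A = np.tril(rng.random((6, 6))) + np.eye(6)
A /= A.sum(axis=1, keepdims=True)
G1 = attention_threshold_graph(A)
G2 = precision_matrix_graph(A)
```

Both baselines produce a single adjacency matrix per sequence; as the rebuttal argues, neither aggregates independence tests over a range of conditioning set sizes the way constraint-based causal discovery does.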
Summary: N/A

Claims And Evidence: N/A

Methods And Evaluation Criteria: N/A

Theoretical Claims: N/A

Experimental Designs Or Analyses: N/A

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: While I acknowledge my limited expertise in this domain and therefore express low confidence in this review, I have several concerns that lead me to recommend rejection. My concerns are as follows:

- First, I struggle to identify the core contribution of this paper. The authors explain the relationship between GPT and causal learning, but this connection seems rather obvious. GPT's autoregressive generation is inherently causal, and its architectural design enables efficient generation through causal mechanisms. The fact that GPT can learn causal knowledge about the real world is well-established. The attention mechanism learns token similarities through its similarity matrix, from which causal structures can naturally be extracted. None of this is surprising.
- Second, the paper's treatment of "world model" lacks precision. While this term is currently popular across various domains including robotics, autonomous driving, video generation, and 3D generation, it lacks a clear definition.
- Third, the practical implications of these findings are unclear. The paper explains the relationships between GPT, causal learning, and world models, but how does this knowledge advance the field? Can it guide the development of better GPT models for real-world applications? The paper lacks substantial discussion of these practical considerations.

Due to my limited knowledge in this field, my review may be biased, and I welcome corrections from the authors and other reviewers.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your review and your perspective on the paper. We value your feedback and believe the following answers your concerns.

* Re first point. We would like to clarify that autoregressive generation is not inherently causal. The attention in GPT is often called 'causal', but this only means that a token is a function of the previous tokens in the sequence, i.e., the upper-triangular part of the attention matrix is masked (zero). However, this is an oversimplification of the broader term "causal". In the paper we show that an attention value $\mathbf{A}_{i,j}$ describes the sum of all directed paths from node $j$ to node $i$, that is, the effect node $j$ has on node $i$ accounting for all directed paths. Another difference between attention and causal graphs is the set of encoded conditional independence relations. For example, consider the causal relations $X$ causes $Z$ and $Y$ causes $Z$, and no other relations: the graph $X$ --> $Z$ <-- $Y$. The graph entails that $X$ and $Y$ are marginally independent, but dependent conditioned on $Z$. Note, however, that several causal graphs may entail the exact same independence relations (e.g., hidden confounders may be present between $X$ & $Z$ and $Y$ & $Z$ instead of causal relations). The attention matrix only represents the correlation between $X$, $Y$, and $Z$ (the level of 'attention' $Z$ gives to $X$ and $Y$) and does not represent causality. The causal interpretation in this paper demonstrates that a causal graph, including hidden confounders, can be extracted, where the attention matrix represents the total effect a node has on another through all directed causal paths. That is, the $(i,j)$ element of the attention matrix $\mathbf{A}$ represents the sum of all the directed paths from node $j$ to node $i$ in the causal graph.
We also introduce a novel confidence score for the learned graph using the entropy of p-values of statistical tests used during causal structure learning.

* Re second point. We will clarify in the paper that a 'causal world model' describes the causal mechanism underlying the observations as well as the probability distribution of hidden variables, as commonly used in the causal inference literature. Specifically, we will note that it is assumed that underlying each input sequence there exists a corresponding structural causal model (SCM, Section 3.2). That is, the causal world model consists of the causal mechanism that generates tokens.

* Re third point. We discuss the implications of the findings in Section 6 (Conclusions) and in the 'Impact statement' (after Section 6; part of the ICML 2025 template, which does not count towards the page limit). The implications include a) the ability to perform zero-shot causal discovery, which can be beneficial in various scientific domains (e.g., understanding the mechanism by which a GPT trained on the protein space generates novel protein sequences [1]), b) measuring the uncertainty of attention heads per input sequence, c) calibration during training (accuracy vs. causal confidence), and d) better human supervision by examining the causal mechanism by which a token is generated.

[1] Ferruz, N., Schmidt, S., & Höcker, B. (2022). ProtGPT2 is a deep unsupervised language model for protein design. Nature Communications, 13(1), 4348.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors' detailed response! My concern has been resolved, and I will raise the score.
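A quick numerical check of the path-sum identity invoked in the reply above: for a DAG with strictly triangular weighted adjacency $\mathbf{G}$, the $(i,j)$ entry of $(\mathbf{I}-\mathbf{G})^{-1}$ sums the weights of all directed paths from node $j$ to node $i$. The toy graph below is illustrative, not taken from the paper.

```python
import numpy as np

# Toy SCM over 4 nodes; G is strictly lower-triangular (edge j -> i stored
# in G[i, j]), so the node ordering is causal, as for GPT tokens.
G = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.0],   # edge 0 -> 1
    [0.3, 0.2, 0.0, 0.0],   # edges 0 -> 2 and 1 -> 2
    [0.0, 0.0, 0.4, 0.0],   # edge 2 -> 3
])
n = len(G)

# (I - G)^{-1} = I + G + G^2 + ... (the series terminates: G is nilpotent),
# i.e., entry (i, j) accumulates the weight of every directed path j -> i.
total_effect = np.linalg.inv(np.eye(n) - G)
series = sum(np.linalg.matrix_power(G, k) for k in range(n))

# Effect of node 0 on node 3: the two paths 0->2->3 and 0->1->2->3.
effect_0_to_3 = 0.3 * 0.4 + 0.5 * 0.2 * 0.4
```

Because the masked attention matrix is triangular, the same series argument goes through even when some nodes are latent confounders, which is the point made in the quoted paper sentence about $(\mathbf{I}-\mathbf{G})^{-1}$ remaining triangular.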
Summary: The paper investigates whether a GPT trained for next-token prediction implicitly learns a causal world model, using an interpretation of the attention matrix as encoding a linear Gaussian SCM, first proposed in Rohekar et al. (2024). They introduce a causal discovery method for learning partially oriented causal graphs from the attention matrix using conditional independence testing. To indicate that the network is learning to represent a causal graph for a given sequence, they introduce a 'structural confidence score' R(A), which is the entropy difference between the conditional independence test p-values for detected dependencies and independencies. Experimental results show that sequences with higher structural confidence scores correlate with correct legal move predictions, and that pruning low-confidence attention heads does not affect performance, whereas pruning high-confidence heads does.

## update after rebuttal

Following the authors' rebuttal, I have updated my score and am leaning towards acceptance, but with low confidence due to my own lack of familiarity with the mechanistic interpretability aspects of the paper and how faithfully they map over to the causal claims.

Claims And Evidence: The main claims are supported by correlating high R(A) with legal move accuracy. However, it is not immediately clear to me why this implies the causal interpretation is valid, when this correlation could have other explanations. It is also not immediately clear to me why this was the experiment they chose to validate the interpretation, rather than the more obvious ones of comparing the learned causal structure to the true causal structure, or making interventions and seeing how the learned causal structure changes (see questions below). Overall, I find the proposal quite compelling but am confused by the way they have gone about validating it.
Methods And Evaluation Criteria: The choice of domains is reasonable and has been studied before in this context.

Theoretical Claims: The key theoretical assumption, that D^-1 A \simeq (I-G)^-1, is plausible but lacks rigorous derivation beyond conceptual arguments. It would be beneficial to validate this claim on synthetic data with a known causal structure.

Experimental Designs Or Analyses: I have not assessed these in depth, but they appear to be rigorous; confidence intervals are given, etc.

Supplementary Material: I have not reviewed these in depth.

Relation To Broader Scientific Literature: The work aligns with recent findings that transformers learn implicit world models (Li et al., 2023; Nanda et al., 2023) and extends these ideas by applying an adapted version of Rohekar et al. (2024) to GPT models. The paper does not cite alternative interpretations of attention (e.g., information flow, memory retrieval, statistical smoothing), and it may be worth discussing how the causal interpretation of attention relates to these other interpretations.

Essential References Not Discussed: There are some papers the authors could discuss, listed below, that motivate the paper's results (i.e., why we should even expect the network to have learned causal structure): [1] shows that this is required for out-of-distribution generalization, and [2] gives examples where causal world models are detected using mechanistic interpretability. These are not essential, but could strengthen the argument. [1] Robust agents learn causal world models (Richens et al.). [2] Transformers Use Causal World Models in Maze-Solving Tasks (Spies et al.)

Other Strengths And Weaknesses: I find the approach compelling and the overall quality of the paper good, but it is perhaps somewhat incremental in extending Rohekar et al. (2024) to GPT models?

Other Comments Or Suggestions: There are lots of grammar errors in the paper, which made some key parts of it hard to follow.
Abstract: "Are generative pre-trained transformer (GPT) models, trained only to predict the next token, implicitly learn a world model from which a sequence is generated one token at a time?" Suggest replacing "Are" with "Do".

"suggesting a causal world model that arises from this interpretation." This sentence doesn't make sense to me: is the claim that the causal world model arises from your interpretation?

"Note that even if some of the nodes are latent confounders is still (I − G) −1 triangular"

There are many more; I would recommend some polishing.

Questions For Authors:

1. Can you provide a rigorous justification for the claim that D^-1 A \simeq (I-G)^-1? Have you tested this on synthetic data with a known causal graph, and found that this correspondence approximately holds?

2. I'm confused why you don't include an interventional study, where you change the game rules (e.g., modifying Othello transition dynamics) to verify whether the causal structure you recover using your interpretation changes according to this intervention. This feels like much more direct evidence for the causal interpretation than correlating the model's ability to predict legal moves for a given sequence with the certainty in the causal structure returned by your causal discovery algorithm.

3. "in cases where the confidence in recovering structures from the attention matrices is low, GPT generally fails to generate a token that adheres to the game rules". This doesn't obviously imply that "GPT implicitly learns to represent causal structures in attention heads". Can you explain this connection more?

4. How does the learned causal structure compare to the actual game rules? Could you provide qualitative examples of inferred causal graphs? It is not clear from R(A), which measures only how concentrated the p-values are for the conditional independence tests, that the model is learning the correct causal graph. Are there other reasons that a high R(A) could correlate with legal moves? E.g.
move legality is determined by a few of the most recent states, resulting in a highly localised attention pattern, which could perhaps show up as a high R(A)? 5. Given that multiple interpretations of attention heads exist (e.g., memory retrieval, information diffusion), what makes the causal interpretation the most compelling? (Note: [1] could give some justification.) It could be beneficial to include in the paper a discussion of other interpretations and how these could explain the observed correlation between R(A) and performance (e.g., consider in your analysis alternative hypotheses beyond the causal-structure-learning hypothesis you are proposing). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback, insights, and important questions. Addressing your review improves the overall clarity of the paper and emphasizes the significance of the contribution. # Re Questions for Authors 1. The relation $\mathbf{D}^{-1}\mathbf{A} = (\mathbf{I}-\mathbf{G})^{-1}$ is not an assumption, but rather a result obtained by considering each token as a node in an SCM (the sentence just before Equation 8). Since attention calculates $\mathbf{Z}=\mathbf{A}\mathbf{V}$ (Equation 1) and the SCM calculates $\boldsymbol{X}=(\mathbf{I}-\mathbf{G})^{-1}\boldsymbol{U}$ (Equation 4), we equate the outputs: token embedding and SCM node value. This is described two sentences before Equation 9, but we will clarify this point more explicitly in the paper. 2. A ground truth of causal graphs is not available for most domains. In the strategy games of Othello and Chess, since there are multiple next legal moves, a causal graph over a given set of game moves may contain information about the strategy of the player (in the case of a real game) in addition to the game rules. Moreover, it is unclear how to modify the game rules such that a sequence of game moves is legal for both the original rules and the modified rules, while making sure the set of next legal moves entailed by the modified rules does not overlap with that entailed by the original rules. That is, for a fair comparison the input sequence should be legal for both the original and intervened rules, and for a correct test, the next legal move should be different. Nevertheless, at the request of Reviewer g9Sr, we evaluated the accuracy of legal move generation as a function of the confidence score (Section 4.3) calculated for the correlation-based methods they suggested. See also Figure 3a vs. Figure 3d in the ablation study (Section 5.2). We believe your concern is additionally addressed in our reply to their Questions 1 and 2.
We did not find an increase in legal move generation accuracy as a function of the confidence score for correlation-based methods (as was found for the learned causal graph). 3. For a given sequence of tokens and a legal next-token generation, there may be multiple attention matrices that equally minimize the loss. During pre-training, these attention matrices were not restricted to have high confidence for determining high-order conditional independence (CI) relations between tokens (in comparison to marginal pair-wise correlations). We examined the confidence of attention matrices for legal vs. illegal token generation. In all our experiments we found a correlation between the causal confidence score, which was calculated during causal structure learning, and the accuracy of generating legal moves. Confidence decreases when uncertainty increases in the CI test decisions used for constructing the causal graph structure (e.g., an edge is removed if conditional independence is found, and edge orientation is determined based on the conditioning set that disconnected two nodes). We will add a corresponding clarification in the paper. 4. We provided qualitative examples of causal graphs learned for the first few moves in Othello and Chess games in Figure 1 and Figure 6, respectively. These graphs are actual results. After these few moves it becomes challenging to follow the true causal relations. For example, during the Othello game, previously placed pieces may flip their color multiple times, affecting the set of legal moves. Next, we address the possibility of having a high $R(\mathbf{A})$ due to the last few moves affecting the next move. In our ablation study in Figure 3 we compare (a) marginal (in)dependence relations to (d) conditional independence relations that are required to construct a causal graph.
It is demonstrated that the difference in $R(\mathbf{A})$ between legal and illegal token generation is more prominent when considering the independence relations required for constructing a causal graph. As mentioned in our reply to your Question 2, we also examined the experiment in Figure 5 for correlation-based methods. 5. The main difference between the causal interpretation and other interpretations is the incorporation of conditional independence relations having various conditioning set sizes, rather than treating the attention values as weights. As mentioned in our answer to Question 1, by relating output embeddings computed using the attention matrix to the node values in an SCM we obtain the relation in Equation 9. By equating the covariance matrices (Equation 8) we can employ a constraint-based causal discovery algorithm. In our experiments we demonstrated that a confidence score computed from conditional independence relations provides a better differentiation between legal and illegal token generation compared to marginal independence relations. Finally, we found that the papers you suggested, if included in the introduction, would strengthen the significance of this paper's results. As suggested, we will add a related discussion. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. I have updated my scores. The authors have written a paper at the intersection of these two fields, which is an important and under-explored area, but this makes it challenging to get good reviews. But it would be a shame if this factor prevented publication and, in doing so, further exploration of this intersection. So while my score is a weak accept, reflecting my uncertainty in the validity of the paper's claims, I think the paper should probably be accepted.
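As background for the algebra debated in this thread: the claimed correspondence between a row-normalized causal attention matrix and $(\mathbf{I}-\mathbf{G})^{-1}$ can be checked numerically in the linear case. The sketch below is illustrative only; a strictly lower-triangular $\mathbf{G}$ with positive weights and an explicit row-normalization $\mathbf{D}$ are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # number of tokens / SCM nodes

# Strictly lower-triangular G with positive weights: each token depends
# only on earlier tokens (a DAG respecting the sequence order).
G = np.tril(rng.uniform(0.1, 0.5, size=(n, n)), k=-1)

# Linear-SCM mixing matrix: X = (I - G)^{-1} U.
M = np.linalg.inv(np.eye(n) - G)

# (I - G)^{-1} is unit lower-triangular, matching a causal attention mask.
assert np.allclose(np.triu(M, k=1), 0.0)
assert np.allclose(np.diag(M), 1.0)

# Build a row-stochastic, causally masked "attention" matrix A = D M,
# where D normalizes rows -- the form implied by D^{-1} A = (I - G)^{-1}.
D = np.diag(1.0 / M.sum(axis=1))
A = D @ M
assert np.allclose(A.sum(axis=1), 1.0)    # valid attention rows
assert np.allclose(np.triu(A, k=1), 0.0)  # causal mask respected

# Given only A, the diagonal of A recovers D (since diag(M) = 1),
# so (I - G)^{-1} -- and hence G -- is identifiable from A alone.
M_hat = np.diag(1.0 / np.diag(A)) @ A
G_hat = np.eye(n) - np.linalg.inv(M_hat)
assert np.allclose(M_hat, M)
assert np.allclose(G_hat, G)
print("recovered G matches the generating SCM")
```

In this toy setting the diagonal of $\mathbf{A}$ alone recovers the normalization, so $\mathbf{G}$ is identifiable from the attention matrix; whether trained attention matrices actually take this form is exactly what the reviewers question.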
Summary: The work explores whether or not GPT-style models implicitly learn a causal world model, without being explicitly trained to do so, using only the next-token prediction objective. This is done in Othello and chess, and the theoretical formalization paired with the empirical results strongly suggests that GPT-style models do learn a causal world model. The strong results in Figures 4, 5 support this, and they even show this extends to out-of-distribution outputs (Figure 7). Claims And Evidence: The claims in the submission are supported by strong and clear evidence. The authors support their claim regarding causality both with a theoretical formalization and then empirical results. Methods And Evaluation Criteria: The benchmark datasets, consisting of Othello and Chess, make sense for the problem of determining whether world models learn a causal structure. Theoretical Claims: I briefly checked through all of the proofs for theoretical claims and did not find any issues. Experimental Designs Or Analyses: I thoroughly checked the validity/soundness of all experiments regarding Figures 3-5, 7-8. I did not see any issues and believe the experimental designs strongly support the claims. Supplementary Material: I reviewed all parts of the supplementary material and included my thoughts in the rest of the review. Relation To Broader Scientific Literature: There are two main ways this is related to the broader scientific literature. First is the relation to world models and autoregressive (GPT-style) models. Second is the relation to causality/causal inference. The results link these two domains together, and effectively demonstrate that GPT-style autoregressive models can learn a world model which understands the causal factors at play and can reason about what this causality implies.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths - Very important work formalizing theoretically that GPTs can learn a causal world model - Strong theoretical analysis of causality in attention and generally strong results (Figures 4, 5) to support it ## Weaknesses - It's still unclear to what extent generalization occurs, even though we know it worked for the OOD test set - Although in Figure 3 the results are statistically significant, they were not statistically significant very often, especially since the significance threshold of 0.05 is not very strict. - It's unclear whether these results would hold for language models not trained solely on chess/othello data, or on data that is noisy/imperfect Other Comments Or Suggestions: - Figure out a better way other than n-grams to show the test and train distributions are different - Do more experiments regarding the extent to which OOD generalization occurs - The figures were not very clear to me; the captions could have better encapsulated what was being done. - It would have been nice to see non-game benchmarks, e.g., math benchmarks or other problems with causal structure, but the benchmarks were sufficient for the claims. Questions For Authors: - The paper states "determined threshold of significance level (α). It is important to note that there is a one-to-one correspondence between the results of these CI test and the entailed causal structure. That is, a causal structure can be represented uniquely by a set of CI tests and their results." - How? This needs a citation, or is there a mistake? - Why is positional encoding not used? - Do they mean positional encoding specifically for the board? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review and suggestions for improvement. Your suggestions will improve the clarity and emphasize the significance of the proposed approach and findings presented in the paper. We also thank you for the many suggestions for future work. **Re Other Comments or Suggestions** Following your interesting suggestions, in our future work we will extend the analysis to OOD data in different domains. Specifically, we plan to explore improvements for pre-training relying on the findings in the paper. For example, we plan to explore the effect on OOD generalization after calibrating token-generation accuracy with respect to causal confidence (Equation 10). Finally, we will improve the clarity of figure captions as requested and correct grammatical errors and typos. **Re Questions for Authors** * The full set of CI tests having all possible conditioning set sizes has a one-to-one correspondence with a causal graph (up to Markov equivalence). A causal-discovery algorithm uses a subset of this exhaustive set of CI tests. A sound and complete algorithm, like the one we use in the paper, creates a one-to-one correspondence between this subset and a Markov equivalence class. We will add this clarification along with a citation (Spirtes et al., 2009). * Yes. Positional encoding specifically for the board was not used (in accordance with the work of Li et al., 2023). Positional encoding was (also) not used by Li et al. (2023), who trained a GPT without domain information, such as the presence of a board and the 2D alignment on which the game moves take place. We will clarify this. --- Rebuttal Comment 1.1: Comment: [Accidentally posted as official comment] Thanks for the rebuttal and clarification.
It would have been great to see some of the weaknesses/comments addressed regarding whether or not these results hold for noisy data, as well as data that is OOD and has been generated/curated/validated as being different from the training set in other ways (i.e., more results on this). Additionally, it remains unclear whether language models not trained solely on chess/othello data, or trained on noisy data, would still learn such causality, limiting the application of this work. This is especially important as recent work suggests that generalization/reasoning is very much tied to data quality [1], which could be the same for causality. Additionally, my general lack of understanding of the literature on causality makes me less confident in my initial score. Thus, I am reducing my score slightly to a 3. The general response to reviewers was strong, although further addressing my points with empirical evidence would be helpful. [1] https://arxiv.org/pdf/2503.07604 --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up questions, and thank you for your initial positive feedback. We apologize for this late response, as we received your follow-up questions just yesterday. We believe our answers can help address your concerns and clarify the points you raised. Following your suggestion, we further created noisy data and examined the method we presented in the paper. Specifically, we added noise to each test sequence of game moves by replacing a $p_{noise}$ percentage of moves with random illegal moves (recall that the initial test sequence was sampled from the set of all possible legal moves, oblivious to winning the game). This process constitutes OOD test sequences, as these kinds of illegal sequences are inherently different from the training sequences (recall that GPT was pre-trained on real games played with the intention of winning).
The following tables summarize the results for the Othello game for different game lengths and different noise levels ($p_{noise}$). We split the test sequences into those that resulted in GPT generating a) an illegal next token and b) a legal next token. We calculated the mean causal confidence (Equation 10) for each group and report their difference. First, the accuracy of the model in generating legal next tokens for different noise levels is provided in the following table.

|$p_{noise}$|17 moves|20 moves|22 moves|25 moves|30 moves|
|:-----:|:------:|:------:|:------:|:------:|:------:|
| 0.00 | 0.892 | 0.906 | 0.909 | 0.920 | 0.936 |
| 0.05 | 0.858 | 0.835 | 0.841 | 0.842 | 0.859 |
| 0.10 | 0.794 | 0.779 | 0.784 | 0.767 | 0.776 |
| 0.15 | 0.729 | 0.736 | 0.744 | 0.713 | 0.699 |
| 0.20 | 0.663 | 0.702 | 0.673 | 0.667 | 0.664 |
| 0.30 | 0.580 | 0.584 | 0.616 | 0.599 | 0.579 |

Next, the difference between the causal confidence of legal token generation and that of illegal token generation is provided in the following table. Positive values indicate that the mean confidence of legal token generation is higher than that of illegal token generation (higher is better).

|$p_{noise}$|17 moves|20 moves|22 moves|25 moves|30 moves|
|:-----:|:------:|:------:|:------:|:------:|:------:|
| 0.00 | 0.091 | 0.226 | 0.175 | 0.102 | 0.220 |
| 0.05 | 0.081 | 0.189 | 0.137 | 0.139 | 0.148 |
| 0.10 | 0.089 | 0.193 | 0.151 | 0.153 | 0.178 |
| 0.15 | 0.176 | 0.212 | 0.143 | 0.147 | 0.095 |
| 0.20 | 0.178 | 0.170 | 0.148 | 0.119 | 0.154 |
| 0.30 | 0.037 | 0.143 | 0.034 | 0.025 | 0.015 |

It is evident that the causal confidence of legal token generation is consistently higher than that of illegal token generation for all the tested noise levels (positive values). This extends the conclusion in the paper to noisy and OOD test data. We would like to note that the training data does not take part in the causal discovery process.
In this paper we assume that the GPT pre-training process trains the attention mechanism to capture relations between tokens in the input sequence. For applications in which noise is expected in the data, this method requires the attention to be able to capture correct independence relations as described in the paper. As long as the trained GPT can faithfully represent relations between tokens through attention, the causal discovery part will be free of errors. Note that there is no training involved for the causal discovery part. In general, causal discovery under unknown noise in highly non-linear relations is an unsolved problem. The use of large datasets by GPT to convert these noisy and non-linear relations to stable linear relations (via self-supervision) for causal discovery is one of this paper's contributions (Section 4.2). We will clarify this in the paper. Finally, we would like to mention a few potential applications beyond Othello and Chess that can readily benefit from this paper's contribution. These include protein sequence generation [1, 2] and material design [3]. In protein sequence generation, tokens are amino acids, and in material design, tokens describe the atomic structure (e.g., via the SMILES format). For example, domain experts may utilize existing foundation models for these domains to reason about causal relations between amino acids or between molecules. 1. Ferruz, Noelia, Steffen Schmidt, and Birte Höcker. "ProtGPT2 is a deep unsupervised language model for protein design." Nature Communications 13.1 (2022): 4348. 2. Rives, Alexander, et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences." Proceedings of the National Academy of Sciences 118.15 (2021): e2016239118. 3. Soares, Eduardo, et al. "A large encoder-decoder family of foundation models for chemical language." arXiv preprint arXiv:2407.20267 (2024).
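For readers unfamiliar with the CI tests this thread revolves around, here is a minimal sketch of the standard partial-correlation test with a Fisher z p-value, the building block of constraint-based discovery algorithms like the one the rebuttal references. The toy chain SCM and all names are illustrative and not taken from the paper.

```python
import math
import numpy as np

def partial_corr(cov, i, j, cond):
    """Partial correlation of variables i, j given the conditioning set."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(cov[np.ix_(idx, idx)])
    return -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])

def fisher_z_pvalue(r, n, n_cond):
    """Two-sided p-value for the CI-test null rho_{ij|S} = 0."""
    z = 0.5 * math.log((1 + r) / (1 - r))
    stat = math.sqrt(n - n_cond - 3) * abs(z)
    return 2 * (1 - 0.5 * (1 + math.erf(stat / math.sqrt(2))))

# Toy chain SCM X0 -> X1 -> X2: X0 and X2 are dependent marginally
# but conditionally independent given X1.
rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = x0 + rng.normal(size=n)
x2 = x1 + rng.normal(size=n)
cov = np.cov(np.column_stack([x0, x1, x2]), rowvar=False)

r_marg = partial_corr(cov, 0, 2, [])   # strong marginal dependence
r_cond = partial_corr(cov, 0, 2, [1])  # vanishes given X1
p_marg = fisher_z_pvalue(r_marg, n, 0)
p_cond = fisher_z_pvalue(r_cond, n, 1)

assert abs(r_marg) > 0.4 and abs(r_cond) < 0.1
assert p_marg < p_cond  # the CI test removes the edge X0 - X2
```

On the chain X0 → X1 → X2, the marginal test rejects independence while conditioning on X1 does not, which is the kind of edge-removal decision a constraint-based algorithm makes.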
Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer
Accept (poster)
Summary: This paper introduces APT, which extends PFNs and TabPFN, leveraging adversarial synthetic data agents for pretraining and incorporating a mixture block architecture to handle classification tasks with an arbitrary number of classes, addressing the class size limitation. Claims And Evidence: No; the authors claim that they employ meta-learning methods for tabular prediction in zero-shot scenarios. However, this claim is not adequately reflected in the method section, leaving the meta-learning process and its adaptation to zero-shot scenarios unclear. Methods And Evaluation Criteria: Yes Theoretical Claims: The paper does not introduce any significant theoretical contributions on meta-learning in zero-shot scenarios. Experimental Designs Or Analyses: Yes, the authors' experiments show that adversarial pre-training enhances the performance of TabPFN, demonstrating that adversarial synthetic data agents generate more diverse datasets compared to the standard random generators in TabPFN. However, this paper does not clearly explain how APT effectively performs meta-learning or adapts to zero-shot scenarios. Additionally, while the mixture block architecture design has been shown to improve generalization and significantly accelerate pretraining, Table 1 indicates that most selected baselines are not designed for zero-shot scenarios. This raises concerns about the effectiveness of APT in truly zero-shot settings. Supplementary Material: Yes, the authors provide the appendix and code in the supplementary material. Relation To Broader Scientific Literature: This work extends TabPFN by enhancing the diversity of its pretraining dataset through adversarial generation and enabling arbitrary-class classification through a mixture block model architecture. TabPFN, leveraging in-context learning and PFNs, has demonstrated strong performance in small-scale tabular prediction. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1.
Data Agent Reset helps improve the diversity of generated data. Weaknesses: 1. APT extends PFN and TabPFN and is proposed to perform zero-shot meta-learning on tabular prediction tasks without using any real-world dataset to pre-train the model, but the authors do not clearly explain its meta-learning mechanism or how it enables zero-shot adaptation in tabular prediction. 2. The research motivation of this paper is not clear: the authors do not clarify the significance of zero-shot tabular prediction or its practical implications. Additionally, the authors also do not provide a clear task definition for zero-shot meta-learning in tabular prediction. 3. The experimental baselines primarily consist of methods requiring labeled samples for effective training. This raises concerns about whether directly comparing these methods with APT in a zero-shot scenario meaningfully reflects APT’s zero-shot capabilities. 4. As shown in Table 1, APT does not exhibit a significant improvement in ROC-AUC when compared to CatBoost and TabPFN. Other Comments Or Suggestions: We suggest that the authors provide a detailed definition of zero-shot meta-learning for tabular prediction and clarify the significance of zero-shot scenarios in tabular prediction tasks, specifically explaining why zero-shot tabular prediction is necessary. Questions For Authors: 1. How does APT perform meta-learning, and how does it enable zero-shot adaptation? 2. Are the selected baseline methods appropriate for evaluating APT’s zero-shot capabilities, given that they require labeled samples for training? 3. What are the advantages of APT compared to CatBoost and TabPFN? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and comments. We see that the reviewer’s main concern lies in the confusion about zero-shot meta-learning, so first and foremost, we want to address this concern and try our best to provide the reviewer a clear picture. --- Weakness 1 & 2 (Question 1): **Response**: As the reviewer has noticed, APT only improves on TabPFN’s pre-training procedure and inference-time generalizability, and that’s the goal of this research. While we believe these innovations are interesting and significant, they do not change the fundamental Bayesian mechanism of TabPFN, nor how and why it can perform meta-learning. TabPFN is a zero-shot meta-learner. We will reiterate the precise definition in the next paragraph, but for a detailed explanation of how and why PFN-based models can perform meta-learning, how and why PFN-based models can adapt to zero-shot scenarios, and why zero-shot tabular prediction is important, please kindly see [1] and [2]. Per the reviewer’s feedback, we have added a sentence at L75-76 left column in the introduction section that reads *“PFN-based methods, such as TabPFN, are zero-shot meta-learners. For more details on how PFN’s underlying Bayesian mechanism can perform meta-learning with zero optimization on an unseen dataset, see [1].”* to clarify that PFN-based models' capability to perform zero-shot meta-learning is *not* where our unique contribution lies. Some recent TabPFN-based works tend to call this class of methods “tabular foundation models” [3][4], while the term tabular “zero-shot meta-learner” is slightly more specific. We have provided the definition of a zero-shot meta-learner in the introduction section, and we will further highlight it in the proposed method section per the reviewer’s suggestion: - A model that performs no optimization (i.e., zero gradient updates for a deep learning model) on its parameters while predicting on unseen datasets.
This is mentioned in [5], and it is a simple extension of the definition of a meta-learner: - A model that performs few optimizations (i.e., few gradient updates for a deep learning model) on its parameters while predicting on unseen datasets. We chose not to categorize PFN-based methods as foundation models because the reason why PFN-based models work is very different from the reason why large vision and language models work – they do not pretrain on any real-world data, and their goal is to learn to adaptively acquire data representations of unseen data during inference time. Nonetheless, both concepts have very general and straightforward definitions, and both are correct categorizations of PFNs. We hope that this choice of categorization will not be a major deciding factor, as how PFN-based models perform meta-learning under the zero-optimization scenario is well documented in [1]. --- Weakness 3 (Question 2): **Response**: Yes, this baseline selection is appropriate. It is no different from the baseline selection of other prior works that benchmark PFNs such as [1], TabPFN, and TabZilla [6], as Reviewer pxwT also pointed out. We want to respectfully note that the reviewer might have a slight misunderstanding regarding the evaluation setups of PFN-based works (if this is not the case, please kindly ignore this comment). The traditional methods such as CatBoost in the evaluations are not trained under zero-shot scenarios; they are trained on the full training sets and given much more computational time for tuning – this actually puts zero-shot methods at a greater disadvantage and further showcases their zero-shot capabilities. The research on tabular zero-shot meta-learning is still very young, and there are no alternatives to select as baselines other than PFNs, to the best of our knowledge.
--- Weakness 4 (Question 3): **Response**: The advantage of APT compared to TabPFN is its inference-time generalizability (it can predict on datasets with unseen classes of unseen cardinality) and pre-training data diversity; the advantage of APT compared to CatBoost is its exceptional runtime thanks to the zero-shot capability. We want to note that the sheer performance on small tabular classification tasks is nearly saturated [6], and none of the recent zero-shot meta-learners or other deep methods is able to significantly beat GBDTs such as CatBoost on small tabular prediction tasks with no limitation on feature size, class size, or number of missing values [1][6]. We believe the data diversity improvement in pre-training as well as the proposal of the mixture block that fundamentally addressed TabPFN’s class size limitation are important contributions to PFNs aside from the improved performances, and we believe we have showcased their power through extensive ablations. --- *References*: [1] [S Müller, et al. ICLR 2022] [2] [T Nagler. ICML 2023] [3] [N Hollmann, et al. Nature 2025] [4] [SB Hoo, et al., Arxiv 2501.02945] [5] [VK Verma, et al. AAAI 2020] [6] [D McElfresh, et al. NeurIPS 2024] --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author response to my review and will update my review in light of this response as necessary.
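To make the zero-shot meta-learner definition in this exchange concrete: prediction on an unseen dataset is a single forward pass over (training set, query) pairs, with no gradient updates. The toy attention-style readout below illustrates only the definition; it is not the APT or TabPFN architecture, and all names and the temperature `tau` are arbitrary.

```python
import numpy as np

def zero_shot_predict(X_train, y_train, X_query, tau=0.5):
    """One fixed forward pass: attention-style weighted vote over the
    in-context training set. No parameters are fit on this dataset."""
    # Squared distances between each query and each context point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / tau)
    w /= w.sum(axis=1, keepdims=True)  # softmax-like attention weights
    classes = np.unique(y_train)
    scores = np.stack([w[:, y_train == c].sum(axis=1) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]

# An "unseen" two-cluster dataset: the predictor was never tuned on it.
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
                     rng.normal(loc=+2.0, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_query = np.array([[-2.0, -2.0], [2.0, 2.0]])

print(zero_shot_predict(X_train, y_train, X_query))  # expected: [0 1]
```

A PFN replaces this fixed kernel with a transformer whose in-context behavior was shaped entirely during pre-training on synthetic tasks; at prediction time, as here, no parameters change.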
Summary: This work introduces APT, a zero-shot meta-learning model for tabular prediction, pre-trained with adversarial synthetic data agents. It improves TabPFN, removes class size limitations via a mixture block architecture, and matches SOTA GBDTs on small tabular tasks. While enhancing performance in classification and regression, APT retains quadratic scaling and struggles on large datasets. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, it follows the previous works on TabPFN and tabular benchmarks. Theoretical Claims: This paper does not have theoretical claims. Experimental Designs Or Analyses: Yes Supplementary Material: Yes, I have reviewed the supplementary material, including the code. Relation To Broader Scientific Literature: The ability of TabPFN is highly dependent on the quality of the synthetic data; this paper tries to improve the quality of the synthetic data by leveraging adversarial learning. It also gives a solution that removes the class size limitation of TabPFN. Essential References Not Discussed: No. Other Strengths And Weaknesses: The idea of adversarially generating synthetic data is interesting. Other Comments Or Suggestions: More related works on TabPFN [ICLR'23, Nature'25] should be discussed; both of them form the basis of TabPFN. Furthermore, TabPFN [ICLR'23] has been extended to a journal version [Nature'25] that releases TabPFN v2; more discussion of this extension and a comparison with TabPFN v2 are needed. Questions For Authors: 1. Recently, the extended version of TabPFN [Nature'25] was published, releasing TabPFN v2; the experiments in this paper could be compared with TabPFN v2. 2. TabPFN [ICLR'23] uses a structural causal model to generate the synthetic data; can the proposed method be applied to the structural causal model?
(ref: TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their clear and valuable feedback on our work, and we address your questions and comments as follows: --- “Other Comments Or Suggestions” & Question 1: **Response**: We thank the reviewer for the suggestion. We added a citation to TabPFN v2 [Nature'25] per your comment, but we want to respectfully explain to the reviewer why a comparison to TabPFN [ICLR'23] is more appropriate than a comparison to TabPFN v2 under the scope of this work: - Per ICML guidelines (https://icml.cc/Conferences/2025/ReviewerInstructions), the main conference submission has the same four-month concurrent cutoff adopted from ICLR, and reviewers should not expect authors to discuss works that have been recently made public. TabPFN v2 was made public within one month of the submission deadline. - We do not see the Nature version of TabPFN as a better comparison to APT than the ICLR version of TabPFN. The primary reason is reproducibility. We take ablation *very* seriously in this work, and TabPFN v2 does not release its pre-training code, while the primary contribution of this work is the improvement to an existing pre-training scheme. As stated in our work, all setups and hyperparameters of our synthetic data distributions (as well as the transformer model) are set to be exactly the same as TabPFN's, in order to fairly examine the impact of adversarial data agents on pre-training. --- Question 2: **Response**: Yes, the proposed method already uses the SCM-based data generating mechanism. For the purpose of ablation, we did not make any further changes to TabPFN’s underlying generating mechanism or hyperparameter settings, other than necessary changes that enable stable adversarial training (L304-305 left column; Appendix B; Appendix C).
Note that in this work, we described the SCM-MLP simply as “sparsified noisy MLPs” to be more explicit and straightforward, since all structural equations in the so-called SCM are modeled uniformly as linear equations plus a simple activation, and the exogenous noise in the structural equations is uniformly modeled as Gaussian with the same mean and pre-sampled variance. But in light of the reviewer’s comment, we understand that it’s important to draw connections to concepts and terminologies used in prior work, so we have added this information to L137-142 left column, which now reads: *“predictors $\mathbf{x}^{(k)}_i$ and response $y^{(k)}_i$ are values of randomly selected neurons in sparsified noisy MLPs (i.e. the SCM data generating mechanism) with some additional pre-processing. More details regarding this approach can be found in Appendix B.1.”*. --- We hope that in light of our responses and further clarifications, the reviewer will consider raising their score of our paper to a strong acceptance. We thank the reviewer for their time and effort.
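A rough sketch of the kind of "sparsified noisy MLP" generator described in this exchange, in which each layer acts as a set of linear structural equations with an activation and Gaussian exogenous noise, and features/target are randomly selected neurons. All sizes, the sparsity level, the activation, and the median-split labeling are arbitrary illustrative choices, not the actual TabPFN/APT prior.

```python
import numpy as np

def sample_synthetic_dataset(n_rows=128, n_feats=4, hidden=16, depth=3,
                             sparsity=0.7, noise_std=0.1, seed=0):
    """Draw one synthetic dataset from a random sparsified noisy MLP:
    each layer is a linear map (a set of structural equations) plus an
    activation and Gaussian exogenous noise; features and the target are
    values of randomly selected neurons."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=(n_rows, hidden))    # root noise variables
    neurons = []                             # all intermediate neuron values
    for _ in range(depth):
        W = rng.normal(size=(hidden, hidden))
        W *= rng.random(W.shape) > sparsity  # sparsify the causal graph
        h = np.tanh(h @ W) + noise_std * rng.normal(size=h.shape)
        neurons.append(h)
    pool = np.concatenate(neurons, axis=1)   # candidate columns
    cols = rng.choice(pool.shape[1], size=n_feats + 1, replace=False)
    X, y_raw = pool[:, cols[:-1]], pool[:, cols[-1]]
    y = (y_raw > np.median(y_raw)).astype(int)  # binarize into class labels
    return X, y

X, y = sample_synthetic_dataset()
print(X.shape, y.shape)  # (128, 4) (128,)
```

In PFN-style pre-training, a fresh dataset of this kind is drawn for each synthetic meta-training task; APT's contribution is to let adversarial agents steer this sampling rather than leaving it purely random.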
Summary: The paper introduces an Adversarially Pre-trained Transformer (APT) for zero-shot meta-learning on tabular prediction tasks. APT is pre-trained using adversarial synthetic data agents that continuously generate challenging datasets, enabling the model to generalize to unseen tasks without requiring real-world data for training. The authors also propose a mixture block architecture to handle classification tasks with arbitrary class sizes, addressing a key limitation of prior methods like TabPFN. Experiments show that APT achieves state-of-the-art performance on small tabular classification tasks, with improved generalization and faster pre-training compared to existing methods. ## Update After Rebuttal Thanks for the authors' response! I will keep the rating. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There's no proof in the paper Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: Adding synthetic adversarial training data could help the generalization of tabular model training, which can provide beneficial insights. Essential References Not Discussed: I am not in the field of tabular data papers. Other Strengths And Weaknesses: Strengths: 1. The use of adversarial synthetic data agents enhances the diversity and difficulty of the training data, leading to better generalization and robustness in zero-shot learning scenarios. 2. The proposed mixture block eliminates the class size limitation of previous methods, making the model more flexible and applicable to a wider range of classification tasks. 3. Extensive experiments demonstrate that APT achieves competitive or superior performance on small tabular classification tasks, showing its efficiency and effectiveness. Weakness: 1.
The proposed approach performs well on small tabular datasets, but it might be difficult to apply to large-scale datasets because of the quadratic runtime and memory complexity. A more detailed discussion on potential optimizations for scalability would strengthen the paper. 2. Regarding the improvements in regression tasks, while better than TabPFN, APT still falls short of tree-based models in regression. More discussion about this could be helpful. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their succinct and extremely clear examination of our work. In light of your feedback, we have added the following paragraphs: --- Weakness 1: **Response:** *The limitations are imposed on PFNs by the transformer architecture’s quadratic computation scaling. However, considerable research in recent years has significantly accelerated the transformer and increased its context length, in some cases up to 1 million tokens [1][2]. It is a worthwhile effort for future research to apply these accelerations to PFNs, though this is beyond the scope of this paper. The architecture we employ can be modified to include these advancements.* We intend to add this paragraph to the conclusion section in the final version of the paper, together with specific references to the notable advancements in transformer capabilities. This should provide a clear roadmap for future work to improve on APT performance. *Reference*: [1] Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers. ICLR 2022 [2] Aydar Bulatov, Yuri Kuratov, and Mikhail S. Burtsev. Scaling transformer to 1m tokens and beyond with RMT. CoRR, 2023. --- Weakness 2: **Response:** *Note that rather than re-designing the base synthetic data sampling distributions or re-performing extensive hyperparameter search, we use the exact same synthetic data sampling distributions and hyperparameters that were used in TabPFN for the purpose of ablation, in order to clearly demonstrate the contribution of adversarial training. TabPFN was trained only on classification problems, and therefore it is possible that these hyperparameters are over-optimized for classification tasks and under-optimized for regression tasks.* We will add this paragraph to the end of the experiments section in the final version of the paper, to point out and attempt to explain the relative performance in regression. 
--- We hope that in light of including these additional brief discussions in the paper, the reviewer would consider raising their score to a strong acceptance. We thank the reviewer for their time and effort.
Summary: This paper introduces the Adversarially Pre-trained Transformer (APT), which is a novel zero-shot meta-learning method for tabular data prediction tasks. By employing adversarial synthetic data agents and a mixture block architecture, APT addresses key limitations in prior tabular learning methods, particularly in handling datasets with multiple classes and complex feature distributions. APT achieves state-of-the-art performance on small tabular classification tasks, generating more diverse synthetic data while maintaining computational efficiency comparable to existing methods. Claims And Evidence: The scatter-sum mixture block represents a conceptually groundbreaking architectural approach that transcends conventional multi-layer perceptron final layer designs, offering tantalizing possibilities for cross-domain generalizability. This design is particularly compelling due to its potential to reimagine output prediction mechanisms across diverse machine learning domains. The meticulous dataset preprocessing approach demonstrates exceptional rigor through the deliberate exclusion of four vision datasets (MNIST 784, CIFAR-10, Devnagari-Script, and Fashion-MNIST) from the original collection, ensuring a laser-focused analysis of genuinely tabular data representations. Methods And Evaluation Criteria: The Adversarial Data Agents methodology presents profound methodological challenges that demand exhaustive critical examination. Fundamental epistemological questions emerge that challenge the approach's conceptual and practical foundations: - How are benign samples conceptualized and potentially utilized within the synthetic data generation process? - Are original datasets meaningfully incorporated, or are they systematically marginalized? - What substantive computational and statistical principles underpin the concept of "random initialization"? - What precise algorithmic mechanisms constitute the categorical sampling strategy? 
Moreover, the approach raises critical questions about numerical feature generation: - What sophisticated computational strategies ensure the synthetic features maintain meaningful statistical properties and representational fidelity? - How did you prevent the generation of mathematically correct but contextually meaningless synthetic data points? - The methodology bears a fascinating conceptual resemblance to Bayesian nonparametric methods like Gaussian processes, yet simultaneously surfaces profound computational concerns. Does this approach necessitate maintaining the entire training set during testing, potentially incurring prohibitive memory and computational resource expenditures? Theoretical Claims: The research confronts inherent challenges in adversarial training, including potentially catastrophic methodological limitations of persistent risks of model collapse, intrinsic training instability, and potential representational degradation mechanisms. The approach inherently risks replicating classic Generative Adversarial Network (GAN) pathologies, raising significant questions about the long-term stability and generalizability of the proposed synthetic data generation strategy. Experimental Designs Or Analyses: The experimental design reveals several noteworthy anomalies and potential methodological limitations. In detail, the performance of Support Vector Machines (SVM) appears statistically unexpected and demands rigorous investigative scrutiny. The research conspicuously lacks a comprehensive exploration of large-scale tabular datasets and extensive classification scenarios. Supplementary Material: The paper provides comprehensive appendices covering detailed related work analysis, background on Prior-Data Fitted Networks, explicit hyperparameter settings, and additional experimental results. 
Relation To Broader Scientific Literature: The research builds upon and extends Prior-Data Fitted Networks (PFNs), zero-shot meta-learning approaches, adversarial training techniques, and transformer architectures for tabular data. Essential References Not Discussed: It seems that there are no essential references not discussed. Other Strengths And Weaknesses: Strengths: - The research presents remarkable methodological innovation by creatively combining adversarial synthetic data generation with transformer-based learning architectures. The scatter-sum mixture block represents a groundbreaking approach that transcends conventional multi-layer perceptron design, offering promising cross-domain generalizability while addressing class size limitations in zero-shot meta-learning approaches. Weaknesses: - Significant generalizability limitations emerge from the current implementation. Critical questions persist regarding the method's applicability across different data types, particularly its performance on datasets with varied distributional characteristics and complex feature interactions. - The research is fundamentally constrained by its reliance on a specific computational approach, potentially restricting broader methodological applicability. The notable sensitivity to hyperparameters suggests potential challenges in achieving consistent performance without extensive manual tuning. - The approach's theoretical foundations remain incompletely developed, predominantly relying on empirical validation rather than rigorous mathematical substantiation. This empirical emphasis, while demonstrating practical effectiveness, leaves critical theoretical mechanisms unexplored and potentially undermines the method's deeper scientific understanding. - Reproducibility concerns are substantial. 
The complex methodological design, combined with the potential absence of comprehensive implementation details, may pose significant challenges for researchers attempting to replicate or build upon this work. - The computational complexity and memory scaling limitations inherited from transformer architectures represent a profound methodological constraint, potentially limiting the approach's utility for large-scale or resource-intensive applications. Other Comments Or Suggestions: Please refer to the other sections. Questions For Authors: - How can the potential overfitting to synthetic data distributions be comprehensively mitigated? - What sophisticated mechanisms ensure the adversarial data generation approach's robustness across diverse data domains? - Can the mixture block's innovative architectural approach be effectively translated to other machine learning domains? - What strategies might comprehensively address the computational limitations for larger, more complex datasets? - How do the researchers explain the statistically unexpected SVM performance characteristics? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their kind and detailed examination of our work. In particular, the reviewer raises many profound questions about PFNs in general -- many of which our paper does not resolve on its own. Resolving the full gamut of these questions is a task for the entire research community working on PFNs, and most likely cannot all be achieved in a single paper. Our much more modest aspirations in this work are to develop certain aspects and capabilities of PFNs further than the state-of-the-art. To address the specific questions and weaknesses the reviewer raised: --- **Question 1**: We have not seen evidence of overfitting in this work. We can pose the question of how we would know if our APT model is overfitting to synthetic data. One possible answer could be that the model would perform poorly at zero-shot predictions on *real* data. Given the competitiveness of our APT model with GBDTs and other models we benchmarked on, the empirical answer to this question seems to be that PFNs thus far are not overfitting on synthetic data to a degree that prevents them from excelling at prediction on real data. --- **Question 2**: We appreciate the question. Our belief is specifically that the adversarial data generating process makes the model more robust across domains when compared to traditional PFNs which are trained on narrowly defined data. The adversarial process is both novel and sophisticated. The training makes no use of any real data, and it is therefore not biased by any real data domain. --- **Question 3**: We think this is a great question and it is very possible. We think the reason why prior research has not focused on the last-layer generalizability of deep models is that such a mechanism is not in dire need in traditional ML problems, where the cardinality of the label space is fixed (e.g. fixed number of classes in tabular tasks, fixed vocabulary size in language tasks, etc.). 
In this case, we are free to let the model learn label-specific parameters. However, the mixture block could be very useful in other ML problems: for instance, it could help a vision model such as an ImageNet classifier generalize to a set of different labels. These labels can have different meanings and higher cardinality than the original labels. This is thanks to the fact that the design of the mixture block ensures that it does not have parameters tied to any specific classes. --- **Question 4**, and **Weakness 5**: The limitations are imposed on PFNs by the transformer architecture’s quadratic computation scaling. However, considerable research in recent years has significantly accelerated the transformer and increased its context length, in some cases up to 1M tokens. It is a worthwhile effort for future research to apply these accelerations to PFNs, though this is beyond the scope of this paper. The model architecture we employ can be modified to include these advancements. We intend to add the previous paragraph to the conclusion section in the final version of the paper, in light of the reviewer’s suggestion. This should provide a clear roadmap for future work to improve on APT’s computational capabilities. --- **Question 5**: We don't believe that this is surprising: the original TabPFN paper shows that “tuned SVM” (Figure 12, pink triangular symbol) performed comparably with Catboost, and exceeded the performance of decision forests. Because we are dealing with small datasets, kernel SVMs are tractable, which would not be the case for large datasets due to their computational intensity. --- **Weakness 1**: We have benchmarked our methods and the competing methods on a variety of data tasks widely accepted in the tabular data research community. We have not seen evidence that suggests these datasets are not diverse, and the fact that an increasing number of papers are using the same benchmarks (TabPFN, TabZilla, etc.) 
suggests the community views these data tasks as a worthwhile set of challenges for tabular deep learning. --- **Weakness 2**: We were able to directly incorporate our innovations into TabPFN without re-tuning its hyperparameters for the synthetic data distribution and transformer model. We did not experience such a hyperparameter sensitivity issue, and one can argue that the adversarial approach of directed exploration of the synthetic data space actually mitigates the sensitivity issue. --- **Weakness 3**: This paper is intended to be empirical. For more details on the theoretical foundation of PFNs, please see [T. Nagler ICML'23], and we would like to respectfully remind the reviewer that this is not the focus of this work. --- **Weakness 4**: The entirety of the code we used is attached in the supplemental material, from which readers can easily reproduce our results, and we followed best practices in allowing reproducibility. --- We thank the reviewer for their time and effort. We hope that in light of our responses, the reviewer will consider raising their score on our paper to an accept.
Discrepancies are Virtue: Weak-to-Strong Generalization through Lens of Intrinsic Dimension
Accept (poster)
Summary: The paper considers the problem of weak-to-strong generalization, i.e. the study of how a strong model trained on pseudo labels of a weaker model generalizes. More formally, the following setup is considered. Two feature transforms, $\phi_s$ the strong model and $\phi_w$ the weak model, are given, where it is assumed that the feature transforms $\phi_s$ and $\phi_w$ have outputs in spaces of dimension $d_s$ and $d_w$, respectively. Now a linear layer $\theta_w$ on top of the weak model $\phi_w$ is trained on a dataset $\tilde{S}=((\tilde{x}_1,\tilde{y}_1),\ldots,(\tilde{x}_n,\tilde{y}_n))$ of i.i.d. examples, where $\tilde{x}_1\sim \mathcal{D}$ and $\tilde{y}_1=f^*(\tilde{x}_1)+z_1$ with $z_1 \sim N(0,\sigma^2)$ and $f^*$ uniformly bounded by 1. Given a second dataset $S=((x_1,y_1),\ldots,(x_N,y_N))$, drawn i.i.d. in the same manner as $\tilde{S}$, where the learner only observes the unlabelled data $x_1,\ldots,x_N$, a linear layer $\theta_{w2s}$ on top of the strong model $\phi_s$ is trained with the pseudo labels given by $\theta_w$, i.e. on the data set $(x_1,\theta_w\phi_w(x_1)),\ldots,(x_N,\theta_w\phi_w(x_N)).$ The paper then wants to show how the strong model generalizes on new data examples $(x,y)$ drawn in the above manner. To this end the paper studies the expected generalization error of $\theta_{w2s}$, i.e. $err(\theta_{w2s})=E_x[E_f[(\theta_{w2s}\phi_s(x)-f^*(x))^2]]$, via the bias-variance decomposition. 
Under sufficient conditions, the paper shows that the bias can be bounded by the sum of the generalization error of the best fixed linear layer on top of $\phi_s$, $\rho_s= \min_\theta E_x[(\theta\phi_s(x)-f^*(x))^2]$, and of the best fixed linear layer on top of $\phi_w$, $\rho_w= \min_\theta E_x[(\theta\phi_w(x)-f^*(x))^2]$. The variance term of the strong model can be bounded by $\sigma/(n-d_w-1)\,(d_{sw}+d_s(d_w-d_{sw})/N)$, where $d_{sw}$ denotes the dimension of the intersection of the subspaces that $\phi_s$ and $\phi_w$ span. Here the term $\sigma d_{sw}/(n-d_w-1)$ does not change with the number of pseudo labels, whereas the term $\sigma d_s(d_w-d_{sw})/((n-d_w-1)N)$ does. Thus, this theorem suggests that training the strong model with more pseudo labels improves this variance term, whereas the other terms do not change with more pseudo labels. This variance term can be thought of as capturing the discrepancy between the subspaces spanned by $\phi_s$ and $\phi_w$. For comparison, the paper also considers the model $\theta_s$, a linear layer trained on top of the strong model $\phi_s$ with the dataset $\tilde{S}$, and the model $\theta_c$, a linear layer trained on top of the strong model $\phi_s$ with the dataset $\tilde{S}\cup S.$ From these quantities, the paper defines the Performance Gap Recovery (PGR) $$PGR=\frac{err(\theta_w)-err(\theta_{w2s})}{err(\theta_w)-err(\theta_{s})}$$ and the Outperforming Ratio (OPR) $$OPR=\frac{err(\theta_{s})}{err(\theta_{w2s})}.$$ The paper then shows that under suitable conditions, which at a high level say that the noise $\sigma$ is much larger than the approximation errors $\rho_s$ and $\rho_w$ (i.e. $(\rho_s+\rho_w) / \sigma \rightarrow 0$) and that $n$ and $N$ are large enough, the PGR becomes lower bounded by $1-(d_{sw}/d_w)$. It is argued that it has been observed in experiments that $d_{sw}/d_w$ is small, since the weak model has a "large" dimension while the dimension of the intersection of the subspaces spanned by the 
strong and weak models, $d_{sw}$, is "small". Furthermore, under these assumptions the OPR becomes $d_s/d_{sw}$, again large since $d_{sw}\leq d_s.$ Furthermore, when $(\rho_s+\rho_w)/\sigma \not\rightarrow 0$, the paper observes that the PGR and OPR are non-monotonic in $n$. Solving for the optimal $n$, the paper shows that when $(\rho_s+\rho_w)/\sigma \ll 1$ and $N$ is large enough, the PGR and OPR under this optimal $n$ become larger. In general, when the approximation error $\rho_w+\rho_s$ is larger than $\sigma$, the lower bounds on PGR and OPR become less meaningful. The paper then examines these findings on synthetic datasets and a real-world dataset. On the synthetic datasets where $\rho_w+\rho_s$ is smaller than $\sigma$, $\theta_{w2s}$ outperforms $\theta_s$ when $n\ll N$, and in the case where $\rho_w+\rho_s$ is close to $\sigma$, $\theta_{w2s}$'s performance is close to $\theta_w$'s, giving little or no weak-to-strong generalization. For the synthetic datasets with $\rho_w+\rho_s$ smaller than $\sigma$, the PGR as a function of $n$ shows a decreasing trend, and as a function of $N$ a non-monotonic trend, increasing at first and then decreasing, with the increase steeper than the decrease. The OPR as a function of $n$ shows a decreasing trend, and as a function of $N$ a non-decreasing trend. For the real-world dataset the picture is similar, except that now the PGR can go negative and the PGR as a function of $N$ is increasing. Claims And Evidence: There are no proofs in the main text; they are deferred to the appendices. The experiments are very small and should, I think, be seen as a pilot study giving some indication of what the theory shows. 
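The setup and metrics summarized above can be made concrete with a minimal numerical sketch. This is my own illustration, not the paper's code: ridgeless linear probes on fixed weak/strong feature maps, with all dimensions and sample sizes invented, constructed so that the approximation errors $\rho_w,\rho_s$ vanish (the ground-truth direction lies in both feature subspaces) and the variance terms dominate:

```python
import numpy as np

# Toy instantiation of the reviewed setup (illustrative numbers only).
rng = np.random.default_rng(0)

D = 200              # ambient input dimension
d_w, d_s = 30, 10    # weak / strong feature dimensions
n, N = 100, 5000     # labeled (SFT) and unlabeled (pseudo-label) sizes
sigma = 1.0          # label-noise standard deviation

# A shared signal direction v lies in both feature subspaces, so
# rho_w = rho_s = 0 and the generalization error is pure variance.
v = rng.standard_normal(D)
v /= np.linalg.norm(v)
U_w = np.linalg.qr(np.column_stack([v, rng.standard_normal((D, d_w - 1))]))[0]
U_s = np.linalg.qr(np.column_stack([v, rng.standard_normal((D, d_s - 1))]))[0]

def f_star(X):
    return X @ v  # ground-truth function

def ridgeless_fit(Phi, y):
    return np.linalg.pinv(Phi) @ y  # minimum-norm least squares

# Weak teacher: linear probe on weak features, trained on n noisy labels.
Xn = rng.standard_normal((n, D))
yn = f_star(Xn) + sigma * rng.standard_normal(n)
theta_w = ridgeless_fit(Xn @ U_w, yn)

# W2S student: strong features fit to the teacher's pseudo labels.
XN = rng.standard_normal((N, D))
theta_w2s = ridgeless_fit(XN @ U_s, (XN @ U_w) @ theta_w)

# Strong ceiling: strong features fit directly on the n noisy labels.
theta_s = ridgeless_fit(Xn @ U_s, yn)

# Estimate generalization errors on a large fresh test set.
Xt = rng.standard_normal((20000, D))
def err(U, theta):
    return np.mean((Xt @ U @ theta - f_star(Xt)) ** 2)

err_w, err_w2s, err_s = err(U_w, theta_w), err(U_s, theta_w2s), err(U_s, theta_s)
PGR = (err_w - err_w2s) / (err_w - err_s)
OPR = err_s / err_w2s
print(f"err_w={err_w:.3f}  err_s={err_s:.3f}  err_w2s={err_w2s:.3f}")
print(f"PGR={PGR:.2f}  OPR={OPR:.2f}")
```

In this variance-dominated regime with small overlap ($d_{sw}=1$ here) and $n\ll N$, the student trained only on pseudo labels ends up with a lower error than the strong model fine-tuned on the true noisy labels, matching the qualitative predictions summarized above.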
Methods And Evaluation Criteria: As per my comment above, I think the experiments should be seen as a complement to the theoretical work of this paper, and there is room to further study the theoretical findings on more datasets to give a stronger indication of the extent to which the theoretical framework explains weak-to-strong generalization. Theoretical Claims: There were no proofs in the main text, so I did not check the correctness of the proofs. Experimental Designs Or Analyses: I did not check the code for the experiments. Supplementary Material: No. Relation To Broader Scientific Literature: The paper first examines the empirical literature, which is related in the sense that it was the grounds for the interest in theoretically better understanding the W2S generalization that this paper studies. More specifically, the W2S concept was introduced by Burns et al. 2023. The theoretical work which the paper says is the closest is Ildiz et al., which also studies ridgeless regressors, presumably also with a focus on variance reduction, but here the paper studies variance reduction from an intrinsic-dimension perspective inspired by empirical observations by Aghajanyan et al. 2020. Essential References Not Discussed: To the best of my knowledge, no. Other Strengths And Weaknesses: The paper is well written and explains nicely how the authors see their findings. Other Comments Or Suggestions: Here are the notes I collected while reading your nice paper. It would be nice to get answers to the first questions/notes (the enumerated ones), but this is very much not required. The last bullet points are notes I took while reading. 1. page 3 second column line 110: why can boundedness over the whole distribution be assumed without loss of generality? I get it with normalization, but $f^*$ could be unbounded? 3. page 6 first column line 308: 
what is meant by $(\rho_s+\rho_w)/\sigma \rightarrow 0$? My understanding is that these quantities depend on neither $n$ nor $N$, so what is meant by going to 0? 4. page 6 second column lines 308-312: I feel like the conclusion made in this paragraph is the same as in the paragraph from page 6 first column line 318 to line 276 in the second column, but now without any concrete lower bound; is this correct? 5. page 7 second column line 338: Is it that intriguing that $f_s$ outperforms $f_{w2s}$ when $n\approx N$? In this case $f_{w2s}$ was trained on $N$ examples labelled by the weaker model, and $f_s$ was trained on $n$ examples from the true distribution. 6. page 8 first column line 417: why not report the spectrum, so that one can get an idea of how changing the cutoff would change $d_s$ and $d_w$? - A lot of asymptotic notation was defined; what is meant by $\ll$? - page 4 first column line 206: what is the gradient $\nabla_\theta$ taken with respect to? Questions For Authors: 1. It is a bit unclear to me what the technical contribution is. Do the proofs require novel ideas, or are they similar to previous work of X, with the novelty being to put it in the context of weak-to-strong learning? 2. page 3 second column line 132: the embeddings $\phi_s$ and $\phi_w$ go to the same dimension, which I guess is because one wants to define the intersection of their covariance matrices later. What could be done without this assumption in terms of intrinsic dimension? To the same end as the above question: 3. page 8 first column lines 413-419: do ResNet50 and ResNet15 have the same output dimension? Thanks for the response. I would like to keep my initial score. This decision is based on the response, which clarified my understanding of the paper, and on reading the other reviewers' comments and concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive suggestions and supportive feedback. We are glad that they found our paper well-written and provided novel perspectives on W2S. Since we could not include figures in the OpenReview rebuttal, we will synopsize our new experiments in text and present the formal results in an anonymous URL: https://anonymous.4open.science/r/W2S_IntrinsicDim_demo-FEAF/demo.pdf. ## Key clarifications 1. **Additional experiments on image regression and LLMs**: We appreciate the suggestion on strengthening the experimental validation. In the revision, we will include additional real experiments on image regression (UTKFace) based on CLIP and ResNet embeddings, as well as an NLP task (SST-2) based on finetuning LLMs (see the URL for concrete plots). In particular, our new experiments provide strong empirical evidence for our theoretical results: 1. **Discrepancies lead to better W2S**: As in Figures 4 and 5 of the submission, we plot the scaling of PGR/OPR against sample sizes $n,N$ for weak-strong pairs with various correlation dimensions on real vision and NLP tasks. The results show that the lower $d_{s \wedge w}/d_w$ (larger discrepancy) consistently brings better W2S. 2. **Variance reduction is a key advantage of W2S**: While the variance-dominated regime that we focus on is overlooked by most prior theoretical works on W2S, our new experiments confirm that this is a particularly important regime where strong W2S performance (in terms of PGR/OPR) occurs in practice. By injecting artificial label noise that leads to larger variance, we observe significant improvements in PGR and OPR, implying that variance reduction is a key advantage of W2S. 2. **Our analysis extends to weak-strong pairs with different feature dimensions**: Please see **R#2 (WLsf), Key#3** for detailed explanations. 3. **Empirical estimation of intrinsic and correlation dimensions**: Thanks for the question. 
In practice, the feature dimensions of the weak and strong models can be different. Please see **R#1 (5dnE), Key#2.1 and 2.2** for how we estimate $d_{s \wedge w}$ in practice. 4. **Our main technical contributions** are: 1. the theoretical framework based on low intrinsic dimensions of finetuning that provides **a theoretical underpinning for W2S in the variance-dominated regime**, along with 2. the insight on **how weak-strong discrepancy brings better W2S generalization**. As pointed out in **R#2 (WLsf), Key#4**, W2S has shown appealing empirical performance when variance dominates (e.g., see Key#1.2). However, the variance-dominated regime is often overlooked in prior analyses on W2S. (We kindly point out the confusion that Ildiz et al. (2024) explain W2S from the bias rather than the variance perspective. We will further clarify this in the revision.) To the best of our knowledge, this is **the first theoretical explanation for W2S in the important variance-dominated regime**. While our analysis is built upon various fundamental tools in random matrix theory, high-dimensional statistics, and linear algebra, the unique structure of the learning problem posed by W2S makes the combination of these tools highly non-trivial. Moreover, our framework is flexible enough to be extended to finetuning problems with similar structures like knowledge distillation and model collapse. We will clarify this in the revision. ## Detailed responses We appreciate the detailed suggestions on notations and presentation. Here, we address questions in "Other Comments Or Suggestions" (O): * O#1: We assume $f_*$ is bounded for conciseness of the analysis. This can be trivially relaxed by adding a constant $C = \max_x |f(x)|$ to the generalization error in our analysis. * O#2, 3: In Case I, $(\rho_s + \rho_w)/\sigma^2 \to 0$ essentially says that the FT approximation errors are small enough compared to the label noise so that we can treat them as zeros. 
In contrast, for Case II, we define a small constant $\varrho = (\rho_s + \rho_w)/\sigma^2 > 0$ to quantify the variance domination. This $\varrho$ affects the PGR and OPR lower bounds in Coro. 3.6. While the quantitative conclusions are different, it's correct that the qualitative takeaways for both cases are similar. * O#4: We agree. This is exactly a simple explanation for how $f_s$ eventually outperforms $f_{w2s}$ as $n$ increases. We will rephrase the sentence to avoid ambiguity. * O#5: The reason why we did not report the spectra is that they do not contain much relevant information. See the spectra of ResNet embeddings in our new UTKFace experiments for examples: https://anonymous.4open.science/r/W2S_IntrinsicDim_demo-FEAF/fig/utkface_svd_cutoff0.99.pdf. We are happy to answer any further questions you may have. If our responses above help address your concerns, we would truly appreciate a re-evaluation accordingly.
Summary: The paper studies weak-to-strong generalization in ridgeless regression with (sub-)Gaussian features. It reveals that weak-to-strong generalization arises from the discrepancy between the weak model's features and the strong model's features. Claims And Evidence: The main theorem statement is clearly presented, and the proofs in the appendix appear to be well-structured and understandable. Methods And Evaluation Criteria: In their main results and experimental evaluations, the authors use PGR and OPR, which are standard evaluation metrics in the weak-to-strong generalization literature. Theoretical Claims: I have checked the proof of the main results, and it appears to be correct. Experimental Designs Or Analyses: The authors provide experimental results on synthetic regression and an image classification task. However, in the image classification task, I have concerns about the choice of intrinsic dimension. Why do the authors define the intrinsic dimension as 90% of the trace of the feature matrix? I believe it is essential to analyze the spectrum of the feature matrix, as the intrinsic dimension and trace can differ between the strong and weak models. Additionally, I am confused about the meaning of "threshold dimension $k$" in line 416 (left). Supplementary Material: N/A Relation To Broader Scientific Literature: This provides novel insights into understanding the weak-to-strong generalization phenomenon, particularly in relation to intrinsic dimension. This contribution enhances the broader understanding of weak-to-strong generalization. Essential References Not Discussed: I believe most related works are cited in the manuscript, at least regarding the theory of weak-to-strong generalization, which I am familiar with. However, several relevant references have been released after the ICML submission deadline, and I hope the authors will cite and discuss them in the next revision. 
Here are some of them (but not limited to these): [1] Medvedev, Marko, et al. "Weak-to-Strong Generalization Even in Random Feature Networks, Provably." *arXiv preprint arXiv:2503.02877* (2025). [2] Yao, Wei, et al. "Understanding the Capabilities and Limitations of Weak-to-Strong Generalization." *arXiv preprint arXiv:2502.01458* (2025). Other Strengths And Weaknesses: One weakness of the paper is its simplified problem setting, specifically ridgeless regression and (sub-)Gaussian feature maps. However, the authors provide a strong motivation for this choice, and I believe that results derived from a well-motivated simple setting can also be a strength of the work. Other Comments Or Suggestions: Here are some minor comments on the manuscript: - Typo: Line 142 (left): $p \in [0,1]^b$ → $p \in [0,1]^n$ - Clarification: Line 158, 162 (left): $\mathcal{D} : \mathcal{X} \times \mathcal{Y} \rightarrow [0,1]$. I believe it is not appropriate to represent the data distribution as a function mapping to [0,1]. - Clarification: Line 155–156 (right): Does $\dagger$ refer to the Moore–Penrose inverse? I think it would be better to clarify this. - Figures: The figures in the paper have low resolution. I suggest the authors provide high-resolution versions (e.g., in .pdf format). Questions For Authors: Can you provide a high-level intuition on why discrepancies between the features of the weak and strong models lead to weak-to-strong generalization? I believe the current version of the paper lacks this high-level intuition, even though the theoretical results are interesting. If the authors can provide a clearer high-level explanation of their findings and promise to emphasize this in the next revision, I am open to increasing my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and suggestions. We are glad that they found our paper well-written and that it provides novel understanding of W2S. Since we could not include figures in the OpenReview rebuttal, we will synopsize our new experiments in text and present the formal results in an anonymous URL: https://anonymous.4open.science/r/W2S_IntrinsicDim_demo-FEAF/demo.pdf. ## Key clarifications 1. **The key intuitions on how discrepancies lead to better W2S are unrolled in two steps in the introduction**: 1. In Lines 68-79, we first break down our main theoretical result $$Var(f_{w2s}) \asymp \frac{d_{s \wedge w}}{n} + \frac{d_s}{N}\frac{d_w - d_{s \wedge w}}{n}$$ to explain **why a larger discrepancy (smaller $d_{s \wedge w}$) leads to better W2S**. That is, the strong student mimics the variance of the weak teacher in the overlapped subspace of weak and strong features (with dimension $d_{s \wedge w}$). In contrast, **variance in the discrepancy subspace between weak and strong features (with dimension $d_w - d_{s \wedge w}$) is reduced by a factor of $\frac{d_s}{N}$ in W2S**. Recall that $d_s$ is the low intrinsic dimension, and the unlabeled W2S sample size $N$ is generally large in practice. Therefore, a multiplicative factor of $\frac{d_s}{N}$ significantly reduces the variance in the discrepancy subspace, leading to better W2S performance. 2. Then, in Lines 91-107, we use an example to provide high-level intuition for **how variance reduction in the discrepancy subspace happens**. In particular, we consider a downstream task of classifying the brands of vehicles based on their images. - The weak features, spanning a low-dimensional subspace $\mathcal{V}_w$, capture the designs of vehicles that tend to be more complicated (with a higher intrinsic dimension $d_w$) but contain irrelevant/spurious information that makes the model weak for the downstream task. 
- The strong features, spanning a low-dimensional subspace $\mathcal{V}_s$, capture the logos of vehicles that are often simpler (with a lower intrinsic dimension $d_s$) and more relevant for the downstream task. Since the design and logo of a vehicle are typically unrelated, $\mathcal{V}\_w$ and $\mathcal{V}\_s$ are likely to be almost orthogonal in a high-dimensional feature space, leading to a small $d_{s \wedge w}$. Then, **the weak teacher $f_w$ brought by noisy SFT labels will make mistakes that only correlate with the design features in $\mathcal{V}_w$, independent of the logo features in $\mathcal{V}_s$. Such errors in weak supervision can be viewed as independent label noise with respect to the strong features in $\mathcal{V}_s$**. With an intrinsic dimension $d_s$, the generalization error of the strong student induced by such independent label noise vanishes at a rate of $\frac{d_s}{N}$ (following the intuition of classical regression analysis). We will further clarify these key intuitions in the revision. 2. **Empirical estimation of intrinsic and correlation dimensions**: Please see **R#1 (5dnE), Key#2**. 3. **Additional experiments on image regression and LLMs**: In the URL, we included additional real experiments on image regression (UTKFace) based on CLIP and ResNet embeddings, as well as an NLP task (SST-2) based on finetuning LLMs. In particular, our new experiments provide strong empirical evidence for our theoretical results by showing that (i) **model discrepancies lead to better W2S**; and (ii) **variance reduction is a key advantage of W2S**. Please see **R#4 (W6Uk), Key#1** for details. ## Detailed responses * Discussion on concurrent works: We are keeping track of relevant concurrent works on W2S and will include discussions on them in the revision. * We appreciate all the detailed suggestions in "Other Comments Or Suggestions" and will revise accordingly in the next version. We are happy to answer any further questions you may have. 
If our responses above help address your concerns, we would truly appreciate a re-evaluation accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It successfully addresses my concerns. I believe that adding further discussion, particularly regarding key intuition on findings, would further strengthen the manuscript. Accordingly, I have increased my score to 3 (weak accept).
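The variance-reduction intuition discussed in this thread (weak-teacher errors in the discrepancy subspace acting as independent label noise for the strong student) can be reproduced with a minimal ridgeless-regression simulation. All sizes, the subspace construction, and the choice of a target lying in the overlap are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, d_s, d_w = 200, 2, 5, 30    # ambient dim, overlap dim, strong/weak intrinsic dims
n, N, sigma = 300, 20000, 1.0     # SFT samples, unlabeled W2S samples, label noise

# Orthonormal directions; the strong and weak subspaces share exactly k of them.
B = np.linalg.qr(rng.standard_normal((D, d_s + d_w - k)))[0]
V_s = B[:, :d_s]                            # strong feature subspace (D x d_s)
V_w = np.hstack([B[:, :k], B[:, d_s:]])     # weak feature subspace (D x d_w)

theta_star = B[:, :k] @ rng.standard_normal(k)   # target lies in the overlap

# Weak teacher: ridgeless (min-norm / OLS) fit on noisy SFT labels with weak features.
X_sft = rng.standard_normal((n, D))
y_sft = X_sft @ theta_star + sigma * rng.standard_normal(n)
theta_w = np.linalg.pinv(X_sft @ V_w) @ y_sft

# W2S student: ridgeless fit of strong features to the teacher's pseudo-labels.
X_u = rng.standard_normal((N, D))
theta_w2s = np.linalg.pinv(X_u @ V_s) @ ((X_u @ V_w) @ theta_w)

def excess_risk(V, theta):
    # E_x[(x^T theta_star - (V^T x)^T theta)^2] in closed form for isotropic x
    return float(np.sum((V @ theta - theta_star) ** 2))

err_w = excess_risk(V_w, theta_w)       # ~ sigma^2 * d_w / n
err_w2s = excess_risk(V_s, theta_w2s)   # ~ sigma^2 * k / n  (overlap component only)
print(f"weak teacher: {err_w:.4f}   W2S student: {err_w2s:.4f}")
```

With these numbers the teacher's variance is on the order of $d_w/n$ while the student retains roughly the overlap component $k/n$ plus a $d_s/N$-suppressed remainder, so the student outperforms its own teacher despite learning only from pseudo-labels.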
Summary: This paper theoretically investigates the weak-to-strong (W2S) generalization phenomenon in the setting of ridgeless regression. From a bias-variance decomposition perspective, the authors utilize the intrinsic dimensionality of fine-tuned models to analyze the generalization performance of the weak teacher model, the W2S student model, the strong model, and the strong ceiling model. When variance dominates the generalization error, the paper finds that the W2S student model behaves similarly to the weak teacher model within their overlapping feature space, but reduces variance in the remaining parts of the weak teacher's feature subspace. This reduction in variance creates an opportunity for the W2S student to outperform the weak teacher. Additionally, the authors characterize the performance gap recovery metric and the outperforming ratio metric based on their bias-variance decomposition analysis. Furthermore, the theoretical findings are empirically verified in both the ridgeless regression setting and a binary image classification task. ## update after rebuttal I would like to thank the authors for their further responses. However, I still do not see how the explicit effect of early stopping is reflected in the results. As I understand it, in the main context of the paper, the parameter $\alpha_{w2s} \to 0$ is assumed, and all main results are derived under this setting. It is unclear to me (quantitatively) how choosing a suitable (non-zero) value of $\alpha_{w2s}$ would impact the theoretical results presented in the main text. At this point, I intend to maintain my original evaluation. Claims And Evidence: The theoretical results seem to be well supported by the empirical findings. However, I have some concerns regarding certain parts of the proofs; see the Theoretical Claims section for details. Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem reasonable. 
Theoretical Claims: I checked the proofs in the appendix and have some concerns that need to be addressed. Specifically, in the proof of Theorem 3.1 (or Theorem A.2), the authors use the following equality in Lines 595–597: $$ \mathbb{E}\_{\mathbf{x}\sim\mathcal{D}}[\mathbb{E}\_{f\_{w2s}}[(f\_{w2s}(x)-f\_{\*}(x))^2]]=\mathbb{E}\_{\mathcal{S}\_x}[\mathbb{E}\_{\theta\_{w2s}}[\frac{1}{N}||\Phi\_s\theta\_{w2s}(x)-f\_{\*}||^2]]. $$ On the LHS, the expression represents the definition of excess risk, where $\mathbf{x}\sim\mathcal{D}$ is an independently drawn test sample, which is independent of the random variable $f\_{w2s}$. However, on the RHS, the expectation over the test sample $\mathbf{x}\sim\mathcal{D}$ is replaced by an average over the training sample $\mathcal{S}\_x$, which is **not independent** of $\theta_{w2s}$. This suggests that the RHS might be the training error rather than the excess risk. This issue seems to exist in other parts of the proofs in the appendix. Could you clarify why this equality holds and whether it correctly represents the excess risk rather than the training error? Experimental Designs Or Analyses: The ridgeless regression experiments align with the theoretical analysis and appear to be sound. However, using a binary image classification task with MSE loss may not substantially strengthen the paper in terms of supporting the claims in a more realistic setting. Would similar observations hold for multi-class image classification with a cross-entropy loss? Exploring this could provide stronger empirical support for the theoretical results. Supplementary Material: I went through all the appendix. Relation To Broader Scientific Literature: This paper contributes to the broader literature on learning theory, with a particular focus on the weak-to-strong generalization phenomenon, first identified by Burns et al. (2023). Essential References Not Discussed: The relevant works seem to be properly cited. 
Other Strengths And Weaknesses: *Other Strengths:* 1. The paper is well written and easy to follow. 2. It introduces the use of intrinsic dimensionality to quantitatively characterize model capacity, an aspect that has not been explored in previous works on W2S generalization. *Other Weaknesses:* 1. This paper mainly focuses on a setting where both the weak and strong models perform well, that is, both the weak teacher and the W2S student have relatively high model capacity and achieve low approximation error. This may differ from many scenarios in previous W2S generalization studies, where the weak teacher typically has limited capacity and performs poorly. 2. The analysis is largely restricted to the variance-dominated regime, which limits its general applicability. However, the authors explicitly acknowledge this limitation in the paper. 3. Another limitation is that the paper assumes both the weak model features and the strong model features exist in the same feature space (i.e. $\mathbb{R}^d$), which may not always hold in practical scenarios. Other Comments Or Suggestions: Typo in Line 66 (left column): "both student and teach" ---> "both student and teacher" Questions For Authors: 1. In addition to the question raised in Theoretical Claims, could you clarify how the obtained variance and bias terms in Line 603 correspond to the terms defined in Lines 121–123 (right column)? Can these terms be directly derived from $\mathrm{Var}(f)$ and $\mathrm{Bias}(f)$ as formulated in Lines 121–123? 2. If the weak teacher model is not sufficiently strong (e.g., if $\rho_w$ is no longer small), will W2S generalization still occur in your analysis? 3. Will your analysis remain valid if $\phi_w(x)\in\mathbb{R}^{d_1}$ and $\phi_s(x)\in\mathbb{R}^{d_2}$ with $d_1\leq d_2$? This would correspond to a scenario where the strong model has more parameters than the weak model. 4. Does your analysis provide any insights into why early stopping often benefits W2S generalization? 5. 
Based on Proposition 3.5, it seems that large label noise (i.e., large $\sigma^2$) can facilitate W2S generalization. Does this suggest that artificially injecting independent noise into the labels when training the weak teacher model could make W2S generalization more likely? 6. Could you provide more details on how the intrinsic dimension was estimated in your image classification experiments? Code Of Conduct: Affirmed. Overall Recommendation: 2
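On Question 1: the decomposition in Lines 121–123 is the textbook bias–variance split for linear regression, and the identity MSE = Bias² + Variance at a fixed test point holds exactly by construction. A quick Monte Carlo sanity check (illustrative sizes; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma, trials = 200, 10, 0.5, 500
X = rng.standard_normal((n, d))      # fixed design matrix
theta_star = rng.standard_normal(d)
x_test = rng.standard_normal(d)      # fixed test point
f_star = float(x_test @ theta_star)

# Resample the label noise, refit OLS, and record the prediction at x_test.
preds = []
for _ in range(trials):
    y = X @ theta_star + sigma * rng.standard_normal(n)
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    preds.append(float(x_test @ theta_hat))
preds = np.array(preds)

mse = float(np.mean((preds - f_star) ** 2))
bias_sq = float((np.mean(preds) - f_star) ** 2)
variance = float(np.var(preds))      # ddof=0, matching the identity exactly
```

Here OLS is unbiased, so the squared bias is near zero and the MSE is essentially all variance, which is the regime the paper's analysis targets.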
Rebuttal 1: Rebuttal: We appreciate the constructive suggestions from the reviewer. We are glad that they found our paper well-presented and our perspective novel. Since we could not include figures in OpenReview rebuttal, we will synopsize our new experiments in text and present the formal results in an anonymous URL: https://anonymous.4open.science/r/W2S_IntrinsicDim_demo-FEAF/demo.pdf. ## Key clarifications 1. **Additional experiments on image regression and LLMs**: In the URL, we included additional real experiments on image regression (UTKFace) based on CLIP and ResNet embeddings, as well as NLP task (SST-2) based on finetuning LLMs. Please see **R#4 (W6Uk), Key#1** for details. 2. **Empirical estimation of intrinsic and correlation dimensions**: Please see **R#1 (5dnE), Key#2**. 3. **Our analysis extends to weak-strong pairs with different feature dimensions**: Notice that since the intrinsic dimensions are far lower than the feature dimensions, the larger feature dimension (e.g., $D_s$) can always be reduced to the smaller one ($D_w < D_s$) via random projection (Johnson-Lindenstrauss transforms) with a negligible information loss $\epsilon \asymp d_s / D_w$. This is the main idea of the empirical estimation in Key#2. Since the high feature dimensions are not essential in our setting, we consider $D_s = D_w = d$ for a clean analysis. We will clarify this in the revision. 4. **Variance-dominated regime is crucial for understanding W2S, focusing on which does not compromise our contributions**: We respectfully disagree with the reviewer and emphasize that our focus on variance-dominated regime is a strength rather than a weakness. This setting is empirically motivated and fills an important gap in existing theoretical understanding. As our motivation in Lines 23-26 right said, empirical evidence in (Burns et al. 2023) suggests that larger W2S gain tends to occur on easier tasks (i.e., variance-dominated tasks). 
This can also be observed in our synthetic experiments (cf. Fig. 2&3) and our new vision and NLP experiments (see **R#4 (W6Uk), Key#1.2**). **Despite the appealing empirical performance of W2S in variance-dominated regimes, to the best of our knowledge, this regime has never been rigorously studied in prior theories on W2S. Our work fills this gap by providing a descriptive yet clean theoretical underpinning for W2S in the variance-dominated regime under finite samples**. ## Detailed responses * Concern in "Theoretical Claims": Notice that taking expectation over the training dataset is a standard way to connect the excess risks (ER) of random and fixed designs: $$\text{random design ER}=\mathbb{E}\_{S_x \sim \mathcal{D}^N}[\text{fixed design ER over }S_x]$$ Thanks to the reviewer's question, we realized that the inner expectation $\mathbb{E}\_{\theta_{w2s}}$ with shared randomness between $S_x$ and $\theta_{w2s}$ could be misleading (although it is technically correct as $\theta_{w2s}$ inside is conditioned on $S_x$). A better expression for the right-hand side may be $\mathbb{E}\_{S_x,\widetilde{S}}[||\Phi_s\theta - f_*||^2/N]$. We will revise these notations in the proofs. * "Questions For Authors" QFA#3,5,6 are addressed in the Key clarifications. * QFA#1: The variance-bias decomposition in Lines 595-604 is equivalent to directly deriving $Var(f)$ and $Bias(f)$ in Lines 121-123, both following the standard variance-bias decomposition for linear regression (see e.g. Liang, (2016), Statistical learning theory, Sec 2.9). The key observation here is that when opening the square in Line 600, the cross term vanishes because $\widetilde{z}$ is an independent random vector with zero mean. 
* QFA#2: First, as the analysis in Sec 3.2 shows, even if the weak teacher lacks capacity (FT approximation error $\rho_w$ is large), as long as the label noise $\sigma^2$ dominates $\rho_w$ (intuitively, harder tasks are more likely to have noisy labels), the problem still falls in the variance-dominated regime. Second, beyond the variance-dominated regime, our theory suggests a degradation of W2S performance due to the vanishing advantage in variance reduction. This is empirically confirmed in both synthetic and real experiments (see discussions in Key#4). * QFA#4: Our analysis is not directly related to early stopping, but there are shared insights. Early stopping has a known intuitive connection with weight decay, which translates to ridge regression in our setting. As footnote 3 explained, ridge regression effectively brings low intrinsic dimensions by filtering out small eigenvalues. In the revision, we formally extend our analysis to ridge regression (see **R#1 (5dnE), Key#4** or the URL). We show that a suitable choice of ridge parameter (or, intuitively, early stopping) can bring better W2S performance by revealing the underlying low intrinsic dimensions. We are happy to answer any further questions you may have. If our responses above help address your concerns, we would truly appreciate a re-evaluation accordingly. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed responses and apologize for my late engagement. Regarding my QFA#5 in the initial review, I was unable to find a clear corresponding response in the "Key Clarifications" section. Could the authors kindly point it out or restate their response to QFA#5? In Key#4, the authors suggest that easier tasks correspond to variance-dominated regimes, which I agree with. However, in the response to my QFA#2, the authors argue that harder tasks, those more likely to have noisy labels, also fall within the variance-dominated regime. 
I may have misunderstood something here, but this seems contradictory. Could the authors clarify this point? Regarding early stopping, this is precisely where I feel the problem setup in the paper differs from practical W2S generalization. Intuitively, a W2S model should not reach an optimal solution during training, as doing so increases the risk of overfitting to the weak teacher's outputs, i.e. mimicking the weak teacher. In fact, most W2S models in practice do not converge to an optimal solution, as shown in experiments (e.g., Burns et al., 2023). Therefore, defining the W2S model as the optimal solution in Eq. (3) may not capture the core mystery behind W2S generalization. Of course, my viewpoint can be subjective and I know that some other theoretical works on W2S adopt a similar setup. I'm open to further discussion with the authors and other reviewers on this. --- Reply to Comment 1.1.1: Comment: We appreciate the additional questions from the reviewer. - QFA#5: Yes, artificially injecting independent label noise to SFT data does improve W2S in both synthetic and real tasks. Such improvement in the synthetic regression can be observed by comparing Fig.2&3 in the submission, as discussed in Line 347-348, right. For real tasks, we point the reviewer to the discussion on **additional image regression and LLM experiments** in **R#4 (W6Uk), Key#1.2**: - **Variance reduction is a key advantage of W2S**: While the variance-dominated regime that we focus on is overlooked by most prior theoretical works on W2S, our new experiments confirm that this is a particularly important regime where strong W2S performance (in terms of PGR/OPR) occurs in practice. By injecting artificial label noise to the UTKFace and SST-2 training data that leads to larger variance, we observe significant improvements in PGR and OPR, implying that variance reduction is a key advantage of W2S (see https://anonymous.4open.science/r/W2S_IntrinsicDim_demo-FEAF/demo.pdf). 
- Key#4 & QFA#2: First, we would like to clarify that our point in QFA#2 is not "harder tasks, those more likely to have noisy labels, also fall within the variance-dominated regime". Instead, what we emphasize is that **the key factor that characterizes the variance-dominated regime is the relative magnitude between the label noise and the FT approximation error**, instead of the absolute "hardness" quantified by the FT approximation error. For harder tasks (e.g. Olympiad math problems), while the FT approximation error $\rho_w$ may be larger, the label noise $\sigma^2$ also tends to be larger (e.g. the human labels may be less accurate). As a result, a hard task can still fall in the variance-dominated regime, $\rho_w + \rho_s \ll \sigma^2$. - We totally agree that **regularization (e.g. early stopping, weight decay, min-$\ell_2$-norm solution) is crucial for W2S, which is reflected by our ridgeless regression + low intrinsic dimension analysis**. - First, we kindly clarify a misconception on Eq(3): the **regularized** optimal ridgeless regression solution learned from data with low intrinsic dimensions in our setting (or the ridge regression solution in our new analysis discussed in **R#1 (5dnE), Key#4**) is fundamentally different from the **unregularized** optimal solution learned from all weak pseudolabels. As highlighted in our response to QFA#4, **early stopping, ridgeless, and ridge regressions can all be viewed as regularization on the locality of parameter updates**. Therefore, our ridgeless/ridge regression analysis provides intuitive explanations on why regularization like early stopping is essential for W2S in practice. - A minor difference of ridgeless regression from early stopping is that the regularization posed by the low intrinsic dimension of data is fixed. 
This subtle difference disappears when we extend our analysis to **ridge regression** (see **R#1 (5dnE), Key#4**) where **the low intrinsic dimensions are implicitly enforced by the $\ell_2$-regularization**. Our (informal) ridge regression result conveys **the same message as ridgeless regression**: assume $\Sigma_s, \Sigma_w \succ 0$ are full rank with fast-decaying eigenvalues; let $\varrho_s, \varrho_w \ge 0$ quantify the FT approximation error in the ridge regression setting (see Remark 2 in the URL appendix for the formal definition); then, with a suitable choice of ridge parameters $\alpha_w,\alpha_{w2s}$, we have $$\mathrm{ER}(f_{w2s}) \le 3 (\frac{\sigma^2}{4 n N}tr(\Sigma_s \Sigma_w)\varrho_s\varrho_w)^{1/3},$$ where $tr(\Sigma_s \Sigma_w)$ is the analogue of $d_{s \wedge w}$ in the ridgeless setting that quantifies the correlation between the weak and strong features. Intuitively, **the suitable choice of $\alpha_w,\alpha_{w2s}$ in ridge regression corresponds to the suitable stopping time in early stopping, which plays an important role in W2S**. - In the less common scenario where finetuning goes beyond the kernel regime, we agree that early stopping may bring different and potentially more interesting feature learning dynamics. We will include a discussion on this as a future direction. We hope the above responses, along with our initial rebuttal, address all the reviewer's questions and concerns. If so, we would greatly appreciate a timely acknowledgment and your support of our work.
Summary: This paper presents a theoretical analysis of weak-to-strong (W2S) generalization, a recently observed phenomenon where a strong student model outperforms a weak teacher model when trained on the teacher's pseudo-labels. The authors provide a variance-reduction perspective by leveraging the concept of intrinsic dimensionality in the analysis of kernel models. The analysis is technically sound, and the findings offer some insights into the conditions under which W2S generalization may occur. The paper is well-written and organized, making it accessible to readers familiar with machine learning theory. Claims And Evidence: The authors make several key claims, including: - W2S generalization can be explained through the lens of intrinsic dimension. - The discrepancy between strong and weak models in W2S has a positive effect on reducing variance. - W2S generalization occurs in the variance component, with the student model inheriting the teacher's variance in the overlapped feature subspaces and reducing it in the discrepancy subspace. - The student-teacher correlation influences W2S, with lower correlation benefiting W2S. The authors somewhat support these claims: - **Theoretical Framework**: The authors develop a theoretical framework using ridgeless regression and the concept of intrinsic dimension. They provide mathematical formulations and theorems (like Theorem 3.1) to characterize the variance and bias in W2S generalization. The framework is built on established observations on fine-tuning and the concept of intrinsic dimension. - **Variance Reduction Analysis**: The authors provide a detailed analysis of how variance is reduced in the discrepancy subspace, supported by Theorem 3.1 and related discussions. They use the analogy of car logo vs. design to provide intuition. - **Role of Student-Teacher Correlation**: The authors define and incorporate student-teacher correlation (using correlation dimension) into their framework. 
They explain how lower correlation (greater discrepancy) can lead to better W2S generalization.   - **Experimental Validation**: The authors present experiments on synthetic regression tasks and real image classification tasks. These experiments aim to validate their theoretical findings. Methods And Evaluation Criteria: **Theoretical Framework** - The authors propose a theoretical framework based on ridgeless regression and the concept of intrinsic dimension.   - They use mathematical tools and theorems to analyze the variance and bias components of W2S generalization. - This theoretical approach is appropriate for gaining a deeper understanding of the underlying mechanisms driving W2S. Ridgeless regression, while simplified, allows for tractable analysis and can provide valuable insights. **Experimental Validation** - The authors use both synthetic regression tasks and real image classification tasks for experimental validation. - Synthetic data allows them to control specific parameters and test the theoretical predictions in a controlled setting. - Real image classification tasks (using CIFAR-10) demonstrate the relevance of their findings to practical applications. --- **Suggestion**: While the current methods and evaluation criteria are generally sound, the experimental validation could be expanded. For example, exploring a wider range of datasets, model architectures, and training paradigms would provide a more comprehensive evaluation of the theory. Additional experiments that directly measure or manipulate the intrinsic dimension could further strengthen the connection between the theory and the empirical results. Theoretical Claims: I did not check the correctness of the theoretical proofs. Experimental Designs Or Analyses: Please see above. Supplementary Material: I briefly skimmed the supplementary materials. Relation To Broader Scientific Literature: That aspect seems fine. 
Essential References Not Discussed: It seems like most key contributions have been discussed. Other Strengths And Weaknesses: **Weaknesses** - **Idealized Assumptions**: The theoretical analysis relies on strong assumptions, such as the Gaussian feature assumption. While the authors mention that the results hold for sub-Gaussian features, the analysis in the main text is limited to the Gaussian case. It is unclear how sensitive the results are to these assumptions and how well they generalize to more realistic scenarios. - **Practical Implications**: The theoretical results provide a good understanding of W2S, but their practical implications are not fully explored. The paper could benefit from a more detailed discussion on how these findings can be used to improve W2S training or model design in real-world applications. - **Experimental Validation**: While the experiments support the theory, they are somewhat limited. Additional experiments on more diverse datasets and model architectures such as language models would strengthen the empirical validation of the proposed framework. **Strengths** - **Clearly written**: The paper is well-motivated, clearly written, and easy to follow. - **New perspective on W2S**: The theoretical framework based on intrinsic dimension provides a novel perspective on W2S generalization. The analysis of variance reduction and the role of student-teacher discrepancy is insightful. Other Comments Or Suggestions: The paper has merit in its theoretical analysis and the use of intrinsic dimension to explain W2S generalization. However, the strong assumptions, limited novelty, and the need for more extensive experimental validation and discussion of practical implications suggest that the paper is not yet ready for publication. I encourage the authors to address these concerns and resubmit the work in the future. 
Questions For Authors: - **Gaussian Feature Assumption**: The theoretical analysis relies heavily on the assumption of Gaussian features. While you mention that the results can be extended to sub-Gaussian features, could you provide more detail on how this extension is achieved and what the key differences in the results would be? Are there specific types of sub-Gaussian distributions where the results would be significantly weaker or not applicable? - **Alternative distributions**: How does the Gaussian assumption influence the specific forms of the generalization error bounds derived in Theorem 3.1? Are there alternative distributional assumptions that would allow for a similar analysis, and what would be the trade-offs? - **Estimate intrinsic dimension**: The concept of intrinsic dimension is central to your analysis. How do you propose to estimate or approximate the intrinsic dimension of a model in practice, especially for deep learning models where it's not straightforward? Are there practical methods or heuristics that can be used to guide the choice of student and teacher models based on their estimated intrinsic dimensions? - **Potential downsides**: Your analysis suggests that a discrepancy in intrinsic dimensions between the student and teacher is beneficial for W2S. Are there any potential downsides to having a very large discrepancy? Could there be a point where the discrepancy becomes too large, and W2S is negatively affected? - **Correlation Measure**: The correlation dimension is used to quantify the similarity between the student and teacher models. How sensitive are your results to the specific way this correlation is measured? Are there other measures of student-teacher similarity that could be used, and would they lead to qualitatively different results? - **Ridge case**: The paper focuses on ridgeless regression. How do you anticipate the results might change with the introduction of regularization, which is commonly used in practice? 
What are the key challenges in extending your analysis to the regularized setting? - **Practical Implications**: What are the most important practical implications of your findings? How can practitioners use your theoretical insights to improve W2S training or design more effective student and teacher models? Code Of Conduct: Affirmed. Overall Recommendation: 2
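On the Gaussian versus sub-Gaussian question above: sub-Gaussian feature maps concentrate like Gaussian ones at the level of empirical covariances, which is the kind of ingredient such kernel-regime extensions typically rely on (this is a speculative illustration, not the paper's proof technique; the formal statement is the paper's Theorem A.1). A small numerical comparison of Gaussian features with bounded Rademacher features:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20

def cov_op_error(X):
    # operator-norm deviation of the empirical covariance from the identity
    Sigma_hat = X.T @ X / X.shape[0]
    return float(np.linalg.norm(Sigma_hat - np.eye(X.shape[1]), 2))

err_gauss = cov_op_error(rng.standard_normal((n, d)))
err_rade = cov_op_error(rng.choice([-1.0, 1.0], size=(n, d)))  # bounded, hence sub-Gaussian
print(f"Gaussian: {err_gauss:.3f}  Rademacher: {err_rade:.3f}  (sqrt(d/n) = {(d / n) ** 0.5:.3f})")
```

Both deviations are on the order of $\sqrt{d/n}$, i.e., the bounded sub-Gaussian features concentrate at the same rate as the Gaussian ones.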
Rebuttal 1: Rebuttal: We thank the reviewer for their time and suggestions. We are glad that they found this work well-presented and provided a good understanding of W2S. Since we could not include figures in the OpenReview rebuttal, we will synopsize our new experiments in text and present the formal results in an anonymous URL: https://anonymous.4open.science/r/W2S_IntrinsicDim_demo-FEAF/demo.pdf. ## Key clarifications 1. **Additional experiments on image regression and LLMs**: We appreciate the suggestion on strengthening the experimental validation. In the URL, we included additional real experiments on image regression (UTKFace) based on CLIP and ResNet embeddings, as well as NLP task (SST-2) based on finetuning LLMs. Please see **R#4 (W6Uk), Key#1** for detailed discussions. 2. **Empirical estimation of intrinsic and correlation dimensions**: We first highlight that with small finetunable parameter counts (e.g., linear probing or finetuning last few layers), we estimate intrinsic dimensions based on traces of data covariances (see Line 413-419, i.e., the minimum rank for the best low-rank approximation of $\Sigma_w, \Sigma_s$ with a relative error in trace less than $\tau=0.01$). 1. For correlation dimension, a practical challenge is that **the feature dimensions of the weak and strong model can be different**--$V_s, V_w$ of sizes $D_s \times d_s$ and $D_w \times d_w$ can have $D_s \ne D_w$. In this case, we gauge the correlation dimension by matching $V_s, V_w$ through a random unitary matrix $G \in \mathbb{R}^{D_s \times D_w}$ s.t. $d_{s \wedge w} = ||V_s^\top G V_w||^2_F$. This provides a good estimation for $d_{s \wedge w}$ because with low intrinsic dimensions $\max(d_s, d_w) \ll D_s, D_w$ in practice, mild dimension reduction through $G$ well preserves the essential information in $V_s, V_w$. 2. We appreciate the question on **empirical estimations of intrinsic and correlation dimensions for finetuning large models**. 
When finetuning LLMs, $D_s, D_w$ will be on the order of millions, making the covariance-based estimation infeasible. In this case, we use the SAID method proposed by Aghajanyan et al. (2020) to estimate intrinsic dimensions. Following Remark 2.5, for full FT, we take $\Phi_s, \Phi_w$ as the gradients of the strong and weak models at pretrained initializations. We use randomized rangefinder based on sparse JLTs and the random unitary projection trick above to estimate $d_{s \wedge w}$ efficiently (see Appendix C in the above URL for detailed procedures). 3. **Our assumptions are reasonable and generalizable**. We kindly highlight that footnote 4 and Theorem 3.1 provide explicit pointers to Theorem A.1, the formal version of Theorem 3.1 that rigorously extends the results to sub-Gaussian features. As explained in footnote 4, both theorems convey the same message. Due to page limit, we present the Gaussian results in the main text for clarity. Meanwhile, in the introduction, we provide strong motivations for studying FT in the kernel regime. The choice of subgaussian features is also well-justified by literature (Wei et al., 2022) and related works (Wu & Sahai, 2024; Ildiz et al., 2024) on W2S. Most importantly, the empirical evidence in our new experiments demonstrates the generalizability of our assumptions and results to real-world scenarios. 4. **Ridge regression analysis**: Our analysis can be extended to the ridge regression setting. As mentioned in footnote 1, when $\Sigma_s, \Sigma_w$ admit full ranks with fast decaying eigenvalues, ridge regression effectively brings low intrinsic dimensions by filtering out the small eigenvalues. The result for ridge regression again conveys the same message as Theorem 3.1, with the correlation dimension replaced by $tr(\Sigma_s \Sigma_w)$. 
Intuitively, a large discrepancy corresponds to a $\Sigma_s, \Sigma_w$ pair with approximately orthogonal leading eigenvectors associated with the large eigenvalues, bringing a small $tr(\Sigma_s \Sigma_w)$. (See Appendix A in the above URL for detailed statements and proofs.) ## Detailed responses * The **practical implications** of our theoretical insights on the choice of weak vs. strong models and sample sizes in W2S are self-evident and discussed in detail in Sections 3.2 and 5. We will try to further emphasize them in the revision. * Assuming both weak and strong models have sufficient capacities to achieve **low FT approximation errors on the downstream task**, our theory and experiments show that a larger discrepancy between the weak and strong models brings better W2S, with **no potential downsides**. We are happy to answer any further questions you may have. If our responses above have addressed your concerns, we would truly appreciate a re-evaluation accordingly.
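The random unitary projection trick used to estimate the correlation dimension (Key clarification 2 above) can be sketched in a few lines. This is a minimal illustration with hypothetical shapes: the helper names, the QR-based draw of $G$, and the assumption $D_s \ge D_w$ are ours, not the authors' exact procedure.

```python
import numpy as np

def orthonormal_columns(D, d, rng):
    """Random D x d matrix with orthonormal columns (stand-in for the
    top-d eigenvector basis V of a feature covariance)."""
    Q, _ = np.linalg.qr(rng.standard_normal((D, d)))
    return Q

def correlation_dimension(V_s, V_w, rng):
    """Estimate d_{s ^ w} = ||V_s^T G V_w||_F^2, matching feature spaces of
    different ambient sizes D_s >= D_w via a random matrix G with
    orthonormal columns (a 'random unitary' up to the size mismatch)."""
    D_s, D_w = V_s.shape[0], V_w.shape[0]
    assert D_s >= D_w, "sketch assumes D_s >= D_w; otherwise swap the roles"
    G, _ = np.linalg.qr(rng.standard_normal((D_s, D_w)))  # D_s x D_w
    return np.linalg.norm(V_s.T @ G @ V_w, "fro") ** 2

rng = np.random.default_rng(0)
V_s = orthonormal_columns(64, 5, rng)   # strong model: D_s = 64, d_s = 5
V_w = orthonormal_columns(32, 3, rng)   # weak model:  D_w = 32, d_w = 3
d_corr = correlation_dimension(V_s, V_w, rng)
# By construction 0 <= d_corr <= min(d_s, d_w), since every factor has
# orthonormal columns (singular values at most 1).
```

As a sanity check, $\|V^\top I V\|_F^2 = d$ for any $V$ with $d$ orthonormal columns, so perfectly aligned subspaces of equal ambient dimension recover their common dimension exactly.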
Self-Play $Q$-Learners Can Provably Collude in the Iterated Prisoner's Dilemma
Accept (poster)
Summary: In this work, the authors prove that multi-agent Q-learners playing the iterated prisoner’s dilemma can learn to collude. The complexity of the cooperative multi-agent setting yields multiple fixed-point policies for Q-learning: the main technical contribution of this work is to characterize the convergence towards a specific cooperative policy. More precisely, in the iterated prisoner’s dilemma, the results show that with optimistic Q-values, any self-play Q-learner can provably learn a cooperative policy called Pavlov, also known as the win-stay, lose-shift policy, which strongly differs from the vanilla Pareto-dominated always-defect policy. Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. The authors provide a thorough theoretical analysis and empirical validation to support their main claims regarding the collusion behavior of multi-agent Q-learners in the iterated prisoner's dilemma. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in this submission can solve the problem of collusion in multi-agent Q-learning within the iterated prisoner's dilemma to a certain extent. Theoretical Claims: Yes, I checked the correctness of the proofs for the main theoretical claims, specifically Theorem 3.2 (fully greedy case) and Theorem 3.3 (ϵ-greedy case). The proofs appear to be correct and well-structured. The authors provide detailed derivations and supporting lemmas to demonstrate the convergence of Q-learning to the Pavlov policy. The arguments for the linear convergence rate (Appendix C.2) and the bounds on Q-value deviations (Lemma 3.4) are well-explained and support the main claims effectively. Experimental Designs Or Analyses: I checked the soundness of the experimental designs and analyses.
The experiments are well-designed to test the theoretical claims, specifically focusing on the convergence of Q-learning to the Pavlov policy in the iterated prisoner's dilemma. The authors use a combination of standard Q-learning and deep Q-learning to demonstrate the robustness of their findings. The experimental setup, including the choice of hyperparameters and the initialization process, is clearly described and appears to be valid. The results are presented in a clear and consistent manner, with multiple runs and seeds used to ensure reliability. The parameter selection in the experiments was relatively narrow, however, and the impact of varying the parameters on the results was not shown. Overall, the experimental designs and analyses are sound. Supplementary Material: I reviewed the supplementary material. I examined the following parts: 1. Appendices A-D: These contain detailed proofs and derivations for the theoretical claims (e.g., Theorems 3.2 and 3.3, Lemma 3.4). The appendices provide additional mathematical rigor and context that support the main results. 2. Appendix E: This section includes the hyperparameters and implementation details for the deep Q-learning experiments. It provides clarity on the experimental setup and helps in understanding the reproducibility of the results. Relation To Broader Scientific Literature: The findings relate to economic concerns about algorithmic collusion, extending empirical studies by Calvano et al. (2020) and OECD (2017). The paper provides theoretical evidence that Q-learning agents can learn collusive strategies, contributing to the understanding of tacit collusion in algorithmic pricing. However, the practical significance of the article's conclusions is not yet clear, as the authors have not fully elaborated on their practical value and application prospects. This is a point for improvement in the article.
Essential References Not Discussed: In recent years, there has been significant progress in algorithmic collusion research, but the authors cited little of the relevant work from the past year (2024) in their literature review. Other Strengths And Weaknesses: Strengths: 1. The paper provides a novel theoretical analysis of collusion in multi-agent Q-learning, specifically in the iterated prisoner's dilemma. It removes restrictive assumptions from prior work (e.g., memoryless settings) and characterizes the dynamics leading to collusion. 2. By providing a theoretical foundation for observed empirical phenomena, the paper contributes to the broader discussion on algorithmic collusion and its impact on market competition. Weaknesses: 1. While the theoretical contributions are strong, the paper could benefit from more discussion on how these findings translate to real-world applications. 2. The conditions for convergence to the Pavlov policy (e.g., optimistic initialization) may seem restrictive. While the authors provide a detailed analysis, it would be helpful to explore the robustness of these conditions and whether they can be relaxed without compromising the results. 3. The relevant literature of recent years has not been fully explored, so the innovations of the study do not stand out sufficiently. Other Comments Or Suggestions: The paper mentions that the self-play assumption is an important premise for proving the convergence of the Q-learning algorithm. However, self-play may lead to agents learning only a single strategy, which may not fully reflect real-world scenarios. In the real world, agents may encounter opponents adopting different strategies. Therefore, if the self-play assumption is dropped, can the algorithm still achieve collusive behavior? Questions For Authors: 1. In recent years, there has been significant progress in algorithmic collusion research, but the authors cited little of the relevant work from the past year in their literature review.
It is recommended that the authors add a review of recent literature to provide a comprehensive picture of the latest research advances in the current field to highlight the innovative nature of this study. In addition, the authors should clearly indicate the differences and innovations of this study from existing studies. 2. The paper demonstrates that the Q-learning algorithm can converge to a collusive strategy (Pavlov strategy). However, the stability of this collusive strategy in the long run has not been discussed, especially in the presence of external disturbances. 3. The paper presents experimental results showing that the Q-learning algorithm converges to the Pavlov strategy and validates the theoretical analysis. However, the experiments only consider a single payoff parameter 𝑔. To enhance the reliability and generalizability of the results, the authors are encouraged to explore variations in the payoff parameters in the experiments. 4. The paper focuses on theoretical proofs, but it does not sufficiently address the potential challenges of applying the algorithm in real-world economic scenarios. Additionally, detecting or preventing collusive strategies in practice is an important issue, and the authors are advised to provide more insights on this topic. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The authors would like to thank Reviewer LKws for their in-depth comments and very insightful review. > 1 In recent years, there has been significant progress in algorithmic collusion research, but the authors cited less relevant work from the past year in their literature review. It is recommended that the authors add a review of recent literature to provide a comprehensive picture of the latest research advances in the current field to highlight the innovative nature of this study. In addition, the authors should clearly indicate the differences and innovations of this study from existing studies. We would be happy to incorporate additional references to recent work. Does the reviewer have specific papers in mind that they believe are particularly relevant? > 2 The paper demonstrates that the Q-learning algorithm can converge to a collusive strategy (Pavlov strategy). However, the stability of this collusive strategy in the long run has not been discussed, especially in the presence of external disturbances. - In the fully greedy case ($\epsilon = 0$), the $Q$-values ($Q_{CC, D}$ and $Q_{DD, C}$) converge exactly to those corresponding to the Pavlov strategy at equilibrium. - In the $\epsilon$-greedy case, we demonstrated convergence with high probability. However, there is still a small, exponentially decaying probability (as $\alpha$ increases) that the system may deviate from the Pavlov strategy. Specifically, for a fixed horizon $T$, there exists a set of parameters $\alpha$, $\epsilon$, and $\delta$, such that the Pavlov strategy is reached with probability $1 - \delta$. However, for fixed values of these parameters, there exists a horizon $T$ for which the policy deviates from the Pavlov strategy. > 3 The paper presents experimental results showing that the Q-learning algorithm converges to the Pavlov strategy and validates the theoretical analysis. However, the experiments only consider a single payoff parameter 𝑔.
To enhance the reliability and generalizability of the results, the authors are encouraged to explore variations in the payoff parameters in the experiments. The trends for multiple values of $g$ were indeed similar, so we initially reported the results for only one value of $g$ for simplicity. However, to enhance clarity and transparency, we have added graphs with additional values of the incentive to cooperate ($g$) in the Appendix of the revised manuscript. Note that $g$ needs to be large enough for the Pavlov and Lose-shift policies to exist (i.e., $g > 4/3$). Additionally, following Reviewer hUYk's suggestion, we plan to remove the $g$ parameterization in favor of a more general reward parameterization. > 4 The paper focuses on theoretical proofs, but it does not sufficiently address the potential challenges of applying the algorithm in real-world economic scenarios. Additionally, detecting or preventing collusive strategies in practice is an important issue, and the authors are advised to provide more insights on this topic. Regarding practical value and application prospects: indeed, we view this work as an initial step toward understanding collusion in more complex environments, such as Bertrand games, and ultimately developing methods for detecting collusion. However, we believe that addressing these practical questions is particularly challenging and goes beyond the scope of this paper. We hypothesize that increasing the number of players could make collusion more difficult, although providing a full theoretical analysis of this is currently out of reach. **Action**: We will cite the proposed relevant literature and add additional values of the incentive to cooperate $g$ in the Appendix.
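The fully greedy ($\epsilon = 0$) self-play dynamics discussed in the response to question 2 above can be illustrated with a minimal tabular simulation. This is a sketch, not the authors' exact setup: the payoffs $T = 4, R = 3, P = 1$ (chosen so that $2R > T + P$ and Pavlov exists), the discount $\gamma = 0.9$, and the generic optimistic initialization are our assumptions, standing in for the paper's $g$-parameterization and Assumption 3.1.

```python
import numpy as np

# Memory-1 iterated prisoner's dilemma under self-play with one shared
# Q-table: both agents take the same greedy action, so with epsilon = 0
# only the diagonal states (C,C) and (D,D) are ever visited.
CC, DD = 0, 1            # states (last joint action)
C, D = 0, 1              # actions
R, P, T = 3.0, 1.0, 4.0  # assumed payoffs with 2R > T + P
gamma, alpha = 0.9, 0.1

reward = {C: R, D: P}        # joint action -> per-agent reward
next_state = {C: CC, D: DD}  # joint action -> next state

Q = np.full((2, 2), T / (1 - gamma))  # optimistic initialization
Q[:, D] += 0.5                        # initial greedy policy: always defect

s = DD
for _ in range(20_000):
    a = int(np.argmax(Q[s]))  # greedy action, taken jointly by self-play
    Q[s, a] += alpha * (reward[a] + gamma * Q[next_state[a]].max() - Q[s, a])
    s = next_state[a]

# The run ends absorbed in mutual cooperation: C is greedy in state (C,C),
# with Q[CC, C] near R / (1 - gamma).
print(int(np.argmax(Q[CC])), round(Q[CC, C], 2))
```

Relaxing $\epsilon > 0$ brings the off-diagonal states into play and turns the statement into the high-probability version discussed above.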
Summary: This paper studies whether a Q-learning algorithm with self-play and one-step memory can lead to collusion in the iterated prisoner's dilemma game. The authors characterize the conditions on the initializations, rewards, and discount factor to guarantee that the agents would shift from always defect to Pavlov strategies. Claims And Evidence: The paper makes an implicit claim that agents would follow a Q-learning algorithm with self-play and one-step memory in the iterated matrix games. However, this claim has not been supported by clear and convincing evidence. Below, I highlight the problematic parts: -The algorithm structure shows that agents can observe the state and the state is the previous joint actions of the players. This implies that the agents can observe the opponent actions. Then, it is not clear why they construct the Q-function over their local actions only as if they do not observe the opponent actions. Such an approach leads to the claimed Bellman equation (2), which depends on opponent policy \pi^2. (The equation has \pi, but this must be a typo.) This causes ambiguity due to its dependence on opponent policy. The paper resolves this ambiguity with self-play. However, the self-play causes further issues: -- Both agents have the same Q-function estimate at all times. This is made possible through the update of the Q-function only for agent 1. For example, agent 2 does not update Q_{s_t,a_t^2} based on its own action a_t^2 and its own reward r_2. Such an update is not justifiable for a non-cooperative environment such as markets. -- Assumption 3.1 uses different initializations for different states and actions. This is also difficult to justify if the agents do not know the model. -- If the agents know the model, then it is not justifiable to use Q-learning since they can compute the fixed point directly via dynamic programming. In summary, the algorithm studied is not well-justified from the multi-agent learning perspective.
Therefore, the claims drawn are likely to be induced by artifacts of this poorly designed algorithm rather than an emergent phenomenon of multi-agent learning algorithms. Methods And Evaluation Criteria: The paper provides experimental analysis for the iterated prisoner's dilemma with specific reward functions. The paper also includes simulations using deep Q-learning. However, it is not clear why neural network approximation would be necessary for such a small-scale problem (with four states and 2x2 actions per state) that can be solved in the tabular form. Theoretical Claims: I have not checked the correctness of the proofs but they are intuitive given the carefully crafted initialization and the single-sided update of Q-functions. Indeed, I am more concerned about the justification of these assumptions. For example, why would agents use different initializations for different state-action values? Since agents update their actions based on Q-function estimates with "small enough" exploration, carefully crafted initializations can cause bias towards certain action profiles such that the agents play these action profiles only through exploration and therefore they cannot learn their values accurately. Experimental Designs Or Analyses: The use of deep Q-learning does not seem sound for this simple setup. Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: This is a niche problem for the ICML community. This paper would fit better in more specialized venues such as ACM EC. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The algorithm must be well-justified from a multi-agent learning perspective. The paper needs to improve its problem formulation and notation. -For example, the model knowledge and information structure of agents are not clear. -The notation Q_{s,D}^{*,Defect} has not been introduced. Is it the fixed point of (2) given that \pi=defect?
Then, (2) should have been defined accordingly. -Policies such as always defect, Pavlov, and lose-shift are used before they are introduced in Table 2 on page 3. The use of neural network approximation for such a small-scale problem is not clear. Other Comments Or Suggestions: In (2), \pi must be \pi^2. Questions For Authors: - Can the authors explain whether agents can follow such an algorithm in non-cooperative environments? Or is this algorithm for introspective computation of a policy to play in non-cooperative environments? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > Such an approach leads to the claimed Bellman equation (2), which depends on opponent policy \pi^2. This causes ambiguity due to its dependence on opponent policy. We are confused by this statement, as dependence on the opponent's strategy is inherent to multi-agent games: the $Q$-values necessarily depend on the opponent's policy. > The paper resolves this ambiguity with self-play. However, the self-play causes further issues: -- Both agent has the same Q-function estimate all the time. This is made possible through the update of Q-function only for agent 1. For example, agent 2 does not update Q_{s_t,a_t^2} based on its own action a_t^2 and its own reward r_2. Such an update is not justifiable for a non-cooperative environment such as markets. We strongly push back against this claim. Markets are often modeled as cooperative/competitive environments, such as Bertrand games, where the Prisoner’s Dilemma serves as a minimalistic setting. See Calvano et al., Section 3. Reference: Calvano, Emilio, et al. "Artificial intelligence, algorithmic pricing, and collusion." American Economic Review, 2020. > Assumption 3.1 uses different initializations for different states and actions. This is also difficult to justify if the agents do not know the model. -- If the agents know the model, then it is not justifiable to use Q-learning since they can compute the fixed point directly via dynamic programming. We would like to push back against this statement: we explicitly provide a practical way to initialize the $Q$-values to satisfy Assumption 3.1. In this case, all the $Q$-values are the same. The reviewer can refer to the **Q-values Initialization in Practice** paragraph following Assumption 3.1 for further clarification. > The notation $Q_{s,D}^{\star,\mathrm{Defect}}$ has not been introduced.
**$Q_{s,D}^{\star, \mathrm{Defect}}$ is explicitly defined in Proposition 2.1**: $Q_{s, D}^{\star, \mathrm{Defect}} = \mathbb{E}_{a \sim \pi(\cdot | s)} r_{D, a} / (1 - \gamma)$ and $Q_{s, C}^{\star, \mathrm{Defect}} = Q_{s, D}^{\star, \mathrm{Defect}} - \mathbb{E}_{a \sim \pi(\cdot | s)} ( r_{D, a} - r_{C, a})$. Here $Q^{\star, \mathrm{Defect}}$ is a fixed point of the Bellman equation that yields an always-defect policy in the self-play case. > Policies such as always defect, Pavlov, Lose-shift have been used before they get introduced in Table 2 at page 3. Table 2 is now referenced earlier in the text to ensure clarity and avoid confusion. > The use of neural network approximation for such a small-scale problem is not clear. The idea of this paper is not to provide large-scale deep RL experiments. The goal of the deep Q-learning experiments is to see how our findings extend to more complex settings. We think it is especially interesting to see how behaviour transfers from a controlled experiment to a larger, more complex setting. > Typos: In (2), \pi must be \pi^2. The typos have been fixed. > Can the authors explain whether agents can follow such an algorithm in non-cooperative environments? Or is this algorithm for introspective computation of a policy to play in non-cooperative environments? We are not sure we understand this question. What exactly are you referring to with "introspective" and "our algorithm"?
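The always-defect fixed point quoted from Proposition 2.1 can be sanity-checked numerically. In this sketch we specialize to a deterministic always-defect opponent, so the expectations over $\pi(\cdot|s)$ collapse; the payoff values $T=4, R=3, P=1, S=0$ and $\gamma=0.9$ are our assumptions, not the paper's parameterization.

```python
# Always-defect fixed point of the Bellman equation under self-play,
# following the formulas quoted above with pi(. | s) = always defect.
T, R, P, S = 4.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker
gamma = 0.9

Q_D = P / (1 - gamma)   # Q_{s,D} = E r_{D,a} / (1 - gamma), with a = D
Q_C = Q_D - (P - S)     # Q_{s,C} = Q_{s,D} - E(r_{D,a} - r_{C,a})

# Bellman consistency: since Q_D > Q_C, the greedy action in every next
# state is D, so each value must equal its own one-step backup.
assert Q_D > Q_C
assert abs(Q_D - (P + gamma * Q_D)) < 1e-9  # play D vs. D, keep defecting
assert abs(Q_C - (S + gamma * Q_D)) < 1e-9  # deviate to C once, then defect
print(round(Q_D, 6), round(Q_C, 6))  # -> 10.0 9.0
```

The check confirms that always defect is greedy with respect to its own fixed-point values, which is why the paper needs an explicit argument to escape it.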
Summary: The paper studies Q-learning in the Iterated Prisoner’s Dilemma with memory 1. It shows by formal proof that under some condition, Q-learning results in the so-called Pavlov strategy, which forms a cooperative equilibrium. The paper also conducts some experiments, including experiments with Deep Q-learning on the same Iterated PD. ## update after rebuttal Given that this paper received mixed scores and that I know this area reasonably well and am quite interested in it, I spent relatively much time looking at the paper again during rebuttal. I'm sticking with my "weak accept" recommendation. If anything, I'm now torn between "weak accept" and "accept". I think this is a solid paper and it should be accepted. That said, my recommendation is made under the assumption that the authors will use the extra page in the camera-ready version to clarify some things and better relate the paper to prior findings with somewhat different results. See my previous review, my response to the rebuttal, and the below comments for details. More in-the-weeds comments from re-reading some parts: I looked at the proof of Theorem 3.2 again in some detail to better understand the underlying dynamics. It seems to me that self-play is at least somewhat important for this step of the proof. In particular, when you switch to playing C in (D,D), it seems important that your opponent also switches at the same time. This way, you will now learn that C in (D,D) is quite good (it gets you the (C,C) payoff), even though from a "counterfactual" perspective it's actually worse (if you could switch to D while having your opponent stick to C, that would be better). 
If the time of switching from D to C was very out of sync (because the two players have separate Q functions, etc.), then once the first player switches to C, they would very quickly learn that C is even worse than D, while the other player would start receiving more positive results again with D, because they would get the (D,C) "temptation" payoff. In principle, this dynamic used in the proof even applies to training in a one-shot Prisoner's Dilemma: If you switch very discontinuously from (almost) always defecting to (almost) always cooperating ((epsilon-)greedy) and you train via self-play, then cooperation is (somewhat) stable, because you only (mostly) ever cooperate when your opponent also cooperates. IIRC, these kinds of dynamics were studied by https://proceedings.neurips.cc/paper/2021/file/b9ed18a301c9f3d183938c451fa183df-Paper.pdf. I think this says that in the one-shot PD setup with self-play, epsilon-greedy allows convergence to mostly cooperating (in terms of frequencies, not iterates) due to this dynamic or at least their results don't rule it out. Generally the self-play setting of the present paper can be viewed as a special case of the setting of that paper (I think). But I think the only result from that paper that somewhat directly applies to this paper just says that one can only converge to Nash equilibria. And another request for clarification: In the Deep Q-learning setup in Section 5, is there anything akin to optimistic initialization happening? (More minor: "As discussed in the previous paragraph" -- I don't understand which part of the previous paragraph this refers to. Maybe a paragraph was deleted here at some point?) Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I tried to read the proof of Theorem 3.2, but didn’t verify the details. Experimental Designs Or Analyses: I didn’t check anything beyond what’s written in the main text. Supplementary Material: No.
Relation To Broader Scientific Literature: There are a bunch of papers on whether agents trained with RL or the like can learn to cooperate in various Prisoner’s Dilemma-type games. I mostly view this paper as a contribution to this literature. Essential References Not Discussed: There are more papers on learning agents in the iterated Prisoner’s Dilemma or very similar games. (The authors already cite some of these papers.) I’d be especially interested in a discussion of related papers that have found something more like the opposite result, i.e., that training typically doesn’t find cooperative strategies. E.g., https://www.sciencedirect.com/science/article/pii/0303264795015515 https://arxiv.org/pdf/1709.04326 https://arxiv.org/abs/2211.14468 https://arxiv.org/pdf/1811.08469 https://arxiv.org/abs/2012.02671 All of these papers use somewhat different methods on somewhat different games. But they all find that if one just does some kind of “naive best response” learning, one doesn’t learn to cooperate. I’d like to know what the relevant differences are to this paper. E.g., is it the slightly increased complexity of the environment? Or not using self-play? Somewhat relatedly (and less importantly): Quite a few papers have proposed methods for finding more cooperative equilibria. In principle, the results in the paper could also be used to this end. This is very briefly discussed in Sect. 3.1. It would be interesting to see a more detailed discussion of how this would relate to these other methods, though I think the authors view their project mostly as descriptive rather than prescriptive. So, perhaps it’s not that interesting to talk about how various other methods for achieving cooperation are better/worse. Other Strengths And Weaknesses: I like how the paper uses theoretical perspectives (talk of subgame-perfect equilibrium), while also engaging with the practice of multi-agent ML. 
I haven’t checked the proofs, but I’ve thought about this sort of issue enough to understand that it is difficult and very tedious to prove this sort of result. So, although the contribution could be viewed as small (in that it is about a specific, simple game, etc.), the amount of effort necessary to obtain this sort of result is, I believe, quite large. The question of whether/when cooperation can emerge in RL is important. In my mind, the main weakness of the paper is that it is a little hard to understand, both at the micro level – see the detailed suggestions below – and at the level of motivation and interpretation for the results. From more to less central: 1) Since Pavlov is a fixed point of the Q function, it’s, I assume, relatively easy to show that there are some initializations under which the Q-learner starts out and sticks with Pavlov. (Right? E.g., take whatever Q values you get in Phase 3 and just initialize it to those.) I take it that the paper is interesting in large part because it doesn’t just do this. Instead, it considers initializations that are quite far from Pavlov and shows that there’s a robust-enough-to-prove path from these initializations to Pavlov. It gives more interesting starting conditions under which the players learn to play Pavlov. But then the paper doesn’t really say all that much about these conditions, about why they’re more interesting than the trivial conditions, etc. Even the characterization of the Q values as “optimistic” isn’t really explained. 2) How was Table 1 picked? It’s natural to pick a single-parameter version of the PD (rather than a fully parameterized one). For instance, a simple one is: playing D gives you 1 and cooperating gives the other player y>1. But the specific parameterization is quite odd. What does g control, intuitively? Also, wouldn’t it be good to be able to arbitrarily control the ratio of gains from cooperation to gains from defection?
(This is y in the parameterization above.) That way, one could think about what happens as this ratio goes to infinity. Other Comments Or Suggestions: In cases like the below, it’s nicer, in my view, to give seminal rather than textbook references. >When the prisoner’s dilemma is infinitely repeated, new equilibria can emerge, and always defect is no longer the dominant strategy (Osborne, 2004). >A popular approach to maximize the cumulative reward function Equation (1) is to find an action-value function or Q-function, that is a fixed point of the Bellman operator (Sutton and Barto, 2018, Eq. 3.17). >the usual way to deal with multiple agents in reinforcement learning applications (Lowe et al., 2017; Silver et al., 2018; Baker et al., 2019; Tang, 2019). It might be worth noting here that this requires symmetry of the game. Also, is this really the usual way to deal with multiple agents in _general-sum_ games? Why not write the second math line in the Prop. 2.1 in the same way as the first? Isn’t that shorter and easier to read? >The main takeaway from Proposition 2.2 is that there exists a fixed point of the Bellman Equation (2) whose associated strategy is cooperative. Interestingly, tit-for-tat It would be nice to say something here about the relation between SPE and being a fixed point. >In this section, we show convergence of the dynamics resulting from ϵ-greedy Q-learning with memory updates (i.e., Equation (3)) toward the cooperative fixed point Pavlov policy. The next sentence spells it out, but this sentence alone reads quite weird. Obviously, you can also converge to the “always defect” policies. >distribution ρ over the initial state space S This is a bit awkward, because S is just the regular state space, right? In Theorem 3.2, I find this a little confusing: >Suppose the initial policy is always defect The version of Q-learning considered here doesn’t take a policy as input. Is this implicitly a constraint on the Q values?
Relatedly in the proof: Why is $Q_{s,D} > Q_{s,C}$ in Phase 1? This doesn’t follow from Assumption 3.1, right? In Algorithm 1: I assume s_t is updated and Alg. 2 is called with the current s_t? >As opposed to the vanilla Q-learning cases, in which the agents learn to cooperate is not the same, but the resulting policy is. I’m unable to parse this sentence. In the references: >Competition Bureau. Big data and innovation: Implications for competition policy in canada. Canada should be capitalized! Questions For Authors: 1. See item 1 from the “Other Strengths And Weaknesses” section. 2. See item 2 from the “Other Strengths And Weaknesses” section. 3. (The second paragraph from “Essential References Not Discussed” also asks questions, but not ones that I expect the authors to answer in the short amount of available time.) Code Of Conduct: Affirmed. Overall Recommendation: 3
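The relation between subgame-perfection and being a fixed point that the review asks about can be made concrete with a one-step-deviation check: evaluate each memory state under mutual Pavlov play, then verify that Pavlov's own action maximizes the one-step Q-value in every state. The payoffs $T=4, R=3, P=1, S=0$ (satisfying $2R > T + P$) and $\gamma = 0.9$ are our assumptions, not the paper's $g$-parameterization.

```python
import numpy as np

C, D = 0, 1
T, R, P, S = 4.0, 3.0, 1.0, 0.0  # assumed payoffs with 2R > T + P
gamma = 0.9
payoff = {(C, C): R, (C, D): S, (D, C): T, (D, D): P}  # row player's reward

def pavlov(s):
    """Win-stay, lose-shift: cooperate only after a coordinated round."""
    mine, theirs = s
    return C if mine == theirs else D

states = [(C, C), (C, D), (D, C), (D, D)]

# Policy evaluation: value of each memory state when BOTH players follow
# Pavlov (the opponent sees the mirrored state s[::-1]).
V = {s: 0.0 for s in states}
for _ in range(2000):  # plain fixed-point iteration, contraction rate gamma
    V = {s: payoff[(pavlov(s), pavlov(s[::-1]))]
            + gamma * V[(pavlov(s), pavlov(s[::-1]))]
         for s in states}

# One-step deviations: Q(s, a) with the opponent still playing Pavlov.
# Pavlov being greedy w.r.t. its own Q is exactly the fixed-point /
# one-shot-deviation property behind subgame-perfection.
for s in states:
    opp = pavlov(s[::-1])
    q = [payoff[(a, opp)] + gamma * V[(a, opp)] for a in (C, D)]
    assert int(np.argmax(q)) == pavlov(s)

print({s: round(V[s], 2) for s in states})  # diagonal states are worth R/(1-gamma)
```

With these payoffs the check passes in all four states; lowering $\gamma$ below $(T-R)/(R-P) = 0.5$ makes it fail in state $(C, C)$, recovering the usual discount-factor threshold for sustaining cooperation.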
Rebuttal 1: Rebuttal: The authors would like to thank Reviewer hUYk for their in-depth comments, which have significantly improved the updated version of the manuscript. > I’d like to know what the relevant differences are to this paper. E.g., is it the slightly increased complexity of the environment? Or not using self-play? The key distinction between our work and all the papers cited in the review is that they adopt an empirical approach, mostly introducing new algorithms/settings to promote cooperation. In contrast, we establish theoretical results on achieving cooperation for an existing algorithm: $Q$-learning. Below, we clarify the differences between our algorithm and those considered in the cited papers, as well as why we believe they report divergent findings (i.e., that cooperation is difficult to achieve). - **Sandholm and Crites, 1996**: This study examines the same setting as ours but employs a policy gradient method instead of Q-learning and does not consider self-play. Given their experimental findings, our theoretical results are somewhat surprising. Notably, empirical evidence suggests that relaxing the self-play assumption still leads to cooperation “most of the time” (see Barfuss and Meylahn, 2023, Fig. 1a). **Our perspective**: - In retrospect, we believe their findings differ because policy gradient methods struggle to discover cooperative policies. - Barfuss and Meylahn suggest that the absence of self-play does not inherently prevent cooperation from emerging. - **Foerster et al., 2018; Letcher et al., 2019**: These works build on Sandholm and Crites (1996) by introducing opponent shaping, where updates account for the opponent’s learning process using first-order methods. Their approach aims to address the shortcomings of standard policy gradient methods. While they apply their methods to slightly more complex games, tabular Q-learning would still be feasible in such settings.
**Our perspective**: - Their conclusions differ from ours because they focus on policy gradient methods, which, as shown by Sandholm and Crites (1996), struggle to find cooperative policies. - They consider a slightly larger game (the coin game) and relax the self-play assumption. It would be interesting to test whether tabular Q-learning can still identify cooperative strategies in this setting. - **Oesterheld et al., 2023**: This work investigates the one-shot Prisoner’s Dilemma, a setting where cooperation is known to be unattainable in equilibrium. They circumvent this limitation by introducing transparency, allowing agents to share information about their similarities. While their problem setup is more constrained than ours, it introduces distinct challenges. - **Hutter et al., 2020**: This study integrates ideas from Foerster et al. (2018) and Letcher et al. (2019), along with transparency mechanisms similar to those in Oesterheld et al. (2023). Thus, their conclusions diverge from ours for similar reasons. > 1 Since Pavlov is a fixed point of the Q function, it’s, I assume, relatively easy to show that there are some initializations under which the Q-learner starts out and sticks with Pavlov. Yes, indeed! If the Q-values are initialized exactly at one of the equilibria, e.g., the Q-values corresponding to the Pavlov policy, then the policy sticks to Pavlov. This is also true for the always defect policy. *The paper's goal is to show that one can initialize the Q-values such that the initial policy is always defect*, but still converge toward a cooperative policy. This is not obvious because one has to show convergence towards a specific equilibrium, which is nontrivial. > 2) How was Table 1 picked? This table comes from Bancio and Mantegazza, a work we directly build upon; that's why we chose this parameterization to begin with. Intuitively, $g$ controls the incentive to cooperate; the larger $g$ is, the more cooperation is incentivized.
In this setting, $g$ can vary between 1 and 2. We would like to stress that all the results are derived with general rewards $r_{a^1, a^2}$ (as can be seen in Appendices B, C, or D), and the 1D parameterization is mostly used for clarity of presentation (e.g., the existence of the Pavlov policy, Prop. 2.2). We will remove this parameterization and keep the general reward formulation $r_{a^1, a^2}$. **Action**: We clarified these discussions in the manuscript and adopted the general reward parameterization. All the comments from the Comments or Suggestions section have been addressed in the updated manuscript; however, we do not repeat them here for conciseness of the rebuttal. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. While I still have some concerns and plan to look at the paper again and weigh all the considerations, I am for now increasing my score to Weak Accept. Responses: Thanks for relating your paper to these prior papers. In my mind, the contrast to the general sense from these other papers that reciprocal cooperation doesn't just emerge spontaneously without some kind of nudging (like opponent shaping) is very interesting about this paper, so I'd recommend highlighting it more. Also, for those who use citation graphs, it would definitely be helpful for this paper to cite as many papers as possible that somewhat centrally make these kinds of claims. (There are lots of papers making this point. E.g., here's another one: https://arxiv.org/pdf/1806.04067 ) Certainly, I would have liked to read this paper right after reading, say, the Foerster et al. 2018 one. Regarding the nitty-gritty of why the results are different: Obviously, it's a little out of scope for this paper to discuss the setups of other papers. Maybe it's something for future work instead. But I'd be very interested in reading more about this, specifically w.r.t. the most similar papers. (Not sure which one of the ones I mentioned is most similar.
Probably one of the papers that specifically consider the IPD. I suppose the Hutter paper has an even simpler policy space, but one that doesn't allow for anything analogous to Pavlov.) Especially if the authors basically already know and would just need to explain. I don't find the theory about gradients immediately intuitive. Re 1: Thanks for this. This is what I figured, but it would be good to be clearer in the paper about why the condition in the paper is particularly interesting. Re 2: I welcome using a more generic / simpler parameterization.
Summary: This paper shows that in the Prisoner's dilemma, Q-learning agents can learn to collude, converging to a cooperative policy. The authors clearly identify the underlying assumptions behind such behaviour. Claims And Evidence: The main contribution of the paper is to prove that both with (Theorem 3.3) and without exploration (Theorem 3.2), agents can learn a cooperative policy. Proofs and proof outlines are provided for the main results, which seem to be correct. Methods And Evaluation Criteria: Yes, the methods are sound and the experimental evaluation appropriate. Theoretical Claims: Proofs of the main results are provided in the body of the paper. They appear to be correct. Proofs of other results are given in the appendix. I did not check them. Experimental Designs Or Analyses: The experimental analysis appears to be correct. This is mostly a theory paper; experiments are used to back the theoretical claims in the case of deep Q-learning. Supplementary Material: No, I only briefly skimmed through it. Relation To Broader Scientific Literature: As opposed to (Banchio and Skrzypacz, 2022; Banchio and Mantegazza, 2022), the authors consider the standard stochastic (i.e., not averaged) version of \epsilon-greedy Q-learning with memory. The paper cites (Banchio and Mantegazza, 2022) at various points, as convergence of Q-learning in the prisoner's dilemma is considered there for the memoryless case. Still, the results obtained are quite distinct from (Banchio and Mantegazza, 2022) -- where a cooperative strategy does not exist -- to justify a separate analysis and contribution. Essential References Not Discussed: Related works are discussed at a suitable length. Other Strengths And Weaknesses: This paper shows an interesting convergence result of Q-learning in the prisoner's dilemma, which might find applications in markets. Still, the contribution is mainly theoretical. Also, a number of assumptions are made on initial conditions and the behaviour of agents.
So, the results are rather tailored to the particular setting at hand. It would be interesting if the authors discussed the broader implications of their results. Other Comments Or Suggestions: The Pavlov policy has long been known as an optimal policy for the repeated prisoner's dilemma. What is nice in this paper is the proof that this policy can actually be learnt by using Q-learning. This is interesting and useful information, as there exist several strategies which are fixed points of the Bellman equation and could theoretically be learnt by Q-learning, as the authors point out. This paper shows convergence to a specific equilibrium. Questions For Authors: 1) The authors prove the convergence results for the case of one-step memory. How is this case motivated? 2) In the related work section, they also discuss related results for the memoryless case. But what would change if we assume perfect memory? 3) The authors provide their results based on Assumption 3.1 and motivate this assumption. Nonetheless, do the authors have any intuition about the behaviour of Q-learning for other initial conditions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: The authors would like to thank Reviewer LKws for their in-depth comments and very insightful review. > 1 - The authors prove the convergence results for the case of one-step memory. How is this case motivated? The primary motivation for this work stems from the findings of Banchio and Mantegazza (2022), who showed that in the memoryless case with averaged updates, cooperative equilibria do not emerge—agents consistently learn the always-defect equilibrium, or an equilibrium residing at the boundary of the cooperate-defect regions (see Figure 5 of their paper). We hypothesized that this lack of cooperation was specific to the no-memory setting and that introducing even a minimal, one-step memory could facilitate convergence toward cooperation. Our results confirm this hypothesis. Reference: Banchio, Martino, and Giacomo Mantegazza. 2022. "Artificial Intelligence and Spontaneous Collusion." > 2 - In the related work section, they also discuss related results for the memoryless case. But what would change if we assume perfect memory? The choice to focus on one-step memory is motivated both conceptually and practically. Axelrod (1980a, 1980b) demonstrated in his repeated Prisoner's Dilemma tournament that the most effective human-crafted strategy (TIT FOR TAT) relied on short-term memory, was forgiving, and was the simplest among the submitted policies. This suggests that minimal memory is sufficient for cooperative behavior to emerge. From a theoretical point of view, going beyond one-step memory is significantly more complex to analyze and quickly becomes computationally intractable: for instance, with a two-step memory, the state space is of size $2^4 = 16$, which yields $2^{2^4} > 6 \cdot 10^4$ possible policies, which would be significantly more challenging to analyze theoretically (and potentially for only marginal conceptual gains).
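This combinatorial blow-up can be checked in a few lines (an illustrative sketch of the counting argument above, not code from the paper; a state is the joint action history, and a deterministic policy picks one of two actions in each state):

```python
# Count states and deterministic policies in the iterated Prisoner's Dilemma
# as a function of memory length (2 actions, 2 agents).
def num_states(memory_steps: int, actions: int = 2, agents: int = 2) -> int:
    # One observed joint action per remembered step.
    return actions ** (agents * memory_steps)

def num_policies(memory_steps: int) -> int:
    # A deterministic policy chooses one of 2 actions in every state.
    return 2 ** num_states(memory_steps)

print(num_states(1), num_policies(1))  # 4 states, 16 policies (one-step memory)
print(num_states(2), num_policies(2))  # 16 states, 65536 > 6 * 10**4 policies
```

The jump from 16 to over 60,000 candidate policies is why the analysis quickly becomes intractable beyond one-step memory.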
From a practical point of view, reinforcement learning models for the Iterated Prisoner's Dilemma often employ GRUs or LSTMs (e.g., in LOLA [Foerster et al., 2018]), but these architectures are still constrained in their memory capabilities—LSTMs, for instance, can typically recall only up to 150 steps (Khandelwal et al., 2018). Similarly, transformer-based approaches are limited by context window length. However, such deep-learning considerations fall outside the scope of our study. Axelrod, Robert. "More effective choice in the prisoner's dilemma." Journal of Conflict Resolution, 1980a. Axelrod, Robert. "Effective choice in the prisoner's dilemma." Journal of Conflict Resolution, 1980b. Foerster, Jakob N., et al. "Learning with opponent-learning awareness." AAMAS 2018. Khandelwal, Urvashi, et al. "Sharp nearby, fuzzy far away: How neural language models use context." arXiv preprint arXiv:1805.04623 (2018). > 3 The authors provide their results based on Assumption 3.1 and motivate this assumption. Nonetheless, do the authors have any intuition about the behaviour of Q-learning for other initial conditions. Yes! We have clear intuitions about the behavior of Q-learning under different initial conditions. For instance, - If Assumption i) is not satisfied, then one can show that the agents learn the always-defect policy. As in the proof of Thm 3.2, one can study the evolution of Equation (4) and show that $Q_{(DD), D}^t$ converges to $Q_{(DD), D}^{\text{always defect}}$ and always stays larger than $Q_{(DD), C}^t = Q_{(DD), C}^{t_0}$, hence the policy remains always defect. This is also illustrated in the left plot of Figure 3. - If Assumption i) holds but Assumption ii) does not, the learned policy follows a lose-shift strategy. This can be established using similar arguments by analyzing the evolution of the system of Equations (5-6). **Action:** These discussions will be added to the revised manuscript.
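For readers unfamiliar with the win-stay/lose-shift dynamics discussed throughout this thread, here is a minimal self-play simulation of the Pavlov policy with one-step memory (an illustrative sketch only; Q-value learning and the paper's exact formalization are not modeled):

```python
# Pavlov ("win-stay, lose-shift") in self-play: cooperate iff both agents
# chose the same action last round.
C, D = "C", "D"

def pavlov(last_self, last_other):
    return C if last_self == last_other else D

def play(state, steps):
    # state = (action of agent 1, action of agent 2) in the previous round
    history = [state]
    for _ in range(steps):
        a1 = pavlov(state[0], state[1])
        a2 = pavlov(state[1], state[0])
        state = (a1, a2)
        history.append(state)
    return history

# After a unilateral defection, self-play recovers mutual cooperation:
print(play((C, D), 3))  # [('C', 'D'), ('D', 'D'), ('C', 'C'), ('C', 'C')]
```

This illustrates why Pavlov is "forgiving": a single defection triggers one round of mutual punishment, after which both agents return to cooperation.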
OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance
Accept (poster)
Summary: ## update after rebuttal: Score updated to 3 This paper suggests that during distributed training of vision-language models, there is computational imbalance due to model architectures, data types, and how the mini-batches are constructed inside and across devices. This paper proposes a novel method of mini-batch construction, model partitioning, and re-computation of memory workload distribution so the computational imbalance is minimized, achieving a 1.8x training speed-up. Claims And Evidence: Not sure I can distinguish between evidence 1 (line 041) and evidence 3 (line 047). Methods And Evaluation Criteria: I think the methods and evaluation criteria are more empirical and not mathematical. The authors did a lot of experiments from different angles, and they are easy to understand. But I do not see any mathematical formulation of their methods. Therefore it is harder to apply the techniques in other models. For example, the calculation of Qv and Qt is purely experimental. So if I want to find out those values for a new model, without any math formulation, I am left with running a bunch of experiments. Theoretical Claims: Table 7 shows an ablation of Qv, Qt. But what is the mathematical formulation? Asking this because if I want to apply this technique to a new model, how do I find Qv and Qt for that? Experimental Designs Or Analyses: Experiments are well designed as per Tables 3 and 4. Supplementary Material: Yes, everything. Relation To Broader Scientific Literature: This paper is on the broader topic of efficient distributed training, such as LazyBatching (https://arxiv.org/abs/2010.13103) or DeepSpeed ZeRO etc. But the focus of this paper is on vision-language models, which adds a different flavor to existing works that are generally focused on a single modality (specifically LLMs). Essential References Not Discussed: None Other Strengths And Weaknesses: Strength: 1. Well written paper with lots of analysis 2.
The authors are tackling a novel problem of looking into efficient distributed training of vision-language models Weakness 1. The paper lacks mathematical formulation, so it is hard to use in other models beyond the models shown in the experiments. For example, if I want to do this for PaliGemma, how do I arrive at Qv and Qt without running a lot of experiments? Other Comments Or Suggestions: None Questions For Authors: 1. How much overhead does the initial sampling stage add? 2. How does it scale when you have more than 2 modalities - say language, audio and image? How will this impact the initial sampling overhead? Increase significantly or minimally? 3. I understand the inter-stage and intra-stage problem as described in figure 2. It would be good if you could show how this figure changes once you apply OmniBal Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. Figures and tables are shown at https://anonymous.4open.science/r/O-A/O.pdf. *Q1: Not sure I can distinguish between evidence 1 (line 041) and evidence 3 (line 047):* "(1) Varying input sizes of LLM and ViT cause imbalanced computational loads across training iterations and devices. (3) Input size variation and computational imbalance compel us to use the most aggressive re-computation (checkpointing) strategy to prevent program crashes, which wastes computational resources." **R1:** - (1) highlights the **idle time** across different devices caused by the variation in sequence lengths between the LLM and ViT, which leads to load imbalance during training. - (3) emphasizes the computational overhead introduced by the need for aggressive checkpointing strategies, leading to **wasted computational resources**. *Q2: I think the methods and evaluation criteria are more empirical and not mathematical. The authors did a lot of experiments from different angles and they are easy to understand. But I do not see any mathematical formulation of their methods. Therefore it is harder to apply the techniques in other models. For example, the calculation of Qv and Qt is purely experimental. So if I want to find out those values for a new model, without any math formulation, I am left with running a bunch of experiments.* Concerns about the mathematical formulation of methods and how to find Qv and Qt. **A2:** **Why not a mathematical formula?** The core of ensuring data balance lies in simultaneously fixing both the ViT input and the LLM input, which constitutes a Two-Dimensional Balanced Partition Problem (2D-BPP), an **NP-hard problem**, making it difficult to derive a general mathematical formulation. Instead, we adopt a practical and intuitive approach using experiments from multiple angles to tackle the problem effectively. The imbalance arises from variable input lengths.
Our goal is to approximate equal input lengths for ViT and LLM, but no explicit optimal solution exists. Thus, we introduce Q'v and Q't as dataset-dependent hyperparameters to guide the balancing process. **How to get Q'v and Q't?** We provide a script (https://anonymous.4open.science/r/omnibal_example-E4D7/test_balanced_dynamic_batch.py, line 227) that uses an offline exhaustive search (≈10 minutes for a 1.2M-sample dataset, run only once, **no full training required**) to automatically determine Q'v and Q't, making the method easy to apply in practice. *Q3: How much overhead does the initial sampling stage add?* **A3:** **Overhead Analysis** The sampling overhead is very low: for example, with 1.2 million samples, the sampling can be completed within a **few tens of seconds**, which imposes minimal overhead on the overall training process. The time **complexity of ISF** is C*O(N+M), where N is the number of samples, M is the number of samples per pack, and C is the number of iterations. *Q4: How does it scale when you have more than 2 modalities - say language, audio and image? How will this impact the initial sampling overhead? Increase significantly or minimally?* **A4:** **Scaling to More Than Two Modalities** We begin by categorizing multi-modal tasks using the case of three modalities: language, audio, and image. When a task involves only two modalities (e.g., **ViT + LLM, LLM + Audio, or Audio + ViT**), our method can be directly applied with minimal modifications. These cases resemble standard VLM scenarios and are illustrated in the upper part of Figure 3 (https://anonymous.4open.science/r/O-A/O.pdf). For tasks involving all three modalities (**ViT + LLM + Audio**), we also provide adapted examples in the lower part of Figure 3 (https://anonymous.4open.science/r/O-A/O.pdf). In these cases, the underlying principles of our approach remain consistent. The only change is the need to handle more combinations during batch construction.
**How will this impact the initial sampling overhead?** The increase is minimal for our ISF. The time complexity of ISF is C*O(N+M), which is essentially an **O(N)-level** method and therefore does not introduce significant overhead. *Q5: I understand the inter-stage and intra-stage problem as described in figure 2. It would be good if you could show how this figure changes once you apply OmniBal* Figure 4 (https://anonymous.4open.science/r/O-A/O.pdf) shows the results of using OmniBal. Please feel free to raise any other comments. --- Rebuttal Comment 1.1: Comment: Thank you for your answers to my questions. I will increase my score to 3 --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful response and for kindly considering increasing the score to 3. Just as a gentle note, the score may not have been updated yet on the system interface—perhaps it was just overlooked.
Summary: This work focuses on addressing imbalanced computational loads in large-scale 3D parallel training of vision-language models by rebalancing across data, model, and memory dimensions. The authors conduct experiments on various models, datasets, and hardware platforms to demonstrate the speed-up ratio for vision-language model training. Claims And Evidence: 1. The claim that "vision-language instruct-tuning models have recently made significant progress due to their more comprehensive understanding of the world" is unclear. The statement implies that the progress is caused by a deeper understanding of the world, yet the authors provide no evidence or explanation for how such understanding is achieved or how it leads to the observed improvements. 2. Regarding the heterogeneity between LLM and ViT models, while this disparity is an objective fact, when the two models are fine-tuned as a unified system, it is unclear why one should consider the structure of each module separately. The authors fail to explain in detail what consequences arise from this heterogeneity—for example, how imbalanced model partitioning is affected and how the partitioning is specifically conducted. 3. The paper focuses on the instruction-tuning stage, but pretraining is equally important for building a strong multimodal LLM (MLLM). The authors do not mention pretraining or explain why it is not addressed in this work. Methods And Evaluation Criteria: 1. The authors propose to enhance training speed from three aspects: balanced dynamic mini-batches, balanced model partition, and balanced adaptive re-computation. 2. They validate the efficiency improvements of their method across different backends and balancing strategies. Theoretical Claims: No significant theoretical claims are made. Experimental Designs Or Analyses: 1. 
The experiments first verify, across different backends and model sizes, that the proposed method not only speeds up training but also maintains training effectiveness. 2. Subsequently, the authors separately evaluate improvements from the data, model, and memory perspectives. Supplementary Material: The supplementary materials provide detailed quantitative evidence of the computation imbalance problem and include numerous additional experimental results. Relation To Broader Scientific Literature: 1. The authors propose an iterative sampling and filtering strategy to improve data balance. 2. They utilize the sum of VAR (forward time) and communication time as a metric for partition evaluation, ranking the top K candidates for speed assessment. 3. Finally, the re-computation strategy is optimized based on actual memory needs. 4. The combined improvements across these three aspects lead to a significant increase in training speed, which is beneficial for the faster construction of MLLMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The font in Figures 3 and 4 is hard to read and should be improved for clarity. If the authors address my concerns, I will raise the score. Questions For Authors: 1. Why does using data balancing affect the maximum sequence length, and how is this reduction achieved? 2. The authors mention that existing vision-language models suffer from issues such as structural heterogeneity and varied input sizes, which lead to slower training speeds. However, if future MLLMs adopt an architecture similar to the "fuyu" model—directly inputting visual tokens into the LLM without a separate visual encoder—would these issues persist? Would the proposed method still be effective under such circumstances? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback. Figures and tables are shown at https://anonymous.4open.science/r/O-A/O.pdf. *Q1: The claim that "vision-language instruct-tuning models have recently made significant progress due to their more comprehensive understanding of the world" is unclear. The statement implies that the progress is caused by a deeper understanding of the world, yet the authors provide no evidence or explanation for how such understanding is achieved or how it leads to the observed improvements.* **A1:** We acknowledge that the original explanation may lack clarity. In the revised version, we will provide a more rigorous and precise description, clarifying the mechanisms behind the improvements in vision-language instruct-tuning models and avoiding ambiguous references to "understanding of the world". *Q2: Regarding the heterogeneity between LLM and ViT models, while this disparity is an objective fact, when the two models are fine-tuned as a unified system, it is unclear why one should consider the structure of each module separately. The authors fail to explain in detail what consequences arise from this heterogeneity—for example, how imbalanced model partitioning is affected and how the partitioning is specifically conducted.* **A2:** **Reasons for considering the structure of each module separately** - **Input differences:** ViT and LLM modules receive fundamentally different types of inputs with distinct length distributions. - **Architectural differences:** Although both ViT and LLM are based on Transformer architectures, their configurations differ significantly, resulting in unequal computational overhead. **Consequences arising from this heterogeneity** - **Data-Level:** As shown in Figure 1 (https://anonymous.4open.science/r/O-A/O.pdf), when using LLM-style simple packing in a vision-language system, the ViT component suffers from a data imbalance problem.
- **Model-Level:** Figure 5 (https://anonymous.4open.science/r/O-A/O.pdf) shows a simple example of model partitioning based on the LLM method. Since computational cost depends on both the architecture and input length, the architectural and input discrepancies between ViT and LLM make it difficult to evenly divide computation across pipeline stages. This results in an imbalanced workload and significant pipeline bubbles when applying traditional pipeline parallelism strategies designed for LLMs. *Q3. The paper focuses on the instruction-tuning stage, but pretraining is equally important for building a strong multimodal LLM (MLLM). The authors do not mention pretraining or explain why it is not addressed in this work.* **A3:** We have also conducted experiments on multimodal large model pretraining. Details can be found in Section 5.4(4), Generalization Capability, under "Different Tasks," where we state: "Besides SFT tasks, pretraining tasks are also tested, as shown in Appendix G, and we observed consistent improvements across all settings." These results demonstrate that our approach remains effective even in the pretraining stage. *Q4: Why does using data balancing affect the maximum sequence length, and how is this reduction achieved?* **A4:** Figure 2 (https://anonymous.4open.science/r/O-A/O.pdf) shows detailed examples of how this reduction is achieved. **Simple Fixed Batching** The "maximum length" here refers to the max-seq-len in Table 5, including the batch dimension. The maximum input length is 5K tokens for the ViT and 4K tokens for the LLM. With a batch size of 4 (as in InternVL-1.5), this yields 20K for ViT and 16K for LLM. **ISF Dynamic Batching** Since ISF adopts a dynamic batching strategy, it can flexibly manage sequence lengths. If some samples are relatively long, the corresponding batch size will be set smaller to keep the maximum sequence length bounded (ViT 9K and LLM 4K).
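The dynamic batching behaviour described above can be sketched as a greedy packer with two token budgets (budget values and sample lengths below are illustrative assumptions; this is not the authors' released ISF script):

```python
# Greedily pack (ViT tokens, LLM tokens) samples into batches so that no
# batch exceeds either budget; an oversized single sample still gets its
# own batch in this simplified sketch.
def dynamic_batches(samples, qv, qt):
    batches, cur, v_sum, t_sum = [], [], 0, 0
    for v_len, t_len in samples:
        if cur and (v_sum + v_len > qv or t_sum + t_len > qt):
            batches.append(cur)
            cur, v_sum, t_sum = [], 0, 0
        cur.append((v_len, t_len))
        v_sum += v_len
        t_sum += t_len
    if cur:
        batches.append(cur)
    return batches

# Long samples naturally end up in smaller batches, keeping totals bounded.
samples = [(3000, 500), (3000, 500), (8000, 3500), (1000, 200), (1000, 200)]
print(dynamic_batches(samples, qv=9000, qt=4000))
# [[(3000, 500), (3000, 500)], [(8000, 3500), (1000, 200)], [(1000, 200)]]
```

The sketch shows why the effective batch size shrinks for long samples: the per-batch ViT and LLM token totals, not the sample count, are what the budgets constrain.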
*Q5: The authors mention that existing vision-language models suffer from issues such as structural heterogeneity and varied input sizes, which lead to slower training speeds. However, if future MLLMs adopt an architecture similar to the "fuyu" model—directly inputting visual tokens into the LLM without a separate visual encoder—would these issues persist? Would the proposed method still be effective under such circumstances?* **A5:** The "fuyu" model directly inputs visual tokens into the LLM without a separate visual encoder, making the model behave like a plain LLM. The structural heterogeneity and input size imbalance issues will **disappear**. However, its performance still lags behind more established architectures such as ViT-MLP-LLM. Therefore, our proposed method remains highly relevant to the current mainstream vision-language models. *Q6: The font in Figures 3 and 4 is hard to read and should be improved for clarity.* **A6:** We sincerely appreciate the reviewer’s valuable feedback and will update it in the revised version of the paper. Please feel free to raise any other comments. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal, which addresses all my concerns. I keep my score as 4. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We sincerely appreciate your comments and response.
Summary: This paper identifies a significant computational imbalance issue in large-scale distributed training for Vision-Language Models (VLMs) due to heterogeneity in vision and language components. To tackle this, the authors propose OmniBal, a comprehensive framework that balances computation across three dimensions—data, model, and memory—by adaptively forming balanced mini-batches, optimizing model partitioning, and strategically adjusting memory re-computation. Extensive experiments demonstrate that OmniBal significantly reduces training time while maintaining accuracy, confirming its efficacy and versatility across various models, datasets, and hardware configurations. Claims And Evidence: Please refer to the Question. Methods And Evaluation Criteria: Please refer to the Question. Theoretical Claims: N.A. Experimental Designs Or Analyses: Please refer to the Question. Supplementary Material: Yes. Relation To Broader Scientific Literature: Please refer to the Question. Essential References Not Discussed: Please refer to the Question. Other Strengths And Weaknesses: Strength: 1. The paper tackles an important challenge of computational imbalance in large-scale distributed training of vision-language models (VLMs). 2. It introduces a comprehensive framework that systematically addresses this imbalance from three complementary angles: data input balancing, optimized model partitioning, and adaptive memory management. 3. The motivation/observation of the imbalanced computation in VLMs is clear. Weakness: 1. Some claims need a more detailed explanation. 2. The novelty of this paper needs further justification. 3. Some design choices seem ad-hoc and need justification. Please refer to the Questions section for details. Other Comments Or Suggestions: Please refer to the Question. Questions For Authors: Thanks for the submission to ICML. I have the following comments/questions on this submission: 1.
The data partitioning approach in this paper appears to offer limited novelty. Numerous adaptive data batching methods, such as [1] and [2], have already addressed similar issues, albeit without specific evaluations on VLMs. However, applying these techniques to VLMs does not seem particularly challenging, given that (i) the primary goal—balancing computation both within and across batches—is quite similar, and (ii) the methodology, which involves batching based on profiled or predicted latency, closely resembles the existing works. Could the authors clarify the unique contributions of the proposed data partitioning method in comparison to these established batching approaches? In other words, what are the major challenges that prevent applying those approaches to VLMs? If it is a simple adaptation, I have concerns about the contribution of this paper. 2. Section 4 introduces Q'v and Q't to reduce Pad Ratio and Dist Ratio, but the paper lacks a clear formula or explanation of how these values affect the ratios. This makes it challenging to understand how Q values are chosen to achieve the intended balance. 3. In Section 5, Q′v is set equal to Qv, while Q′t is defined as Qt − 128. Could the authors provide an intuition or rationale behind these choices? 4. The design of data balancing seems a little bit ad-hoc. It seems like a trial-and-error method that relies heavily on some empirically picked hyper-parameters. Consequently, it's unclear whether this method would generalize effectively to different evaluation datasets. 5. In the model partitioning design, it says that "A candidate set of partition strategies is created by jittering P(1), P(2), . . . , P(N−1) within a radius of r". Could the authors clarify how these parameters are chosen and whether the selection process is systematic or largely empirical? 6.
I would like to see the comparison of the proposed method to some other computation balancing frameworks such as AdaPipe, besides showing the speedup over the baselines. 7. What is the difference between Figures 2 and 3? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. Figures and tables are shown at https://anonymous.4open.science/r/O-A/O.pdf. *Q1: What are the major challenges that prevent applying those approaches to VLMs?* **Major Challenges:** - **Data Level:** Simple packing strategies for LLMs lead to a severe imbalance in the vision components when applied to VLMs, as shown in Figure 1 (https://anonymous.4open.science/r/O-A/O.pdf). - **Model Level:** Existing pipeline parallelism techniques assume homogeneous architectures for LLMs and consistent input lengths. In VLMs, however, the ViT and LLM components differ significantly in both structure and input sequence length, leading to computational imbalance when such methods are naively applied. - **Memory Level:** The architectural mismatch between ViT and LLM also results in uneven memory consumption, rendering LLM-specific memory optimization strategies ineffective or even infeasible in VLM scenarios. **Illustrative Example:** Figure 1 illustrates a typical data imbalance issue that arises when applying simple LLM-style packing to VLM training. *Q2: Section 4 introduces Q'v and Q't to reduce Pad Ratio and Dist Ratio, ... how Q values are chosen to achieve the intended balance.* **A2:** **Why not a mathematical formula?** The core of ensuring data balance lies in simultaneously fixing both the ViT input and the LLM input, which constitutes a Two-Dimensional Balanced Partition Problem (2D-BPP), an **NP-hard problem**, making it difficult to derive a general mathematical formulation. Our goal is to approximate equal input lengths for ViT and LLM, but no explicit optimal solution exists. Thus, we introduce Q'v and Q't as dataset-dependent hyperparameters to guide the balancing process.
**How to get Q'v and Q't?** We provide a script (https://anonymous.4open.science/r/omnibal_example-E4D7/test_balanced_dynamic_batch.py, line 227) that uses an offline exhaustive search (≈10 minutes for a 1.2M-sample dataset, run only once, **no full training required**) to automatically determine Q'v and Q't, making the method easy to apply in practice. *Q3: In Section 5, Q′v is set equal to Qv, while Q′t is defined as Qt − 128. Could the authors provide an intuition or rationale behind these choices?* **A3:** The choices of Q′v and Q′t are based on the search results discussed in **A2: How to get Q′v and Q′t?**. The goal is to minimize the DistRatio, and Appendix C provides an ablation table showcasing part of our search results supporting this decision. *Q4: The design of data balancing seems a little bit ad-hoc. It seems like a trial-and-error method that relies heavily on some empirically picked hyper-parameters. Consequently, it's unclear whether this method would generalize effectively to different evaluation datasets.* **A4:** **This is not ad hoc.** The data balancing strategy was carefully designed based on a thorough analysis of the data distribution and task requirements. **Hyper-parameters** The parameters were also designed to solve the Two-Dimensional Balanced Partition Problem (2D-BPP), an **NP-hard problem**, and they are based on a search-based approach using dataset information, making them easily generalizable to other validation datasets. We conducted extensive experiments across diverse datasets, as detailed in Section 5.4 (**Generalization Capability**). These results demonstrate that our approach generalizes well beyond the initial evaluation setting. *Q5: In the model partitioning design, it says that "A candidate set of partition strategies is created by jittering P(1), P(2), . . . , P(N−1) within a radius of r".
Could the authors clarify how these parameters are chosen and whether the selection process is systematic or largely empirical?* **A5:** **How these parameters are chosen:** The initial values of P(1), P(2), ..., P(N−1) are obtained by profiling each layer’s forward time. A greedy algorithm then computes the anchor partition strategy P⁺ to balance the computation time across all stages Si, which is **systematic**. The radius "r" is **empirically** determined and spans the adjacent layers between the ViT and LLM to ensure a sufficient candidate space. *Q6: I would like to see the comparison of the proposed method to some other... Adapipe, besides showing the speedup over the baselines.* **A6:** **Comparison with other works:** Due to the limited number of studies specifically focused on large-scale VLM training, most existing approaches are designed for pure LLMs and cannot be directly applied. Model balance optimization in AdaPipe (combined with our data balance method ISF) corresponds to the profile-based method in Table 4 of the paper, which achieves inferior results compared to our BMP. *Q7: What is the difference between Figures 2 and 3?* **A7:** The content is the same; it is shown again so the reviewer can reference it easily. Please feel free to raise any other comments. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing the additional results and clarifications. Although I still have some concerns regarding the novelty and the potential overhead introduced by the trial-and-error parameter search design, I do not wish to be too picky since the experimental results are good. However, I still have concerns that applying methods such as [1, 2] to the VLM setting might be relatively straightforward and may not require substantial adaptation. While I acknowledge that VLM and LLM workloads and training paradigms are different, a deeper analysis beyond workload comparison is necessary, particularly at the design level. Of course, the original versions of these methods cannot be directly applied to VLM.
But, for example, if we consider the data sizes of both the language and vision modalities during adaptive batching, is the adaptation relatively straightforward, or do significant challenges still remain? I will increase my score if this concern is addressed. [1] Yu, Gyeong-In, Joo Seong Jeong, Geon-Woo Kim, Soojeong Kim, and Byung-Gon Chun. "Orca: A distributed serving system for Transformer-based generative models." In 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), pp. 521-538. 2022. [2] Choi, Yujeong, Yunseong Kim, and Minsoo Rhu. "Lazy batching: An SLA-aware batching system for cloud machine learning inference." In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 493-506. IEEE, 2021. --- Reply to Comment 1.1.1: Comment: *Q: concerns that applying methods such as [1, 2] to the VLM setting might be relatively straightforward and may not require substantial adaptation.* **A:** **Design level.** Both [1] and [2] focus on batching for inference. Section 3 in [1] and Background Section B in [2] highlight the differences between training and inference: during training, dataset information is known in advance, whereas during inference, incoming requests are unpredictable. They mainly aim to optimize inference requests for serving, and there is a **clear gap** between batching in inference and in training tasks. - For **inference**, [1, 2] dynamically group incoming requests into batches in real time, rather than waiting for a fixed-size batch, to improve throughput and balance latency against efficiency. - For **training**, we need to ensure both efficient computation within each DP (Data Parallel) rank and workload balance across different DP ranks to minimize idle time. **More relevant work in LLM training:** **Packing** enables efficient computation within each DP rank and balanced workloads across ranks.
It is widely adopted by models like LLaMA, Qwen, and DeepSeek, and frameworks such as **Megatron-LM, NeMo, and the Hugging Face Trainer**. https://docs.nvidia.com/nemo-framework/user-guide/24.07/nemotoolkit/multimodal/mllm/sequence_packing.html https://huggingface.co/docs/trl/v0.4.2/sft_trainer#packing-dataset-constantlengthdataset **LLM method (packing) compared to VLM** - *Straightforward application to VLM:* Response A1 to Reviewer NPbH shows the results. Packing as done for LLMs results in computation imbalance problems (on VLM instruct-tuning). | Model | Backend | Data Method | Dist Ratio VIT | Dist Ratio LLM | GPU Days | |:---------:|:---------:|:--------------:|:---------------:|:---------------:|:--------:| | 6 + 20B | Megatron | random | 0.34 | 0.30 | 61.8 | | 6 + 20B | Megatron | LLM packing | 0.40 | 0.05 | 48.3 | | 6 + 20B | Megatron | ISF | 0.02 | 0.14 | **21.3** | - *Challenge of transfer to VLM:* The BFS (Best-Fit Shuffle) packing schemes in Megatron-LM already have high complexity, O(N×M) (N: number of samples; M: number of samples per pack). If we further require satisfying both LLM and ViT training simultaneously, the task constitutes a Two-Dimensional Balanced Partition Problem (2D-BPP), which is **NP-hard**, making it difficult to derive a general mathematical formulation. Existing methods, when applied directly, fall short of addressing the specific challenges faced by our VLM. Our proposed heuristic solution, **ISF**, approximates the problem at **O(N)-level complexity** (as detailed in Response to Reviewer rnAP A3), effectively addressing the challenge. Moreover, it demonstrates a notable degree of adaptability in handling more than two modalities, as further elaborated in our response to Reviewer rnAP A4. We sincerely hope that our response helps address your concerns.
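To make the contrast with one-dimensional LLM packing concrete, here is a minimal sketch of two-dimensional balancing (purely illustrative; this is not the paper's ISF algorithm, and all names and the scoring rule are our own assumptions): a greedy heuristic that assigns each (vision_tokens, text_tokens) sample to the currently least-loaded DP rank in near-linear time.

```python
def balance_two_dims(samples, num_ranks):
    """Greedy heuristic for two-dimensional load balance: assign each
    (vision_tokens, text_tokens) sample to the rank whose combined
    normalized load is currently smallest. Illustrative only."""
    total_v = sum(v for v, _ in samples) or 1
    total_t = sum(t for _, t in samples) or 1
    loads = [[0, 0] for _ in range(num_ranks)]
    assignment = [[] for _ in range(num_ranks)]
    # Placing large samples first tends to reduce the final spread.
    for v, t in sorted(samples, key=lambda s: -(s[0] + s[1])):
        r = min(range(num_ranks),
                key=lambda i: loads[i][0] / total_v + loads[i][1] / total_t)
        loads[r][0] += v
        loads[r][1] += t
        assignment[r].append((v, t))
    return assignment, loads
```

The combined-load score is only a proxy: it balances the sum of the two modalities rather than each dimension exactly, which is precisely why the exact version of the problem (2D-BPP) is NP-hard and heuristics like this are used instead.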
Summary: The paper addresses the causes of computational imbalance in VLM training, covering aspects of data, model, and memory, and introduces OmniBal, a training framework designed to improve the training efficiency of VLMs. OmniBal is comprised of three algorithms, balancing batch sizes, model partitions, and memory usage, respectively. The authors present experiments on various datasets and VLMs, claiming that their framework accelerates VLM training under the metric of GPU days. Claims And Evidence: Please see Strengths and Weaknesses. Methods And Evaluation Criteria: Please see Strengths And Weaknesses. Theoretical Claims: Please see Strengths and Weaknesses. Experimental Designs Or Analyses: Please see Strengths and Weaknesses. Supplementary Material: Yes. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - This work explicitly demonstrates the detrimental impact of imbalanced data (batch size), distribution of model computation load, and re-computation on the training efficiency of VLMs, showing thorough comprehension of the issue. - Experiments across various datasets and VLM models verified the effectiveness of OmniBal as a holistic framework, suggesting that it boosts training speeds by around 1.5 to 3.5 times. Weaknesses: - In Section 3 the paper mentions the differences between the training of VLMs and LLMs. However, it needs more solid explanations and explicit data to tell: (i) Whether present strategies experimented on LLMs can be transferred to VLMs? If they can, data on their performance is expected. (ii) As mentioned in the same subsection, "(simple packing) results in computation imbalance problems (on VLM instruct-tuning)", then how severe are the problems? - Effective as the experiments show, yet the solutions provided in the paper seem trivial.
In addition, the details of optimizing re-computation in the Balanced Adaptive Re-Computation Strategy need further explanation in Appendix B.1. - In Table 4, the proposed method BMP shows only marginal improvement compared with existing methods. - The experiments are not sufficient. (i) The paper demonstrates the effectiveness of OmniBal, but lacks comparisons between OmniBal as a holistic framework and other works on this issue. (ii) More ablation studies of different combinations of components are expected in Table 6, for instance data + memory balance and model + memory balance. Although these modules might perform better with prior ones applied, such experiments are still recommended. Other Comments Or Suggestions: - There are some expressions and definitions that could be polished to be clearer for readers. In subsection 4.1, the exact term "Distribution ratio" should be placed right after the "Dist ratio" in bold. In subsection 4.2, when introducing the partitioning strategy, it might be better to use the form of the definition of a partition in set theory. Both would make it clearer and avoid confusion. - The explanation of the Balanced Adaptive Re-Computation Strategy in Appendix B.2 could be moved to the main body to make the paper more organized. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback. Figures and tables are shown at https://anonymous.4open.science/r/O-A/O.pdf. *Q1: In section 3 the paper mentions the differences between the training of VLMs and LLMs. However, it needs more solid explanations and explicit data to tell: (i) Whether present strategies experimented on LLMs can be transferred to VLMs? If they can, the data of their performance is expected. (ii) As mentioned in the same subsection, "(simple packing) results in computation imbalance problems (on VLM instruct-tuning)", then how severe the problems are?* **A1:** **Can present strategies experimented on LLMs be transferred to VLMs?** The **simple packing** strategy used in LLM training can be directly applied to VLMs, but its effectiveness is limited due to the structural differences between the two. Figure 1 (available at https://anonymous.4open.science/r/O-A/O.pdf) provides an example illustrating how LLM-style packing manifests in VLM training. **How severe are the computation imbalance problems caused by simple packing (on VLM instruct-tuning)?** The severity of the imbalance is demonstrated through actual analysis and experimental results, as shown in the accompanying table. | model | backend | Data method | Dist Ratio VIT | Dist Ratio LLM | GPU Days | |:--------:|:--------:|:-----------:|:------------:|:------------:|:--------:| | 6 + 20B | Megatron | random | 0.34 | 0.30 | 61.8 | | 6 + 20B | Megatron | LLM packing | 0.40 | 0.05 | 48.3 | | 6 + 20B | Megatron | ISF | 0.02 | 0.14 | **21.3** | *Q2: Effective as the experiments show, yet the solutions provided in the paper seem trivial. In addition, the details of optimizing re-computation in the Balanced Adaptive Re-Computation Strategy need further explanation in Appendix B.1.* **A2:** Our problem is both **challenging and important**, and to our knowledge, it has not been systematically studied before.
Although our proposed solution is relatively simple, it is highly efficient and easy to transfer and apply. Regarding the optimization of re-computation in the Balanced Adaptive Re-Computation Strategy, we provided a brief explanation in Appendix B.1, as the underlying idea is relatively straightforward. We appreciate the reviewer’s suggestion and will include a more detailed discussion in a future version of the paper. *Q3: In Table 4, the proposed method BMP only shows marginal improvement, compared with present methods.* **A3:** BMP will yield more substantial benefits in more **communication-constrained** settings (larger models). Compared to previous methods, BMP takes into account the imbalance in point-to-point communication across different pipeline stages in VLMs. *Q4: the experiments are not sufficient. (i) The paper demonstrates the effectiveness of OmniBal, but lacks comparisons between OmniBal as a holistic framework and other works on this issue. (ii) More ablation studies of different combinations of components are expected in Table 6, data + memory balance and model + memory balance for instance. Although these modules might perform better with prior ones applied, such experiments are still recommended.* **A4:** **Comparison with other works:** Due to the limited number of studies specifically focused on large-scale VLM training, most existing approaches are designed for pure LLMs and cannot be directly applied. Model balance optimization in AdaPipe (combined with our data balance method ISF) corresponds to the profile-based method in Table 4 of the paper, which achieves inferior results compared to our BMP. **Additional ablation studies:** The components in OmniBal are inherently interdependent and cannot be decoupled trivially. **Data-level balancing must be addressed first**, as it lays the foundation for subsequent optimizations at the model and memory levels. Moreover, memory optimization is intrinsically tied to model structure.
As such, we did not perform isolated ablation studies. Instead, we adopted a progressive evaluation strategy, incrementally adding modules to assess their cumulative effectiveness. **Response to Other Comments Or Suggestions:** We sincerely appreciate the reviewer’s valuable feedback and will incorporate the suggested improvements in the revised version of the paper. Please feel free to raise any other comments.
Summary: This paper focuses on the large-scale distributed training of multimodal large language models, and proposes an omniverse computation strategy to manage the vision-language data distribution and training memory optimization. Although the studied problem is significant in the development of multimodal large language models, the paper content does not really match the scope of ICML. For instance, the mainly compared work [Rajbhandari et al. 2020] is published at ICHPCNSA. Moreover, the arguments made in this paper are often very subjective. For instance, the authors claim that the data distribution affects distributed training efficiency. This statement has no reference to support it, and the experimental results are hard to connect directly with this point. Similarly, in most parts of this paper, the description lacks sufficient supporting references, e.g., Sec. 3. In the related work, no other works are mentioned about the distributed training of VL models. Besides, this paper seems to have been finished in a hurry. The presentation is very poor, making it hard to really understand the motivation, methodology and contribution of this paper, although the experimental results seem significant. The overall writing of this paper is more like a technical report, rather than an academic paper published by ICML. Overall, I would suggest the authors spend more time polishing this paper, and then find a more relevant conference or journal for submission. Claims And Evidence: See summary. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: See summary. Essential References Not Discussed: Yes Other Strengths And Weaknesses: See summary. Other Comments Or Suggestions: See summary. Questions For Authors: See summary. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. *Q1: ICML scope concerns* **A1:** We believe our paper is within the scope of ICML. **According to the ICML 2025 Call for Papers, topics of interest include (but are not limited to): "Machine Learning Systems (improved implementation and scalability, hardware, libraries, distributed methods, etc.)."** Our work focuses on large-scale distributed training and memory optimization for multimodal large language models, which fits the ICML scope. *Q2: in most parts of this paper, the description lacks enough references to support, e.g., Sec. 3. In the related work, there are also no other works mentioned about the distributed training of VL models* **A2:** In Section 2.1, Multi-Modal Large Language Models (MLLMs), we have cited a number of prior works on vision-language model (VLM) training, including **Qwen-VL, Q-Former, and LLaVA**. These papers, while primarily focused on model design and applications, do include brief discussions of distributed training strategies, often relying on backends such as DeepSpeed. Moreover, throughout the paper, we have provided extensive references to support our claims regarding distributed training and memory optimization, including established systems such as **PipeDream, DeepSpeed ZeRO, Megatron-LM, and GPipe**, among others. *Q3: Besides, this paper seems to be finished in a hurry. The presentation is very poor, making it hard to really understand the motivation, methodology and contribution of this paper, although the experimental results seem significant.* **A3:** While the current draft may need polishing in parts, the work was not done in haste. Other reviewers did not raise similar concerns about clarity. - *Reviewer rnAP comment: "Well written paper with lots of analysis"* - *Reviewer vzDC comment: "The motivation/observation of the imbalanced computation in VLMs is clear."* Please feel free to raise any other comments.
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I carefully read the paper again, and saw more evidence in the Appendix that I had overlooked in the first review, so I upgrade my rating. As also commented by other reviewers, the main claims of this paper need more direct support. I would suggest the authors make a clearer comparison between the training of MLLMs and LLMs, rather than leaving it in the Appendix. Besides, if distributed training is within the scope of ICML, more relevant ICML references are suggested to be added and discussed in the paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your valuable feedback. We will revise the paper according to the reviewers' suggestions by adding a detailed comparison between the training processes of MLLMs and LLMs in the main text and incorporating more relevant references.
Robust Spatio-Temporal Centralized Interaction for OOD Learning
Accept (poster)
Summary: The authors introduce a Spatio-Temporal OOD Processor, a framework for out-of-distribution learning in spatiotemporal graph convolutional networks. As stated, traditional methods rely on node-to-node message interactions, which degrade performance in OOD settings. The proposed method addresses such limitations by using a centralized messaging mechanism with Context-Aware Units and Generalized Perturbation Units -- replacing direct node interactions. A spatiotemporal optimization strategy is used to expose models to diverse training environments, and this is argued to improve generalization. They evaluate on six datasets. Claims And Evidence: Node-to-node messaging in STGNNs is sensitive to spatiotemporal shifts and this reduces generalization: Empirical results show that removing direct node-to-node interactions improves robustness in OOD scenarios. The centralized messaging mechanism improves model generalization and robustness: The proposed method achieves higher performance than 14 baselines across 6 datasets. Message perturbation via GenPU forces models to learn invariant spatiotemporal knowledge: Models trained with GenPU outperform those trained without perturbations in both generalization and inductive learning. The DRO-based training strategy improves adaptability to unseen environments: The proposed method outperforms ERM methods under spatiotemporal shifts. The claims and evidence appear to be clear and convincing. Methods And Evaluation Criteria: Evaluated on six spatiotemporal datasets and compared against 14 baselines. The evaluation metrics are MAE, RMSE, and MAPE. The experimental settings include multiple OOD settings, including temporal shifts, structural shifts, and rapid expansion scenarios. All these design choices appear to be legitimate. Theoretical Claims: The centralized messaging mechanism reduces complexity to O(KNdh) -- claimed to be more computationally efficient than traditional self-attention methods.
The proposed method's optimization function follows a distributionally robust paradigm, and this is claimed to lead to better generalization compared to traditional empirical risk minimization. Experimental Designs Or Analyses: The experimental setup and results appear to be well-designed and executed. Ablation studies fairly confirm that replacing node-to-node messaging with ConAU-based centralized messaging improves robustness. Also, removing GenPU results in degraded performance, which speaks to its role in improving feature extraction. The proposed method outperforms conventional methods by up to 17.01% in generalization and 18.44% in inductive learning. The proposed method appears to be efficient on large-scale datasets. This work could also benefit from a discussion on the trade-offs between computational efficiency and model interpretability. Also, a comparison with meta-learning approaches for generalization could add further depth. A few open questions: - How does the proposed method perform in traditional IID settings compared to its OOD benefits? - Could you speak to the computational trade-offs of using centralized messaging versus node-to-node interactions? - How well does the proposed method scale in real-time applications with large spatiotemporal datasets? - Could the proposed method be applied to non-spatiotemporal domains such as sequential decision-making or reinforcement learning? Would the proposed modules be of particular use for such use cases? Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This work builds on prior research in spatiotemporal learning, graph neural networks, and OOD generalization. However, some inconsistencies exist: 1- the study assumes that centralized messaging resolves OOD sensitivity, but it does not thoroughly analyze how different types of spatiotemporal shifts impact performance.
2- the role of message perturbation and DRO in non-spatiotemporal settings is not explored, and this effectively limits applicability. Essential References Not Discussed: Related Work, Section 5, does not appear to be comprehensive enough. Although recent works have been fairly introduced, the prior work leading to the SOTA could be better described. Other Strengths And Weaknesses: - The paper appears to be well-written in terms of language. However, reading is made difficult by the frequent introduction of acronyms. Keeping track of them creates a significant cognitive load, making it harder to follow the content. Simplifying this would greatly enhance readability. Other Comments Or Suggestions: Re context: - Investigating the impact of the proposed method on long-term temporal dependencies. - Analyzing the proposed method's interpretability and decision-making process for better model transparency. - Briefly discussing potential applications in climate modeling, urban planning, and environmental monitoring. Re writing: - I do not think a full stop or comma is needed after formulations. This could be revised. Questions For Authors: No questions apart from those raised in my comments in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your review. We provide figures and tables at an anonymous link ( https://anonymous.4open.science/r/reb_/5Pwy.pdf ). **The "#" before a title means that complete tables in the section are available in the anonymous link.** > \# Meta-learning We add meta-learning-based spatiotemporal models as baselines, and results on the SD dataset show that our model outperforms these models. |Model|Ours|MegaCRN|MGT| |:-:|:-:|:-:|:-:| |MAE|**23.67**|25.47|26.72| > \# IID Using the SD dataset as an example, the table below reports the average MAE. This demonstrates that our model still achieves competitive performance under the IID setting. |Model|Ours|STONE|CaST|D $^2$ STGNN|STGCN| |:-:|:-:|:-:|:-:|:-:|:-:| |MAE|**23.59**|24.98|30.17|24.46|25.44| > Computational Complexity As demonstrated in Theorem 1, our centralized messaging has higher efficiency. While the computational complexity of node-to-node interaction is O( $N^2$ d), our method reduces it to O(PNd), where P is much smaller than N. For example, P can be as small as 8 for a graph with 716 nodes. > Large Spatiotemporal Dataset As noted in line 309, in Section E.3 we evaluate the performance on large datasets. Below, we summarize key results from the CA dataset for easy reference. These results confirm the strong effectiveness of our model for large-scale data. |Model|Ours|CaST|BigST|GWNet|STGCN| |:-:|:-:|:-:|:-:|:-:|:-:| |MAE|**32.86**|41.26|39.59|37.63|40.64| > \# Non-spatiotemporal Domain We regret that, due to significant conceptual and methodological differences between the spatiotemporal domain we focus on and the reinforcement learning and sequential decision-making areas of interest to you, we cannot explore these unrelated areas within the limited time available. To address your concerns, we extended our evaluation to discrete graph OOD learning. Following the setup in [1], the table below on the Collab-0.8 dataset shows that our method performs well in non-spatiotemporal domains.
"w/o DRO" and "w/o Per" indicate the removal of the DRO and message perturbation mechanisms, respectively, underscoring their positive effects in the non-spatiotemporal setting. |Method|Ours|w/o DRO|w/o Per|DIDA|DySAT|VREx| |:-:|:-:|:-:|:-:|:-:|:-:|:-:| |AUC|**77.66**|75.62|73.45|72.59|62.19|62.21| [1] Zhang Z, Wang X, et al. Out-of-distribution generalized dynamic graph neural network with disentangled intervention and invariance promotion. arXiv preprint. > Different Types of Shifts In the third paragraph of the introduction and in Figure 1, we analyze the effects of different shifts. Temporal shifts lead to inaccurate node descriptions, with representation errors propagating through node-to-node messaging and impacting other nodes. Spatial shifts disrupt established message-passing paths, preventing the model from following trained pathways and degrading its graph representation performance. **Appendix Section E.5 evaluates our method’s ability to address these shifts**. To address your concern, we will refine the third paragraph to better emphasize the distinct impacts of each shift. > Related Work Sorry for any confusion. Due to space constraints, **we discuss works related to spatiotemporal OOD learning in detail in Appendix Section A**. In the new version, ICML permits an additional page, which will allow us to incorporate this discussion. > \# Long-term temporal dependencies Following the setup in long-term time series analysis, we use the Knowair dataset as an example, employing 96 time steps to predict 720 time steps. The averaged results below demonstrate that our proposed model effectively handles long-term temporal dependencies.
|Model|Ours|CaST|BigST|GWNet|STGCN| |:-:|:-:|:-:|:-:|:-:|:-:| |MAE|**24.89**|26.41|27.76|26.35|28.77| > Decision-making Process **Although model interpretability, an active research area, is not the primary focus of this paper, which emphasizes spatiotemporal OOD generalization**, we analyze the prediction decision-making process in Section E.14 using Shapley values, a standard tool for evaluating component contributions. Our model integrates temporal and spatial components to generate the final output. Under spatial shifts, the temporal component dominates, while under temporal shifts, the spatial component takes precedence. Section E.15 further improves interpretability through embedding visualization. In the future, we will explore enhancing interpretability by incorporating insights from sequential decision-making or reinforcement learning. > Potential Applications Spatiotemporal prediction, to which our method belongs, plays an active role in various downstream applications. For example, traffic forecasting can help urban planners optimize city designs, while meteorological forecasting can guide the public in understanding future climate and environmental conditions. > Presentation We will use the full terms instead of acronyms, except for the name of the model STOP. Moreover, we will remove symbols following formulas. --- Rebuttal Comment 1.1: Comment: My comments are appropriately addressed. I appreciate the explanations and improvements and have decided to increase the score. --- Reply to Comment 1.1.1: Comment: Dear reviewer 5Pwy, We are deeply grateful for your thoughtful evaluation and for awarding our work an increased score. Your constructive feedback and recognition are of great value to us, and we are truly honored that you recognize our efforts. Sincerely, The Authors
Summary: The authors introduce a novel spatio-temporal interaction mechanism called STOP, which is tailored to address the sensitivity of the conventional node-to-node messaging mechanism, favored by existing models, to spatiotemporal shifts. Key elements of STOP include the centralized message mechanism, the message perturbation mechanism, and an optimization objective rooted in distributionally robust optimization. Through a series of comprehensive experiments, the results effectively showcase the efficacy and competitiveness of STOP. Claims And Evidence: The authors have elucidated a strong motivation: existing spatio-temporal interaction mechanisms serve as one trigger for the sensitivity of current models to spatio-temporal changes. They have substantiated this motivation through a series of experiments, accompanied by relevant citations. Methods And Evaluation Criteria: The proposed framework integrates three synergistic technical innovations (centralized messaging, message perturbation, and distributionally robust optimization), which collectively provide a comprehensive solution to the spatiotemporal shift challenge. Theoretical Claims: This paper leverages Distributionally Robust Optimization theory to underpin the sophistication of the model. This framework is well-established. Experimental Designs Or Analyses: I reviewed the experiments in the paper. They include comparison experiments, double/multiple ablation studies, hyperparameter experiments, efficiency experiments, etc., thoroughly evaluating the proposed method. (1). The authors are expected to add predictive visualization cases to intuitively assess the model's predictive performance. (2). Add more discussion on the zero-shot performance of the model. (3). If we allow fine-tuning using a small amount of data with new features, the authors are requested to compare the model's few-shot learning capabilities.
Supplementary Material: I have reviewed all sections in the appendix, which contains a wealth of content including additional related work, method pseudocode, theoretical analysis, detailed experimental results, and discussion. Relation To Broader Scientific Literature: The spatiotemporal OOD task that the authors focus on represents a new and emerging research direction in the field of spatiotemporal prediction. Their critical insights into existing message-passing mechanisms are fresh perspectives in this field. Essential References Not Discussed: The efforts made by the authors to cover a comprehensive range of related work are commendable. However, to strengthen the literature review, I suggest two improvements: (1) elaborate on the advancements and novelty of the proposed method, and (2) discuss advancements in OOD research in other domains [1-2]. Ref: [1] Yang J, Zhou K, Li Y, et al. Generalized out-of-distribution detection: A survey[J]. International Journal of Computer Vision, 2024, 132(12): 5635-5662. [2] Liu J, Shen Z, He Y, et al. Towards out-of-distribution generalization: A survey[J]. arXiv preprint arXiv:2108.13624, 2021. Other Strengths And Weaknesses: Overall, the paper demonstrates high quality: it presents a convincing motivation, introduces an innovative framework, maintains a well-structured presentation, and conducts very comprehensive experimental evaluations. However, I have several concerns: 1. Expand the discussion on continual learning methods and general OOD techniques. 2. Include an evaluation of the model's effectiveness in few-shot settings. 3. Provide visualized prediction cases to illustrate the model's effectiveness. Other Comments Or Suggestions: None. Questions For Authors: What potential improvements can be made to the STOP framework? Is the use of GCN essential for spatiotemporal prediction? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you sincerely for your appreciation and thoughtful comments regarding the paper. Your feedback is invaluable in enhancing the quality of the manuscript. > **W1. Visualization Cases** Due to the rebuttal policy restrictions of ICML, we were only able to include some prediction visualization cases in an anonymous link (https://anonymous.4open.science/r/_reb/objW.pdf). Meanwhile, in the submitted manuscript, Figure 9 visualizes the time embeddings, personalized embeddings, and contextual embeddings used in our model, providing evidence for the effectiveness of the proposed embedding techniques. > **W2. Zero-shot Performance** In fact, the zero-shot performance includes the results for newly added nodes, as shown in Table 4 of our submitted manuscript. These nodes had no observed data during training and were excluded from the training process. Despite this, our model demonstrates impressive zero-shot performance. This is largely due to its ability to extract shared contextual features from spatiotemporal data, which can be leveraged by newly added nodes to achieve effective representations. For convenience, we have included the zero-shot performance results for SD below.

| Model | MAE | RMSE | MAPE |
|:-:|:-:|:-:|:-:|
| GWNet | 29.01 | 44.18 | 22.82 |
| STID | 25.40 | 39.53 | 18.66 |
| BigST | 25.22 | 39.22 | 18.02 |
| D$^2$STGNN | 25.85 | 39.33 | 20.07 |
| STAEformer | 25.28 | 39.42 | 18.41 |
| CaST | 28.83 | 44.10 | 22.93 |
| STONE | 25.06 | 39.15 | 18.12 |
| Ours | 22.74 | 36.09 | 16.82 |

> **W3. Few-shot Performance** To address your question, we use the SD dataset as an example. Following the experiment setup in the paper, when the testing environment changes, we fine-tune the model using the first week of the test set to evaluate its few-shot performance. The results are shown in the table below. We find that with minimal fine-tuning data, our model quickly adapts to new spatiotemporal patterns, demonstrating strong generalization. 
This is largely due to the lightweight architecture and effective use of contextual features.

| Model | MAE | RMSE | MAPE |
|:-:|:-:|:-:|:-:|
| STGCN | 25.38 | 38.77 | 20.61 |
| GWNet | 23.15 | 35.53 | 18.11 |
| STID | 25.47 | 39.65 | 18.87 |
| BigST | 25.55 | 39.61 | 18.81 |
| D$^2$STGNN | 22.09 | 34.82 | 16.87 |
| STAEformer | 26.22 | 40.34 | 19.15 |
| CaST | 28.40 | 43.66 | 20.70 |
| STONE | 23.08 | 34.88 | 17.55 |
| Ours | 20.34 | 32.64 | 14.62 |

> **W4. Related Work** In Section A.3, we summarize recent advances in continual learning. While these methods address dynamic spatiotemporal data, they typically require extensive new data for fine-tuning, implicitly assuming independent and identically distributed (i.i.d.) data. In the updated paper, we discuss progress in out-of-distribution (OOD) learning. Traditional machine learning assumes training and testing data share the same distribution, but real-world applications often face distribution shifts, leading to significant performance drops post-deployment [1]. This has driven growing interest in OOD learning, with methods categorized into unsupervised representation learning, supervised model learning, and optimization-based approaches [2-3]. These techniques leverage causal and invariant learning to extract generalizable knowledge from latent test distributions. Recently, OOD learning in graph learning has gained attention. For example, DisenGCN [4] disentangles informative factors in graph data, assigning them to distinct parts of vector representations. However, these methods struggle with spatiotemporal OOD problems due to their inability to capture complex, heterogeneous spatiotemporal correlations. Our key contribution is a novel spatiotemporal interaction mechanism that integrates centralized message-passing, message perturbation, and a distributionally robust optimization (DRO) objective, addressing these limitations effectively. Reference: [1] Yang, Jingkang, et al. "Generalized out-of-distribution detection: A survey." International Journal of Computer Vision, 2024. [2] Liu, Jiashuo, et al. 
"Towards out-of-distribution generalization: A survey." arXiv preprint arXiv:2108.13624 (2021). [3] Kaddour, Jean, et al. "Causal machine learning: A survey and open problems." arXiv preprint arXiv:2206.15475 (2022). [4] Ma, Jianxin, et al. "Disentangled graph convolutional networks." ICML 2019. > **Q1. Potential Improvement** In Section F, we discuss potential improvements to the model, including enhancing its performance by integrating large language models and implementing more advanced perturbation mechanisms, among others. > **Q2. GCN** Whether GCNs are essential for spatiotemporal forecasting remains debated. However, research consistently shows that GCN-based models outperform those relying solely on temporal dependencies, owing to their ability to introduce inductive biases among nodes. In general OOD scenarios, the dense message-passing mechanism of GCNs exhibits sensitivity to distributional shifts. To address this, we propose a centralized interaction mechanism as an alternative, enhancing the robustness of spatiotemporal interactions. --- Rebuttal Comment 1.1: Comment: The response has addressed my concerns, so I have decided to raise my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer objW, We are deeply grateful for your thoughtful evaluation and for your positive assessment of our work. Your constructive feedback is of great value to us, and we are truly honored that you recognize our efforts. Sincerely, The Authors.
Summary: The spatiotemporal messaging mechanism utilized in STGNNs exhibits inherent sensitivity to spatiotemporal variations. To address these limitations, the authors introduce a centralized messaging architecture integrated with a message perturbation mechanism and DRO optimization. Extensive experimental evaluations were conducted across six benchmark datasets, encompassing comparative analyses with 14 baseline methods. The results indicate that the proposed model achieves performance improvements of up to 17.01% compared to existing approaches. Claims And Evidence: Yes, the authors focus on the sensitivity of the messaging mechanisms in existing STGNNs to spatio-temporal shifts. They substantiate this by conducting comparative experiments and ablation studies involving multiple SOTA STGNNs across diverse OOD scenarios. Methods And Evaluation Criteria: Yes, in response to the sensitivity of the existing spatiotemporal node-to-node messaging mechanism of STGNNs, this study proposes a novel spatiotemporal interaction mechanism that integrates message perturbation and Distributionally Robust Optimization strategies. These strategies are tailored to address specific challenges, and the design motivation is both rational and logically consistent. Theoretical Claims: Yes, the application of Distributionally Robust Optimization theory in this study theoretically demonstrates that the proposed model exhibits superior generalization capabilities compared to conventional models. Experimental Designs Or Analyses: The experimental section demonstrates rigorous methodology and thorough validation procedures. The study presents a systematic performance evaluation comparing 14 baseline models across six distinct datasets. A comprehensive set of ablation studies has been conducted to quantitatively assess the contribution of individual components. 
Furthermore, the authors provide detailed analyses including component-wise ablation studies, computational efficiency assessments, and extensive robustness evaluations. To further strengthen the study, it would be valuable to consider additional experimental investigations, particularly: (1) comparative analyses under more challenging operational scenarios, and (2) detailed assessments of memory efficiency and resource utilization. These supplementary evaluations could provide deeper insights into the model's practical applicability and scalability. Supplementary Material: I have reviewed the appendix section of the paper. This section includes: algorithm pseudocode, comprehensive theoretical proofs, extensive supplementary experiments, thoughtful discussions on the limitations of the work, etc. The appendix section contains rich content that can enhance the completeness and rigor of the paper. Relation To Broader Scientific Literature: While previous research has empirically demonstrated the sensitivity of spatiotemporal prediction models in OOD scenarios, the authors have made a novel contribution by identifying and analyzing the fundamental source of this vulnerability stemming from the spatiotemporal messaging mechanism they adhere to. Essential References Not Discussed: I believe that much of the work related to the focus of the authors has already been discussed in this paper, primarily in the appendix. I suggest further consolidating the existing progress in the main body of the text. Additionally, I still encourage the authors to discuss general OOD learning research. Other Strengths And Weaknesses: Strengths: The authors have identified a critical limitation in existing spatiotemporal messaging mechanisms, specifically their sensitivity to out-of-distribution (OOD) scenarios, which constitutes a significant and well-justified research motivation. 
The technical contributions present a novel centralized interaction framework that incorporates message perturbation mechanisms and distributionally robust optimization objectives. The provided theoretical analysis substantiates the proposed model's advantages and robustness properties. The experimental evaluation demonstrates comprehensive methodology, including extensive comparative analyses across multiple datasets, systematic ablation studies, and detailed efficiency assessments. These empirical results provide substantial evidence supporting the model's performance improvements. Weaknesses and Recommendations: The paper would benefit from a broader discussion on general graph OOD learning algorithms, which could provide valuable context and highlight the proposed method's position within the broader research landscape. The current experimental setup limits dynamic nodes to 30% of the total nodes. To further validate the model's robustness, it is recommended to conduct additional evaluations under more challenging conditions, such as scenarios where 70% of nodes exhibit dynamic behavior. While computational efficiency has been partially addressed, a more comprehensive evaluation should include memory usage metrics and total training time. These additional metrics would provide a more complete assessment of the model's practical applicability and scalability. Other Comments Or Suggestions: 1. 'within Appendix A' in line 639 seems redundant. 2. Some abbreviations of proper nouns appear repeatedly. 3. In Table 20, D2STGNN should be written as D$^2$STGNN. Questions For Authors: Please refer to weaknesses 1-3. Q.1 In Table 3, why does MLP-based STID achieve better prediction performance than Transformer-based D$^2$STGNN? Q.2 Does STOP include a unique design to address temporal shifts? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate your valuable time and effort, which are crucial for improving the quality of our manuscript. > **W1. General Graph OOD Learning Discussion** In the field of graph representation learning, researchers have focused on modifying model architectures to enhance the representation of invariant knowledge, thereby improving generalization performance for OOD problems [1-2]. For instance, GAUG [3] improves downstream training and inference by incorporating an edge prediction module to modify the input graph. DisenGCN [4] emphasizes learning disentangled representations by separating distinct and informative factors from graph data and assigning them to different parts of the decomposed vector representations. OOD-GNN [5] introduces a nonlinear graph representation decorrelation method, leveraging random Fourier features to eliminate statistical dependencies between causal and non-causal graph representations generated by the graph encoder. However, these models fall short in capturing the complex heterogeneous spatiotemporal correlations inherent in spatiotemporal data, resulting in suboptimal performance for spatiotemporal OOD tasks. In this paper, we propose a novel spatiotemporal interaction module to enhance the robustness of models against spatiotemporal shifts. We will add this discussion to the next version. > **W2. More challenging conditions** In Section E.6 of the appendix, we discuss performance comparisons in scenarios where the graph topology grows rapidly. To further evaluate robustness under extreme conditions, we use the SD dataset as an example, where the size of the test graph topology is nine times larger than that of the training graph. The performance results are shown in the table below. 
| Model | MAE | RMSE | MAPE |
|:-:|:-:|:-:|:-:|
| STGCN | 36.45 | 55.89 | 28.01 |
| GWNet | 37.22 | 56.62 | 33.12 |
| STNorm | 29.79 | 47.71 | 24.51 |
| BigST | 29.72 | 48.55 | 21.09 |
| D$^2$STGNN | 37.46 | 56.37 | 29.63 |
| STAEformer | 28.94 | 45.84 | 29.40 |
| CaST | 30.20 | 47.09 | 22.66 |
| STONE | 36.19 | 56.71 | 30.68 |
| Ours | **26.73** | **42.35** | **19.38** |

> **W3. Comprehensive Efficiency Analysis** We sincerely apologize for any lack of clarity in our explanation. We report comprehensive efficiency metrics for several SOTA models on the SD dataset, including training time per epoch, total training time, inference time, memory usage under optimal performance conditions, and average MAE performance. We find that our proposed method achieves competitive efficiency compared to the SOTA model D$^2$STGNN. This is attributed to our model's reliance on a lightweight MLP architecture.

| | Average MAE | Train (s/epoch) | Inference (s) | Total (h) | Memory (MB) |
|:-:|:-:|:-:|:-:|:-:|:-:|
| STGCN | 25.79 | 82.6 | 7.4 | 2.50 | 3,783 |
| GWNet | 28.21 | 125.3 | 14.3 | 3.53 | 6,871 |
| STNorm | 26.51 | 78.2 | 7.0 | 2.36 | 2,755 |
| BigST | 26.26 | 68.3 | 7.3 | 2.10 | 2,791 |
| CaST | 29.84 | 82.1 | 6.2 | 1.52 | 3,255 |
| STAEformer | 26.20 | 443.7 | 36.8 | 12.96 | 32,667 |
| D$^2$STGNN | 25.43 | 1216.4 | 58.9 | 33.65 | 50,171 |
| STONE | 25.50 | 199.8 | 16.6 | 5.21 | 12,349 |
| Ours | 23.67 | 59.2 | 6.8 | 1.54 | 3,652 |

> **Presentation Problem** We sincerely appreciate your valuable suggestions. In the new version, we will optimize the structure of the paper and correct typographical errors to further improve the overall quality and readability of the manuscript. > **Q1. MLP- vs Transformer-based Models** STID outperforms D$^2$STGNN in some experimental results, potentially because STID leverages various embedding techniques to capture prior information, which is beneficial for enhancing the model's generalization capability. 
On the other hand, D$^2$STGNN has a more complex parameter structure, making it prone to overfitting in the training environment. This overfitting can lead to a decline in its ability to generalize to unseen scenarios. > **Q2. Temporal shifts** We tackle temporal shifts using two key techniques: (1) Temporal Embedding Technology: By encoding day-of-week and timestamp-of-day information, this approach captures multi-level periodic patterns in spatiotemporal data, strengthening the model's ability to represent stable temporal structures. (2) Decoupling Mechanism: We employ time series decomposition to model seasonal and trend components separately. The stable characteristics of periodic and seasonal patterns enhance the model's robustness against temporal shifts. Reference: [1] Park, Hyeonjin, et al. "Metropolis-hastings data augmentation for graph neural networks." NeurIPS 2021. [2] Wu, Ying-Xin, et al. "Discovering invariant rationales for graph neural networks." ICLR 2022. [3] Zhao, Tong, et al. "Data augmentation for graph neural networks." AAAI 2021. [4] Ma, Jianxin, et al. "Disentangled graph convolutional networks." ICML 2019. [5] Li, Haoyang, et al. "Ood-gnn: Out-of-distribution generalized graph neural network." TKDE 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed responses. All my concerns have been satisfactorily addressed, and I lean to vote for acceptance. --- Reply to Comment 1.1.1: Comment: Dear reviewer riin, Thank you very much for your time, effort, and valuable insights in reviewing our manuscript. Your guidance has been instrumental in helping us refine and improve our work. Sincerely, The Authors.
Summary: The paper introduces a spatiotemporal interaction mechanism called STOP. STOP includes centralized message passing mechanisms with message perturbation and DRO to enhance stability under spatiotemporal shifts. Through extensive experiments, STOP's robustness to spatiotemporal shifts is effectively demonstrated. Claims And Evidence: The authors identified the limitations of existing methods through experimental observations and used this as motivation for the paper. Methods And Evaluation Criteria: The three strategies proposed by the authors to enhance robustness to spatiotemporal shifts are reasonable. The authors have also discussed the roles of each strategy in the method section. Theoretical Claims: The formulas in the paper are correct and align with the descriptions provided. The use of DRO theory in the paper can explain its advancements. Experimental Designs Or Analyses: The authors compared the performance of STOP in various OOD scenarios, including rich baselines and datasets, encompassing multiple common metrics in spatiotemporal learning. They also conducted ablation studies, hyperparameter experiments, and other supplementary experiments to demonstrate the model's effectiveness. Supplementary Material: The additional details of experiments and theory description in STOP. Relation To Broader Scientific Literature: The centralized interaction mechanism proposed by the authors can broaden the architectural perspectives of existing models in the field. Essential References Not Discussed: Related work has been compared or discussed. Other Strengths And Weaknesses: Strengths of the paper: - The centralized node interaction mechanism introduced in this paper is innovative and shows promise as a robust learner for spatial features. - The comprehensive OOD benchmark evaluation framework utilized in the experiments is praiseworthy, as it covers multiple datasets, rich baselines, and comparisons across different OOD scenarios. 
- The paper's assertion that the traditional node-to-node messaging mechanism is fragile is a new finding. Weaknesses of the paper: - Efficiency comparisons are available on two datasets in the paper, but efficiency comparisons are not presented for a large-scale dataset. - More discussion on baseline experiment details is needed. As far as I know, some models have parameters coupled with graph size, such as GWNet and AGCRN. How do the authors handle adaptation to OOD settings? Other Comments Or Suggestions: For efficiency comparison experiments, it is recommended to present the data more intuitively using tables. Questions For Authors: - What is the role of the spatio-temporal prompt in STOP for spatiotemporal shift? - How efficient is STOP on the CA dataset? - If traditional GCN is limited in OOD scenarios, then there is a natural question: Is GCN really necessary? - How is the masking matrix of the message interference mechanism sampled from the distribution? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your appreciation and comments on the paper. Your comments are crucial to improving the quality of the manuscript. > **W1 & Q2. Efficiency Analysis** We sincerely apologize for any misunderstanding caused. Below, we report the efficiency comparison between the proposed method and several advanced models on the CA dataset. The efficiency metrics include training time per epoch, total training time, inference time, and memory usage. We can find that our model achieves competitive performance while maintaining high efficiency.

| CA | MAE | Train (s/epoch) | Inference time (s) | Total (h) | Memory (MB) |
|:-:|:-:|:-:|:-:|:-:|:-:|
| STGCN | 40.64 | 951.2 | 219.54 | 23.80 | 26,395 |
| GWNet | 37.63 | 2279.6 | 332.92 | 69.66 | 34,103 |
| STNorm | 40.77 | 360.8 | 69.40 | 11.13 | 29,275 |
| BigST | 39.59 | 328.4 | 82.91 | 10.78 | 16,283 |
| CaST | 41.26 | 577.3 | 153.72 | 17.09 | 19,515 |
| Ours | 32.86 | 290.2 | 70.58 | 7.13 | 15,243 |

> **W2. OOD Setting** This issue is addressed in Appendix E.1. Models such as GWNet and AGCRN employ an adaptive graph learning enhancement technique, which generates meaningful representations for each node but also causes the parameter scale to become coupled with the size of the graph structure. To address this, we have removed this technique. > **Q1. Spatio-temporal prompt** This technique uses embeddings to encode the day-of-week and timestamp-of-day information. This prior knowledge captures the periodic patterns in spatiotemporal data, which are relatively stable and help improve the model's generalization ability against spatiotemporal shifts. > **Q3. Traditional GCN** Whether GCN is necessary in the field of spatiotemporal prediction has long been a topic of debate. 
However, many studies in the spatiotemporal domain have consistently shown that GCN-based models outperform those that only consider temporal dependencies, as they introduce an inductive bias between nodes. In spatiotemporal OOD tasks, the dense message-passing mechanism of GCNs demonstrates sensitivity to spatiotemporal shifts. Therefore, we focus on enhancing the robustness of spatiotemporal interactions and propose a centralized interaction mechanism as an alternative solution. > **Q4. Message Interference Mechanism** In lines 188 to 197, we describe the random sampling process for the stochastic masking matrix. Specifically, we first create $M$ learnable probability vectors and normalize them, with the result denoted as {$ p_1^\prime, p_2^\prime,\dots, p_M^\prime$}, where ${p}_i^\prime \in \mathbb{R}^N$ with $i \in$ {$1, 2, \cdots, M$} denotes the $i$-th probability vector. Then, for the $i$-th probability vector, we can establish a binomial distribution, denoted as $\mathcal{M}\left({p}_i^\prime;s\right)$. Using this distribution, we sample a masking index vector $\widetilde{g}_i \sim \mathcal{M} \left({p}_i^\prime;s \right) \in$ {0,1}$^N$, where $s\in\left(0,N\right)$ indicates the number of sample hits (i.e., the number of values equal to 1 in $\widetilde{\boldsymbol{g}}_i$). Finally, we create $K$ replicas of $\widetilde{\boldsymbol{g}}_i$. As a result, we obtain a mask matrix via a log operation, with the output denoted as $\mathbf{G}_i=\log\left(\left[\widetilde{\boldsymbol{g}}_i,\widetilde{\boldsymbol{g}}_i,\dots,\widetilde{\boldsymbol{g}}_i\right]\right)\in\lbrace-\infty,0\rbrace^{K\times N}$, and $\mathbf{G}_i$ is used to interfere with the message process. Similarly, we ultimately perform multiple rounds of perturbations on the message-passing mechanism using $M$ interference matrices. To avoid further confusion, we will refine this section to improve clarity and ensure a more precise presentation. 
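For concreteness, the sampling procedure above can be sketched in a few lines of NumPy. This is an illustrative reading, not the authors' code: the weighted draw without replacement is one plausible way to realize a 0/1 vector with exactly $s$ "sample hits", and the function name and parameters are ours.

```python
import numpy as np

def sample_mask_matrix(p, s, K, rng=None):
    """Sample one interference matrix G_i as described above (illustrative).

    p : (N,) normalized probability vector (one of the M learnable p_i').
    s : number of "sample hits", i.e. entries of g_i equal to 1.
    K : number of replicas stacked into the K x N mask matrix.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros(p.shape[0])
    # Draw s distinct indices with probability proportional to p, set them to 1
    g[rng.choice(p.shape[0], size=s, replace=False, p=p)] = 1.0
    # Replicate K times; log maps 1 -> 0 and 0 -> -inf, so adding G_i to
    # attention-style message logits suppresses the masked nodes
    with np.errstate(divide="ignore"):
        return np.log(np.tile(g, (K, 1)))  # shape (K, N), values in {-inf, 0}

# Example: N = 8 nodes, s = 3 hits, K = 4 replicas
G = sample_mask_matrix(np.full(8, 1 / 8), s=3, K=4)
assert G.shape == (4, 8) and np.isneginf(G).sum() == 4 * (8 - 3)
```

In the model, each of the $M$ matrices $\mathbf{G}_i$ would then perturb one round of message passing, e.g., by adding $\mathbf{G}_i$ to the pre-softmax message logits.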
--- Rebuttal Comment 1.1: Comment: Most of my concerns are appropriately addressed. I choose to increase the score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer tJb2, We are deeply grateful for your thoughtful evaluation and for raising your score. Your constructive feedback and recognition hold immense value for us, and we are truly honored by your acknowledgment of our efforts. Sincerely, The Authors
Finite-Time Convergence Rates in Stochastic Stackelberg Games with Smooth Algorithmic Agents
Accept (poster)
Summary: This paper studies the convergence problem of the stochastic Stackelberg games. Specifically, this paper makes the following contributions. It analyzes different scenarios that reflect the decision-maker's ability to reason about the agents' behavior via different estimates of how it could impact the gradient. The paper also conducts a thorough analysis of different drift-to-noise ratios in the game. Last but not least, it also analyzes the effect of induced drift. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. I only checked the theoretical claims in the main paper, and they seem to be sound. Experimental Designs Or Analyses: No empirical experiments. Supplementary Material: I checked the background section (appendix section A) Relation To Broader Scientific Literature: Yes, the problem abstraction (Stackelberg game) is deeply connected with the previous work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength. The theoretical analysis is thorough and solid. For example, in section 4, the paper conducts a theoretical analysis for the naive and strategic decision-makers. Limitation. Empirical experiments would significantly strengthen the paper. Other Comments Or Suggestions: N/A Questions For Authors: Have you tried to extend the proposed estimation to scenarios when the decision-maker could assume that the agent will also be strategic to the reverse direction of the decision-maker following certain constraints? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. Below we respond to the topics you brought up in sections of the review. **Other Strengths And Weaknesses:** *Strength. The theoretical analysis is thorough and solid. For example, in section 4, the paper conducts a theoretical analysis for the naive and strategic decision-makers. Limitation. Empirical experiments would significantly strengthen the paper.* **Response**: Thank you for the acknowledgment of the theoretical strengths of the results. Regarding numerical experiments, we have included fairly extensive numerical experiments in the supplement. Due to page limitations and the theoretical nature of the paper, we ultimately decided to leave them to the supplement, however, given the additional page afforded by the final paper submission process, we would include some portion of the numerics in the main body. --- **Questions For Authors**: *Have you tried to extend the proposed estimation to scenarios when the decision-maker could assume that the agent will also be strategic to the reverse direction of the decision-maker following certain constraints?* **Response**: This is a very interesting question. We have not considered agents who are strategically adversarial against the decision-maker. This would go beyond the model and corresponding results we have in the paper. This is definitely challenging since one would want to know the unintended consequences (e.g., to which equilibrium do the agents and decision-maker converge if at all) and then design mitigation strategies on the part of the decision-maker. Also one would want to know what kinds of manipulation strategies are possible. This is an interesting future direction.
Summary: The paper studies stochastic Stackelberg games and characterizes the complex dynamics between the decision maker and learning agents. The drift, noise, and optimization errors of the learning agents are decoupled, and algorithms are proposed to control each component. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No. But they seem to be valid to me. Experimental Designs Or Analyses: Yes, I checked appendix D. Supplementary Material: I went over some parts of appendix A and B, and some proofs in appendix H. Relation To Broader Scientific Literature: The paper studies Stackelberg games and learning agents. The results help advance the understanding of leader-follower dynamics in Stackelberg games and could have broader impacts on general machine learning algorithms that can be modeled with Stackelberg game dynamics. Essential References Not Discussed: The paper should discuss existing works on online learning in Stackelberg games, e.g., Zhao et al. 2023, Yu and Chen 2024 and their follow-up works. Zhao, G., Zhu, B., Jiao, J., & Jordan, M. (2023, July). Online learning in stackelberg games with an omniscient follower. In International Conference on Machine Learning (pp. 42304-42316). PMLR. Yu, Y., & Chen, H. (2024, July). Decentralized online learning in general-sum stackelberg games. In Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence (pp. 4056-4077). Other Strengths And Weaknesses: One strength is that the paper provides a general recipe for analyzing Stackelberg games. Different algorithmic approaches (naive decision maker and strategic decision maker) are provided. The assumptions (e.g., Definition 2.3) are also mild, so that a large class of algorithms are included. While the general framework provided by the paper is useful, it is not clear to me what the technical contributions are. 
For example, it is not clear whether Proposition 3.1 and Proposition 3.3 can be non-trivially extended from the analysis of Theorem 2 of Duvocelle et al. 2023. Or whether Theorem 4.9 is a mere combination of the analyses used in Lin et al. 2021 and Duvocelle et al. 2023. Other Comments Or Suggestions: While I appreciate the fact that the paper is very self-contained, I am not sure if the length of the paper will hinder readability for ICML readers. It is hard to see that such length is necessary for this work, and the length of the paper makes it hard for the reader to identify the technical contributions / make follow-up contributions. I suggest assuming some appropriate prior knowledge on the part of readers and citing previous papers for relevant background (instead of restating it), e.g., cite previous work that introduces examples of strongly monotone games, and cite previous works for gradient estimators under bandit feedback. Questions For Authors: 1. Could you elaborate on the technical contributions of this paper? Can we obtain the drift, noise, and optimization error decomposition from existing works? 2. One significant difference between the Stackelberg game considered by this work and that considered by previous works (e.g., Zhao et al. 2023, Yu and Chen 2024) is the introduction of the epoch length. While it seems reasonable (and realistic) to me to introduce the epoch length, I am not sure whether the epoch length makes the problem much easier. As the epoch length is not controlled by the leader's algorithm, but is rather a parameter for a centralized planner to choose, it seems that the epoch length would make the problem significantly easier. Due to this, I am also not sure whether previous works chose not to decompose the drift and noise because of this setting difference. Could you elaborate on the difference between the two settings? I would be happy to raise my score if more discussion of existing papers on learning in Stackelberg games is added. Code Of Conduct: Affirmed. 
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the review. Given space constraints, we focus on related literature and technical novelties. **Referenced work**: The results don't exist in prior works, nor do they easily extend from the analysis in the referenced papers. - ZhaoEtAl2023 and YuChen2024, which differ from us in game structure and theoretical approach, study **stationary** Stackelberg games with **side information** (i.e., an omniscient follower) and are interested in manipulation strategies. Additionally, YuChen2024 studies 2-player finite action games with bounded noisy rewards (i.e., $r_t\in[0,1]$). Both study regret analysis for UCB-style algorithms, i.e., very different from our setup where we study time-varying stochastic continuous action games with no side information. Yet, we will add these to related work. - [LinEtAl21], [DuvoEtAl23] study strongly monotone simultaneous play games and have no "leader" and thus a different eq concept; our Stackelberg game is not assumed to be strongly monotone (only the induced agent games $\mathcal{G}_u$ are): - [LinEtAl21] studies stationary games and employs a self-concordant barrier function algorithm. We use a smoothed single point zeroth-order estimator for the DM. Hence the analysis doesn't apply to our algorithms. - [DuvoEtAl23] employs an **asymptotic** analysis. We have a decision-maker **(DM)** which controls the sequence of games $\mathcal{G}_{u_t}$; they assume $\mathcal{G}\_t \to \mathcal{G}\_*$ or obtain asymptotic tracking results for Nash (!=Stack). **Specific Results** - **Prop 3.1, 3.3** don't follow from the **asymptotic** analysis in [DuvoEtAl23], which doesn't imply a drift-to-noise decomposition (which is essential for our setting). E.g., S1-S3 are asymptotic assumptions on stepsizes, gradient deviations (i.e., $R_{i,t}$), and bias and variance. We provide a non-asymptotic analysis that determines the optimal error as a function of the drift-to-noise ratio (fundamentally different from [DuvoEtAl23]). 
This exposes how the DM can control the agents' dynamics via **choosing** $\tau$ and $\eta$. - **It is not possible for Thm 4.9 to result from the analysis in [LinEtAl21], [DuvoEtAl23]**. We already noted differences to [LinEtAl21]. [DuvoEtAl23] have a similar zeroth-order alg (with the exception of a pivot point), yet, as noted, they **rely on asymptotic assumptions**. These assumptions are not well-suited to our objective wherein one player aims to control the learning behavior of others in *finite time* and still converge to a Stack eq. (!=Nash). Their asymptotic convergence to Nash *assumes* $\mathcal{G}\_t\to\mathcal{G}\_{*}$ whereas for us the DM **controls** $\mathcal{G}\_t$ in finite time. **Technical Novelties**: - We study stochastic Stackelberg games s.t. the leader's problem is time-varying due to the follower agents' learning: the agent eq $x_t^\star=x^\ast(u_t)$ are **induced by** DM actions (vs a random stabilizing seq. as in [DuvoEtAl23]). Each induced $\mathcal{G}_u$ is strongly monotone but not necessarily the Stack game itself. - **Aim**: design an alg to control the drift-to-noise ratio to ensure finite-time stabilization of agent behavior while converging to a static Stack eq (or PSE): i.e., the DM chooses $\tau$ and $\eta$ to ensure a finite-time within-epoch error $\epsilon_\tau \propto (\eta\sigma)^2$ and then control the noise. The epoch length (a DM design feature) **alleviates drift**; while $\tau$ depends on agent regularity params, it can be estimated via adaptive algs as suggested in future work. - To this end, we analyze $\Vert u_t-u^\star\Vert^2$ by decomposing $\Vert x_{t}^\tau-x^\ast(u_t)\Vert^2$ into a stochastic optimization and a drift error. The within-epoch contraction gives $\mathbb{E}\Vert x_{t}^\tau-x^\ast(u_t)\Vert^2\lesssim \rho^{\tau}\mathbb{E}[\Vert x_{t}^0-x^\ast(u_t)\Vert^2]+\rho^2\sigma_a^2$. For the DM to set $\tau$ s.t.
RHS$\lesssim \epsilon_\tau \propto \eta^2\sigma^2$, we bound the sequence of *epoch initial conditions* $\mathbb{E}[\Vert x_{t}^0-x^\ast(u_t)\Vert^2]$. The "trick" is to exploit the across-epoch results (Sec 3) to bound $$\mathbb{E}\Vert x_{t}-x^\ast(u_t)\Vert^2\lesssim \left(1-\frac{1-\rho^2}{2}\right)^t\Vert x_{0}-x^\ast(u_0)\Vert^2+\frac{\sigma_a^2}{1-\rho^2}+\left(\frac{L_{eq}\Delta_u}{1-\rho^2}\right)^2, \text{where} \ \Delta_u:=\max_{s\leq t}\Vert u_{s}-u_{s-1}\Vert;$$ $\Delta_u$ is controlled by $\eta$. - Allowing for epoch-based algorithms doesn't make the problem easier: we conjecture it is necessary. To illustrate this point, see Rev GNfd on lower bounds and ref [1] therein. Overall, the approach is novel, not appearing in or simply derived from prior work, especially in Stackelberg games, to our knowledge. We exploit a drift-to-noise decomp (e.g., a need that [DuvoEtAl23] doesn't have, as there is no leader) to design novel algs and prove convergence of $(u_t,x_t)$ to a Stack eq. **We will utilize the extra page to incorporate these comparisons and clarify technical novelties.** Happy to discuss further. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I understand that the epoch length is a design feature, which is used to alleviate drift, and the results are new. But within the epoch, the game is stationary, so I would expect prior methods on strongly monotone games to hold. It still seems like the problem is much simpler when you can control the epoch length, and I am not sufficiently convinced that the derivation of the drift error is enough of a contribution. --- Reply to Comment 1.1.1: Comment: Thank you for your response! --- **$\tau$ lower-bound analysis**: we construct randomly generated games that characterize the necessity of $\tau$ for convergence with fixed step-sizes: https://imgur.com/qaDgEap and https://imgur.com/YHTalZO demonstrate the progression of system stability wrt $\tau$ for two different instances with the game structure described below.
The top panel of each plot demonstrates that the evolution of $\tau$ is non-linear, and identifying the lower bound $\tau^*$, after which the system is stable, is **non-trivial and instance-dependent**, as shown by the difference in the two plots. We expand more on this below. --- **$\tau$ is necessary for non-asymptotic rates**: without a monotonicity assumption on collective behavior, $\tau$ is required for providing non-asymptotic rates. Prior works on ZS Stack games (a special case) have also shown theoretically that a time-scale separation is necessary for non-asymptotic rates [FiezEtAl21]. The hard question, which we address, is **what should $\tau$ be?** --- **Novelties**: Respectfully, we reiterate that $\tau$ does not make the problem simple. The technical challenge is constructing the decomposition as optimally as possible so we can control for the **across-epoch** drift given the innate agent noise and the noise in the decision-maker's update. - Given the necessity of $\tau$ for the non-asymptotic case, the question is **what should $\tau$ be**, especially in light of the objective of the decision-maker? Sec 3 determines the **optimal target accuracy** as a function of the drift (which the DM will control) and noise, which consequently determines $\tau$. Sec 4 then optimizes these terms by designing $\eta$ wrt $\tau$ to control the **size of the induced drift**. - Indeed, as the reviewer notes, the drift-noise decomposition is novel; this by itself provides **non-asymptotic rates** for time-varying strongly monotone games. This **extends [DuvoEtAl23]**, an entire paper on **asymptotic convergence** for the same class of games; our Sec 3 is a **non-asymptotic (non-trivial) extension that doesn't rely on their analysis**. - Beyond these novelties, the algs in Sec 4 are shown to converge **despite not having the typical monotonicity assumptions for the combined Stack. update**.
- And, *of course* results on strongly monotone games apply within epoch; we do not claim otherwise. In fact, our analysis allows for $\rho$-contracting updates beyond SGP (see Appendix). We exploit the regular behavior of this class to construct non-asymptotic convergence of the agent problem by constructing $\tau$ as a function of $\Delta_{\tt a}$, and an algorithm for the DM to control the joint behavior. --- **Related works that reinforce novelty**: - As for asymptotic rates (not the objective of our paper), it is not clear from any prior work that using asymptotic rates would work in the Stack. setting to obtain stationary asymptotic convergence, especially without the combined update being strongly monotone. It is our very strong conjecture that time scale separation is needed---meaning the sequence $\eta_t/\gamma_t\to \infty$ (cf [Ch 6, Borkar])---since the dynamics are general stochastic approximation updates. - Prior work on time-varying games focus on asymptotic rates and convergence to **Nash** under not just strong monotonicity but also asymptotic conditions on learning rates, such as asymptotic behavior of the bias and variance, and convergence to a stationary game, (e.g., [DuvoEtAl23]). We analyze finite-time convergence to the **Stack. eq.** without these assumptions (e.g. strong monotonicity). --- **Example**: Consider a two player game $(f_1,f_2)$ with $f_i:\mathbb{R}^{n\times m}\to \mathbb{R}$, with P1 the leader, and P2 the follower. Assuming SGP, the updates are $(x_1,x_2)^+=(x_1,x_2)-\eta g(x_1,x_2)$ where $g(x)=(g_1(x),\gamma/\eta D_2f_2(x))$ where $g_1:=D_1f_1$ for PSE and $g_1:=Df_1$ for Stack. For simplicity, let's take it to be quadratic (this is just a special class) with update $(x_1,x_2)^+=(I-\eta J)(x_1,x_2)$ where $J=[A, B; \frac{\gamma}{\eta}C,\frac{\gamma}{\eta} D]$ for matrices $(A,B,C,D)$. 
Then for some $\tau \in [0,\infty)$, the update is given by $(x_{1,t+1},x_{2,t+1})=(I-\eta J)(I-\eta V)^\tau (x_{1,t},x_{2,t})$ where $V=[0, 0; \frac{\gamma}{\eta}C,\frac{\gamma}{\eta} D]$. We generate examples such that - The equil. is not stable for $\tau=0$---i.e., $\text{spec}(I-\eta J)\not\subset [0,1]$ **for any $(\eta,\gamma)$** and $\text{Re}(\text{spec}(-J))>0$ - The Stack. game is not strongly monotone---i.e., $\langle g(x)-g(y),x-y\rangle \leq 0$, and - There is a Stack. equil., i.e., $A-BD^{-1}C\succ 0$ and $D\succ 0$ (resp. a PSE: $A\succ 0$ and $D\succ 0$). Stability occurs at $\tau^*\geq 1$, where $\tau^*=\min\{\tau \mid \lambda((I-\eta J)(I-\eta V)^\tau)\subset [0,1)\}$. We leave it to future work to understand this theoretically. Code: https://anonymous.4open.science/r/202504-StackExsTimeScale-F20D.
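The stability sweep described above fits in a few lines; the following is a minimal sketch of that search, with scalar toy values of $(A,B,C,D)$, $\eta$, $\gamma$ chosen by us for illustration (they are not taken from the linked repository):

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue modulus of a square matrix."""
    return max(abs(np.linalg.eigvals(M)))

def min_stable_tau(J, V, eta, tau_max=200):
    """Smallest tau with spec((I - eta*J)(I - eta*V)^tau) inside the unit disc."""
    I = np.eye(J.shape[0])
    M = I - eta * J          # combined leader + follower step
    F = I - eta * V          # one within-epoch follower sweep
    P = I.copy()             # F^tau, built up incrementally
    for tau in range(tau_max + 1):
        if spectral_radius(M @ P) < 1.0:
            return tau
        P = P @ F
    return None              # not stabilized within tau_max sweeps

# Toy scalar-block instance (illustrative values, ours): unstable at tau = 0,
# stabilized after a single follower sweep.
eta, gamma = 0.1, 0.1
a, b, c, d = 0.05, 4.0, -4.0, 1.0
J = np.array([[a, b], [gamma / eta * c, gamma / eta * d]])
V = np.array([[0.0, 0.0], [gamma / eta * c, gamma / eta * d]])
tau_star = min_stable_tau(J, V, eta)   # tau_star = 1 for these values
```

With these values the $\tau=0$ map has complex eigenvalues of modulus $\approx 1.03$, while one follower sweep brings the modulus to $\approx 0.97$, matching the qualitative picture in the linked plots.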
Summary: This paper focuses on learning in Stackelberg games under the setting where the agents might learn their equilibrium gradually. First, the authors provide the equilibrium tracking error for the learning agents for a given sequence of the decision-maker's actions. Then, a learning algorithm for the decision-maker, which can address this tracking error, is proposed. Convergence results for the proposed method are also provided, demonstrating the effectiveness of the approach. Claims And Evidence: The claims are generally supported by mathematical proofs. Methods And Evaluation Criteria: The learning algorithm discussed in Corollary 4.5 and Proposition 4.6 involves three nested loops: super-epoch, epoch, and iteration for the agents. This triple-loop structure could potentially lead to slow convergence in practical applications. It would be beneficial to include experimental results in the main body of the paper to demonstrate the practical convergence rate and effectiveness of the proposed algorithm. Theoretical Claims: The paper utilizes techniques from variational inequalities and stochastic approximation to establish the theoretical results. However, some aspects of the theoretical analysis are unclear: - In Section 2.2, the agents are assumed to update their actions for $\tau$ iterations. However, Propositions 3.1 and 3.2 seem independent of $\tau$, and the proof of Proposition H.2 utilizes an inequality that differs slightly from Definition 2.3, $\mathbb{E}\_t\\|x_{t+1} - x_t^{\ast}\\|^2 \leq \rho^2 \\|x_t - x_t^{\ast}\\|^2 + \rho^2 (c\sigma_a)^2.$ Did the authors assume $\tau=1$ in Proposition 3.1? - Similarly, the (SGP) algorithm appears to update $x_t$ over $t$ rather than $k$. Clarification on the update rules and how they align with the theoretical results would strengthen the paper. - Regarding Proposition 3.3, it is unclear whether it holds for any $\varepsilon$ or under specific conditions.
Providing more details on this point would enhance the clarity of the theoretical contributions. Experimental Designs Or Analyses: The experimental results are presented only in the supplementary material. While these results showcase the empirical performance of the proposed method, it is not clear if the selected benchmark problems are standard within the Stackelberg game community. Supplementary Material: The supplementary material primarily contains technical proofs and additional theoretical justifications. I did not check all of the proofs in the supplementary material. Relation To Broader Scientific Literature: The setting where the agents might learn and thus the objective function of the decision-maker can be time-varying seems novel. Essential References Not Discussed: The paper appropriately cites relevant prior works on learning in Stackelberg games. Other Strengths And Weaknesses: My main concerns and questions are outlined in “Theoretical Claims”. Other Comments Or Suggestions: Please see “Theoretical Claims”. Questions For Authors: Please see “Theoretical Claims”. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper, and for pointing out some of the novelties in your review. Below we focus on the key queries in your review which we believe should address your concerns. **Please let us know if there are further clarifications**. **Theoretical Claims:** We label each of these points 1,2,3. **Re 1.** For clarity, Def 2.3 is stating that **within an epoch** (i.e., since $u$ is fixed and therefore $t$ is fixed for the decision-maker), the agents contract at a rate $\rho$ to the equilibrium $x^*(u)$ of the induced game. That is, $k$ here indexes the steps within the epoch, and $t$ is the fixed index of the epoch. Now, we reflect on Prop 3.1, *Cor* 3.2, and Prop. H.2: These are **across-epoch results**. The confusion, we believe, comes from the fact that $\rho^\tau\leq \rho$ for $\tau\geq 1$ and $\rho\in(0,1]$. We have applied this to arrive at $\mathbb{E}\_t\Vert x_{t+1}-x_t^\ast\Vert^2\leq \rho^2\Vert x_t-x_t^\ast\Vert^2+\rho^2(c\sigma_a)^2$ from the expression in Def 2.3. In Section G.1 of the supplement, we show that a "one-step" within-epoch contraction translates to a "one-step" across-epoch contraction; here is where we use the fact that $\rho^t\leq \rho$ for any $t\geq 1$ and $\rho\in(0,1]$. Note that in this section, since it is stated for general update rules (independent of the existence of a decision-maker choosing $u$'s that influence the behavior), we have dropped the $k$ notation and are just using $t$ for an iteration.
For clarity, the statements in G.1 would look like the following: fix epoch $t$ and action $u_t$, then $$\mathbb{E}[\Vert x_t^\tau-x^*(u_t)\Vert^2] \leq \rho^{2\tau}\Vert x_t^0-x^*(u_t)\Vert^2+c^2\sigma_a^2\frac{\rho^2}{1-\rho^2}\leq \rho^{2}\Vert x_t^0-x^*(u_t)\Vert^2+c^2\sigma_a^2\frac{\rho^2}{1-\rho^2}$$ Coming to the statement at the beginning of Proposition H.2: we have that there is a $\rho$ and $c$ such that $$\mathbb{E}\Vert x_{t+1}-x_t^\ast\Vert^2\leq \rho^2 \mathbb{E}\Vert x_t-x_t^\ast\Vert^2+c^2\cdot(\rho\sigma_a)^2\quad\text{where}\ \ x_t^\ast := x^*(u_t)$$ We were a little loose with the notation between the one-step within-epoch and one-step across-epoch results, and will update the statement (Prop H.2) in the appendix to make this clear; for instance, if Definition 2.3 holds for some constant, say $c_1$, then the above across-epoch inequality holds for any constant $c$ satisfying $c^2\geq c_1^2/(1-\rho^2)$ -- e.g., for stochastic gradient play $c_1^2=2\gamma^2$ and $c^2=2c_1^2$. In particular, it's just the constant $c$ that changes between the two. Formally, nothing changes in the proposition and corollary in Section 3 as they are stated using the notation $\lesssim$, which absorbs constant scalings; though we will adjust this to make the relationship more clear in the main body. **Re 2:** As noted above, this is just context switching: since the results in Appendix G hold independent of the decision-maker's existence, we dropped the epoch notation and are only expressing things in terms of iterations. That being said, we can switch the $t$ to a $k$ to make it more clear and add a note, as well as making the other suggested changes for clarity. **Re 3**: As noted in the paper, this is an informal statement, stated as such for brevity. As we state below Prop 3.3, the formal statement is given in Prop H.5, which details all the assumptions.
Given the context of the subsection, the statement holds under the assumption that the agents are running a $\rho$ contracting algorithm, and further (as detailed in Prop H.5) under assumptions on the constants of the decision-maker's algorithm. We will add clarifications to this point. Hopefully this clarifies the questions on notation and assumptions. Please also see the response to Reviewer MjEv for additional discussion/clarification on technical novelties. --- **Methods And Evaluation Criteria & Experimental Designs Or Analyses:** **Response:** - We will use the extra page in the final version to incorporate a subset of the numerics in the main body that explore the main theoretical results and their assumptions. - All the examples of monotone games (Kelly Auctions, quadratic games, and ride-sharing) used in experiments have appeared before in the prior literature, which we cite, just in different experiments. The nonlinear examples are intended to explore the boundaries of the theoretical results. That being said, we still utilize the Kelly Auction for the agent game, but incorporate non-convex social costs which are common in the literature (including social welfare and revenue maximization).
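The within-epoch contraction of Def 2.3 and Section G.1 discussed in this rebuttal can be checked numerically; a minimal simulation sketch, with illustrative values of $\rho$, $c$, and $\sigma_a$ of our choosing (not the paper's experimental settings):

```python
import numpy as np

# Simulate a rho-contracting scalar update with noise,
#   x^{k+1} = x* + rho*(x^k - x*) + rho*c*sigma*xi_k,  xi_k ~ N(0,1),
# so that E||x^{k+1}-x*||^2 = rho^2||x^k-x*||^2 + (rho*c*sigma)^2 holds with
# equality, and compare the tau-step error with the geometric-series bound
#   E||x^tau - x*||^2 <= rho^{2*tau}||x^0-x*||^2 + c^2*sigma^2*rho^2/(1-rho^2).
rho, c, sigma, tau, trials = 0.7, 1.0, 0.5, 5, 50_000
x_star, x0 = 0.0, 2.0
rng = np.random.default_rng(0)

x = np.full(trials, x0)
for _ in range(tau):
    x = x_star + rho * (x - x_star) + rho * c * sigma * rng.standard_normal(trials)

empirical = np.mean((x - x_star) ** 2)   # Monte Carlo estimate of E||x^tau - x*||^2
bound = rho ** (2 * tau) * (x0 - x_star) ** 2 \
    + (c * sigma) ** 2 * rho ** 2 / (1 - rho ** 2)
```

The empirical error sits just below the bound, with the gap coming only from the truncated geometric sum (the bound becomes tight as $\tau$ grows).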
Summary: This paper proposes an algorithm for a game in which there is one leader and $n$ followers. The leader chooses an action and the followers play a Nash equilibrium which is influenced by the leader's action. The solution concept is that of a Stackelberg equilibrium. The game is static but is repeated during the algorithm. The main challenge is the fact that the agents are adjusting their behavior to the leader's actions. The authors provide convergence guarantees for their algorithm. Claims And Evidence: The claims and the proofs are relatively clear. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: I did not check the proofs in detail but the ideas seem fine. Experimental Designs Or Analyses: I checked the ones in the main body. They seem fine to me. Supplementary Material: I looked at the supplementary material but did not check everything in detail. Relation To Broader Scientific Literature: It seems fine to me. Essential References Not Discussed: Not as far as I know. Other Strengths And Weaknesses: Nothing specific. Other Comments Or Suggestions: Nothing specific. There are a few typos such as "inquality". Questions For Authors: 1) Can you comment on the difference between Definition 4.2 and the notion of Stackelberg equilibrium on page 4 (left column)? How is this definition different and why is it necessary to introduce it? 2) Can you please provide sufficient conditions to guarantee that Assumption 4.3 holds? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. Below we respond to your questions. Please let us know if this has clarified things. **Questions For Authors:** 1. *Can you comment on the difference between Definition 4.2 and the notion of Stackelberg equilibrium on page 4 (left column)? How is this definition different and why is it necessary to introduce it?* **Response**: The difference is that the decision-maker is not optimizing through the dependence of $u$ in the agents' best response $x^*(u)$; instead it is evaluated at the fixed point of the given $\arg\min$ expression. We introduce this because it is the natural equilibrium concept when the decision-maker does not optimize through $x^*(u)$, but rather performs updates relative to the gradient of $\ell(u,z)$ (where $z\sim \mathcal{D}(u)$) with respect to the explicit dependence on $u$ -- namely stochastic samples of $\mathbb{E}\_{z\sim \mathcal{D}(u)}\nabla_u\ell(u,z)$ instead of the full gradient $\mathbb{E}\_{z\sim \mathcal{D}(u)}\nabla_u\ell(u,z)+\frac{d}{dv}\mathbb{E}\_{z\sim \mathcal{D}(v)}\ell(u,z)|_{u=v}$ as detailed in Section 2.3 (cf. Equation (2) and the discussion thereafter) as well as the discussion at the top of Section 4.1. 2. *Can you please provide sufficient conditions to guarantee that Assumption 4.3 holds?* **Response**: This is a standard assumption in optimization for online stochastic gradient methods in games and otherwise (see [1--3]). For example, the assumption says that stochastic gradient methods that use gradient estimates $g_t=\nabla_u \ell(u_t,z_t)$ satisfy $$\mathbb{E}_t[\Vert\nabla_u \ell(u_t,z_t)-\mathbb{E}\_{t}[g_t]\Vert^2]\leq \sigma^2<\infty,\quad\text{where}\ \mathbb{E}_t[\cdot]:=\mathbb{E}[\cdot\mid \mathcal{F}_t]$$ i.e., the estimator has finite variance conditioning on all the previously seen data (which is the filtration part in Assumption 4.3). This is just stating that the gradient noise has bounded variance, given the data up to time $t$.
One sufficient condition for this is that the environment distribution from which we sample $g_t$ has finite variance -- i.e., $\mathbb{E}[\Vert g_t\Vert^2]<\infty$. As is standard, the size of the variance can be controlled via stepsize choices and batching, amongst other techniques. [1] Cutler et al "Stochastic Optimization under Distributional Drift" JMLR 2023 [2] Narang et al "Multiplayer Performative Prediction: Learning in Decision-Dependent Games" JMLR 2023 [3] Besbes et al "Non-stationary Stochastic Optimization", 2014 --- Rebuttal Comment 1.1: Comment: Thank you for your answers.
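The batching remark can be made concrete with a minimal sketch: averaging $B$ i.i.d. stochastic gradients shrinks the variance in Assumption 4.3 by roughly $1/B$. The toy quadratic loss and all values below are illustrative choices of ours, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
u = 1.5
# Per-sample gradient of the toy loss (u - z)^2 at a sample z ~ N(0,1);
# its variance over z is Var[-2z] = 4, independent of u.
grad = lambda u, z: 2 * (u - z)

def batch_gradient(u, B, n=100_000):
    """Return n independent mini-batch gradient estimates, each averaging B samples."""
    z = rng.standard_normal((n, B))
    return grad(u, z).mean(axis=1)

var1 = batch_gradient(u, 1).var()    # single-sample estimator: variance approx 4
var16 = batch_gradient(u, 16).var()  # batch of 16: variance approx 4/16 = 0.25
```

So a batch of size $B$ satisfies the bounded-variance condition with $\sigma^2 \approx \sigma_1^2/B$, which is one concrete way the DM can "control the noise" referred to in the rebuttal.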
Summary: The paper explores a learning problem in the context of Stackelberg games with single leader (decision-maker) and multiple followers (agents). The authors consider a setting where both the agents and the decision-maker are learning. The agents are learning, for any action taken by the decision-maker, the Nash equilibrium of the game induced by such an action. The decision-maker aims at playing a sequence of actions ensuring the convergence to a Stackelberg equilibrium. Both the algorithm employed by the agents and the stochastic environment are unknown to the decision-maker. First, the authors characterize the equilibrium tracking error of the agents, highlighting the contribution of the noise and the "drift" induced by the sequence of leader's actions. Then, they analyze the convergence rate of a simple decision-maker that ignores the fact that the agents are responding to it. Finally, they devise a more powerful decision-maker which adopts a derivative-free method that converges in $O(1/\epsilon^2)$ to an $\epsilon$-approximate equilibrium. Claims And Evidence: The claims in main paper are supported by proofs in the Appendix, albeit I did not check their correctness. The results are plausible given the assumptions, and the high-level idea of the algorithms employed is convincing. Methods And Evaluation Criteria: There are no typical benchmarks for this setting. Nonetheless, the authors provide experiments on some typical economics settings (e.g., Kelly auctions) and a quadratic game inspired by real-world data. They also investigate the performance of their algorithms in settings where the theoretical assumptions are not perfectly met. Theoretical Claims: I did not check the correctness of the proofs. Experimental Designs Or Analyses: I did not check the validity of the experiments. Supplementary Material: I briefly read portions of Appendix C, D and G to view some examples of games and algorithms, but I did not check the details. 
Relation To Broader Scientific Literature: Stackelberg games have been extensively studied in offline settings. This work considers a learning setting, and is built upon previous works on Stochastic Optimization, especially the work of Cutler et al. (2023). While in (Cutler et al. 2023) the loss function depends only on the decision-maker's action, this paper also considers the presence of other strategic agents that influence the loss. Essential References Not Discussed: The authors discuss all the relevant related works. Other Strengths And Weaknesses: Strengths: the paper addresses a novel learning problem, where both the leader and the followers are learning in a Stackelberg game. They provide an in-depth analysis of two different algorithms for the leader's problem and briefly discuss when one approach is preferable to the other. Weaknesses: it is unclear what exactly the decision-maker has to learn, and what they know in advance (see questions). The authors do not provide any lower bound on the convergence rate (in terms of iterations) achievable in this problem. Other Comments Or Suggestions: Suggestion: after Corollary 4.5, the authors discuss the fact that the total number of iterations is $T\tau$. To complete this discussion, it would be useful to provide the formula to compute $\tau$ (up to constants) given the target accuracy $\epsilon$. The same holds for Corollary 4.10. Questions For Authors: 1) Does the decision-maker know the agent's algorithm? Line 160 (right) states that the algorithm is unknown, yet it is employed at line 396 left. Similarly, is the noise distribution $\mathcal{D}_e$ known in advance? 2) At line 138 left, why is $x$ drawn from a distribution $\mathcal{D}_x(u)$, rather than being computed by the agent's algorithm based on both $u$ and the previous $x$? 3) At line 117 right, why does $x^{k+1}_{t+1}$ depend on the sequence of joint actions $x^1\_t,\dots, x^k_t $ rather than the sequence $x^\tau\_{t}, x^1\_{t+1}, \dots, x^k\_{t+1} $?
4) Line 67 (right) states that the epoch complexity of your algorithms is optimal. Is it also optimal in terms of iterations? 5) Is the notion of performatively stable equilibrium in Definition 4.2 equivalent to the one of (Narang et al., 2023)? Why is it the appropriate notion of equilibrium for Section 4.1? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for reviewing our paper, and for acknowledging some of the strengths of the paper. **Qs for Authors:** **Re 1.**: No, the decision-maker **(DM)** doesn't know the algorithm or $\mathcal{D}\_e$ *a priori*. - **Algorithm**: In Line 396, the DM doesn't know $\mathcal{A}$. From the environment, it receives a sample of $\ell$ evaluated at the agents' response. In practice this means that the DM deploys action $u_t+\delta v_t$, observes $(\mathcal{A}(x_{t-1}, u_t+\delta v_t),\xi_t)$ and then computes $\ell(u_t+\delta v_t,(\mathcal{A}(x_{t-1}, u_t+\delta v_t),\xi_t))$ so as to compute $g_t$. It only needs to know its own loss and be able to query the environment. - **Environment**: Analogous to the above point, the decision-maker doesn't know $\mathcal{D}\_e$. We assume that they can stochastically query it. We will add further discussion to clarify. **Re 2.** Here, we are defining the problem the DM aims to solve if the agents were not responding to $u_t$ but were at equilibrium. This forms the **in equilibrium** benchmark: i.e., the point to which the DM aims to converge depends on the environment stochasticity even when the agents are stochastically best responding. We show convergence to this stationary equilibrium so that we don't need a time-varying benchmark like many of the existing works on time-varying games. We exploit problem structure to get *stronger* convergence guarantees as compared to bounding stochastic tracking errors (e.g., we bound $\Vert u_t-u^\ast\Vert^2$ vs $\Vert u_t-u_t^\ast\Vert^2$ where $\{u_t^\ast\}$ is some time-varying "optimal" sequence given the drift induced by the agents). **Re 3.** Thanks for catching this: it is a typo that should read $x\_{i,t}^{k+1}=\mathcal{A}\_i(x_t^0,x_t^1,\ldots, x_t^k,u_t)$.
Here, we should have the notation $x_{i,t}^{k+1}=\widetilde{\mathcal{A}}\_i(x_t^0,x_t^1,\ldots, x_t^k,u_t)$ where the $\widetilde{\mathcal{A}}\_i$ are $\rho$-contracting algorithms, and then define $x_t=\mathcal{A}(x_{t}^0,u_t)$ as the $\tau$-step joint algorithm since in the analysis that follows we treat $\mathcal{A}$ as the $\tau$-step response of the agents. We will update this notation to be consistent. **Re 4.** We are not aware of any lower bounds in terms of iterations for Stackelberg games employing epoch-based algorithms, and so it is not clear if it is optimal in terms of total iterations. We discuss lower bounds further below. **Re 5.** It is equivalent when Def 4.2 is restated in terms of the lifted $(n+1)$ player simultaneous play game as noted in the paper; e.g., consider $(\mathbb{E}_{\xi \sim \mathcal{D}_e(u^{ps})}\ell(u,(x,\xi)), f_1(x,u), \ldots, f_n(x,u))$ over the joint action $(u,x)\in \mathcal{U}\times \mathcal{X}$; then the performatively stable eq (PSE) $u^{ps}$ is a Nash eq of this game where the environment noise is fixed at $u^{ps}$. The difference between this equilibrium and Stackelberg (SE) is two-fold: the DM is not optimizing through the dependence of $u$ in the agents' best response $x^*(u)$ as in a SE, nor in the environment distribution $\mathcal{D}\_e(u)$. This is why it is the natural equilibrium concept for Sec 4.1 -- i.e., the DM is only using stochastic queries of $\mathbb{E}\_{z\sim \mathcal{D}(u)}\nabla_u\ell(u,z)$ instead of the full gradient $\mathbb{E}\_{z\sim \mathcal{D}(u)}\nabla_u\ell(u,z)+\frac{d}{dv}\mathbb{E}\_{z\sim \mathcal{D}(v)}\ell(u,z)$ as detailed in Sec 2.3 and the discussion at the top of Sec 4.1. To find an optimal point for $\mathbb{E}\_{z\sim \mathcal{D}(u)}\ell(u,z)$ the DM needs to optimize through $z(u)=(x^\ast(u),\xi(u))\sim \mathcal{D}_x(u)\times \mathcal{D}_e(u)$. Yet, it doesn't know the agents' preferences nor the environment distribution, but it can query it.
Sec 4.2, on the other hand, aims to estimate $\frac{d}{dv}\mathbb{E}\_{z\sim \mathcal{D}(v)}\ell(u,z)$ so that the SE is the right benchmark. **Re: "Other... Weaknesses":** - **Lower Bounds** are open and interesting, but beyond the scope of this work. We comment on a special sub-case: - In zero-sum settings, which are a special case of our problem, the ODE method in stochastic approximation provides some lower bounds. The continuous-time limit translates to a time-scale separation $\tau$ [1]. Here, there are lower bounds in terms of eigenvalues of the game Jacobian block matrices; this could be any $\tau>0$. It's possible to extend this to general-sum settings; however, the game Jacobian doesn't have a nice structure as in zero-sum games, so it's hard to meaningfully interpret these bounds. This suggests, however, that $\tau>0$ is necessary and that its precise value is problem-dependent. Thanks for the suggestions; we will incorporate them. - [1] Fiez & Ratliff. "Local Convergence Analysis of Gradient Descent Ascent with Finite Timescale Separation", ICLR 2021
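For readers unfamiliar with the single-point zeroth-order query loop described in Re 1 (deploy $u_t+\delta v_t$, observe only a loss value, form $g_t$), here is a minimal sketch. The toy quadratic loss, the values of $u$ and $\delta$, and the sample size are illustrative assumptions of ours, not the paper's settings.

```python
import numpy as np

# Single-point zeroth-order gradient estimate: perturb u by delta*v for a random
# unit vector v, query the loss value only, and form
#   g = (d / delta) * loss(u + delta * v) * v.
# In expectation g is the gradient of a delta-smoothed loss; for the toy
# quadratic loss(u) = ||u||^2 the smoothed gradient equals the true one, 2u.
d, delta, n = 2, 0.1, 200_000
u = np.array([1.0, 0.5])
loss = lambda w: np.sum(w ** 2, axis=-1)

rng = np.random.default_rng(2)
v = rng.standard_normal((n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)      # uniform on the unit sphere
g = (d / delta) * loss(u + delta * v)[:, None] * v  # n one-query estimates
grad_est = g.mean(axis=0)                           # approx 2u = [2.0, 1.0]
```

Each query touches the loss only once, which is why the DM "only needs to know its own loss and be able to query the environment"; the price is the large per-query variance (of order $(d/\delta)^2$) that the paper's stepsize and epoch choices must absorb.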
Beyond Task-Specific Reasoning: A Unified Conditional Generative Framework for Abstract Visual Reasoning
Accept (poster)
Summary: - This paper proposes a unified framework (UCGS) for solving 4 different abstract visual reasoning tasks with a single deep network architecture. - The already existing abstract visual reasoning tasks are based on problem panels consisting of several images showing simple visual concepts following different abstract rules, with the model's task being the selection of the correct missing image (Raven's Progressive Matrices, Visual Analogy Problems), the detection of an outlier w.r.t. the rule (Odd-One-Out), or the assignment of images to the correct problem panel (Bongard problems). - The authors introduce a formal generative framework that cuts these different tasks down to the estimation of conditional probabilities of images given the rest of the problem panel. - The paper provides a concrete implementation of the framework in the form of a transformer architecture applied on vector-quantized image patches and consisting of a hierarchy of modules to first encode patches per image, concepts on panel level, and finally generate a target image autoregressively patch by patch. - An experimental evaluation validates the effectiveness of the proposed framework and architecture in solving various AVR problems without retraining, as well as generalization to unseen problems and tasks, by showing results for three different settings with: - known tasks with unseen combinations of known abstract rules and visual concepts (ID Tasks), - unseen tasks with known abstract rules and visual concepts (ID-ZS), and - unseen tasks with unseen abstract rules and visual concepts (OOD-ZS). Claims And Evidence: - The authors claim that "UCGS can successfully solve various AVR tasks" (lines 038 ff.) and the "framework can solve tasks like RPM, Visual Analogy Problem, and Odd-one-out" (lines 094 ff.). - While the paper shows that the formal framework is flexible and can be used to tackle these tasks, the experimental results are still far away from completely solving these tasks.
Therefore, this statement could be misunderstood and the paper would benefit from a more precise formulation. - The paper claims "strong reasoning ability in ID tasks" (lines 365 ff.). - If I am not mistaken, the paper is missing a comparison with task-specific solvers that would put the documented experimental results in relation to what is possible with inductive biases specific to the tasks and answer the question of the cost of the unified framework and its advantage w.r.t. task flexibility. - The provided baselines are rather ablations of the architecture implementing the general framework. The remaining claims are supported by clear and convincing evidence, e.g., the zero-shot generalization of the reasoning ability to unseen tasks, novel rules and visual concepts by outperforming the random guess. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. Theoretical Claims: I have checked all theoretical claims and their proofs (Propositions 3.4 to 3.6). - Regarding Proposition 3.6, I have some doubts regarding the replacement of the last image in the panel: - The Bongard problem only describes the task of assigning a query image to the correct panel (left or right), not replacing an image from the panel with the query image. - Can the removal of the last image (or any other image) result in the abstract rule of that panel not being unique anymore and, in the worst case, even in both query images fitting both partial problem panels such that there is no unique solution anymore? - In lines 183 ff., the authors say that if both problem panels (left and right) are sampled uniformly from the dataset, then the probabilities of the partial panels are 1 over the size of the dataset. If I am not mistaken, that should only be correct if you can make some additional assumptions about the dataset, e.g., that there are no problem panels with equal partial panels if you remove the last image.
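For concreteness, the selection rule the summary attributes to UCGS (score each candidate answer by its conditional probability given the panel, computed autoregressively, and pick the argmax) can be sketched as follows. The toy `token_logprob` model and the integer tokens are purely illustrative stand-ins for the paper's VQ patch tokens and transformer.

```python
# Hypothetical stand-in for a learned next-token model: it prefers tokens that
# continue the arithmetic pattern of the context (illustrative only).
def token_logprob(context, token):
    predicted = context[-1] + (context[-1] - context[-2]) if len(context) >= 2 else 0
    return -abs(token - predicted)  # closer to the prediction = higher log-prob

def score(panel_tokens, candidate_tokens):
    """Autoregressive log p(candidate | panel): condition on panel plus the prefix
    of the candidate generated so far, summing per-token log-probabilities."""
    context = list(panel_tokens)
    total = 0.0
    for tok in candidate_tokens:
        total += token_logprob(context, tok)
        context.append(tok)
    return total

panel = [1, 2, 3]                      # tokenized problem panel (toy)
candidates = [[4, 5], [7, 9], [4, 9]]  # tokenized answer images (toy)
best = max(range(len(candidates)), key=lambda i: score(panel, candidates[i]))
# best = 0: the candidate continuing the pattern gets the highest conditional score
```

This also makes the reviewer's unification point concrete: the same `score` function can rank answer candidates (RPM), flag the lowest-scoring panel entry (Odd-One-Out), or compare a query against two partial panels (Bongard), with only the conditioning set changing.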
Experimental Designs Or Analyses: From the paper and supplementary material, the experimental designs seem valid.

Supplementary Material: I reviewed the supplementary material containing information about the datasets, model details, and further qualitative results.

Relation To Broader Scientific Literature: I am not very familiar with the related works in the area of abstract visual reasoning. However,
- the key contribution being the unified framework to tackle different tasks with a single model without retraining and enabling some generalization to unseen tasks and problems (combinations of abstract rules and visual concepts) seems novel,
- therefore, the generalization ability to unseen tasks and problems is a finding unique to this paper,
- as mentioned in the review section "Claims and Evidence", the paper misses the comparison with the performance of task-specific solvers, which do exist as mentioned in the introduction (lines 37 ff. right column) and seem to perform better, as stated in the limitations "further exploration is required to reach the performance achieved by the task-specific solvers" (lines 431 ff. right column).

Essential References Not Discussed:
- The main paper is missing descriptions and references for the G1-set and SVRT datasets, while the supplementary material provides these.
- As mentioned above, the paper mentions task-specific solvers but does not discuss the provided experimental results in relation to the performance of these task-specific solvers.

Besides the above, I am not aware of any missing essential references.

Other Strengths And Weaknesses:
Strengths:
- The paper addresses a very challenging and interesting problem in learning abstract rules from visual data with a unified framework addressing multiple tasks without retraining.
- It is mostly very clearly explained and easy to understand.
- The introduction, related work, and figure 1 introduce the topic of abstract visual reasoning very well.
- The unified framework (section 3.1) and transformer implementation (section 3.2) are explained clearly without being too detailed (hyperparameters etc. in the appendix).
- The generalization capabilities to unseen tasks are exciting and unique to the paper with its unified framework.

Weaknesses:
- Some lack of clarity:
  - Is the model training of VQVAE and transformer done end-to-end or, as usual, one after another? The total loss in eq. 9 seems to suggest the former, which I would find surprising.
  - The paper uses the terms "visual concepts, rules, definitions, problems, and tasks" without a clear definition. The meanings and differences became clear to me rather late in the paper (in sections 4.1.1 to 4.1.3).
- If I am not mistaken, the provided baselines are rather architecture ablations than existing approaches. If that is the case, the paper would benefit from making this clear.
- What is the motivation of including an object-centric baseline?

Other Comments Or Suggestions:
- Regarding the lack of clarity w.r.t. the used terms (abstract rules, visual concepts, etc.), the paper would benefit from defining them once early on and then using them consistently.
- Lines 64 to 70 (the first two sentences of that paragraph in the left column) sound very repetitive, which seems like an accident.
- The first paragraph in section 3.1 (before definition 3.1) is a bit repetitive w.r.t. the related work.

Questions For Authors: The most important questions are:
1. Why do you not compare with task-specific solvers to put the provided experimental results in relation to them, if you mention that "further exploration is required to reach the performance" of them (see limitations lines 431 ff.)? This comparison is very important in my opinion for the evaluation to be complete. Worse performance of the proposed approach is to be expected and would therefore not change my positive view of the experimental results (if not too large).
2.
Please see my doubts regarding proposition 3.6 in review section "Theoretical Claims": Does the replacement and therefore removal of an image in the Bongard problem panels not affect the uniqueness of solutions and the probabilities in the proof? A convincing clarification would remove my doubts regarding this theoretical claim.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
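The unified formulation debated in this review — recasting a selective AVR problem as conditional generation followed by nearest-candidate matching — can be illustrated with a minimal sketch. Here `generate` and `distance` are hypothetical stand-ins for the paper's conditional generative model and a similarity measure; the toy "rule" simply sums the context panels.

```python
def solve_selective(context_panels, candidates, generate, distance):
    # Generate the missing panel from the context, then select the
    # candidate closest to the generated result (implicit answer
    # generation, as the unified framework is described).
    target = generate(context_panels)
    return min(range(len(candidates)),
               key=lambda i: distance(candidates[i], target))

# Toy instance: panels are numbers and the hidden rule is summation.
choice = solve_selective(
    context_panels=[1, 2],
    candidates=[2, 3, 5],
    generate=lambda ctx: sum(ctx),      # toy stand-in model
    distance=lambda a, b: abs(a - b),   # toy similarity
)
# → 1 (the candidate 3 matches the generated target 3)
```

The same scoring loop works for generative tasks by returning `target` directly instead of an index, which is the unification the review summarizes.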
Rebuttal 1: Rebuttal: Thanks for the constructive suggestions. The detailed responses to the reviewer's comments are as follows.

**Q1: Some statements could be misunderstood and the paper would benefit from a more precise formulation.**

Thank you for pointing out the inaccurate statements. We will carefully check the manuscript to modify or remove the statements in the revised version.

**Q2: Comparison and discussion to task-specific solvers**

We conducted additional experiments on the task-specific solvers and multimodal LLMs GPT-4V and GPT-4o. Since UCGS-T can handle both selective and generative problems, we compare it to generative task-specific solvers PrAE [1], NVSA [2], GCA [3], ALANS [4] and RAISE [5] that support answer selection and generation. The task-specific solvers are trained separately on each dataset without the supervision of rule and attribute annotations to ensure consistency with the experimental setup of UCGS-T.

| Models | RAVEN | PGM |
| --- | --- | --- |
| PrAE [1] | 13.6 | - |
| NVSA [2] | 11.5 | - |
| GCA [3] | 37.3 | 31.7 |
| ALANS [4] | 50.1 | - |
| RAISE [5] | 54.5 | 14.0 |
| GPT-4V | 13.8 | - |
| GPT-4o | 19.2 | - |
| GPT-4o + Language Descriptions | 38.8 | - |
| UCGS-T (Ours) | 64.6 | 38.1 |

Experimental results show that UCGS-T outperforms generative task-specific solvers under the same setup. PrAE and ALANS cannot solve PGM problems since they only define visual concepts and abstract rules of RAVEN. Without additional annotations, task-specific solvers suffer significant performance drops, e.g., PrAE’s accuracy drops from 65.0% (reported by [1]) to 13.6% in our experiments. We exhibit the results of GPT-4V and GPT-4o on RAVEN from [6], where both models show lower accuracy than UCGS-T, ALANS and RAISE. The multimodal LLMs are not designed for abstract visual reasoning, therefore solving such tasks may require further prompt engineering and chain-of-thought design.
[1] Abstract spatial-temporal reasoning via probabilistic abduction and execution.
[2] A neuro-vector-symbolic architecture for solving Raven’s progressive matrices.
[3] Generating correct answers for progressive matrices intelligence tests.
[4] Learning algebraic representation for systematic generalization in abstract reasoning.
[5] Towards Generative Abstract Reasoning: Completing Raven’s Progressive Matrix via Rule Abstraction and Selection.
[6] What is the visual cognition gap between humans and multimodal llms?

**Q3: Doubts regarding the replacement of the last image in Proposition 3.6**

SVRT provides an image set for each rule containing different images generated by computer programs that follow the rule. Each image is assigned to only one panel of Bongard Problems (BPs). If the remaining images of a set are insufficient to form a complete panel, they will be dropped. Therefore, there are no duplicate images between BP panels even if one image is removed from the left and right problem panels.

**Q4: The main paper is missing descriptions and references for the G1-set and SVRT datasets, while the supplementary material provides these.**

Thank you for the suggestion. We will add the descriptions and references in the first paragraph of Experiments.

**Q5: Is the model training of VQVAE and transformer done end-to-end or as usual one after another?**

VQVAE is pretrained before training the remaining modules. We leave the image reconstruction loss in Eq. 9 to make it possible to finetune VQVAE end-to-end in the training stage. But we find that setting $\lambda$ to 0 (i.e., freezing the parameters of VQVAE) is the best choice. Please refer to Appendix B.1 for the detailed descriptions.

**Q6: Regarding the lack of clarity w.r.t. the used terms**

In this paper, tasks refer to different abstract visual reasoning tasks such as RPM and BP. Problems refer to individual problem instances. Definitions describe the form of different tasks.
Visual concepts refer to image attributes like object size and color. Rules are changing patterns on attributes (e.g., progressive changes). Thanks for the helpful suggestion to make the terms clear. We will introduce the terms at the beginning of the method section.

**Q7: The motivation of including an object-centric baseline**

Some task-specific solvers [1, 2] have realized abstract visual reasoning with object-centric representations. Therefore, we adopt it as one of the typical approaches to explore the performance of different visual representations in UCGS.

[1] Learning to reason over visual objects.
[2] Systematic visual reasoning through object-centric relational abstraction.

**Q8: Other comments or suggestions about writing**

Thanks for the constructive comments. In the revised version, we will clarify that the baselines are ablations on model architecture when introducing the baselines. And we will also remove the repeated parts (Lines 64-70 and the first paragraph in Section 3.1) in the manuscript.
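The rebuttal's answer on training (Q5) — a pretrained VQVAE whose reconstruction term is kept in the Eq. 9 objective but weighted by $\lambda$, with $\lambda = 0$ amounting to a frozen VQVAE — can be sketched as a simple weighted loss. The scalar arguments `l_transformer` and `l_recon` are hypothetical placeholders for the two loss terms; this is an illustration of the weighting, not the paper's actual training code.

```python
def total_loss(l_transformer, l_recon, lam=0.0):
    # Eq. 9-style combined objective (sketch): with lam = 0 the
    # reconstruction term contributes nothing, which corresponds to
    # keeping the pretrained VQVAE frozen during the reasoning stage.
    return l_transformer + lam * l_recon
```

With `lam=0.0` (the setting the authors found best), only the transformer term drives the gradient; a nonzero `lam` would finetune the VQVAE end-to-end.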
Summary: This paper presents a method for solving abstract visual reasoning tasks that aims to unify previous methods for different types of tasks (e.g. matrix reasoning vs. odd-one-out) and different modes for solving problems (e.g. classification vs. generation). The method is evaluated on various abstract visual reasoning tasks, with an emphasis on the ability to generalize between tasks. Claims And Evidence: The primary concern with this work is that, despite claiming to present a highly general model, it is overly focused on a very specific subset of idealized abstract visual reasoning problems. In general, abstract visual reasoning tasks have been created because we think that they tell us something important about the more general reasoning capabilities of our models, not because we are interested in solving these tasks per se. Thus, it is unclear why a 'general purpose solver' is needed for this specific subset of tasks. Despite being more general than some previous models in this literature (e.g., those that are specific to classification-based matrix reasoning tasks), the proposed approach is still very specific to a particular type of problem. It assumes that inputs will be presented in a set of discrete panels, that these panels will consist of relatively simple geometric forms, and that the objects will be entirely explainable via a relatively simple set of abstract rules. Thus, the proposed approach is very far from the 'general purpose abstract reasoner' that is promoted in the abstract and the introduction. The performance of the model is also very poor in some cases. For instance, the iid performance on the RAVEN and PGM benchmarks is well below the performance of several models which are not included as baselines. The introduction touts the zero-shot reasoning abilities of the model, but the zero-shot generalization to new tasks is generally very poor (though somewhat better than the baselines). 
There is also no discussion or evaluation of the truly general-purpose systems (LLMs, VLMs, reasoning models like o1) that increasingly display strong abilities to solve these sorts of problems, while also not being limited to a very specific problem format. Methods And Evaluation Criteria: There are many baselines from the literature on abstract visual reasoning in deep learning systems (e.g. architectures designed for problems like RAVEN and PGM) that are not discussed or compared with the proposed approach. The tasks that are considered are also very similar, despite the emphasis of the paper on generality. There are many other abstract visual reasoning tasks (Bongard problems, ARC, visual question-answering tasks, tasks involving reasoning over real-world images) that are important to consider when evaluating a putatively general-purpose abstract visual reasoning system, and that are increasingly solvable by LLMs or VLMs. Theoretical Claims: The propositions and proofs introduced in section 3.1, though correct, seem somewhat unnecessary for what ultimately turns out to be a relatively straightforward generative architecture for solving these types of abstract visual reasoning problems. The definitions introduced in this section also underscore the very specific domain to which the model can be applied (i.e. it is only applicable to problems involving panels governed by specific rules). Experimental Designs Or Analyses: The experiments are reasonable given the goals of the paper, but they are very limited to a specific type of abstract visual reasoning task. Supplementary Material: I read the entire supplementary material. Relation To Broader Scientific Literature: The paper primarily considers only other work that evaluates deep learning systems on abstract visual reasoning tasks. There is very little connection to other types of reasoning tasks, or the broader space of models that are increasingly able to solve a wide range of tasks. 
Essential References Not Discussed: Within the domain of deep learning models designed to solve abstract visual reasoning tasks, the references considered are reasonable. But many of these models are not directly compared with the proposed approach, despite achieving better performance in some settings. There is also very little consideration of reasoning beyond these types of tasks, and very little discussion of other approaches to solving abstract visual reasoning tasks. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for the constructive suggestions. The detailed responses to the reviewer's comments are as follows.

**Q1: Comparison and discussion to RPM solvers and general-purpose systems**

We conducted additional experiments on the general-purpose systems GPT-4V and GPT-4o and classic task-specific RPM solvers. Since UCGS-T can handle both selective and generative problems, we compare it to generative task-specific solvers PrAE [1], NVSA [2], GCA [3], ALANS [4] and RAISE [5] that support answer selection and generation. The task-specific solvers are trained separately on each dataset without the supervision of rule and attribute annotations to ensure consistency with the experimental setup of UCGS-T.

| Models | RAVEN | PGM |
| --- | --- | --- |
| PrAE [1] | 13.6 | - |
| NVSA [2] | 11.5 | - |
| GCA [3] | 37.3 | 31.7 |
| ALANS [4] | 50.1 | - |
| RAISE [5] | 54.5 | 14.0 |
| GPT-4V | 13.8 | - |
| GPT-4o | 19.2 | - |
| GPT-4o + Language Descriptions | 38.8 | - |
| UCGS-T (Ours) | 64.6 | 38.1 |

**Comparison with task-specific solvers.** The experimental results indicate that under the same experimental setup, UCGS-T outperforms the generative task-specific RPM solvers. Since PrAE and ALANS only define visual concepts and abstract rules for RAVEN, they are incapable of solving PGM problems. Our experiments reveal that, without additional annotation information, the performance of the generative task-specific RPM solvers significantly declines. For example, [1] reports that PrAE achieves an accuracy of 65.0% after rule-supervised training, which declines to 13.6% in our experiments.

**Comparison with general-purpose systems.** We also exhibit the results of the general-purpose systems GPT-4V and GPT-4o on RAVEN. The accuracy scores are reported by [6], where GPT-4V and GPT-4o have lower accuracy than UCGS-T, ALANS and RAISE.
GPT-4o shows a notable performance improvement when provided with language descriptions of candidate images (from 19.2% to 38.8%), but still falls short of UCGS-T, ALANS, and RAISE. While the general-purpose systems GPT-4V and GPT-4o are not specifically designed for abstract visual reasoning, solving such tasks may require further prompt engineering and chain-of-thought design.

[1] Abstract spatial-temporal reasoning via probabilistic abduction and execution.
[2] A neuro-vector-symbolic architecture for solving Raven’s progressive matrices.
[3] Generating correct answers for progressive matrices intelligence tests.
[4] Learning algebraic representation for systematic generalization in abstract reasoning.
[5] Towards Generative Abstract Reasoning: Completing Raven’s Progressive Matrix via Rule Abstraction and Selection.
[6] What is the visual cognition gap between humans and multimodal llms?

**Q2: Despite claiming to present a highly general model, it is overly focused on a very specific subset of idealized abstract visual reasoning problems.**

As stated in Lines 26–29 of the abstract, UCGS is "general" since it aims to solve multiple abstract visual reasoning (AVR) tasks in a unified manner, which are often treated as independent in task-specific solvers. We agree that AVR tasks are widely used not because researchers are interested in achieving high scores in these tasks, but because they can reveal the core reasoning abilities of AI. Our goal is precisely to develop a more general framework for AVR—one that can exhibit multi-task AVR ability as humans do, rather than simply “solving these tasks.” To this end, we use representative AVR tasks as benchmarks. While the matrix reasoning tasks may seem simple in form, building multi-task solvers on them remains challenging. As shown in the table, GPT-4V and GPT-4o can hardly achieve performance comparable to task-specific solvers.
The performance of UCGS-T illustrates the effectiveness of the proposed framework on the typical AVR tasks. Importantly, selecting these tasks does not mean our framework is limited to them. UCGS can extend to other tasks, such as visual analogy extrapolation problems [1], which share a similar structure with matrix reasoning.

[1] A review of emerging research directions in abstract visual reasoning.

**Q3: The propositions and proofs in section 3.1 seem somewhat unnecessary …**

The significance of the propositions consists of two aspects. On one hand, they demonstrate how different forms of classical reasoning tasks can be described as conditional generation processes. On the other hand, they explain how generative and selective tasks can be unified into one framework. Specifically, some selective tasks can be reformulated as an implicit answer generation problem, where the goal is to search for an option among predefined candidates that matches the generated result. This design allows a single conditional generative model to solve both types of tasks simultaneously, eliminating the requirement of an additional scoring network for answer selection.

---

Rebuttal Comment 1.1: Comment: Thank you to the authors for these replies. While I appreciate the engagement and additional results, I feel that my core concerns have not been addressed. First, the performance of the model is not particularly competitive relative to previous approaches on these datasets. New baselines have been included to compare with other generative task-specific solvers, but there are many non-generative (classification-based) models that perform much better on these tasks, many of them now approaching saturation on both RAVEN and PGM. I see no principled reason not to include these other baselines in the comparison.
Second, despite the purported general-purpose nature of the approach, it is still highly specific to the format of abstract visual reasoning tasks, and it is not even clear how it will be generalized to closely related tasks like ARC, or visual reasoning tasks that involve real-world images. Though I appreciate that the task is slightly more general relative to some other approaches in this literature, it is still very specifically tailored to these kinds of tasks, and there is no explanation of how the approach will be scaled to handle more open-ended, unstructured, real-world reasoning problems, or how it will be integrated into more general-purpose systems. Again, it seems that the purpose is merely to solve abstract visual reasoning tasks for their own sake, rather than to develop an approach that can improve the reasoning abilities of real-world agents. Third, there are new baselines reported for gpt-4v and gpt-4o, but these are not state-of-the-art systems for reasoning. It would be more appropriate to perform a comparison with reasoning models like o1 or r1. Additionally, it is unclear if current systems struggle with visual reasoning tasks because of the reasoning or visual components of the tasks. There is a lot of work suggesting that multimodal models struggle primarily with visual encoding. Therefore, it would be especially informative to disentangle the visual vs. reasoning demands of these tasks when comparing with baselines like gpt-4o or reasoning models like o1.
Summary: The authors transform a series of different classical abstract visual reasoning tasks by making them all into a task of generating one missing data panel given the remaining set of example panels instantiating a visual concept, which is captured with conditional generative models. Using a new architecture based on transformers, the system is trained on several abstract visual reasoning tasks, performs well across tasks, and transfers even to unseen tasks. Claims And Evidence: While I could not run the code to replicate the experiments, the reported experiments and evaluations support the claims. Methods And Evaluation Criteria: The authors use selection accuracy as a single evaluation criterion. Theoretical Claims: The paper does not contain any proof. Propositions 3.4 to 3.6 are rather basic intuitions that are translated from language to straightforward formulas. Experimental Designs Or Analyses: The abstract visual reasoning tasks are standard benchmarks. Supplementary Material: Yes, I read through the entire supplementary material. Relation To Broader Scientific Literature: Abstract visual reasoning is of broad interest to the community. The present paper profits most from reformulating classic problems into one single problem, i.e., generating a new data point (panel) based on a number of positive examples (panels) from the same concept. While this is an interesting manipulation that allows the transfer across different AVR tasks, it is also a strong modification of the original problems, e.g. in the case of the Bongard problems, where the original goal is to produce a sentence describing the concept of one set versus the concept of a second set. Conceptually, it is an important goal to learn more abstract visual concepts, but it is not clear how the current system that learns such implicit abstract representations would compare to neuro-symbolic systems, particularly in terms of explainability. 
Intuitively, it is unclear why the performance on the RPM task is comparatively low relative to other systems trained exclusively on this task, and why out-of-distribution generalization is only comparable to the other systems.

Essential References Not Discussed: There are multimodal models that have addressed similar problem settings, e.g.
- Zhao, H., Cai, Z., Si, S., Ma, X., An, K., Chen, L., Liu, Z., Wang, S., Han, W. and Chang, B., MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning. In The Twelfth International Conference on Learning Representations.

and neuro-symbolic systems, e.g.:
- Hersche, M., Zeqiri, M., Benini, L., Sebastian, A. and Rahimi, A., 2023. A neuro-vector-symbolic architecture for solving Raven’s progressive matrices. Nature Machine Intelligence, 5(4), pp.363-375.

If the Bongard problems are part of this paper, then it seems necessary to cite
- Yun, X., Bohn, T. and Ling, C., 2020. A deeper look at Bongard problems. In Advances in Artificial Intelligence: 33rd Canadian Conference on Artificial Intelligence, Canadian AI 2020, Ottawa, ON, Canada, May 13–15, 2020, Proceedings 33 (pp. 528-539). Springer International Publishing.
- Depeweg, S., Rothkopf, C.A. and Jäkel, F., 2024. Solving bongard problems with a visual language and pragmatic constraints. Cognitive Science, 48(5), p.e13432.

Other Strengths And Weaknesses: The goal of the study is very relevant, timely, and interesting. It would be very helpful to contextualize the performance results: how well do multimodal models fare by comparison, and why? How do neurosymbolic models perform, and why? While I am convinced of the helpfulness of the old approach, “if you cannot solve a problem, solve a different problem”, it should be pointed out that e.g. the Bongard problems are much harder in their original formulation in which a sentence describing the concept has to be formulated in natural language.

Other Comments Or Suggestions: Fig.
1 (d) does not show BPs but SVRT by Fleuret et al. (2011). In fig. 2 (a) it looks like there is a redundant positional embedding 2/3. BP is used in proposition 3.6, although the Bongard Problems are introduced later in the text and then disappear altogether.

Questions For Authors:
- The proposed architecture is rather intricate and complex. Are there any insights the authors would like to share beyond the performance on the benchmarks?
- Can the authors share any reasoning for the specific form of the
- Can the authors explain the large improvement in accuracy in the ID tasks with comparatively little improvement compared to baseline in the out of distribution tasks?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the constructive suggestions. The detailed responses to the reviewer's comments are as follows.

**Q1: Concerns about the formulation and experiments of Bongard problems**

Similar to the task-specific solvers, UCGS is a framework designed to solve abstract visual reasoning (AVR) tasks with only visual input/output. Therefore, we adopt a simplified version of Bongard Problems (BPs) mentioned in [1], where the task of describing rules via natural language is transformed into a classification problem on query images. Since the modified BPs have a similar definition to SVRT, we validate the ability of models to solve the modified BPs based on SVRT in our experiments. We will provide a more detailed explanation in the revised manuscript to clarify the setting of BPs.

[1] A review of emerging research directions in abstract visual reasoning.

**Q2: Comparison and discussion to multimodal models and neuro-symbolic models**

We conducted additional experiments on multimodal LLMs GPT-4V and GPT-4o and task-specific solvers. Since UCGS-T can handle both selective and generative problems, we compare it to generative task-specific solvers PrAE [1], NVSA [2], GCA [3], ALANS [4] and RAISE [5] that support answer selection and generation, where PrAE and NVSA are neuro-symbolic models. The task-specific solvers are trained separately on each dataset without supervision of rule and attribute annotations to ensure consistency with the experimental setup of UCGS-T.

| Models | RAVEN | PGM |
| --- | --- | --- |
| PrAE [1] | 13.6 | - |
| NVSA [2] | 11.5 | - |
| GCA [3] | 37.3 | 31.7 |
| ALANS [4] | 50.1 | - |
| RAISE [5] | 54.5 | 14.0 |
| GPT-4V | 13.8 | - |
| GPT-4o | 19.2 | - |
| GPT-4o + Language Descriptions | 38.8 | - |
| UCGS-T (Ours) | 64.6 | 38.1 |

The experimental results indicate that under the same experimental setup, UCGS-T outperforms the generative task-specific solvers.
**Comparison with neuro-symbolic models.** Without additional annotations, neuro-symbolic models show a significant performance drop on RPMs. [1] reports PrAE achieves 65.0% accuracy with rule supervision, but it drops to 13.6% in our experiments. PrAE and NVSA rely on predefined rules and explicitly defined representations for specific datasets, making it hard for them to reason about undefined concepts and rules. UCGS-T does not rely on manually designed concepts and rules. It performs an independent reasoning process on each visual concept, inferring concepts and rules without additional annotations.

**Comparison with GPT-4V and GPT-4o.** We exhibit GPT-4V and GPT-4o's performance on RAVEN from [6]. Both models have lower accuracy than UCGS-T, ALANS, RAISE and GCA. GPT-4o improves with candidate image descriptions but remains behind UCGS-T, ALANS, and RAISE. Since it is not designed for abstract visual reasoning, GPT-4o may require further prompt engineering and chain-of-thought design to improve its performance on RPM.

[1] Abstract spatial-temporal reasoning via probabilistic abduction and execution.
[2] A neuro-vector-symbolic architecture for solving Raven’s progressive matrices.
[3] Generating correct answers for progressive matrices intelligence tests.
[4] Learning algebraic representation for systematic generalization in abstract reasoning.
[5] Towards Generative Abstract Reasoning: Completing Raven’s Progressive Matrix via Rule Abstraction and Selection.
[6] What is the visual cognition gap between humans and multimodal llms?

**Q3: Missing references to some papers of multimodal models and Bongard problems**

Thanks for the suggestions. We will cite them in the revised version.

**Q4: Can the authors explain the large improvement in accuracy in the ID tasks with comparatively little improvement compared to baseline in the out of distribution tasks?**

The performance of models in ID tasks is influenced by the architecture and composed modules.
In OOD tasks, the difference between the training data and unseen visual concepts or abstract rules dominates the performance of models, which may reduce the performance difference between baselines and UCGS-T.

**Q5: The proposed architecture is rather intricate and complex. Are there any insights the authors would like to share beyond the performance on the benchmarks?**

UCGS-T extracts global visual concepts from patch-based representations for reasoning. RPM tests often involve global rules, e.g., consistent object counts across panels, which patch-based and object-centric representations struggle to capture. UCGS-T addresses this by adding a module to extract and reason independently on visual concepts. Exploring better image tokenizers may help build models with stronger OOD reasoning capabilities.

**Q6: In fig. 2 (a) it looks like there is a redundant positional embedding 2/3.**

The positional embedding 2 indicates the target position and is used as the query vector in the panel encoder.
Simple Randomized Rounding for Max-Min Eigenvalue Augmentation
Accept (poster)
Summary: This paper researches the $\textbf{max-min eigenvalue augmentation}$ problem: given symmetric PSD matrices $M, A_1, \cdots, A_m \in \mathbb{R}^{n \times n}$ and a positive integer $k < m$, the goal is to solve the following optimization problem
$$\max_{z \in \{0, 1\}^m, \|z\|_0 \le k} \lambda_{\min} \left( M + \sum_{i=1}^{m} z_i A_i \right),$$
which is a general case of $\textbf{Bayesian E-optimal design}$ and $\textbf{Maximum algebraic connectivity augmentation}$. Since this problem is NP-hard, this paper turns to solving its relaxed version
$$\max_{z \in [0, 1]^m, \eta \in \mathbb{R}} \left\{ \eta \,\bigg|\, M + \sum_{i=1}^{m} z_i A_i \succeq \eta I, \ \sum_{i=1}^{m} z_i \le k \right\},$$
which is an SDP problem, and utilizes the randomized rounding technique to get an approximate solution. Specifically, let $(z_{\text{sdp}}, \eta_{\text{sdp}})$ be the returned solution, $R := \max \{ \text{tr}(A_i) \}_{i=1}^{m}$, and
$$\text{INC} := \lambda_{\min} \left( M + \sum_{i=1}^{m} [z_{\text{sdp}}]_i A_i \right) - \lambda_{\min}(M).$$
This paper establishes the guarantee that if $\text{INC} = \Omega( R \ln k)$, the randomized rounding method yields a constant-factor approximate solution to the original problem.

Claims And Evidence: Yes, this paper provides a strict proof for the main theorem.

Methods And Evaluation Criteria: This paper is purely theoretical and doesn't include experiments.

Theoretical Claims: The proofs seem correct, though I didn't read all of them carefully.

Experimental Designs Or Analyses: None.

Supplementary Material: Yes, I reviewed the appendix, which only has appendix B.

Relation To Broader Scientific Literature: The max-min eigenvalue augmentation (MMEA) problem studied in this paper can be taken as a generalization of the Bayesian E-optimal design (BED) and the maximum algebraic connectivity augmentation (MACA).
However, there are two different aspects: (1) the augmentation matrix $A_i$, $i \in [m]$ is symmetric PSD, while it's a rank-one matrix in BED and MACA; (2) MMEA imposes the constraint $k < n$, which differs from the settings in BED and MACA. Essential References Not Discussed: None. Other Strengths And Weaknesses: $\textbf{Strengths:}$ (1) The core technical contribution is a novel intrinsic dimension concentration inequality for the minimum eigenvalue of a sum of random PSD matrices, filling a gap in matrix concentration inequalities. (2) This work proves that the proposed method provides a constant-factor approximation when the optimal increase in the minimum eigenvalue is sufficiently large. (3) Unlike many prior works, the approach accommodates augmentation matrices of arbitrary rank rather than just rank-one matrices. $\textbf{Weaknesses:}$ (1) The theoretical guarantee holds only when the optimal increase is large enough, which might not always be the case in practice. (2) There is no empirical evaluation or computational study to support the practical performance of the method. Other Comments Or Suggestions: (1) The definitions of quantities $R$ and $W$ only appear in Abstract and Introduction (Line 184). It would be helpful to repeat them when they appear in subsequent sections. (2) In Line 92, for the maximum algebraic connectivity augmentation problem, it should be $M = \mathbf{1} \mathbf{1}^\top$ since $L$ is the Laplacian matrix corresponding to $A_e$. (3) In Line 342, " It also of interest" -> "It is also of interest". Questions For Authors: The max-min eigenvalue augmentation (MMEA) is closely related to the Bayesian E-optimal design (BED) and the maximum algebraic connectivity augmentation (MACA). Could you compare this paper with existing works on BED and MACA in terms of theoretical guarantees and running time? Code Of Conduct: Affirmed. Overall Recommendation: 3
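The rounding scheme described in the summary above is simple enough to sketch directly: solve the SDP relaxation, then include each $A_i$ independently with probability $[z_{\text{sdp}}]_i$. A minimal illustration in Python, with the SDP solve assumed already done (`z_sdp` is a hypothetical fractional solution, and the $2 \times 2$ matrices, stored as `(a, b, c)` triples for $[[a, b], [b, c]]$, are illustrative, not from the paper):

```python
import math
import random

def lambda_min_2x2(a, b, c):
    """Minimum eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    radius = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return mean - radius

def randomized_rounding(z_sdp, rng):
    """Include each augmentation matrix independently with probability [z_sdp]_i."""
    return [1 if rng.random() < zi else 0 for zi in z_sdp]

# Illustrative instance: M and the A_i as (a, b, c) triples; all are PSD.
M = (1.0, 0.0, 0.0)                      # lambda_min(M) = 0
A = [(0.0, 0.0, 1.0), (0.5, 0.5, 0.5), (1.0, 0.0, 1.0)]
z_sdp = [0.9, 0.6, 0.5]                  # hypothetical fractional SDP solution

rng = random.Random(0)
z = randomized_rounding(z_sdp, rng)
# Entrywise sum of the selected PSD matrices added to M.
augmented = tuple(m + sum(zi * ai[j] for zi, ai in zip(z, A))
                  for j, m in enumerate(M))
print(z, lambda_min_2x2(*augmented))
```

Since every $A_i$ is PSD, the rounded augmentation can only increase the minimum eigenvalue relative to $\lambda_{\min}(M)$; the paper's contribution is quantifying how close this random increase stays to INC.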
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our manuscript! Below we include our responses for each relevant section. ${\bf \text{Other Strengths And Weaknesses:}}$ 1. Yes, the optimal increase might not always be sufficiently large in practice. However, if it is not sufficiently large, then it might also be the case that it is not worth the cost of running design experiments (BED) or removing edges (MACA). Perhaps we should mention something like this in the paper, but we also do not want to give the impression of overselling our work. 2. We would be happy to add experiments (that we have already run). Please see our response to reviewer gow2 (specifically, bullet number 4) for further discussion. ${\bf \text{Other Comments Or Suggestions:}}$ 1. We agree and will repeat them in subsequent sections. 2. $L$ is the Laplacian matrix of the graph given in a problem instance (i.e., the graph without any edges removed). Taking $M = \mathbf{1}\mathbf{1}^{\top}$ corresponds to the case in which the given graph is empty. 3. We will update this typo. ${\bf \text{Questions For Authors:}}$ In the bullets below, we discuss existing theoretical guarantees and run times for BED and MACA, respectively. Note that all work that we discuss considers the case in which the augmentation matrices are rank-one matrices. - BED. Work on BED focuses on the setting in which $k \geq n$. Most notably, (Allen-Zhu et al., 2021) develop an algorithm that provides a $(1-\epsilon)$-approximation under the condition that $k = \Omega(\frac{n}{\epsilon^2})$. Because we focus on the setting in which $k < n$, our theoretical guarantee is not comparable. Regarding run times, their algorithm runs in $\tilde{O}(mn^2)$ time, while our algorithm runs in $\tilde{O}(m)$ time. That said, the main bottleneck is solving the SDP relaxation, so this runtime comparison is not relevant, unless one developed a method for approximately solving the relaxation in better than $\tilde{O}(mn^2)$ time. - MACA.
(Kolla et al., 2010) provide guarantees for an algorithm that hold in both the $k < n$ and $k \geq n$ settings. Their algorithm provides a constant-factor approximation under the assumption that the optimal increase is $\Omega(1)$, whereas our algorithm provides a constant-factor approximation under the assumption that the optimal increase is $\Omega(\ln k)$, as $R = \sqrt{2}$ in this case. The runtime of their algorithm is again larger than simple randomized rounding’s runtime. We are happy to include this discussion in the manuscript. Additional references could be discussed, but these are the most relevant to your question.
Summary: In this work, the authors provided a new algorithm for the max-min eigenvalue augmentation problem. The method is able to achieve a constant approximation to the optimal value with a constant probability, given that the augmentation improvement is sufficiently large. The results are established by proving an extension of the matrix concentration inequality. Please see the following sections for my detailed comments. Claims And Evidence: Due to the time limit, I did not check the correctness of the theory, except those briefly mentioned in the main paper. The theoretical claims seem correct by checking the main paper. Methods And Evaluation Criteria: The methods and evaluation criteria make sense to me. Theoretical Claims: Due to the time limit, I did not check the correctness of the theory. The theoretical claims and the proofs in the main paper seem correct. Experimental Designs Or Analyses: N/A. Supplementary Material: I did not check the supplementary material due to the time limit. Relation To Broader Scientific Literature: This paper is related to the topic of ICML conference and should be interesting to audiences from machine learning and numerical methods fields. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Please see my comments in other sections. Other Comments Or Suggestions: - Line 275, left column: If $\beta$ is larger, I think only $L_1(\beta)$ will have a smaller dimension? This seems to be consistent with the fact that Lemma 3.2 only considers $L_1(\beta)$. - It would be better if the authors could explain how the bound changes from the dimension $n$ in Theorem 2.5 to the intrinsic dimension. The same comment applies to Lemma 3.3. - I wonder why the probability bound in Corollary 4.3 is 29/64 instead of 61/128? - As pointed out by the authors, it would be important to include empirical results of the proposed algorithm. 
Although the contributions of this paper are on the theory side, it is helpful to demonstrate the value of the proposed algorithm by exhibiting its empirical performance. It would be particularly interesting to compare the new algorithm with existing algorithms designed for a specific application, such as the Bayesian E-optimal design problem and the maximum algebraic connectivity augmentation problem. Questions For Authors: Please see my comments in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our manuscript! Below we include our responses for each relevant section. ${\bf \text{Other Comments Or Suggestions:}}$ 1. Yes, nice catch, we should have 1 instead of j here; we can make this update. 2. We agree that the exposition would benefit from this discussion. Regarding Lemma 3.2, the dimension of $L_1(\beta)$ is bounded by the intrinsic dimension. Regarding Lemma 3.3, there is a typo in the sentence preceding the lemma: it should read “Because Theorem 2.2 provides…” instead of “Because Theorem 2.4 provides…”. This sentence would then explain how intrinsic dimension comes into play. 3. Our reasoning for $29/64$ is as follows. Taking $E_1$ and $E_2$ to be the events of interest, we have $$P(E_1 \cap E_2) = P(E_1) + P(E_2) - P(E_1 \cup E_2) \geq 1/2 + 61/64 - 1 = 29/64.$$ 4. We agree with you (and ourselves for that matter!). We have already run some synthetic (maximum algebraic connectivity and Bayesian optimal design) experiments along these lines. We would be happy to include them, assuming that it is within scope; there is room for a small additional section, but we are not sure if it is standard to incorporate something like this at this point. We compare against Fedorov’s exchange algorithm because (1) the algorithm is the primary algorithm used in state-of-the-art software implementations, and (2) the algorithm can readily be applied in the general-rank and $k < n$ setting. Our main observations: Fedorov’s algorithm typically provides a better approximation, but it can take a significantly longer time to find an approximation that is on par with simple randomized rounding (even accounting for the SDP solve). Perhaps this would be better suited for future work. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal!
I would suggest the authors include the experimental results in the supplementary material if there is not enough room in the main manuscript. I am happy to increase my rating.
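The $29/64$ bound from point 3 of the rebuttal above is a Bonferroni-type inequality, $P(E_1 \cap E_2) \geq P(E_1) + P(E_2) - 1$, and can be checked with exact arithmetic:

```python
from fractions import Fraction

# P(E1 ∩ E2) = P(E1) + P(E2) - P(E1 ∪ E2) >= P(E1) + P(E2) - 1,
# with P(E1) >= 1/2 and P(E2) >= 61/64 as in the rebuttal.
p_e1 = Fraction(1, 2)
p_e2 = Fraction(61, 64)
lower_bound = p_e1 + p_e2 - 1
print(lower_bound)  # 29/64
```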
Summary: This paper studies the max-min eigenvalue augmentation problem: choose a subset of at most $k$ PSD matrices $A_i$ to augment a PSD matrix $M$ to maximize the minimum eigenvalue of $M + \sum_i A_i$. This problem generalizes the Bayesian E-optimal design (where certain experiments need to run together) and maximum algebraic connectivity augmentation problems (to $k < n$). This paper gives a simple randomized rounding method of an SDP relaxation of the max-min eigenvalue augmentation problem, and proves that if the optimal increase (OPT - $\lambda_{min}(M)$) is sufficiently large, then the rounding method gives a constant-factor approximation. The analysis depends on a new Chernoff-type matrix concentration inequality for the minimum eigenvalue of a sum of independent random PSD matrices. Claims And Evidence: In this theoretical paper, all claims are proved, or citations are provided. Methods And Evaluation Criteria: This is a theoretical paper with proofs but no empirical evaluation. Theoretical Claims: I checked the claims up to Section 2, and did not find any problems. Experimental Designs Or Analyses: This paper does not have experiments. Supplementary Material: I did not review any supplementary materials. Relation To Broader Scientific Literature: This paper proves a new Chernoff-type matrix concentration inequality for the minimum eigenvalue of a sum of independent random PSD matrices, while the literature only has such an inequality for the maximum eigenvalue. This paper gives an approximation algorithm for the max-min eigenvalue augmentation problem, which generalizes the Bayesian E-optimal design and maximum algebraic connectivity augmentation problems. Essential References Not Discussed: I am not aware of essential references not discussed in this paper.
Other Strengths And Weaknesses: (As stated in the paper and in the summary above,) the randomized rounding algorithm is very simple: include a matrix $A_i$ with probability exactly the value in the SDP relaxation. And the analysis appears to be intuitive. However, this reader may appreciate some discussion on the technical restriction on the optimal increase ($OPT - \lambda_{min}(M) = \Omega(R\ln k)$): whether this restriction is necessary (e.g., due to lower bounds), or whether it arises naturally in practice. Other Comments Or Suggestions: This paper is very well written, and I don't have suggestions. Questions For Authors: For the technical restriction on the optimal increase ($OPT - \lambda_{min}(M) = \Omega(R\ln k)$), besides “it works”, is it necessary (e.g., due to matching lower bounds), or is it a mild assumption in practice? Ethical Review Concerns: No ethical review needed. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our manuscript! Below we include our responses for each relevant section. ${\bf \text{Questions For Authors:}}$ We left the lower bound as a direction for future work, but since our original submission, we have realized that it is tight and would be happy to include this result. The result more-or-less follows from the fact that the matrix Chernoff inequality for the minimum eigenvalue (i.e., Theorem 2.4) is sharp (see Section 5.1.2 of Tropp). More specifically, the idea is to take $M$ to be a diagonal matrix whose first $n-k$ diagonal entries are large and whose last $k$ diagonal entries equal $0$. We restrict the augmentation matrices to the last $k$ dimensions and take them as specified in Tropp. We could include the result as a short proposition and place its proof in the appendix.
Summary: This paper presents a matrix concentration result on the minimum eigenvalue of the sum of PSD matrices, where the lower bound is parameterized via a generalization of the intrinsic dimension of the expected sum of the matrices. This complements the existing upper bounds, e.g., from Joel Tropp's monograph on matrix concentration. The paper shows that this concentration can be used to prove that a simple SDP relaxation + randomized rounding scheme achieves a constant factor approximation to the "max min eigenvalue augmentation problem" with constant probability in a regime where previous results did not give guarantees. This max min eigenvalue augmentation problem is a generalization of certain optimal design and resilient graph design problems. The regime that this paper studies appears to correspond to less common invocations of these optimal design and graph problems, but these seem like natural problems to study. Paper is entirely theory; no experiments at all. Claims And Evidence: The paper is entirely theory. The paper can be partitioned into two parts: proving the matrix concentration result, and proving that the concentration result is helpful for the max min eigenvalue augmentation problem. Most of the paper focuses on the former, and shows how a certain partitioning of the unit sphere plus the existing matrix concentration literature can be used to prove this new minimum eigenvalue concentration result. It's a short and clever series of arguments. There's a smaller section of the paper that argues how the SDP relaxation + randomized rounding algorithm provides a good (constant factor) solution to the max min eigenvalue augmentation problem, which seems to rely on relatively standard machinery as far as I can tell. It's good, but the emphasis is on the matrix concentration imo. Methods And Evaluation Criteria: There's no empirical evidence in this paper. Not a problem per se, just a statement. Theoretical Claims: I checked some of the proofs in detail, but not all.
They seem essentially correct. My main interest in examining the theoretical claims is in better understanding how the core novel concentration result, Theorem 3.1, compares directly with the existing results on both 1) minimum eigenvalue concentration without an intrinsic dimension term and 2) maximum eigenvalue concentration with an intrinsic dimension term. The claim seems to be correct, but I don't have a very clear sense of how it really compares to the prior work. In particular, there's a few nonstandard terms in here: 1. The definition of intrinsic dimension used in their result is a generalization of the standard definition. In particular, the intrinsic dimension of a matrix, as defined in the Tropp monograph, is the ratio of trace to spectral norm. In this paper, it's the ratio of the trace to some parameter $\alpha$ which is less than or equal to the spectral norm. It's not nearly as intuitive to think about what this generalized intrinsic dimension is or should be. 2. They don't directly tackle the minimum eigenvalue problem. Instead, they do something more general I suppose. They have a parameter $\alpha$. They show that for all vectors $\vec v$ such that $\vec v^\intercal E[X] \vec v \geq \alpha$, we have $\vec v^\intercal X \vec v \leq (1-\varepsilon)\alpha$ with high probability, where $X$ is our sum of independent PSD matrices. This concentration result only holds for values of $\alpha$ between $O(L / \varepsilon^2)$ and $O(\| E[X] \|_{op})$, where L is a uniform upper bound on the operator norms of each summand. Further, the failure probability scales with $\frac1{\varepsilon^2 \alpha}$. When $\alpha = \| E[X] \|_{op}$, this should recover the usual definition of intrinsic dimension, but in this case very few vectors $\vec v$ will be covered in this theorem statement. It's not made 100% clear how this result translates to a genuine minimum eigenvalue guarantee, and I think the paper would benefit from a bit more discussion of this. 3. 
I note that the authors do have some light discussion of this, but it feels insufficient to me. Adding an appendix to think about the shape of this concentration result for (e.g.) an IID sum of terms and meditating on when a minimum eigenvalue guarantee follows would be rather beneficial from my perspective. But the claims seem to be correct. The core result just needs more discussion imo. Experimental Designs Or Analyses: No experiments. Supplementary Material: I reviewed some of the lemmas around the concentration. Certainly Lemma 3.2 from my notes. Perhaps a couple others? Not a ton of it. Relation To Broader Scientific Literature: This concentration result seems like it might be a powerful short little result. This is why I harped on it in my review above. I don't fully appreciate how this result compares to the existing machinery, and I think the paper would benefit from having more of a discussion on this point. I'd like to better understand that concentration in order to better understand the potential broader applicability of such a result. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper has nice writing, especially in its proof sketches. It's pretty intuitive, and feels nicely self-contained as well. Other Comments Or Suggestions: List of typos and recommended edits. Feel free to ignore anything and everything here. 1. [Line 87, left] Shouldn't L be a PSD matrix? So shouldn't you have to subtract the all-ones matrix? 2. [Lines 95-101, right] While true, I'm not clear what's the point of the end of this paragraph. 3. [Lines 126-132, left] This "larger/smaller" language is awkward throughout. Say something like "If INC increases, then the number of quadratic forms x'Ax that are Omega(INC) decreases. More precisely, letting S = {...} be the set of directions (we refer to unit vectors as directions), we have that for any gamma ...." 4. [Lines 160-162, left] This break between paragraphs is awkward and repetitive. Rephrase. 5.
[Figure 1] Add more to the caption. Explain that $\alpha = \gamma INC$ maybe, and that increasing $\gamma$ makes the black boundary smaller. Also, make the colors print better in black and white 6. [Line 131, right] Define the intrinsic dimension you use directly with a clear definition. intdim(A,alpha) = tr(A)/alpha. 7. [Line 202, left] Specify page 61 for the thm statement, since it's not exactly the statement of thm 5.1.1. Same for theorem 2.4. 8. [Theorems 2.1, 2.2, 2.4, 2.5] I'd try to be more consistent between always writing these bounds in terms of t or 1-eps. Either is fine, but being more consistent would help. 9. [Line 210, right] This leftrightarrow symbol is kinda confusing here, because it sounds like you're more strongly asserting this iff claim. I would remove the leftrightarrow and replace it with parenthesis, so it instead says something like "(i.e. X > (1-eps) E[X])". 10. [Line 229, left] Remove "a" 11. [Line 230, left] Remove first "i \in [m]" 12. [Thm 3.1] Discuss when such an alpha exists. Because it doesn't always exist. 13. [Line 271, left] remove subscript on X_i in E[X_i]. 14. [Line 291, left] Subscript on L should be 1 not j. 15. Consider having a more concrete theorem statement for E-optimal design or algebraic connectivity augmentation problem. It's just a corollary. But it'd be good to write out. Questions For Authors: See theory discussion -- what's the right way to think of the underlying concentration argument? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for taking the time to review our manuscript! Below we include our responses for each relevant section. We focus most of our response on your main line of inquiry. ${\bf \text{Theoretical Claims/Other Strengths and Weaknesses:}}$ These are great points. In what follows, we aim to shed light on them and discuss how we can address them in the manuscript. Hopefully this ties everything together. Consider [Lines 144-156, right]. These lines state that the minimum eigenvalue is a special case of the main quantity of interest when $\alpha = \lambda_{\mathrm{min}}(\mathbb{E}[X])$, as you point out in your 2nd comment under Theoretical Claims. While this is the case, our results do not imply anything interesting when $\alpha = \lambda_{\mathrm{min}}(\mathbb{E}[X])$; this seems to be what you are asking about in your 2nd and 3rd comments. Our writing then seems to be unintentionally misleading. Our concentration inequality is only interesting when $\alpha > \lambda_{\mathrm{min}}(\mathbb{E}[X])$. Per this work, it is best to think of the main random quantity $\min_{x \in S(\mathbb{E}[X],\alpha)}x^{\top}Xx$ as being intimately related to the minimum eigenvalue $\lambda_{\mathrm{min}}(X)$, but not ever the minimum eigenvalue itself. We can add clarification here as well as a forward reference to later parts of the work for more detailed clarification (specifically, see the paragraph that follows the next two). Regarding your comment about intrinsic dimension, we can modify the discussion of Remark 2.3. Our notion of intrinsic dimension provides an upper bound on Tropp’s intrinsic dimension. The upper bound is convenient for lower bounding $x^{\top}Xx$ over $x \in S(\mathbb{E}[X],\alpha)$, rather than upper bounding $x^{\top}Xx$ over $x \in S^{n-1}$, but it also generalizes Tropp’s definition, so we refer to it as intrinsic dimension.
As you point out, it would be nice to understand how the novel concentration result compares with the minimum (and maximum) eigenvalue concentration result. In this spirit, it is natural to wonder why we cannot just apply the existing minimum eigenvalue concentration result? To position the reader’s thinking, we can explain after Theorem 2.4 why we cannot upper bound (6) with the matrix Chernoff inequality presented in Theorem 2.4. If there is a direction $x$ per which $x^{\top}\mathbb{E}[X]x = 0$, and hence $\lambda_{\mathrm{min}}(\mathbb{E}[X]) = 0$, then Theorem 2.4 provides us with a trivial guarantee. The existence of such a direction, however, should not impact upper bounding (6). One might hope that we can directly apply Theorem 2.4 to the set $S(E[X],\alpha)$, but the set is not a subspace (as seen in Figure 1), so we cannot do this. Although, this idea is exploited in the proof of Theorem 3.1; see Lemma 3.2. To address your point that the core result needs more discussion (and positioning with respect to the other concentration results), we can include the following discussion following Theorem 3.1. - Theorem 3.1 tells us that we can guarantee that $x^{\top}Xx$ is larger on a smaller set of directions $x \in S(\mathbb{E}[X],\alpha)$ per which the quadratic form $x^{\top}\mathbb{E}[X]x$ is larger. Accordingly, there is a tradeoff: For large values of $\alpha$, we can guarantee $x^{\top}Xx$ is larger, albeit in a smaller number of directions $x$. Ultimately, in the proof of Theorem 4.1, we choose $\alpha$ to optimize this tradeoff. - Consider the extreme case in which $\alpha = \lambda_{\mathrm{min}}(\mathbb{E}[X])$, which is still possible under the assumptions of Theorem 3.1. Using the fact that $\mathrm{tr}(\mathbb{E}[X]) \geq n \lambda_{\mathrm{min}}(\mathbb{E}[X])$, it then follows that Theorem 3.1 provides a weaker guarantee than Theorem 2.4. 
Following Theorem 4.1, we could discuss in slightly more detail how the proof follows from carefully choosing $\alpha$ based on the tradeoff inherent in Theorem 3.1 discussed above, i.e., taking $\alpha = \Omega(R \ln k)$. ${\bf \text{Other Comments Or Suggestions:}}$ We would like to include essentially all of your recommendations. We just make note of two spots in which we would like to keep the manuscript as is. - 1. Adding the matrix of all ones “pushes” the eigenvalue for the eigenvector of all ones to the top of the spectrum so that we can consider the minimum eigenvalue instead of the second smallest eigenvalue. - 9. At this point, we are hesitant to make this update so that we do not introduce any errors in the proofs that follow in later sections.
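The generalized intrinsic dimension discussed in this exchange (the reviewer's suggested definition: $\mathrm{intdim}(A, \alpha) = \mathrm{tr}(A)/\alpha$ with $0 < \alpha \le \|A\|_{\mathrm{op}}$) upper-bounds Tropp's intrinsic dimension $\mathrm{tr}(A)/\|A\|_{\mathrm{op}}$, since dividing the trace by a smaller $\alpha$ can only increase the ratio. A quick numeric illustration on diagonal PSD matrices, where trace and spectral norm are immediate (the diagonal values are illustrative):

```python
def intdim(diag, alpha):
    """Generalized intrinsic dimension tr(A)/alpha of a PSD diagonal matrix,
    valid for 0 < alpha <= ||A||_op = max(diag)."""
    assert 0 < alpha <= max(diag)
    return sum(diag) / alpha

diag = [4.0, 1.0, 1.0]           # tr(A) = 6, ||A||_op = 4
tropp = intdim(diag, max(diag))  # Tropp's definition: alpha = ||A||_op
print(tropp, intdim(diag, 2.0))  # the generalized version with alpha = 2 is larger
```

Taking $\alpha = \|A\|_{\mathrm{op}}$ recovers Tropp's definition exactly; any smaller admissible $\alpha$ yields a larger (weaker, but more flexible) dimension parameter.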
DiffAdvMAP: Flexible Diffusion-Based Framework for Generating Natural Unrestricted Adversarial Examples
Accept (poster)
Summary: In this paper, the authors propose a flexible framework named DiffAdvMAP to facilitate the effective and natural generation of unrestricted adversarial examples (UAEs). The framework is based on the posterior distribution with adversarial and reconstruction constraints, which is eventually optimized with a gradient method. Experiments validate the effectiveness, efficiency and naturalness of the proposed method. Claims And Evidence: Yes. The writing is clear. Methods And Evaluation Criteria: The two constraints adopted in the framework are well-motivated and straightforward to achieve high ASRs and flexibility under different settings. Evaluation on NIPS2017 with diverse metrics like FID, LPIPS in addition to ASR is acceptable. Theoretical Claims: The formulation and logic of the theoretical claims seem reasonable. One minor question is about the last line of Eq. (20). The equality holds when $p(y|\hat{x}_t)p(C_1) = p(C_1,y|\hat{x}_t)$, which is only correct when $C_1$ is independent of $y$ and $\hat{x}_t$. This is clearly wrong given the definition of $C_1$. Experimental Designs Or Analyses: The experiments are extensive with different settings and baselines. Ablations of important modules are conducted. Results are analyzed with sufficient visualizations. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper provides a unified and flexible theoretical framework for the generation of UAEs with diffusion models. Essential References Not Discussed: AdvDiffuser published in ICCV2023 is suggested to be included as a baseline. Other Strengths And Weaknesses: Strengths: * The paper formulates the generation of UAEs from a Bayesian perspective, which is interesting. * The method is well-motivated to combine two constraints into the framework. * The experiments are extensive with sufficient analyses.
Weaknesses: * The method eventually ends up with a loss function with three regularization terms, which is optimized simply with gradient methods. For the term corresponding to the adversarial constraint, it's different from previous methods only because of the utilization of the CW attack, rather than the original likelihood maximization. There should be more discussion about the practical difference between the proposed method and previous methods and the essential factors that make the method more effective. * The transfer ASRs in Sec. 4.2 are generally lower than those in Sec. 4.3. Is there any explanation? Other Comments Or Suggestions: * In line 36, a string of "0" is injected into the main text. * The resolution of Figure 1 should be higher. Questions For Authors: See comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your reviews and kind reminder. Following your comments, we add a theoretical explanation of the effectiveness of DiffAdvMAP and explain the reason for the difference in transfer ASRs between Sec. 4.2 and Sec. 4.3; the responses are as follows. We hope you will consider improving the score after reading them. 1. The essential factors that make the method more effective: As illustrated in Section 3, we introduce an adversarial constraint to ensure the effectiveness of UAEs and a reconstruction constraint to control the content of generated UAEs; these two constraints are considered as two events of the posterior distribution of UAEs. The posterior distribution is then approximated with the prior knowledge of real data learned by the diffusion model. Unlike previous methods, which only use diffusion models as strong denoisers to enhance the generation process, DiffAdvMAP generates adversarial features by sampling from the approximated posterior distribution, which makes the adversarial features more coherent with the original features, thus improving the naturalness of UAEs. As a result, we have more room to improve effectiveness at the cost of sacrificing naturalness. Besides, in order to derive the probability density function (PDF) of the posterior distribution, the events should be deterministic events that are expressed as equations. In this situation, the objective function of the CW attack can be converted into the form of an equation whose right side is the adversarial confidence level $c$; once $c \le 0$, the adversarial example can attack the classifier successfully. As for the likelihood function, there is no exact range indicating that the attack will succeed. As a result, we can control the strength of UAEs more effectively by controlling the value of the confidence level $c$, such that we can achieve a better trade-off between the effectiveness and naturalness of UAEs.
As is illustrated in Table 6 in Appendix E, the white-box attack success rate is nearly 100%, but the transfer ASR is not sufficient when the absolute value of the confidence level is small. When the absolute value of the confidence level increases, the transferability increases, and although the naturalness decreases, it is still better than other baselines. 2. The reason transfer ASRs in Sec. 4.2 differ from those in Sec. 4.3: On the one hand, experiments in Section 4.2 and Section 4.3 undertake different tasks with different models: experiments in Section 4.2 generate UAEs from noise with the latent diffusion model [1], and experiments in Section 4.3 generate image-similar UAEs with real images as reference. The input and the generated UAEs are completely different. On the other hand, since there is no real image for reference in the task of generating UAEs from noise, we employ a strong class guidance to ensure the naturalness of generated UAEs. It then becomes harder for DiffAdvMAP to counteract the strong class guidance to generate adversarial features, so the effectiveness of generated UAEs decreases when maintaining their naturalness, which is reflected by the transfer attack success rate. 3. Including important paper: Since the authors of AdvDiffuser don’t release their code (though a GitHub repository is mentioned in the paper, it is empty), and they conduct their experiments under the white-box setting, which is different from the setting in our main paper, we have compared DiffAdvMAP with AdvDiffuser in Appendix D under the white-box setting; the evaluation metrics are all from their paper, please refer to it for more details. It is clear that the efficiency, effectiveness, and naturalness of UAEs generated by our method are better than those of AdvDiffuser.
Typos and suggestions: We are sorry we made such typos; the denominator of equation (20) should be $p(C_1|x_t,y)$, but this has no impact on the subsequent derivation. We will correct the typos here and in line 36, and increase the resolution of Figure 1 in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for your thorough and detailed response. I will keep my rating for acceptance.
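The CW-style adversarial constraint discussed in point 1 of the rebuttal above hinges on a margin that can be driven below a confidence level. A minimal sketch of one common (untargeted) form of that margin, which is our reading of the constraint rather than necessarily the paper's exact formulation; the `logits` values are purely illustrative:

```python
def cw_margin(logits, y):
    """CW-style margin Z_y - max_{i != y} Z_i.
    A negative margin means the classifier no longer predicts class y."""
    best_other = max(z for i, z in enumerate(logits) if i != y)
    return logits[y] - best_other

def cw_loss(logits, y, confidence):
    """Hinge form max(margin, -confidence): minimizing it pushes the margin
    below -confidence, i.e., a misclassification with the desired margin."""
    return max(cw_margin(logits, y), -confidence)

logits = [2.0, 5.0, 1.0]                              # illustrative classifier logits
print(cw_margin(logits, y=1))                         # positive: still classified as y=1
print(cw_loss(logits, y=1, confidence=0.5))
print(cw_loss([2.0, 1.0, 5.0], y=1, confidence=0.5))  # negative margin, clipped at -0.5
```

Unlike a raw likelihood objective, this margin has an explicit threshold (the confidence level) below which the attack is guaranteed to flip the prediction, which is the controllability property the rebuttal emphasizes.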
Summary: This paper introduces Diffusion-based Adversarial Maximum a Posteriori (DiffAdvMAP), a framework that generates unrestricted UAEs by sampling from posterior distributions. It leverages a diffusion model to approximate the prior distribution of real data and incorporates adversarial and reconstruction constraints within a Bayesian framework to enhance both effectiveness and realism. Experimental results on ImageNet demonstrate that DiffAdvMAP achieves a superior trade-off between image quality, flexibility, and transferability compared to existing unrestricted adversarial attack methods. Claims And Evidence: The claimed technical contributions are overstated. The authors assert that their framework can generate UAEs under various attack conditions, including noise-based and global image-similar UAEs. However, as shown in Fig. 1, generating UAEs from noise simply involves using a pretrained diffusion model to produce a source image, then applying adversarial loss to an intermediate result—making it technically no different from global image-similar UAEs generation. Similarly, regional customized UAEs generation merely applies a mask and/or style loss to the source image, which is not a novel approach. Methods And Evaluation Criteria: In Section 4, the authors claim that the evaluation is conducted under a black-box setting, which does not align with the titles of the subsequent tables. Furthermore, the classification of UAE generation types (Sections 4.2 and 4.3) appears inconsistent with prior works, where AdvDiff and DiffAttack are typically considered the same type of method. Additionally, the authors fail to include AdvDiffuser [1], an important related work. [1]AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models Theoretical Claims: n/a Experimental Designs Or Analyses: n/a Supplementary Material: Yes. All of the supplementary material has been reviewed. 
Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are sorry that our writing may not have been clear enough, leading to some misunderstandings about the innovation of our paper. Our responses are as follows, and we hope you can improve the score after reading our responses and paper carefully. 1. DiffAdvMAP is a flexible framework that can be adapted to different attacking scenarios by sampling UAEs from their approximated posterior distribution under the adversarial constraint and reconstruction constraint. We make specific adaptations to each attacking scenario.\ (1) Technical $\textbf{Difference}$ between generating UAEs from noise and image-similar UAEs: When generating UAEs from noise, we generate most low-level semantics via the generation process and then generate high-level adversarial features; when generating image-similar UAEs, we destroy the high-level features of the input clean image with several diffusion steps and regenerate high-level adversarial features, as shown in Alg. 1 in Appendix C. Besides, we leverage different diffusion models for these two tasks: the latent diffusion model and an unconditional diffusion model. We also build different objective functions for each task: we introduce the reconstruction constraint based on the reference image in the task of generating image-similar UAEs, while in the task of generating UAEs from noise, we replace the reconstruction constraint with the class condition y, as introduced in Section 3 and Appendix B. The experimental results in Section 4 also show that DiffAdvMAP can generate UAEs from noise and generate image-similar UAEs. \ (2) DiffAdvMAP achieves $\textbf{innovations}$ in generating regional and customized unrestricted adversarial examples (UAEs): For regional UAEs, it abandons the traditional additive perturbation-based approach applied to masked areas.
Instead, it destroys mask-covered regions and reconstructs adversarial features through posterior distribution sampling under dual constraints (adversarial and reconstruction), enabling generative reconstruction of adversarial patterns. For customized UAEs, it embeds specific reconstruction constraints (e.g., style/color consistency) into the posterior probability density function, replacing explicit style/color loss functions. This directly guides probability distribution shaping via reference images to ensure precise semantic alignment. 2. Table title is $\textbf{aligned}$ to section content: In Section 4, we conduct experiments under the transfer-based black-box setting. The title of Table 2, 'The white-box attack success rate (%), transfer attack success rate (%), image quality metrics, as well as the run time (sec) of DiffAdvMAP and baseline methods in the task of generating global image-similar UAEs', means that both the white-box attack success rate (ASR) of attacking known white-box surrogate models and the transfer ASR of attacking black-box unknown classifiers are included in the table. When performing a transfer-based black-box attack, UAEs are generated with a known surrogate model, e.g., Inception V3; these UAEs are then used to attack other unknown classifiers, e.g., Mobilenet V2, Resnet50, and Swin-Transformer, to obtain the transfer attack success rate [1]. As a result, the evaluation is conducted under a black-box setting, and the content and titles of the tables in Section 4 are consistent with our claims.\ [1] Chen Y, Liu W. A theory of transfer-based black-box attacks: Explanation and implications[J]. Advances in Neural Information Processing Systems, 2023, 36: 13887-13907. 3.
AdvDiff and DiffAttack are $\textbf{different}$: According to the papers and official code of AdvDiff and DiffAttack, AdvDiff leverages the latent diffusion model to generate UAEs from noise; its inputs are random Gaussian noise images, and it designs two adversarial guidance techniques to conduct adversarial sampling in the reverse generation process of diffusion models. DiffAttack leverages the stable diffusion model to generate image-similar UAEs; its inputs are clean images, and it computes the cross-attention map and self-attention map of each latent code as the objective function and optimizes this objective during generation. As a result, although AdvDiff and DiffAttack are both diffusion-based methods, their principles and corresponding tasks are different, so we compare the performance of our method against them on different tasks in different sections. 4. Include important paper: Since the authors of AdvDiffuser didn't release their code (although a GitHub repository is linked in the paper, the repository is empty), and their experiments are conducted under the white-box setting, which differs from the setting in our main paper, we have compared DiffAdvMAP with AdvDiffuser in Appendix D under the white-box setting. The evaluation metrics are all taken from their paper; please refer to it for more details. The efficiency, effectiveness, and naturalness of UAEs generated by our method are clearly better than those of AdvDiffuser. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The rebuttal addressed some of my concerns; however, my main concern regarding the necessity of separating the generation of UAEs from noise-based and image-similar cases remains unaddressed. The only distinction appears to be how the 'low-level semantics' images are obtained.
Since the proposed DiffAdvMAP block is applied after generating these images, I don't see a strong justification for treating these conditions separately or presenting this distinction as a key contribution. Notably, prior work [1] has also compared AdvDiff and DiffAttack within the same evaluation table. Given this, I will raise my score to a weak reject. [1] AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your careful consideration of our rebuttal and thoughtful evaluation adjustment. To facilitate a deeper understanding of this research, we have prepared a detailed explanation that addresses each point of concern. We would be pleased to provide additional clarification if any questions remain unresolved. The workflows of existing diffusion-based unrestricted adversarial attack methods are similar, apart from differences in the details of each method [1][2][3]. Since existing methods fail to generate unrestricted adversarial examples (UAEs) within the same framework under various scenarios, we are motivated to develop a unified framework with such a workflow, achieving UAE generation under various attacking conditions by tuning some modules or hyperparameters of the framework. Although the flowchart in Figure 1 appears to show no significant difference after obtaining the low-level semantics, there are distinctions in the implementation of the two scenarios. For example, in the task of generating image-similar UAEs, we derive the posterior distribution under the adversarial constraint and reconstruction constraint, while in the task of generating UAEs from noise, since there is no real image for reference, we derive the posterior distribution under the adversarial constraint and class condition y; hence the subsequent processes of the two scenarios, such as probability density function (PDF) inference and PDF maximization, are also different.
Besides, the diffusion models applied in these two scenarios are also different: the conditional latent diffusion model (LDM) for generating UAEs from noise and an unconditional diffusion model for generating image-similar UAEs. Regarding the experiments in AdvDiff, as stated in the last sentence of Section 4.1 of their paper: 'Note that we adopt the clean images generated by LDM to achieve DiffAttack and AutoAttack for a fair comparison'. This means the authors use noise images as inputs of the LDM to synthesize images, then use the synthesized images as clean-image inputs of DiffAttack and AutoAttack to generate UAEs, such that the main contents of the UAEs generated by each method are the same. In other words, they employ the LDM before performing DiffAttack and AutoAttack, so that the input of their implementation of DiffAttack and AutoAttack is still noise, and the UAEs generated with each method maintain the same main contents. With this adaptation, the methods compared within the same evaluation table are all noise-based. However, the attacking conditions of AdvDiff and DiffAttack are fundamentally different. In our method, the low-level semantics in generating UAEs from noise are randomly synthesized by the diffusion model, while in generating image-similar UAEs, the low-level semantics are determined by real natural images. As a result, the content of UAEs generated under the two scenarios is completely different, so we cannot compare their naturalness and effectiveness in the same experiment. Besides, we reimplemented the source code of AdvDiff and DiffAttack without any adaptation, so we compare them in different tables under different scenarios. Moreover, AdvDiffuser also treats these two scenarios separately. In their paper, Section 4.2 compares the UAEs generated from noise by AdvDiffuser and AC-GAN in Table 1, and Section 4.3 compares image-similar UAEs from AdvDiffuser and GA-PGD in Tables 2 and 3.
If our explanation has helped resolve your concerns, we would greatly appreciate it if you might consider revising your rating to acceptance. [1] AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models.\ [2] Advdiffuser: Natural adversarial example synthesis with diffusion models.\ [3] Diffusion Models for Imperceptible and Transferable Adversarial Attack.
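The transfer-based black-box protocol described in point 2 of the rebuttal above (craft UAEs against a known surrogate, then measure how often the same UAEs fool unseen classifiers) can be sketched as follows. The classifiers and data here are toy stand-in functions, not the paper's actual models:

```python
# Sketch of transfer-based black-box evaluation: white-box ASR is measured on
# the surrogate used to craft the UAEs; transfer ASR is measured on unseen models.

def attack_success_rate(classifier, adversarial_examples, true_labels):
    """Fraction of adversarial examples the classifier labels incorrectly."""
    fooled = sum(
        1 for x, y in zip(adversarial_examples, true_labels) if classifier(x) != y
    )
    return fooled / len(true_labels)

# Hypothetical stand-ins: a "surrogate" that is always fooled on this toy data
# and a "target" classifier that is fooled only half the time.
surrogate = lambda x: (x + 1) % 2
target = lambda x: x if x < 2 else (x + 1) % 2

uaes = [0, 1, 2, 3]    # placeholder "adversarial examples"
labels = [0, 1, 2, 3]  # ground-truth labels

white_box_asr = attack_success_rate(surrogate, uaes, labels)  # 1.0
transfer_asr = attack_success_rate(target, uaes, labels)      # 0.5
```

The gap between `white_box_asr` and `transfer_asr` is the ASR decrease on unknown classifiers that the rebuttal attributes to the transfer setting itself.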
Summary: This paper proposes DiffAdvMAP, a Bayesian-based framework for generating unrestricted adversarial examples (UAEs) by approximating their posterior distribution. Unlike existing diffusion-based methods, which struggle with low naturalness and limited effectiveness, DiffAdvMAP leverages adversarial and reconstruction constraints to enhance both attack success and adaptability. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: (1) In the Bayesian framework, given the reference image, true label, diffusion model, and target classifier, a mathematical formulation for UAE generation is constructed. The adversarial constraint is derived from the CW attack, and the reconstruction constraint is introduced based on different generation scenarios. The posterior distribution of UAEs is obtained using Bayesian principles, considering the properties of the diffusion model. (2) For image-similar UAEs, the posterior distribution's PDF is derived, leading to the objective function through approximation. To balance computational complexity, a one-step estimation method is used, approximating the conditional distribution accordingly. The roles of key parameters, such as standard deviation adjustments, are clarified to ensure the correctness and practicality of the objective function. Experimental Designs Or Analyses: (1) Multiple evaluation metrics are employed: FID and LPIPS for global image-similar UAEs, and FID, TRES, and HyperIQA for noise-based UAEs. (2) Appropriate baselines are selected for different attack scenarios: AdvDiff for noise-based UAEs and classic unrestricted and diffusion-based attacks for global image-similar UAEs. (3) Ablation studies analyze the impact of reconstruction constraints, adversarial constraints, destruction-reconstruction modules, and variations in adversarial confidence levels. Supplementary Material: Yes, I review all the supplementary material.
Relation To Broader Scientific Literature: Diffusion models excel in image synthesis and time series prediction, but their application in adversarial sample generation often relies on them as priors without fully utilizing their ability to learn real data distributions. DiffAdvMAP integrates adversarial and reconstruction constraints to better leverage learned priors, improving naturalness, attack success, and flexibility, expanding diffusion models’ role in adversarial attack research. Essential References Not Discussed: No. Other Strengths And Weaknesses: (1) DiffAdvMAP is primarily evaluated on ImageNet-compatible datasets, but its adaptability to specialized domains like medical imaging and remote sensing remains unexplored. (2) While the method improves generation speed, it still relies on complex diffusion models and multiple iterations, leading to high computational costs. This may hinder real-time attacks or large-scale data processing. Other Comments Or Suggestions: (1) Expand experiments by incorporating diverse datasets and industrial inspection images, to evaluate DiffAdvMAP’s adaptability. Investigate framework adjustments for specialized data and explore adversarial sample generation in multimodal scenarios. (2) Optimize efficiency by improving diffusion model structures, refining sampling strategies, leveraging hardware acceleration, and exploring model compression and quantization to reduce resource consumption. Questions For Authors: (1) Expand experiments by incorporating diverse datasets and industrial inspection images, to evaluate DiffAdvMAP’s adaptability. Investigate framework adjustments for specialized data and explore adversarial sample generation in multimodal scenarios. (2) Optimize efficiency by improving diffusion model structures, refining sampling strategies, leveraging hardware acceleration, and exploring model compression and quantization to reduce resource consumption. Ethical Review Concerns: No. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
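As an aside on the CW-derived adversarial constraint noted in the theoretical claims above, the standard Carlini-Wagner margin loss can be sketched as follows. The logit layout and the confidence parameter `kappa` are generic illustrations, not the paper's exact formulation:

```python
def cw_margin_loss(logits, true_label, kappa=0.0):
    """Carlini-Wagner style margin: push the best wrong class above the true class.

    The loss reaches zero once the highest non-true logit exceeds the true logit
    by kappa, so a larger kappa demands higher-confidence misclassification.
    """
    true_logit = logits[true_label]
    best_other = max(v for i, v in enumerate(logits) if i != true_label)
    return max(true_logit - best_other + kappa, 0.0)

# Example: true class 0 has logit 2.0; the best wrong class has logit 1.5.
loss = cw_margin_loss([2.0, 1.5, 0.3], true_label=0, kappa=0.5)  # 2.0 - 1.5 + 0.5 = 1.0
```

Tuning `kappa` corresponds to the confidence-level ablation the reviewer mentions: larger values keep the loss active longer, trading naturalness for transferability.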
Rebuttal 1: Rebuttal: Thank you for the reviews and kind suggestions. Following your comments, we added experiments on another dataset, Celeba-HQ, and the experimental results are satisfactory. We hope you will consider raising the score after reading our response.\ 1. Dataset Diversity: We use the ImageNet-compatible dataset to evaluate DiffAdvMAP because it contains the most important objects in the real world, and it is widely used as a single dataset in previous works [1][2]. We also evaluate DiffAdvMAP with another subset of ImageNet in Appendix D. DiffAdvMAP is developed based on pre-trained diffusion models, and since it is hard to obtain open-sourced diffusion models in the areas of medical imaging and remote sensing, we supplement experiments on the Celeba-HQ dataset by attacking a face gender classification model. The experimental details are as follows:\ We trained a Resnet18 and a Mobilenet V2 as gender classifiers, and used the Resnet18 as the known surrogate classifier to generate 1000 UAEs, of which 500 are originally female faces and 500 are male faces. For the evaluation, we report the clean accuracy of each model, the white-box attack success rate (ASR) against Resnet18, and the transfer ASR against Mobilenet V2 in the table below.

| Models | Resnet18 | Mobilenet V2 |
| --- | --- | --- |
| Clean Accuracy | 98.33% | 98.50% |
| ASR | 100% | 92.3% |

Note that the generation speed is still 6 seconds/image. We think this experiment demonstrates, to some extent, the adaptability of DiffAdvMAP to other datasets.\ [1] Dong Y, Pang T, Su H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 4312-4321.\ [2] Gao L, Zhang Q, Song J, et al. Patch-wise attack for fooling deep neural network[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII 16. Springer International Publishing, 2020: 307-322. 2.
Optimize efficiency: As introduced in Section 1, diffusion models have superior performance in image synthesis. Our DiffAdvMAP method fully utilizes the prior knowledge of real data distributions learned by diffusion models and manages to generate more effective and natural UAEs than previous methods. We already refined the sampling strategy by leveraging a truncated sampling process. We also propose targeted optimizations to address the inherent generation-speed constraints of other diffusion-based methods, keeping efficiency comparable to most attack methods, as shown in Table 2. Note that the GPU we used to generate UAEs is an RTX 3090 24G; generation speed can be further improved with more advanced hardware. Using a lighter diffusion model to accelerate generation is also included in our future research considerations. 3. Multimodal scenarios: In this paper, we focus on generating general UAEs in single-modal cases. Generating UAEs under multimodal scenarios is an interesting area; we will conduct research in this direction in future work.
Summary: This paper introduces DiffAdvMAP, a flexible diffusion-based framework for generating unrestricted adversarial examples (UAEs) under various attack conditions. By approximating the posterior distribution of UAEs using pre-trained diffusion models, DiffAdvMAP generates more natural UAEs compared to existing diffusion-based attack methods. The framework incorporates an adversarial constraint to ensure attack effectiveness and a reconstruction constraint to enhance adaptability across different attack scenarios. Experimental results show that DiffAdvMAP outperforms baselines in white-box success rate, transferability, and defense robustness, demonstrating its superior flexibility and effectiveness. Claims And Evidence: Yes, the claims on the flexibility, naturalness and effectiveness of DiffAdvMAP to generate UAEs are supported by generally clear and convincing evidence. The claim of the flexibility of DiffAdvMAP is supported by the design of this method in Constraint 2, i.e., C2 in Equation (2). It allows for different scenarios in UAE generation, including style generation and color generation. The qualitative evaluations in Experiments also indicate the flexibility can be empirically achieved. The authors also claimed naturalness and effectiveness of DiffAdvMAP. Intuitively and mathematically, Equations (7), (8), (9) and (14) generally demonstrate the control on realism. Equation (4) controls the effectiveness of DiffAdvMAP to attack classifiers. Results in Tables 1 and 2 demonstrate the naturalness and effectiveness of generated UAEs empirically. However, for the efficiency claimed for DiffAdvMAP, the experimental results in Tables 1 and 2 seem not to strongly support it. Compared to the past methods, the time cost of DiffAdvMAP is not the SOTA. Methods And Evaluation Criteria: The proposed DiffAdvMAP is consistent with and able to resolve the motivations via the maximum a posteriori.
In order to achieve the realism, it is natural to sample adversarial examples directly from an approximated posterior distribution and use the prior knowledge of real data distributions learned by diffusion models, instead of imposing strong prior as diffusion models. In terms of the flexibility, the constraint in Equation (6) seems to be sound where $\Omega(\cdot)$ allows the change of styles, colors and other scenarios to amend $x$ and generate desired UAEs. ImageNet is a commonly applied dataset to investigate the performance of adversarial example generation. Success rates for attacks can evaluate the basic effectiveness of attackers. FID-score, TRES and HYPERIQA are also proper evaluation criteria, because computing LPIPS score needs reference images and DiffAdvMAP should be studied also for the cases without reference images. Theoretical Claims: This work does not have theoretical claims. Experimental Designs Or Analyses: As aforementioned, the experimental designs are in general sound. The dataset is ImageNet which is ubiquitous to investigate the performance of adversarial example generation. Success rates are necessary for the basic effectiveness of attackers. FID-score, TRES and HYPERIQA are also proper evaluation criteria, because computing LPIPS score needs reference images. Qualitative results are also demonstrated properly. Supplementary Material: This work does not have supplementary materials. Relation To Broader Scientific Literature: This work enhances adversarial attackers generation to generate more natural and flexible UAEs. It extends diffusion-based attacks beyond standard settings by allowing UAEs to be generated under various attack conditions (e.g., from noise, region-based modifications, color changes). DiffAdvMAP moves from latent-space perturbation (Chen et al., 2023) to posterior-based sampling, leading to more realistic adversarial images. 
It is also a more flexible adversarial attack framework than previous diffusion-based approaches. Essential References Not Discussed: For the defense methods against adversarial attacks, a recent and important paper, CLIPure [1], should be mentioned and discussed, because it exploits the semantic information in adversarial examples and achieves SOTA performance, on top of DiffPure. [1] MingKun Zhang, Keping Bi, Wei Chen, Jiafeng Guo, and Xueqi Cheng, CLIPure: Purification in Latent Space via CLIP for Adversarially Robust Zero-Shot Classification, ICLR 2025. Other Strengths And Weaknesses: **Strengths**: - *Originality*: This paper proposes an original approach to adversarial attack generation by integrating diffusion models with Bayesian inference. Unlike conventional adversarial attacks that rely on fixed perturbations, DiffAdvMAP samples adversarial examples from a learned posterior distribution, leading to more natural unrestricted adversarial examples (UAEs). The flexibility and realism of the generation are also well addressed. - *Clarity*: The paper is well-organized and clearly written, making it accessible to readers familiar with adversarial attacks and diffusion models. The motivation, methodology, and experiments are explained in a logical flow. **Weakness**: - *Quality*: The paper does not provide a thorough analysis of the computational cost of DiffAdvMAP compared to traditional adversarial attack methods. The time cost of DiffAdvMAP is also not the smallest among the compared methods in the experiments. The biggest limitation of DiffAdvMAP is its decreased effectiveness on unseen classifiers in black-box attack scenarios, although DiffAdvMAP relies on pretrained diffusion models. It may indicate that DiffAdvMAP overfits to the structure of known classifiers, making it less effective in real-world settings where models are often unknown.
Other Comments Or Suggestions: List of typos: - Page 1 Introduction Line 35-37 "Such approaches do not require perturbing real images wi0000000000000000000000th restricted perturbations," -> "Such approaches do not require perturbing real images with restricted perturbations," Questions For Authors: 1. Would the authors provide more insights either theoretically or empirically on why DiffAdvMAP is more effective and efficient compared with past methods? 2. For stronger purifiers such as CLIPure, how effective will the proposed DiffAdvMAP be to attack them? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the reviews and kind reminder. We added the theoretical explanation and the experiments on CLIPure as follows. We hope you will consider raising the score after reading the response. 1. Efficiency: The main motivation of the paper is to develop a framework that achieves superior naturalness and adversarial effectiveness while maintaining practical generation efficiency. Due to the inherent generation-speed constraints of diffusion-based methods, the efficiency of DiffAdvMAP is not the best among the baseline methods. Still, DiffAdvMAP surpasses most of the baselines in attack efficiency, achieves optimal naturalness, and matches ReColorAdv in generation speed. It achieves the best trade-off between naturalness, efficiency, and effectiveness. \ The theoretical analysis of efficiency:\ (1) Generating image-similar UAEs: As introduced in Section 2, compared to other diffusion-based methods (DiffPGD, AdvDiffuser, DiffAttack): standard diffusion models require multiple U-Net queries for each image generation, causing inherent slowness. DiffAdvMAP accelerates this by generating UAEs via a truncated generation process (20 U-Net queries), achieving 30% faster speeds than DiffPGD (30 queries) and a 10× acceleration over AdvDiffuser (400 queries). Even with equivalent query counts to DiffAttack, our method's 1-2 MAP optimizations per step enable 4× faster processing than DiffAttack's attention map computations and optimizations. Compared to non-diffusion-based methods: while GANs have fewer parameters than U-Nets, cAdv requires hundreds of network queries, making it slower than some diffusion approaches. Iterative methods like NCF and ReColorAdv bypass network queries entirely; their speed depends on the objective function complexity and iteration count. \ (2) Generating UAEs from noise: For the task of generating UAEs from noise, compared to AdvDiff, both methods utilize the same generation process of the latent diffusion model to generate UAEs.
Since there is no real image for reference, we employ strong class guidance to ensure the naturalness of the UAEs, so DiffAdvMAP must perform more MAP optimization steps to counteract the strong class guidance when generating adversarial features, ensuring sufficient transferability of the UAEs. From Table 1, we can see that DiffAdvMAP has better transferability and naturalness than AdvDiff, requiring only 10% more time. 2. Effectiveness: As illustrated in Sec. 3, we introduce an adversarial constraint to ensure effectiveness and a reconstruction constraint to control the content of generated UAEs. These two constraints are treated as two events of the posterior distribution of UAEs. The posterior distribution is then approximated with the prior knowledge of real data learned by the diffusion model. Unlike previous methods, which only use diffusion models as strong denoisers to enhance the generation process, DiffAdvMAP generates adversarial features by sampling from the approximated posterior distribution, which makes the adversarial features more coherent with the original features, thus improving the naturalness of UAEs. As a result, DiffAdvMAP has more room to improve effectiveness at the cost of sacrificing naturalness. As illustrated in Tab. 6 in Appendix E, the white-box attack success rate is nearly 100%, but the transfer ASR is not sufficient when the absolute value of the confidence level is small. When the absolute value of the confidence level increases, the transferability increases, and although the naturalness decreases, it is still better than the other baselines.\ (1) We don't think DiffAdvMAP overfits to the structure of known classifiers. \ When performing a transfer-based black-box attack, UAEs are generated with a known surrogate model, e.g., Inception V3; these UAEs are then used to attack other unknown classifiers, e.g.,
Mobilenet V2, Resnet50, and Swin-Transformer, to obtain the transfer attack success rate; the principle of transfer attacks results in a decrease of the attack success rate (ASR). As shown in Table 2, all of the existing black-box attack methods face an ASR decrease when attacking unknown classifiers. \ (2) We complement the experiments on a stronger purifier, CLIPure, with DiffAdvMAP and DiffPGD. The ASRs are as follows:

| Methods / Surrogate Models | Mobilenet V2 | Swin-Transformer | Inception V3 | Resnet50 |
| --- | --- | --- | --- | --- |
| DiffAdvMAP | 48.3 | 57.6 | 39.9 | 49.2 |
| DiffPGD | 31.7 | 52.4 | 24.6 | 33.8 |

CLIPure is a zero-shot classifier that classifies images by matching an image with text prompts. It also leverages the CLIP model to purify adversarial examples in the latent space. The ASRs demonstrate that DiffAdvMAP can change the semantic information of images coherently, so that it can resist semantic purification in the latent space and mislead the text-to-image semantic-based classifier into making wrong predictions in a large proportion of cases. 3. Typos: Thank you very much for the kind reminder. We will correct it in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanation and added experimental results. All my questions and concerns are generally addressed. I will keep my score.
Of Mice and Machines: A Comparison of Learning Between Real World Mice and RL Agents
Accept (poster)
Summary: This paper conducted and analyzed a "predator and prey" experiment with real mice and a robotic agent to study the behavioral patterns of the biological self-preservation instinct, and designed and evaluated a simulation of reinforcement learning agents facing the same "predator" robotic agent. The paper implemented negative memory amplification and variance-penalized temporal difference learning to introduce risk avoidance into reinforcement learning and decision making. The simulation results showed the two mechanisms made the reinforcement learning agents share visitation patterns more similar to the mice's, such as radial exploration and wall-following. Claims And Evidence: Yes Methods And Evaluation Criteria: 1. Line 110 - 113, what is the meaning of "receives a reward of +1 upon reaching the goal", moving to the next cells safely? What is the meaning of "puffed by the simulated predator-like robotic threat"? It is fair to set 1 for both reward and penalty in the simulation to explain the difference between mice and RL agents, but I am wondering if increasing the penalty would make RL agents more cautious? 2. Line 172 - 179, what is the definition of success rate? Is it the same metric as "survival rate" first appearing in 2. Experiment Setup? What is the motivation for making RL agents move more like mice if the success rate is almost the same while the RL agents even traveled less distance? 3. Line 321 - 329, how is the Q-value variance formulated? Is it empirically computed or parametrically formulated? 4. Figure 6 top row and Figure 12 top row, what is the meaning of "distance changes" and the x-axis? What is the reason for the change to be linear? 5. Line 603, what are the evaluation criteria or quantification of "risk-awareness" for RL agents? It is ambiguous to claim that RL agents' behavior should resemble the mice's. Besides, how does risk-awareness further benefit the RL agents if the survival rate is not the primary focus?
Theoretical Claims: Not applicable Experimental Designs Or Analyses: Please refer to the "Methods and Evaluation Criteria" section Supplementary Material: Reviewed all supplementary materials Relation To Broader Scientific Literature: This paper proposed two mechanisms in the field of reinforcement learning to introduce a risk-awareness instinct into the model. The first is based on the Post Traumatic Stress Buffer (PTSB) (Al Abed et al., 2020) and the second is an extension of Surprise Minimization in Reinforcement Learning (SMiRL) (Berseth et al., 2019). In 6. What about LLM agents, the paper discussed the performance of large language models under the same simulation setup. I am not convinced of the necessity of the discussion about LLMs, which seems irrelevant to the previous parts about reinforcement learning. Essential References Not Discussed: None to my knowledge Other Strengths And Weaknesses: This paper provided insightful work including both biological experiments and first-hand "predator-prey" data to illustrate risk-awareness and self-preservation, and provided feasible modifications to previous methods to make reinforcement learning models capture patterns similar to the mice's. I have a few major concerns. The first is about the motivation for including risk-awareness in RL models, especially since the success rates of the mice and the simulation are similar (and the setup of the RL model in this paper is arguably designed to be "reckless", since the bonus and penalty are of the same scale). Second, "risk-awareness" is ambiguous and subjective, and it lacks quantification. The authors also need to provide more evidence to claim the benefits of incorporating it into RL models. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and valuable feedback! We would like to address several key points: **Q: Motivation for Risk-Awareness in RL** A: Our paper aims not to improve RL performance but to understand differences between biological and artificial learning. This basic research is crucial for advancing both fields. Modern RL has diverged significantly from biological principles, and understanding these differences has scientific value regardless of performance metrics. Many AI breakthroughs (CNNs from the visual cortex, attention mechanisms from human vision, and hierarchical RL from human planning) originated from biological insights. We aim to understand how and why biological risk assessment strategies differ from current RL approaches, which serves important scientific goals beyond conventional performance evaluation. **Q: How to Quantify Risk Awareness?** A: We have provided multiple objective indicators in the paper, including waiting behavior (mice pausing to assess risks), diverse path selection and wall-following behavior (shown in visitation density comparisons), and more conservative movement patterns (changes in average distance before/after predator detection). To further quantify these behaviors, we conducted additional quantitative analyses: - Waiting behavior: Defined as movement with distance change <0.1 units within the first six steps, our analysis of 1,000 trajectories shows standard TD-MPC exhibits 0% waiting behavior, while mice demonstrate 27.7% and our improved method shows 32.4%. - Episode length: Standard RL agents complete tasks with an average length of 7.09 steps, while our improved method shows 13.89 steps and the mouse shows 14.04 steps. These quantitative measures demonstrate that our proposed mechanisms enable RL agents to exhibit risk perception abilities more similar to mice. We will include these results in the revised version. 
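For concreteness, one plausible reading of the waiting-behavior criterion above (the agent's position changes by less than 0.1 units over the first six steps) could be computed as follows. The trajectory format and the function name are assumptions for illustration, not the authors' actual analysis code:

```python
import numpy as np

def waiting_fraction(trajectories, thresh=0.1, window=6):
    """Fraction of trajectories exhibiting 'waiting' behavior, read here as:
    the agent stays within `thresh` units of its start position for the
    first `window` steps. Each trajectory is a (T, 2) array of x/y
    positions (assumed format; the paper's exact criterion may differ)."""
    waits = 0
    for traj in trajectories:
        start = traj[0]
        early = traj[1:window + 1]
        # Waiting: every early position remains close to the start point
        if np.all(np.linalg.norm(early - start, axis=1) < thresh):
            waits += 1
    return waits / len(trajectories)
```

Applied to the 1,000 recorded trajectories per agent type, such a function would yield the percentages reported above (0% for standard TD-MPC, 27.7% for mice, 32.4% for the improved method).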
**Q: Could you clarify the meanings of 'receives +1 reward at the goal' and 'puffed by the predator,' and would increasing the penalty (beyond ±1) make RL agents more cautious compared to mice?** A: In our setup, the hexagonal environment has a starting point at one vertex and a goal point at another. The agent (prey) receives a reward of +1 only upon reaching the goal point, which mirrors the real mouse experiment where mice receive water as a reward at the goal. The agent receives a penalty of -1 when "puffed" by the predator, which occurs when their distance is below 0.1 units (27.5 cm in the real environment). This corresponds to the air puff stimulus used in the physical mouse experiments. The initial equal magnitude (+1/-1) for rewards and penalties was chosen to reflect that both the water reward and the air puff are relatively mild stimuli. We did experiment with larger penalty values (up to 200x). However, even with increased penalties, the RL agents still exhibited fundamental differences from mice in that they would not display waiting behavior at the start point. This behavioral difference highlights the need for additional mechanisms beyond simple reward scaling. **Q: How to formulate the Q-value variance?** A: We would like to highlight that this is explicitly detailed in our paper. The Q-value variance is empirically computed by uniformly sampling actions from the action space and calculating the variance of their Q-values (as stated on the right side of lines 279-284). **Q: Figure 6 top row and Figure 12 top row, what is the meaning of "distance changes" and the x-axis? What is the reason for the change to be linear?** A: The plots show how the goal-agent distance changes before and after predator detection. (This information is in the caption, though perhaps in too small a font.) The linear trend emerges naturally because we average over 1,000 episodes; averaging over this many episodes makes both the mice's and the RL agents' trends appear linear. 
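The empirical Q-variance computation described above (uniformly sample actions, take the variance of their Q-values) can be sketched in a few lines. The function signature and the box-shaped action space are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def q_value_variance(q_fn, state, action_low, action_high,
                     n_samples=64, seed=None):
    """Empirical Q-variance at `state`: uniformly sample actions from a
    box action space [low, high] and return the variance of their
    Q-values. Names and signature are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    low = np.atleast_1d(np.asarray(action_low, dtype=float))
    high = np.atleast_1d(np.asarray(action_high, dtype=float))
    actions = rng.uniform(low, high, size=(n_samples, low.size))
    qs = np.array([q_fn(state, a) for a in actions])
    return float(qs.var())
```

A state where all sampled actions look equally good yields zero variance, while a state where some actions lead toward the predator and others away yields high variance, which the penalty term then discourages.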
**Q: Why Discuss LLMs?** A: We included LLMs to compare how different agents behave in similar environments, tying back to the theme of "machines and mice." The results suggest risk-awareness is not just an RL issue but a broader challenge in AI cognition. The LLM experiments are supplementary, not a core focus. **Q: Some other clarifications** A: The survival rate is the same as the success rate. In the end, we greatly appreciate the reviewer's insights and look forward to further discussions! --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications! I would like to raise my score to "weak accept". --- Reply to Comment 1.1.1: Comment: Thank you so much for your supportive comment. At your convenience, would you mind updating the score in the system as well? We truly appreciate your time and support!
Summary: This paper compares the learned behaviors of biological mice and simulated RL agents in a predator-avoidance maze environment. The authors highlight the disparities in behavior and propose two mechanisms to bridge the gap between a TD-MPC2-trained simulated agent and biological mice. Finally, they compare the performance of an LLM agent and show its similarity to an RL agent. Claims And Evidence: I'm mostly fine with the claims, but they are generalized across RL agents when, in fact, they may be an artifact of specific task specifications in the simulation and the choice of algorithms (discussed in the upcoming sections). That said, since the authors focus solely on comparing behaviors and do not claim biological plausibility, I suggest only minor rewrites. Methods And Evaluation Criteria: - In Section 3, the authors describe Deep Q Network (DQN) and Soft Actor-Critic (SAC), along with other behavior cloning and offline RL methods in Appendix C. However, all the plots in the main paper only use TD-MPC2. Why is this the case? If the results from the other methods are not used, perhaps the unnecessary text could be removed? - The ideas of Post Traumatic Stress Buffer (PTSB) and Selective Memory Sampling are reminiscent of Prioritized Experience Replay (Schaul et al., 2015). Did the authors consider this approach? Would it be accurate to say that the proposed approach is one instantiation of prioritized experience replay? - TD-MPC2 uses Model-Predictive Path Integral (MPPI) to sample actions. In simpler terms, it generates multiple rollouts of planning horizon $H$ and picks the action sequence that yields the highest predicted return, typically using a weighted averaging scheme based on the rollouts' performance. The default implementation of TD-MPC2 only considers a horizon length of 3 or 5, which is very small. What is the choice of planning horizon $H$ in the experiments? This could potentially impact your results. 
- The proposed Variance-Penalized Temporal Difference (TD) Learning aims to reduce risky behaviors by avoiding states with high uncertainty. However, it is unclear whether risk and uncertainty are inherently the same. For example, early in learning, Q-values may appear uncertain due to limited exploration, but this does not necessarily imply that those states are inherently risky. **References** - Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2015). Prioritized experience replay. arXiv preprint arXiv:1511.05952. Theoretical Claims: N/A Experimental Designs Or Analyses: **Task Specification** I have some concerns about how the task is specified in simulation. I appreciate the effort the authors put into aligning the simulation with the real-world task. However, the disparity in qualitative behaviors may stem from the use of an episodic RL formulation with discounting and sparse rewards. An RL agent taking the shortest path to the goal is likely an artifact of choosing discount factor $\gamma < 1$ with sparse rewards, as this setup inherently incentivizes shortest-path solutions. While I do not know the biological mechanisms underlying mouse intelligence, it seems highly unlikely that they align with an episodic RL framework with discounting, as used in these experiments. To understand the impact of task specification in goal-reaching tasks, Vasan et al. (2024) is a useful reference. Their work demonstrates that three seemingly equivalent task specifications in RL can lead to different final performance outcomes, even when using the same learning algorithm. **Comparisons with Other RL Methods** - TD-MPC2 is a model-based RL algorithm. I am curious whether model-free algorithms such as PPO or SAC achieve similar final performance to TD-MPC2. **LLM Agent Description** - The action space of the LLM agent is unclear. Does it select an action at every timestep, or does it generate an entire trajectory at once? 
For a fair comparison with TD-MPC2, the LLM should be evaluated in a setting where it selects an action at each timestep. **References** - Vasan, G., Wang, Y., Shahriar, F., Bergstra, J., Jagersand, M., & Mahmood, A. R. (2024). Revisiting Sparse Rewards for Goal-Reaching Reinforcement Learning. Reinforcement Learning Journal, 4, 1841–1854. Supplementary Material: Yes. I looked at Appendix C more closely than the rest. Relation To Broader Scientific Literature: This aligns with a broader body of work, including Vaxenburg et al. (2024) and Singh et al. (2023), which compare biological intelligence and RL agents, conducting behavioral analyses to assess biological plausibility and differences. **References** - Vaxenburg, R., Siwanowicz, I., Merel, J., Robie, A. A., Morrow, C., Novati, G., ... & Turaga, S. C. (2024). Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning. bioRxiv, 2024-03. - Singh, S. H., van Breugel, F., Rao, R. P., & Brunton, B. W. (2023). Emergent behaviour and neural dynamics in artificial agents tracking odour plumes. Nature Machine Intelligence, 5(1), 58-70. Essential References Not Discussed: The related work section primarily motivates RL as a useful framework for understanding biological intelligence. However, it does not discuss prior works similar to theirs or explain why RL alone is sufficient to generate the behaviors observed in mice. Additionally, the section appears somewhat hastily written. For instance, a paragraph begins with the vague statement, "Fundamental challenges persist" (line 438). While the related work lists studies that explore the intersection of RL and neuroscience, it does not clearly connect them to the present study. For example, a more relevant discussion could address why model-based RL is specifically chosen to model rat behavior. Please read the papers I mentioned in this review and consider citing them if relevant. 
Other Strengths And Weaknesses: **Strengths** - The paper is well-written and enjoyable to read. - The plots and illustrations are aesthetically pleasing and effectively highlight the behaviors the authors aim to showcase. - The analysis is timely, given the increasing role of AI in society. Comparisons with biological intelligence are valuable and much appreciated. **Weaknesses** - The authors do not provide strong explanations or hypotheses for why RL agents behave differently from mice. - The proposed mechanisms for promoting risk aversion feel ad hoc. I encourage the authors to consider additional factors, such as the choice of algorithm and task specification. Other Comments Or Suggestions: - I'm open to reconsidering my score if 1) the authors update the related work section to better position their study and share it during the rebuttal, and 2) address my concerns regarding the methods and evaluation criteria. **Typos, etc** - The citation in the very first line is incorrect. It is Sutton & Barto (2018), not just Sutton (2018). In APA: "Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press" Questions For Authors: - What is the planning horizon $H$ used in your TD-MPC2 experiments? - What is the chosen discount factor $\gamma$? - Evolution optimizes for reproductive success, which inherently emphasizes self-preservation behaviors. In contrast, an RL agent lacks such pressure and is only optimized for success and failure. Do you think this could partly explain the differences in behavior? If so, could this be encoded through rewards? - Figure 11: The authors state that the proposed "VP-TD-MPC2 + PTSB" achieves an 86.1% visitation overlap with a mouse. However, the trajectories still appear visually different. Do you have any hypotheses on why this gap remains? - Would a "baiting" behavior emerge with longer training in the simulation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough and valuable feedback. Below we address each point systematically: **Q: Regarding algorithm selection and comparison** A: We experimented with DQN and SAC alongside TD-MPC2 but focused on TD-MPC2 because: (1) Model-based approaches together with TD learning better align with biological decision-making (Daw et al., 2011), (2) DQN and SAC showed less stable convergence and inconsistent trajectories, and TD-MPC2 produced more stable behaviors. Notably, with sufficient training, all RL methods ultimately exhibited similar "reckless" behaviors (lacking waiting periods and showing minimal exploration), confirming that our observations about behavioral gaps are not specific to one algorithm. **Q: Concerning Prioritized Experience Replay (PER)** A: While our approach shares conceptual similarities with PER, our experiments showed standard prioritized replay failed to change the RL behavior. The key distinction is our specific amplification of negative experiences to model trauma-like responses, simulating high-risk encounters in a biologically-inspired manner, rather than prioritizing based solely on TD error magnitude. **Q: As for risk and uncertainty** A: We observe that while initial training phases do show high variance across the environment (reflecting exploration uncertainty), this pattern quickly evolves to specifically highlight areas with risk (predator encounters). After sufficient exploration, Q-variance remains low near walls and starting positions but stays consistently high in predator-dense regions, confirming our mechanism captures genuine risk rather than mere exploration uncertainty. **Q: On Task Specification, Evolutionary Pressure, and Implementation Details** A: We agree that evolutionary pressure contributes to these behavioral differences. 
Our reward structure (-1/+1) mirrors our Cell Reports study paradigm (not cited due to double-blinding), where mice received mild water rewards and air puffs of similar intensity. Increasing penalty values failed to encourage cautious behavior like waiting behavior at the entrance. We chose episodic RL and sparse rewards specifically to maintain consistency with our mouse experiments. The 0.995 discount factor was selected to encourage longer-term planning that better matches mouse behavior, though it lacks direct biological explanation. Importantly, behavioral differences persisted across all tested discount factors (0.9-0.995). We used H=3 after experimenting with longer horizons (H=6,10), which degraded performance due to compounding prediction errors. When H is too big, the prey can even ignore the predator. No baiting behavior emerged even with extended training (up to 10M steps). The above shows fundamental risk assessment differences rather than training duration issues. **Clarification On LLM** A: Our LLM agent makes decisions at each timestep, identical to TD-MPC2, ensuring a fair comparison despite the computational cost. **We've improved the related work section** A: Recent studies by Vaxenburg et al. (2024) and Singh et al. (2023) demonstrate the value of comparing biological and artificial behaviors in navigation tasks. Our choice of model-based RL draws from Daw et al. (2011), showing animals maintain internal models for decision-making, with striatal prediction errors paralleling TD learning. Regarding whether standard RL suffices for mouse-like behaviors, Mattar & Daw (2018) showed that prioritized memory access explains biological planning processes, inspiring our PTSB. Blanchard et al. (2011) documented neural risk assessment mechanisms in rodents aligning with our variance-penalized approach. These studies collectively suggest that while standard RL alone cannot reproduce mouse behaviors, our biologically-inspired modifications bridge this gap. 
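To make the PTSB's amplification of negative experiences concrete, a minimal sketch of such a buffer is given below: a fixed fraction of each batch is drawn from transitions flagged as traumatic (e.g., close predator encounters). The class and method names are illustrative assumptions, not the authors' implementation; the default 50% trauma fraction follows the value the paper reports as optimal in its sweep:

```python
import random

class TraumaBuffer:
    """Sketch of a replay buffer that oversamples negative ('traumatic')
    experiences such as close predator encounters. A fixed fraction of
    every batch is drawn from the trauma pool, rather than prioritizing
    purely by TD-error magnitude as in PER. Names are illustrative."""

    def __init__(self, trauma_frac=0.5):
        self.normal, self.trauma = [], []
        self.trauma_frac = trauma_frac

    def add(self, transition, traumatic=False):
        # Transitions flagged as traumatic go into a separate pool
        (self.trauma if traumatic else self.normal).append(transition)

    def sample(self, batch_size):
        # Fill the trauma share first, then top up from normal experience
        n_trauma = min(int(batch_size * self.trauma_frac), len(self.trauma))
        batch = random.sample(self.trauma, n_trauma)
        batch += random.sample(self.normal, batch_size - n_trauma)
        return batch
```

The design point this sketch highlights is that the sampling bias is tied to the *semantics* of the experience (a threat encounter) rather than to a generic learning signal, which is the distinction drawn above between this mechanism and standard prioritized replay.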
**Regarding weaknesses**: Our proposed mechanisms are not ad hoc, they are directly motivated by observed mouse behavior, as detailed in our response to Reviewer GbX7. **REF** - Vaxenburg, R., Siwanowicz, I., Merel, J., Robie, A. A., Morrow, C., Novati, G., ... & Turaga, S. C. (2024). Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning. bioRxiv, 2024-03. - Singh, S. H., van Breugel, F., Rao, R. P., & Brunton, B. W. (2023). Emergent behaviour and neural dynamics in artificial agents tracking odour plumes. Nature machine intelligence, 5(1), 58-70. - Daw, N.D., et al. "Model-based influences on humans' choices and striatal prediction errors." Neuron 69.6 (2011): 1204-1215. - Mattar, M.G., and Daw, N.D. "Prioritized memory access explains planning and hippocampal replay." Nature Neuroscience 21.11 (2018): 1609-1617. - Blanchard, D.C., et al. "Risk assessment as an evolved threat detection and analysis process." Neuroscience & Biobehavioral Reviews 35.4 (2011): 991-998. In the end, we thank the reviewer again for insightful comments! We look forward to further discussions with you. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, which clarifies several of the questions I had earlier. I also appreciate the updated related work paragraph. I want to emphasize that I like this line of work and would like to help make it clearer and more compelling to the broader RL community. At this point, I will stick with my current score. However, if the authors can provide a reasonable plan to address my remaining questions and concerns in the final draft, I would be happy to raise my score. --- **> Mismatch between generality of claims and tailored solutions** I believe some of the criticism raised by other reviewers stems from the mismatch between the generality of the paper’s claims and the bespoke nature of the proposed mechanisms. 
For example, both the Post-Traumatic Stress Buffer and the variance-penalized TD update appear to be carefully crafted solutions aimed at inducing risk-averse behavior similar to that seen in biological rats. Since the claims made in the paper pertain broadly to biological and artificial agents, it's worth examining whether the proposed mechanisms are actually general or if they are domain-specific solutions. --- **> Horizon length of 3 with TD-MPC** As you can imagine, a planning horizon of 3 with TD-MPC is quite short—equivalent to only a few hundred milliseconds in real-world time. I suspect that a more capable model-based RL method, one that supports longer-horizon planning, could achieve risk-averse behavior purely through decision-time planning. Would the authors agree with this? If so, should we view the proposed mechanism as a stopgap solution in lieu of stronger planning methods? --- **> Comparison with Prioritized Experience Replay (PER)** The authors mention that, while their approach shares conceptual similarities with PER, standard prioritized replay did not alter the agent's behavior in the same way. The key distinction, as stated, is that their method amplifies negative experiences to simulate trauma-like responses, as opposed to simply prioritizing transitions based on TD error. However, I couldn't find any mention of PER in the submitted draft. If you have experimental results comparing your approach with PER, please include them in the paper. Given the relevance of PER, it’s important to cite it directly and explain its limitations in your context, along with the motivation for your proposed mechanism. --- **> Figure 11 – Visitation overlap** The paper states that “VP-TD-MPC2 + PTSB” achieves an 86.1% visitation overlap with mouse trajectories. However, the qualitative difference between the trajectories remains noticeable. Do you have any hypotheses as to why this gap persists? 
This point is still unclear to me and would benefit from further explanation. I look forward to your response! --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your thoughtful feedback, which significantly helps us clarify and strengthen our paper! --- **Q: Generality of Claims vs Specific Solutions** A: Our work indeed focuses on a specific predator-prey domain, and the proposed mechanisms were intentionally designed within this context. However, predator-prey dynamics represent fundamental survival interactions observed across both natural and artificial systems, as highlighted by Tsutsui et al. (2024) and Marrow et al. (1996). These studies emphasize such scenarios as key paradigms for investigating adaptive behavior. In the final draft, we will refine our claims to acknowledge the domain-specific nature of our mechanisms. While our approach is tailored to predator-prey settings, we believe it offers valuable insights into computational models of biological caution. We view these dynamics as a principled and biologically grounded testbed for studying adaptive behavior under threat, which we hope makes our findings more relevant to the RL community. **Ref** - Tsutsui, Kazushi, et al. "Collaborative hunting in artificial agents with deep reinforcement learning." eLife 13 (2024): e85694. - Marrow, Paul, Ulf Dieckmann, and Richard Law. "Evolutionary dynamics of predator-prey systems: an ecological perspective." Journal of Mathematical Biology 34 (1996): 556-578. --- **Q: Horizon length of 3** A: The horizon length of 3 is well-suited to our experiments. Our task mirrors real mouse behavior: excluding waiting periods, mice need 2-3 seconds to reach the goal. Given our simulation timestep (0.25 seconds per action), successful task completion in RL requires 8-12 steps. Why 0.25s? This aligns with findings showing that rodents make approximately 2-5 decisions per second during navigation (Resulaj et al., 2009). 
Through environment setup, we determined that 0.25s provides an optimal balance between biological realism and learning performance. To further investigate this issue, we conducted additional experiments using Dreamer-v3, which employs an LSTM module to handle longer horizons. Even with a horizon of 15, Dreamer-v3 failed to replicate the cautious behaviors observed in mice. This indicates behavioral differences arise not from horizon limitations, but from principles addressed by our biologically-inspired mechanisms. We will expand on these experimental design choices and their rationale. **Ref** - Resulaj, A., Kiani, R., Wolpert, D.M., & Shadlen, M.N. (2009). Changes of mind in decision-making. Nature, 461(7261), 263-266. --- **Q: Comparison with Prioritized Experience Replay (PER)** A: Our results show that PER offers partial benefits but does not fully address the challenges in our scenario. Specifically, PER achieves an 80% survival rate within 40,000–50,000 steps (vs 60,000–70,000 with standard replay). However, density plots indicate minimal behavioral differences after training. Thus, PER improves training efficiency but has limited impact on the behaviors. This limitation motivated our proposed mechanism, which explicitly amplifies negative experiences similar to biological trauma responses. We will explicitly cite PER, compare with it, and clarify our design motivation in the final manuscript, supported by experimental plots. --- **Q: Fig 11 and why this gap persists** A: The reported 86.1% visitation overlap indicates that the RL agent explores approximately 86% of the same spatial area as the mouse across all episodes. This metric reflects similarity in exploratory coverage (whether both agents have been to the same states), rather than a one-to-one match in how frequently or in what manner they visit those states. It is also true that in Fig 11, the final converged policies still display qualitative differences. 
Mice exhibit stronger wall-following behavior, while our improved RL agents tend to use central regions more. Our hypothesis: - This behavioral difference stems from the structure of the RL state space. Having thoroughly explored the outer wall during training, the agent doesn't need to maintain constant wall contact for safety. Upon reaching the arena's top, it can confidently consider the task solved. - The original TDMPC2 agent exclusively used central regions. While our improved RL agent now captures some cautious mouse-like behaviors, it has learned that central areas provide more efficient paths to the goal when safety is assured. In contrast, mice persist in wall-following regardless of task mastery, likely due to hardwired behavioral priors. We acknowledge this distinction was previously unclear and will revise the text to emphasize that visitation overlap does not imply similarity in behavioral patterns or density plot distributions. --- We hope these clarifications and planned revisions address your concerns. We are very grateful for your detailed feedback and your openness to reconsidering your score! Your suggestions have already helped us strengthen the work in several ways!
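The coverage-based overlap clarified in the preceding answer (shared set of visited states, not matched visit frequencies) admits a very simple computation over discretized arena cells. This is a sketch of the idea only; the function name and the normalization by the mouse's visited set are assumptions, and the paper's exact metric may differ:

```python
def visitation_overlap(cells_a, cells_b):
    """Fraction of the cells visited by agent B (e.g., the mouse) that
    were also visited by agent A (e.g., the RL agent), over discretized
    arena cells. Coverage overlap only: visit counts are ignored, so a
    high value does not imply similar density plots."""
    visited_a, visited_b = set(cells_a), set(cells_b)
    return len(visited_a & visited_b) / len(visited_b)
```

Because the metric discards visit counts, two agents can score a high overlap while producing visibly different trajectories, which is consistent with the gap between the 86.1% figure and the qualitative differences in Figure 11.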
Summary: This paper investigates the behavioral differences between biological mice and reinforcement learning (RL) agents in a predator-avoidance maze environment. The authors find that RL agents lack preservation instincts, often taking risky, efficiency-driven paths without assessing potential threats, in contrast to biological mice. To address this discrepancy, the paper introduces two novel mechanisms designed to encourage biologically inspired risk-avoidance behaviors in RL agents: (i) Post Traumatic Stress Buffer (PTSB), which mimics biological trauma responses, and (ii) Variance-Penalized Temporal Difference (TD) Learning, which integrates action uncertainty by penalizing high Q-value variance and promotes more risk-averse decision-making. Experimental results demonstrate the effectiveness of both proposed mechanisms. Additionally, the authors introduce ChatGPT-4 to explore the behavior of large language model (LLM) agents in a controlled environment. However, experiments reveal that LLM-based agents exhibit risk-taking tendencies similar to those of RL models, suggesting that achieving biological alignment remains a broader challenge in AI. **Update after rebuttal:** I keep my initial rating. Claims And Evidence: Good. Methods And Evaluation Criteria: Good. Theoretical Claims: None. Experimental Designs Or Analyses: Good. Supplementary Material: I watched the videos. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. The paper presents a systematic and quantitative analysis of the behaviors of mice compared to reinforcement learning (RL) agents in a predator-prey maze, revealing significant differences in risk assessment and exploratory strategies. 2. The writing of this paper is good. The proposed PTSB and Variance-Penalized TD learning designs are well-motivated. 3. 
The proposed novel methods for altering RL behaviors to align with biological processes are effective and have potential application value in life science research and other related fields. Weaknesses: 1. The experiments are conducted in Cellworld Gymnasium, raising questions about whether the proposed mechanisms would generalize to other, more diverse environments or tasks. Could the authors provide more analysis or change the environment to demonstrate the generalization of the mechanisms? 2. This paper utilizes ChatGPT-4 to conduct the experiments. However, the analysis appears limited, partly due to the reduced number of trial runs, which may lead to unreliable conclusions. Other Comments Or Suggestions: No. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback. Regarding the two main concerns raised: **Q: On generalization across environments: The experiments are conducted in Cellworld Gymnasium, raising questions about whether the proposed mechanisms would generalize to other, more diverse environments or tasks. Could the authors provide more analysis or change the environment to demonstrate the generalization of the mechanisms?** A: We acknowledge that generalization is an important consideration. Our research primarily aims to understand fundamental behavioral differences between biological and artificial agents in predator-avoidance scenarios, rather than demonstrating broad environmental generalization. We focused on a controlled environment that precisely mirrors our biological experiments, allowing for direct behavioral comparisons. While broader generalization testing is valuable for future work, our current focused comparison provides crucial insights into the computational principles underlying biological risk assessment. These principles (memory amplification of negative experiences and uncertainty aversion) are likely to transfer to other risk-assessment scenarios, though the specific behavioral manifestations may vary with environment. Future work will examine how these mechanisms generalize across different environmental structures and predator behaviors. We truly appreciate your recognition of our systematic quantitative evaluation approach and the biological motivation behind our proposed mechanisms. Our future work will indeed explore applications in more complex environments, particularly in AI safety domains and autonomous navigation systems where risk assessment is critical. **Q: Regarding the ChatGPT-4o analysis: This paper utilizes ChatGPT-4o to conduct the experiments. 
However, the analysis appears limited, partly due to the reduced number of trial runs, which may lead to unreliable conclusions.** A: While we conducted approximately 100 trajectories (limited by computational constraints of running GPT-4o), our experimental design ensures comprehensive coverage through systematic variation of predator spawn locations across all relevant scenarios. The remarkable consistency of ChatGPT-4o's behavior across these diverse configurations provides strong evidence for our conclusions. Our analysis reveals that regardless of predator positioning, ChatGPT-4o consistently adopts strategies similar to baseline TDMPC2, reinforcing our findings about the fundamental behavioral gap between artificial and biological agents. We will include additional visualizations showing predator spawn distributions to further support this conclusion. We look forward to further discussions with you!
Summary: The authors devise several RL agents for a navigation task that involves reaching a goal while avoiding a predator. The task is closely modeled after an actual biological experiment with real mice. Based on the observation of systematic behavioral differences between biological and artificial agents in this predator-prey environment, they suggest two additions to a hybrid RL algorithm, which combines TD learning and MPC involving a deep encoder-decoder model: 1) a replay buffer mechanism that, during training, more often resamples episodes with close encounters with the predator, and 2) a penalty term in the TD error that penalizes the reward with the variance of the Q-values across actions in that state. Empirical results show that these two modifications result in an increased overlap of cell visitations between the simulated agent and mice. Claims And Evidence: The claims are that 1) survival performance, 2) state visitation distributions, 3) wall-following behavior, and 4) initial waiting are closer to mice's behavior in the proposed RL model. 1-2 are evaluated quantitatively, and 3-4 are evaluated qualitatively. However, while 1 does not seem to differ (a statistical test may help here), the measure used for 2 shows an improvement, but the typical wall-following trajectories of mice are still not reproduced. Methods And Evaluation Criteria: The algorithmic choices, such as the replay buffer and altered cost function, are ad hoc, and at ICML it would be great to see a more in-depth motivation and analysis of these choices. It would be very helpful to find more principled and quantitative measures of similarity between trajectories and between strategies of behavior in the predator-prey environment to better quantify and understand the similarities and differences in behavior between mice and models. Theoretical Claims: There are no theoretical claims involving theorems. 
Experimental Designs Or Analyses: The specific behavior of the RL agent contrasts with the behavior of the mice. However, many choices in setting up the RL agent have not been further discussed or investigated. First, the reward for encountering the predator is fixed. Second, the agent does not explore the maze, and encouraging exploration, e.g. by adjusting \beta in the third equation on page 3, is not investigated. Third, the environment is clearly partially observable, which is not explicitly modeled. Fourth, the predator is always detected, no matter in what direction it is relative to the agent, including being at its back, which does not correspond to the mice’s field of view. Fifth, the conclusions are drawn based on predator behavior determined by the parameters of algorithm 1, and more aggressive attack behavior may suffice to obtain more mouse-like behavior. Regarding “Experiments (Figure 7) indicate that 50% (ranging from 0% to 100% in increments of 25%) is the optimal percentage for this buffer”: how generalizable is this result, or how much does this pertain to the specifics of the current setup? While the authors report that “TDMPC-2 agent has a 20.9% overlap with mice, while VP-TDMPC-2 achieves 86.1%”, figure 11 seems to suggest very different behavior and still limited wall-following behavior. Supplementary Material: I read through the entire supplementary material. Relation To Broader Scientific Literature: There is a wealth of literature on modeling the behavior of animals, particularly mice, with RL. The literature on replay buffers since DYNA is extensive, as is the literature on implementing forms of risk aversion in RL. I would be very careful with naming an algorithm as the present one, “Post Traumatic Stress Buffer (PTSB),” just out of respect for the medical profession and the patients suffering from such conditions. Maybe there is a way to reformulate this. Essential References Not Discussed: Sorry, not sure what I would pick out here as "essential". 
Other Strengths And Weaknesses: This is an interesting and important direction for neuroscience and ethology, but I think that the paper in its current form is not at the core of the interests of the ICML community and would elicit more excitement at a conference such as From Animals to Animats. Other Comments Or Suggestions: Characterizing DQN as online RL is a bit difficult, given that it uses the replay buffer, which makes it more of an offline method. The color mapping is wrong in the caption of figure 3. Questions For Authors: How was “Memory Amplification” implemented? Where do the encounters with the predator happen? A density plot of the encounters with the predator would help understand the changes in agent behavior given the modified algorithms. Would different rewards in the problem's setup, especially for being caught by the predator, change behavior? Are the differences in behavior mostly attributable to not using a formulation of partial observability? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's thorough feedback and would like to address several key points: **Q: Why PTSB and VP-TDMPC2** A: Our PTSB models rodents' responses to adverse stimuli. Even with mild aversive stimuli (air puffs), mice show pronounced fear responses after a few exposures, exhibiting PTSD-like memory amplification. LeDoux (2000) demonstrates how rodents overweight negative experiences during risk assessment, which we implement by selectively amplifying memories of predator encounters. Our Variance-Penalized approach stems from observing mice frequently waiting at the starting position before making decisions - behavior suggesting internal risk evaluation. Rushworth & Behrens (2008) found mammalian brains specifically encode uncertainty in decision-making circuits, with orbitofrontal cortex activation correlating with outcome uncertainty. Behaviorally, animals consistently avoid high-variance options even with identical expected values. We evaluated multiple uncertainty measures before selecting variance as most biologically plausible and computationally effective. **REF** - LeDoux, Joseph E. "Emotion circuits in the brain." Annual Review of Neuroscience 23 (2000): 155-184. - Rushworth, Matthew FS, and Timothy EJ Behrens. "Choice, uncertainty and value in prefrontal and cingulate cortex." Nature Neuroscience 11.4 (2008): 389-397. **Q: Predator Encounter Density and Behavioral Implications** A: Predator encounters primarily occur in central maze regions, as our density plots show. This distribution explains why our mechanisms produce mouse-like behaviors: when combined with PTSB (amplifying learning from central encounters) and variance-penalized training, agents develop: - increased waiting at the starting position to assess risk - stronger preference for wall-following paths that avoid these high-risk central areas We will add density plots to illustrate this relationship between encounter locations and resulting behavioral adaptations. 
**Q: Memory Amplification Implementation and PTSB Generalizability** A: In standard replay buffers, negative experiences typically constitute only about 10% of collected data. Our approach ensures higher representation (50%) in each training batch. This ratio showed optimal training efficiency in our experiments and the principle generalizes to other algorithms like DQN and SAC, though optimal ratios may vary. **Q: Environmental Design Choices and Partial Observability Effects** A: Regarding predator visibility and observation design: Mice have approximately 270° natural vision, and our experiments documented frequent head-turning behavior that gives them effective near-360° awareness. Our implementation matches this natural capability rather than imposing artificial constraints. We tested various observation spaces, including more limited observations (only prey/predator positions when visible), field-of-view restrictions, and observations without goal information. Importantly, all variations produced similar behavioral patterns—RL agents consistently took direct paths without waiting or wall-following behaviors. This consistency suggests the behavioral differences aren't primarily attributable to partial observability. Our current observation space specifically parallels actual mouse experimental conditions while maintaining computational tractability. For the robot, the same algorithm was used in both physical trials and simulation, enabling direct comparison between RL agents and mice under identical conditions. **Q: Wall-Following Behavior Limitations** A: We acknowledge certain limitations. Extreme wall-following remains challenging even when giving the agent extra rewards for being near the wall, particularly with predator presence. However, our approach achieves significant improvements in risk-aware behavior compared to baselines. 
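The 50% batch composition described in the memory-amplification answer above admits a simple sketch (our illustration, not the authors' code; the two-buffer split and the function/argument names are assumptions):

```python
import random

def compose_batch(normal_buffer, encounter_buffer, batch_size=32, encounter_frac=0.5):
    """Draw a training batch in which a fixed fraction comes from predator encounters."""
    n_enc = min(int(batch_size * encounter_frac), len(encounter_buffer))
    batch = random.sample(encounter_buffer, n_enc)
    batch += random.sample(normal_buffer, batch_size - n_enc)
    random.shuffle(batch)
    return batch
```

With `encounter_frac=0.5`, half of every batch replays aversive experiences even though they make up only about 10% of the collected data, mirroring the over-representation the rebuttal describes.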
**Q: Impact of Varying Rewards and Risk Quantification** A: The same question is asked by reviewer yzj9, and we have provided detailed answers addressing both reward variation effects and our risk quantification methodology in that response. **Q: DQN Classification and PTSB Terminology** A: We agree that RL terminology can be confusing. DQN is off-policy (learns from experiences generated by different policies) but is considered online in standard RL taxonomy because it continuously collects new data through environment interaction. We agree with the reviewer regarding the terminology confusion and will revise accordingly in the final version. **Q: ICML relevance** A: ICML welcomes interdisciplinary research. Biological systems have developed sophisticated risk-assessment mechanisms that could inform machine learning approaches in high-stakes scenarios. This research direction aligns with ICML's mission to advance AI capabilities through cross-disciplinary insights. In the end, we again appreciate the reviewer's insights and look forward to further discussions!
DIS-CO: Discovering Copyrighted Content in VLMs Training Data
Accept (poster)
Summary: The paper focuses on an important and timely problem, i.e., how to discover copyrighted content in the training data of VLMs. Specifically, the paper’s contributions include (1) a new benchmark, MovieTection, (2) a new method, DIS-CO, (3) comprehensive experiments, and (4) some new discoveries. ## Update after rebuttal Dear authors, It has been an honor to have the opportunity to review your paper, and I apologize for forgetting to fill in some areas, such as “Claims and Evidence”. Here, as the update after the rebuttal, I want to summarize my opinion and give my final recommendation. Although I agree that: (1) the paper makes valid claims with experimental evidence; (2) the evaluation is sufficient; (3) the designs and analyses of experiments are reasonable and sufficient; and (4) the relation to broader scientific literature is strong. The following concerns were not resolved during the rebuttal, and I therefore suggest incorporating them into a revision and submitting your paper to a later conference. 1. To show your paper is novel, you should NOT say there are similar topics at the top-tier conferences. That is definitely true, because I do not argue that this topic is not novel; the topic is important and should appear at these conferences. In addition, the move from LLM to MLLM cannot be considered significant, though unexplored. I think the proposed DIS-CO is not novel (as shown above, and the authors admit it is a combination of existing works). There is no need to argue that you build a new benchmark and conduct a lot of experiments; they have nothing to do with the novelty. 2. The authors say "Our paper focuses on a technical problem: detecting whether a model has memorized specific content." This focus is feasible. However, the paper does not reflect this. The paper makes a lot of arguments about *copyright* rather than memorization/replication. 
Please see a similar paper [1] on how to focus on the technical problem: *"Replicants are either a benefit or a hazard; there may be situations where content replication is acceptable, desirable, or fair use, and others where it is “stealing.” While these ethical boundaries are unclear at this time, we focus on the scientific question of whether replication actually happens with modern state-of-the-art diffusion models, and to what degree."* What's more, your paper and [1] have a large similarity, because you focus on the copying/memorization of MLLMs and [1] focuses on diffusion models. However, no citation or comparison is given. [1] Somepalli, Gowthami, et al. "Diffusion art or digital forgery? Investigating data replication in diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 3. 8 pages should not be an excuse for limiting the study to videos rather than images. In fact, you also treat videos as sequences of images, and there is indeed no fundamental difference. To conduct comprehensive research, you should include both videos and images. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Yes Essential References Not Discussed: No Other Strengths And Weaknesses: Pros: Contributing a new dataset is helpful to the community. The paper is generally well-written. The experiments are comprehensive. Cons: Firstly, I want to discuss the **copyright** problem referred to in this paper. 1. Why do you think using copyrighted material in the training data is copyright infringement rather than fair use? If a VLM lacks exposure to all copyrighted material, it will lack much knowledge. That is similar to a debate from many years ago: should copyrighted material be indexed by search engines? 
After many years, we have reached an agreement that indexing copyrighted materials available on the web is OK. So, today, many researchers think that using copyrighted material in the training data of an LVM is not copyright infringement. Copyright infringement happens when (1) a model generates something very similar to the copyrighted image/video/text; ***and*** (2) someone uses the generation for profit without the authorization of the owners. It is fair use if someone only uses it for purposes like education. 2. If you think using copyrighted material in the training data is copyright infringement, why do you include the copyrighted movies in your dataset? Does this constitute copyright infringement? From my point of view, what we should do is to detect “copy” rather than “copyright”. Second, I am curious about why you focus on movies. Many copyrighted images and videos are not from movies. Also, is your task limited to the video domain? Why not images? Finally, I am concerned about the novelty of the proposed DIS-CO. As shown in Fig. 2, it is a combination of known technologies. The method seems to be an engineering solution rather than an academic algorithm. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer, We appreciate the time and effort invested in reviewing our paper. Below, we address your comments. > Why do you think using copyrighted material in the training data is copyright infringement rather than fair use? > If the paper gave the impression that we consider all copyrighted content in training data inherently infringing, that was not our intention. We fully recognize that the legality of training on copyrighted data is complex and context-dependent. However, especially in light of recent lawsuits, we note that training on copyrighted data without authorization can raise serious concerns, particularly when the resulting models are deployed commercially. Our work does not take a legal position on whether a specific instance constitutes infringement or fair use. Rather, our contribution is technical: we propose a method for detecting whether specific content (copyrighted or otherwise) was included in a model’s training data. > If you think using copyrighted material in the training data is copyright infringement, why do you include the copyrighted movies in your dataset? (…) What we should do is to detect “copy” rather than “copyright”. > With respect to our own use of copyrighted movie frames in the *MovieTection* benchmark, we address this concern directly in our Impact Statement. Our dataset contains a small number of frames per film, is used solely for education and research purposes, and does not substitute for the original work, conditions under which we believe fair use applies. We also fully agree with the distinction made between detecting “copyright” and “copy.” The key issue is not whether a model saw the material, but whether it memorizes and reproduces it in a problematic way. As noted, *“Infringement happens when model generates something very similar to the copyrighted image…”* This was also a core motivation behind our work. 
For example, it is probably fine, and even expected, for a model to know *Notting Hill* is a romantic comedy. But when it can consistently name the movie title from frames like Figure 9 (in the paper), that level of specificity suggests not just general understanding but visual memorization, which may go beyond what’s expected from general exposure. This is why we designed the benchmark, and we believe it makes an important contribution to the field, as it helps draw the line between the different levels of memorization that VLMs may exhibit. In that sense, we believe our work fully aligns with your view. > Second, I am curious about why you focus on the movies. (…) Also, is your task limited to the video domain? Why not images? > Our focus on movies was mainly motivated by the fact that they are likely to be familiar to a broader audience, making the results easier to interpret and relate to than, for example, paintings or less widely consumed copyrighted material. As for the second part of your question: no, our method is definitely **not limited to the video domain**. In fact, as shown in our proof-of-concept experiment using COCO, our technique works on static, single images. On top of that, we also conducted an initial experiment using a different type of copyrighted content: comic books. For this, we assembled a small dataset of five different works: *Astérix Legionary*, *Lucky Luke: Billy the Kid*, *The Amazing Spider-Man #1*, *Spirou and Fantasio: Comme Zorglub*, and *Tintin in America*. As these comics come from long-running series with similar visuals, we expected models to struggle to identify specific titles. 
However, GPT-4o performed surprisingly well:

|  | Astérix | Lucky Luke | Spider-Man | Spirou and Fantasio | Tintin |
| --- | --- | --- | --- | --- | --- |
| **GPT-4o** | 52.0% | 61.3% | 42.3% | 68.8% | 67.2% |

That said, we chose not to further develop this experiment in the paper because, unlike movie frames, comic pages often contain text within the images. This introduces an additional variable, making it harder to determine whether the model is relying on the artwork alone or also using the text to make its predictions. Since our goal was to evaluate visual memorization specifically, we felt that this mix of modalities could reduce the clarity of our findings. > Finally, I am concerned about the novelty of the proposed DIS-CO (…) It is a combination of known technologies. > While DIS-CO builds on existing components, its application to detecting visual copyrighted content in VLMs is, to our knowledge, novel. No prior work has addressed this task. And the fact that it combines known techniques is not by itself a limitation! In the end, it’s what enabled us to outperform the strongest prior method [1] and bring new insights to the field. [1] Li Z, et al. Membership Inference Attacks against Large Vision-Language Models. NeurIPS, 2024. **Conclusion:** We hope that our answers have addressed your concerns. Please let us know if any further clarification or additional information is needed from our end. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks very much for your rebuttal. However, most of my concerns remain: (1) "While DIS-CO builds on existing components, its application to detecting visual copyrighted content in VLMs is, to our knowledge, novel. No prior work has addressed this task." I am sure that for ICML, this kind of novelty is not enough. Sorry. (2) I am also not convinced why your use of movies is fair use while training an MLLM may not be. (3) Thanks for the authors' experiments on images. 
To extend your paper to the image domain, it seems that a major revision may be needed, and the current version is not strong enough for publication now. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We would like to respectfully respond to your follow-up remarks and offer additional clarifications. --- ### On Novelty: We would like to clarify that, although quite specific, the goal of detecting whether copyrighted data was used during model training has already received growing attention at recent top-tier machine learning conferences, including ICML, ICLR, and EMNLP [1,2,3]. Nevertheless, these efforts have so far focused on large language models. What sets our work apart is that we move beyond the text domain and study this problem in the context of vision-language models, which, to our knowledge, has not yet been explored. We introduce a new detection method (DIS-CO), a benchmark (MovieTection), and conducted a wide range of experiments, from human evaluations to fine-tuning studies. We hope this demonstrates the novelty and relevance of our contribution. --- ### On the Use of Copyrighted Movies and Fair Use: Our paper focuses on a *technical problem*: detecting whether a model has memorized specific content. We do not make any claims about whether training on copyrighted data constitutes fair use. Perhaps it will be! We believe that is ultimately for the courts to decide. Nonetheless, this question is currently at the center of public and legal debate, as evidenced by at least 24 copyright lawsuits filed against AI companies in the U.S. since 2023 [4]. As such, this context highlights the relevance of studying the issue. If such training is ultimately deemed unauthorized, our method could provide a means to detect instances of it. As for our own dataset, we have no reason to believe it would fall outside the bounds of fair use, particularly given that we consulted our institution's Data Protection Office and received explicit approval. 
--- ### On Generalization Beyond Movies: Although MovieTection focuses on movies, our experiments with COCO and comic book data confirm that DIS-CO applies equally well to non-video content. While we recognize the value of extending the benchmark to additional domains, we also had to be pragmatic in light of the 8-page limit. Including more comprehensive experiments on other content types would not have been feasible without compromising the depth, clarity, and focus of the current contribution. --- We sincerely hope this clarification helps convey the intent and contributions of our work more clearly. Sincerely, The Authors --- [1] DE-COP: Detecting Copyrighted Content in Language Models Training Data, ICML 2024 [2] Detecting Pretraining Data from Large Language Models, ICLR 2024 [3] Copyright Violations and Large Language Models, EMNLP 2023 [4] https://www.wired.com/story/ai-copyright-case-tracker/
Summary: The author proposes a copyright detection method for VLM training data based on free-text generation, where movie frames are input into the model to generate corresponding titles, allowing for the detection of whether the model has memorized copyrighted content. The main innovations include: 1) the construction of the MovieTection dataset, which differentiates between training and non-training data based on temporal segmentation to improve the effectiveness of detection; 2) combining image and text inputs to eliminate interference from common knowledge, thereby enhancing the reliability of detection. Experimental results show that DIS-CO significantly outperforms traditional methods in terms of average AUC on mainstream models like GPT-4o. The study further proposes a fine-tuning defense strategy to mitigate potential information misuse issues. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The issue regarding Theoretical Claims will be elaborated on in the "Questions For Authors" section. Experimental Designs Or Analyses: Yes. The issue regarding Experimental Designs Or Analyses will be elaborated on in the "Questions For Authors" section. Supplementary Material: Yes. I have checked the code part. Relation To Broader Scientific Literature: Yes. The issue regarding Relation To Broader Scientific Literature will be elaborated on in the "Questions For Authors" section. Essential References Not Discussed: None Other Strengths And Weaknesses: Yes. The issue regarding Other Strengths And Weaknesses will be elaborated on in the "Questions For Authors" section. Other Comments Or Suggestions: None. Questions For Authors: Advantages: 1. By generating free-form text instead of fixed-structured text, the model's susceptibility to external interference during inference is reduced, thereby enhancing its ability to detect potential copyright information leakage. 
This approach strengthens the robustness and credibility of the method. 2. The dataset is partitioned based on the release dates of the films, clearly distinguishing between training and non-training data. This eliminates the risk of data overlap, ensuring more controlled experimental conditions. 3. The proposed method mitigates memory leakage of copyright information in the model by replacing labels, such as substituting "copyrighted content" with alternative expressions. This defense strategy effectively prevents the model from retaining copyrighted data in its memory. Questions: 1. How applicable and universal is the proposed mechanism? MovieTection relies on box office rankings, neglecting niche or independent films, which could lead to an underestimation of long-tail copyright risks. Additionally, the impact of image compression and resolution changes on detection results has not been tested, affecting the robustness of the method in real-world deployment, and preventing assurance of its effectiveness in diverse scenarios. 2. Does it involve ethical issues? Although the publication of 14,000 movie frames is considered "fair use," this approach may still provoke copyright disputes. The article lacks sufficient ethical argumentation and does not fully address potential copyright conflicts or privacy concerns, which may expose involved parties to unnecessary legal risks. 3. Can it effectively prevent potential attacks? The current study only simulates key forgery and fine-tuning attacks, without considering more complex threats such as adversarial samples or model distillation. This results in a potentially insufficient defense capability, unable to comprehensively address various security threats, thus leaving certain security vulnerabilities. 4. What is the computational overhead? The paper does not analyze the computational cost introduced by multiple frame inputs, particularly the impact on GPU memory. 
This lack of a detailed evaluation could limit the application of the method in low-resource environments, leading to performance bottlenecks in real-world deployment and hindering its widespread use in resource-constrained settings. 5. Is it aligned with specific copyright laws (such as fair use exceptions)? The paper does not integrate specific copyright legal provisions, such as fair use exceptions, leading to ambiguity in the legal significance of its conclusions. The lack of consideration for the broader legal framework, especially regarding fair use and its impact on copyright protection, results in an unclear legal interpretation and a lack of in-depth analysis of the real-world legal implications. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Privacy and Security'] Ethical Review Concerns: Although the publication of 14,000 movie frames is considered "fair use," this approach may still provoke copyright disputes. The article lacks sufficient ethical argumentation and does not fully address potential copyright conflicts or privacy concerns, which may expose involved parties to unnecessary legal risks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, We appreciate the time and effort dedicated to evaluating our paper. We understand the concerns raised, and below we address each point in detail: > **Q1.1** How applicable and universal is the proposed mechanism? > We acknowledge that MovieTection’s focus on box-office hits may limit its coverage of niche films, which are also subject to copyright and may appear in pretraining datasets. We would like, nonetheless, to make two clarifications: - While this is indeed a limitation, it reflects a broader challenge shared by most existing studies in the field. Detecting memorization of rarely seen content is inherently more difficult than detecting memorization of content that appears frequently during training. - We agree that it is important to assess how VLMs respond to niche content. To complement our experiments, we conducted an auxiliary study in which we tested DIS-CO on three films with little to no international projection.

| Movie (Box Office) | Gaiola Dourada ($3M) | Canção de Lisboa ($930k) | Leão da Estrela ($500k) |
| --- | --- | --- | --- |
| GPT-4o | 13% | 8% | 0% |

It’s interesting to see that with DIS-CO, a smaller movie such as *Gaiola Dourada* is still partially recognized by GPT-4o, suggesting that even limited exposure during training can leave some detectable traces. However, the overall trend is clear: as a film’s popularity and likely exposure in training data decreases, so does the strength of the memorization signal. This reinforces our focus on popular films: while weaker signals may appear for niche works, box-office hits offer stronger, more consistent signals for evaluating memorization. > **Q1.2** The impact of image compression and resolution changes has not been tested. > We did a small experiment to evaluate how different resolutions affect DIS-CO’s performance.

| GPT-4o | 21 Jump Street | 1917 | A Beautiful Mind | A Star is Born | Aladdin | Avg. |
| --- | --- | --- | --- | --- | --- | --- |
| 1126x512 | 68% | 86% | 71% | 80% | 92% | **79.4%** |
| 563x256 | 58% | 85% | 66% | 77% | 86% | **74.4%** |
| 282x128 | 57% | 85% | 58% | 64% | 74% | **67.6%** |

As expected, lower resolutions reduce accuracy, since the model has fewer visual details to work with. That said, DIS-CO remains effective overall: suspect movies are still clearly distinguishable from clean ones, which consistently score near 0% accuracy, regardless of the resolution. Although smaller frames introduce a slight performance drop, they can still be a practical choice to reduce computational effort without significantly impacting detection quality. > **Q2 + Q5** Does it involve ethical issues? Is it aligned with specific copyright laws? > We believe that all aspects of our work are aligned with fair use exceptions and conducted in accordance with relevant ethical and legal standards. We would like to clarify that the dataset release was reviewed in advance by our institution’s Data Protection Officer (DPO), who provided a positive assessment regarding its compliance. In addition, to safeguard ethical usage and prevent misuse, the dataset is being released under a restrictive CC BY-NC-SA 4.0 license, which limits its use strictly to non-commercial research and academic purposes. > **Q3** The current study only simulates key forgery and fine-tuning attacks (…) This results in a potentially insufficient defense capability, thus leaving certain security vulnerabilities. > We respectfully note that our paper does not mention or simulate “key forgery” attacks. Could you clarify what was meant by this term, and how it relates to our work? Assuming the comment refers to our fine-tuning experiment, we would like to clarify that this component serves primarily as a proof of concept to illustrate that disclosure of memorized content can be mitigated. 
While we do explore fine-tuning as a way to reduce the disclosure of memorization, this remains a secondary contribution. The central focus of our work is on detection, not mitigation. > **Q4** The paper does not analyze the computational cost introduced by multiple frame inputs. > Following this suggestion, we evaluated the GPU memory usage when increasing the number of input frames.

| Frames (N) | Qwen2-VL 7B | Qwen2-VL 72B |
| --- | --- | --- |
| 1 | 16.00 GB | 138.03 GB |
| 4 | 17.38 GB | 142.69 GB |
| **Increase** | **+1.38 GB** | **+4.66 GB** |

On average, each additional frame increases memory by 0.46 GB (Qwen2-VL 7B) and 1.53 GB (Qwen2-VL 72B). For the larger model, this corresponds to less than 3.5% additional memory to process 4 frames, showing that the main memory requirement comes from loading the model itself. The cost added by using multiple frames is small and unlikely to be a problem in most settings. **Conclusion** We hope that our responses clarify the concerns raised and demonstrate the validity and value of our work. We thank you for the insightful feedback and are happy to provide further clarifications. --- Rebuttal Comment 1.1: Comment: My questions have been clearly answered. Therefore, I change the score to 3. Thanks for your efforts.
Summary: This paper introduces DIS-CO, a new method to check if large vision-language models (VLMs) were trained on copyrighted material. The authors use the idea that a model will "remember" specific content if it has seen it before. In this work, the model is asked to name a movie from a single frame or caption. The authors build a new dataset called MovieTection, which includes 14,000 movie frames (and captions) from films released before and after the model’s training cutoff date. Their experiments show that when the model has seen a movie during training, it is more likely to correctly name the movie, and this method works well across different types of models, even in settings where frames are very challenging and we can only see a few of the model’s outputs. ## Update after rebuttal The authors clearly addressed my questions, so I kept my score at accept. I do think this paper is a valuable contribution to the community, and I don't agree with some other reviewers who claim that the paper lacks novelty or is too limited due to its movie data source. Claims And Evidence: Core Claims: 1. DIS-CO effectively detects the inclusion of copyrighted movies in a model’s training data, outperforming baseline approaches across multiple evaluation metrics. 2. The proposed method is applicable in both white-box and black-box settings. Evidence: 1. Experiments on the MovieTection benchmark demonstrate that models yield significantly higher accuracy and AUC when queried with suspect frames (from copyrighted movies) than with clean or non-member frames. 2. The study shows a clear improvement in detection performance with longer prompt contexts and reveals a positive correlation between factors such as movie popularity (box-office revenue) and quality (IMDb ratings) with the likelihood of memorization. Methods And Evaluation Criteria: Methods: 1. 
The approach involves querying VLMs with free-form text prompts where the model is asked to identify the movie from an input frame or caption. 2. DIS-CO distinguishes between “suspect” and “clean” movies based on their release dates relative to the model’s training cutoff. 3. Two variants are proposed: one that considers all correct predictions (DIS-CO) and another (⌊DIS-CO⌋) that filters out cases where both image-based and caption-based queries agree, thereby reducing bias. Evaluation: 1. Performance is assessed using accuracy and Area Under the Curve (AUC) metrics on the MovieTection benchmark and the VL-MIA/Flickr dataset. 2. Experiments span several models (e.g., GPT-4o, Gemini-1.5 Pro, Qwen2-VL 72B, LLaMA-3.2 90B). Weakness: The paper focuses on movie frames and captions, and it remains to be seen if the approach can generalize to other kinds of copyrighted content or other multimodal tasks. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: 1. The experimental design is comprehensive, using a newly constructed benchmark (MovieTection) alongside an existing dataset (VL-MIA/Flickr) to evaluate model performance. 2. The study considers various dimensions including prompt context length, movie popularity, and quality, offering insights into factors that influence model memorization. 3. Human evaluation experiments are conducted to differentiate between generalization and genuine memorization, adding depth to the analysis. Supplementary Material: The supplementary materials provide detailed prompt templates for both image and caption-based queries, additional qualitative examples, and further experimental results (e.g., fine-tuning procedures and ablation studies). These materials enhance the reproducibility of the work and offer deeper insights into the experimental setup and methodological choices.
Relation To Broader Scientific Literature: The work builds on a rich literature on membership inference attacks and data contamination detection, extending these ideas from text-only models to the multimodal setting of VLMs. It relates to recent prompting approaches and entropy-based methods while addressing their limitations, especially in black-box scenarios. Essential References Not Discussed: All essential references I know have been discussed. Other Strengths And Weaknesses: All strengths and weaknesses have been pointed out above. Other Comments Or Suggestions: No other comments; the paper is well-written. Questions For Authors: 1. My question is whether this paradigm can be applied to other copyrighted content beyond movies, as also stated in the methods review section: "The paper focuses on movie frames and captions, and it remains to be seen if the approach can generalize to other kinds of copyrighted content or other multimodal tasks." Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, We greatly appreciate the time and effort you invested in reviewing our paper. Below, we provide a response to your question.

> My question is whether this paradigm can be applied to other copyrighted content beyond movies, as also stated in the methods review section: "The paper focuses on movie frames and captions, and it remains to be seen if the approach can generalize to other kinds of copyrighted content or other multimodal tasks."

We believe it definitely can! In fact, we even ran DIS-CO on a small experiment with a completely different type of visual content: comic books. The works we used for this were: *Astérix Legionary;* *Lucky Luke: Billy the Kid;* *The Amazing Spider-Man #1;* *Spirou and Fantasio: Comme Zorglub;* and *Tintin in America*. Given that these comics come from series with multiple similar-looking volumes, we expected a model to struggle. Still, GPT-4o managed to correctly identify the specific comic title surprisingly often.

| | Astérix | Lucky Luke | Spider-Man | Spirou and Fantasio | Tintin |
| --- | --- | --- | --- | --- | --- |
| GPT-4o | 52.0% | 61.3% | 42.3% | 68.8% | 67.2% |

That said, we ended up leaving this part out of our main experiments because, unlike movie frames, where the visual signal is (mostly) isolated, comic books introduce an extra variable: text inside the images. This makes it harder to know whether the model’s prediction came from the visual content or if it was leveraging the text. Since our goal was to test visual memorization specifically, we felt that mixing modalities here could weaken the core message.

**Conclusion:** We hope that our answer has addressed your concern. Please let us know if any further clarification or additional information is needed from our end.
Summary: This paper investigates the challenge of verifying whether copyrighted content was used to train large vision-language models (VLMs) without direct access to their training data. The authors introduce DIS-CO, a novel approach that leverages the hypothesis that VLMs can recognize images from their training corpus. By systematically querying a VLM with specific frames from copyrighted material, DIS-CO extracts content identity through free-form text completions. To evaluate its effectiveness, the authors present MovieTection, a benchmark containing 14,000 frames with detailed captions from films released both before and after a model’s training cutoff. Experimental results demonstrate that DIS-CO significantly enhances detection performance, nearly doubling the AUC of the best prior method for models with available logits. Claims And Evidence: Claim: The paper claims that a VLM is able to recognize images from its training corpus. This key claim is supported by evidence in the existing literature that data encountered during training leads to greater model confidence when generating outputs [r1]. [r1] Li Z, et al. Membership Inference Attacks against Large Vision-Language Models. NeurIPS, 2024. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem statement. The study introduces a rigorous benchmark, strong comparative baselines, and a method applicable to real-world black-box models. Improvements in ablation studies, dataset validation, and robustness against adversarial probing would further enhance its impact. 1. Appropriateness of the Proposed Method (DIS-CO). The DIS-CO method is well-suited for detecting whether large vision-language models (VLMs) have been trained on copyrighted content. By leveraging free-form text generation, the approach mitigates biases introduced by multiple-choice settings and allows for more natural model responses. 2. Strength of Benchmark Dataset (MovieTection).
The introduction of MovieTection as a benchmark dataset is a strong contribution. It is carefully constructed with a temporal split to differentiate between pre-training and post-training data. The inclusion of "main frames" and "neutral frames" enhances evaluation granularity, making the benchmark robust for detecting training exposure. 3. Evaluation Metrics and Comparisons. The study effectively evaluates DIS-CO using AUC scores, accuracy metrics, and comparative baselines (MCQA, Rényi method, Captions-based prompting). The removal of caption-based correct predictions to isolate image memorization is a novel refinement. Theoretical Claims: 1. Soundness of the DIS-CO Method. The paper’s central theoretical claim—that free-form text completion provides a stronger signal for detecting memorized content than multiple-choice settings—is intuitively reasonable and supported by past studies on language model leakage. However, the paper lacks a formal proof of why free-form completions reduce false positives better than MCQA. 2. Upper-Bound Estimation of Memorization. The approach of defining DIS-CO and ⌊DIS-CO⌋ as lower and upper bounds of memorization is reasonable, but the paper does not theoretically prove the validity of this bounding technique. Experimental Designs Or Analyses: 1. Validity of Experimental Setup. Strengths: The experimental design of **DIS-CO** is well-structured to test its effectiveness in detecting copyrighted content within VLMs. The evaluation setup appropriately considers **both white-box and black-box models**, making the results more generalizable. Weakness: However, a potential limitation is the **assumption that chronological release serves as a strict boundary for training data exposure**. While this is a reasonable heuristic, it does not entirely rule out indirect exposure through publicly available media (e.g., trailers, posters).
A complementary **dataset contamination check** would strengthen the claim that post-cutoff movies are truly novel to the model. 2. Baseline Comparisons. Strengths: The study effectively benchmarks **DIS-CO** against prior methods, including **MCQA and Renyi entropy-based techniques**. The **AUC and accuracy metrics** provide a fair and interpretable comparison. The decision to remove **caption-based correct predictions** from the evaluation is a useful refinement, ensuring that memorization is attributed to visual data rather than textual association. Weakness: An area for improvement is the **lack of an ablation study** that isolates the contributions of **prompting strategy, query selection, and frame type**. Understanding the relative impact of these components would clarify why DIS-CO outperforms prior approaches. Supplementary Material: Yes, I reviewed the supplementary material, which provides additional code implementation details for DIS-CO. Relation To Broader Scientific Literature: DIS-CO builds on membership inference attacks as explored by [r1] and [r2], extending them to black-box models and offering more reliable detection of copyrighted content in VLMs, surpassing previous methods like Rényi entropy. [r1] Li Z, et al. Membership Inference Attacks against Large Vision-Language Models. NeurIPS, 2024. [r2] Pinto F, et al. Extracting Training Data From Document-Based VQA Models. ICML, 2024. Essential References Not Discussed: The paper provides a well-contextualized discussion of prior work and thoroughly cites relevant research in membership inference attacks (MIA), dataset attribution, and multimodal model evaluation. Other Strengths And Weaknesses: Strengths: 1) The MovieTection dataset is a new benchmark designed for testing model memorization of copyrighted materials, addressing a crucial gap in dataset attribution research. 
2) DIS-CO achieves state-of-the-art performance in detecting memorization, improving upon prior methods such as MCQA and Rényi entropy-based inference. 3) The methodology is clearly explained, with detailed steps for dataset construction, query formulation, and evaluation. Weakness: 1) While DIS-CO’s empirical results are strong, the paper lacks formal mathematical analysis of why free-form text completion is a more reliable indicator of memorization than multiple-choice formats. 2) Additional theoretical justification (e.g., probability bounds on model memorization) could strengthen the claims. 3) The study does not dissect the impact of different components of DIS-CO, such as the number of frames per query, prompt variations. Other Comments Or Suggestions: The paper is well-organized and easy to follow. The sections are logically structured, guiding the reader through the problem statement, proposed method, experiments, and results. Questions For Authors: 1. Clarification on Dataset Contamination. The paper assumes that movies released after the training cutoff date are not included in the model’s training data. However, some models might have encountered promotional content (e.g., trailers or posters) that could influence performance on recent movies. Can you provide more details or experiments to assess the potential contamination from such external content sources? 2. Ablation Study for DIS-CO. You have provided solid results showing DIS-CO’s advantage over prior methods. However, could you consider conducting an ablation study to isolate the contributions of specific components of DIS-CO (e.g., the number of frames per query, prompt variations)? Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Legal Compliance (e.g., GDPR, copyright, terms of use)'] Ethical Review Concerns: copyright issue of the collected data Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you very much for your valuable feedback and comments. Below, we address each of your questions.

> **W1.** Lack of formal mathematical analysis.

Here, we present our mathematical analysis to support our intuition on why free-form completions (FF) are a more robust indicator of memorization than Multiple Choice (MC) formats. **Random-Chance Baselines:** In an MC setting with $k$ options, the probability of a correct guess by chance is $P_{MC} = \frac{1}{k}$, which for $k=4$ would result in 25%. By contrast, in an FF setting the model must generate the exact label from a vast output space $\Omega$ (with $|\Omega| \gg k$), so that the chance-level probability is approximately $P_{FF} \approx \frac{1}{|\Omega|}$, which is orders of magnitude lower than $P_{MC}$. **Impact of Non-Uniform Priors:** It is true that models may exhibit a bias toward more popular movies. Even if this bias increases the chance for a popular movie by a factor of 100, the overall probability remains extremely small. For instance, if $|\Omega| = 10{,}000$, even with a bias, the probability $P'_{FF} = \frac{100}{10{,}000} = 1\%$, which is still 25 times lower than the MC chance of 25%.

> **Q1.** Clarification on Dataset Contamination.

We fully agree that external sources could influence the model’s performance. To investigate this potential contamination, we would like to draw your attention to Appendix D. We found that GPT-4o (with a knowledge cutoff in October 2023) acknowledged 20 out of the 50 clean movies included in MovieTection. We believe it's reasonable to assume these 20 titles are likely candidates for having appeared in publicly accessible sources such as trailers, promotional posters, or press coverage before the model's cutoff date, though not as full movies, since they had not been released yet. The table below summarizes findings from Table 10 in Appendix D.
| Release Period | # of Movies | Non-zero Values | Max Accuracy |
| --- | --- | --- | --- |
| **Nov 2023 - Feb 2024** | 6 | 4 | 6% |
| **Mar–Oct 2024** | 12 | 0 | 0% |

We observe that movies released closer to the model's cutoff date, and thus with more external content available, result in the highest accuracy scores, while very recent releases, for which public information was considerably sparser at the cutoff date, show no correct mappings. The highest observed accuracy for this subset of recognized movies was, however, only 6%, significantly lower than accuracy scores typically achieved for suspect movies. Therefore, we firmly believe that exposure to external content alone minimally impacts our primary results.

> **W2 + Q2** Could you consider conducting a study to isolate the contributions of specific components of DIS-CO?

Isolating and understanding the contributions of individual components in DIS-CO is definitely essential. However, we believe we have already explored some of these aspects, for which we would like to direct your attention to Appendix I, where we analyze the effect of varying the number of frames per query. Our analysis is conducted across the different frame types (main vs. neutral) and model sizes, offering a broad view of how these variables affect performance. As for your suggestion regarding prompt variations, we conducted an additional experiment to evaluate model sensitivity to different wordings, which can be relevant for practical applications, since user inputs will naturally have some variance in their wording. We designed a small-scale evaluation using two categories of prompts:

- Biased Prompts: These include additional cues that might assist the model.
  - Example: “What Oscar-winning movie is this frame from?”
- Paraphrased Prompts: Semantically equivalent rephrasings of the default prompt.
- Example: “Can you identify what movie is present here?”

We evaluated model responses on a subset of the MovieTection dataset, summarized below:

| Prompt Type | 21 Jump Street | 1917 | A Beautiful Mind | A Star is Born | Aladdin | Avg. |
| --- | --- | --- | --- | --- | --- | --- |
| Easier | 83% | 100% | 87% | 85% | 92% | 89.4% |
| Default | 68% | 86% | 71% | 80% | 92% | 79.4% |
| Default Paraphrased | 60% | 88% | 74% | 82% | 92% | 79.2% |

Given that in real-world settings the target content may not always be a well-known blockbuster, biasing the model toward popular titles through hints may not be ideal. We believe that sticking to neutral prompt variations is a more reliable choice, as it avoids introducing external priors and better reflects the model’s actual memorization.

> **Flag for Ethics Review**

Regarding the ethics review flag, please refer to our response provided in the reply to Reviewer V24U (Q2 + Q5), which addresses this issue as well.

**Conclusion:** We hope that our answers have addressed your concerns, and thank you once again for your valuable feedback. Please let us know if any further clarification or additional information is needed from our end.
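As a quick numeric companion to the W1 analysis in the rebuttal above, the short sketch below reproduces the chance-level comparison between multiple-choice and free-form identification. The values $k = 4$, $|\Omega| = 10{,}000$, and the bias factor of 100 are the illustrative numbers from that answer, not measured quantities:

```python
# Chance-level baselines for multiple-choice (MC) vs. free-form (FF)
# identification, using the illustrative numbers from the W1 answer above.
k = 4           # number of MC options
omega = 10_000  # assumed size of the free-form output space |Omega|
bias = 100      # assumed popularity-bias factor for a well-known movie

p_mc = 1 / k                # chance of a correct MC guess
p_ff = 1 / omega            # chance of a correct FF guess, uniform prior
p_ff_biased = bias / omega  # FF chance even under a strong popularity bias

print(f"MC: {p_mc:.0%}, FF: {p_ff:.2%}, biased FF: {p_ff_biased:.0%}")
print(f"ratio MC / biased FF = {p_mc / p_ff_biased:.0f}")
```

Even under the biased prior, the free-form chance level stays 25 times below the multiple-choice baseline, matching the rebuttal's claim.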
Beyond Self-Interest: How Group Strategies Reshape Content Creation in Recommendation Platforms?
Accept (poster)
Summary: The paper studies group strategies in content creation games: a game framework in which individual creators in a recommender system can form groups. This can lead to interesting scenarios, such as a creator deviating from its strategy in the "vanilla" Nash equilibrium: even though this reduces its individual utility, it increases the group’s utility. They provide game classes, such as the bandit $C^3$ and TvN games, under which the vanilla Nash equilibrium and the $C^3$ game with group creators result in the same or different equilibria. They then analyze general $C^3$ games and the Price of Anarchy under exposure and engagement rewards, showing that it can be unbounded with exposure but bounded for engagement. Finally, they provide simulations supporting their theoretical claims on user welfare. Claims And Evidence: Yes, the theorems and lemmas are clearly stated and have proofs in the Appendix. Methods And Evaluation Criteria: Yes, the polarized synthetic dataset and experiments with the LBR algorithm support the theoretical claims, namely Theorems 4.7, 4.9, and 5.4. Theoretical Claims: Yes, I’ve checked the correctness of Lemma 4.3, Thm 4.6 and 4.7. A quick clarification on the last statement in section A.1: For the Bandit C3 game, to get $PNE(G, s_c)$ you initialize all creators in the group $c$ with $s_c$, wlog say these are 1 to $n_c$; now from $n_c$ to $n$ you follow Algorithm 1? Experimental Designs Or Analyses: Yes, the experiments are sound and support Theorems 4.7, 4.9, and 5.4. However, some details are missing: - Can you specify how gradient descent is used to approximate the optimal group strategy at each round? - According to Figure 2 in the appendix, the group creators (red triangles) are dispersed. Just to clarify, this is not for the fixed group strategy as in Section 4.1; here, group creators can have different strategies within a group.
Supplementary Material: N/A Relation To Broader Scientific Literature: This paper is a good contribution to the literature on content creation games. The concept of producers forming groups is novel and has not been considered in prior work. The technical results are solid and provide a theoretical characterization of new kinds of equilibria, namely: Single-Group Stackelberg Equilibrium (Definition 4.4), Single-Group Nash Equilibrium (Definition 4.5), and scenarios in which they differ from the "no-group" individual creators' Nash equilibrium. The paper also provides Price of Anarchy bounds for general $C^3$ games with group strategies and builds on the PoA characterizations in Yao et al. Essential References Not Discussed: Although not an essential reference, I wonder if the authors considered connections to the literature on Algorithmic Collective Action in Machine Learning, https://proceedings.mlr.press/v202/hardt23a.html. Here, too, individuals can form groups to achieve a collective goal. Other Strengths And Weaknesses: Overall, I enjoyed reading this paper, and it has clear theoretical contributions and corresponding practical takeaways: i) The insights from Theorem 4.7 demonstrate how welfare under the single-group Nash equilibrium $s^{II}$ has a drastic decrease compared to individual creators. ii) The insight from Theorem 4.9, shows how welfare varies with the topK value and that the platform should avoid very large $K$ due to diminishing attention spans. iii) The Price of Anarchy under exposure and engagement rewards is analyzed and provides a good follow-up to the literature on $C^3$ games PoA from Yao et al., now with group strategies. The only weakness is in the description of Section 6, and it would improve the paper if the experimental design description were expanded in Section 6 or Appendix C, as I highlighted earlier. 
Other Comments Or Suggestions: I believe there’s a typo on line 175, column 2: “… shapes the same”; I think you mean equal up to permutation of strategies? Questions For Authors: 1. In the Type 1 and Type 2 equilibria, and more generally for the theorems in Sections 4.1 to 4.4, each individual in the group c plays the same strategy $s_c$, right? This is different from the example in the paragraph on line 172. I believe the results in Theorem 5 do not require this single group strategy. 2. Can you provide more insights on the importance reweighting method mentioned on line 356? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: >For the Bandit C3 game, to get $PNE(G, s)$ you initialize all creators in the group $c$ with $s_c$, wlog say these are 1 to $n_c$; now from $n_c$ to $n$ you follow Algorithm 1? We initialize the group of creators with $s_c = (s_1, s_2, \dots, s_{n_c})$, which are not necessarily identical across all members. As you mentioned, once the group strategy $s_c$ is selected, Algorithm 1 is applied to the remaining $n - n_c$ creators, who then sequentially choose their strategies to complete the profile. >In the Type 1 and Type 2 equilibria, and more generally for the theorems in Sections 4.1 to 4.4, each individual in the group c plays the same strategy $s_c$, right? Note that $s_c = (s_1, s_2, \dots, s_{n_c})$. Creators within the group are allowed to adopt different strategies (actions), and these strategies are not necessarily identical. In both the Type 1 and Type 2 equilibria, the strategies of group members can differ—the equilibrium does not assume a single shared strategy across all group members. And in Figure 2 of the appendix, the group creators (represented by red triangles) indeed employ different strategies. >Can you specify how gradient descent is used to approximate the optimal group strategy at each round? Thank you for pointing this out. The group aims to select $s_c = (s_1, s_2, \cdots, s_{n_c})$ in order to maximize its group utility, defined as $Q_c(s_c, s_{-c}) = \sum_{i \in \mathcal{C}} u_i(s)$. To approximate the optimal group strategy at each round, the group can perform multiple steps of gradient descent on the objective $Q_c(s_c, s_{-c})$ with respect to the group strategy $s_c$, assuming that the group has full knowledge of the game environment. In more realistic scenarios, the group can instead adopt trial-and-error approaches to iteratively improve its group utility over time. 
Alternatively, heuristic methods may be employed—for example, assigning certain creators to dominate the exposure of the largest user while others are allocated to target remaining users. This structured, adaptive strategy allows the group to improve utility even with limited information. We will include this clarification in our revised version. >I believe there’s a typo on line 175, column2: “… shapes the same”, I think you mean equal up to permutation of strategies? Yes, we mean they are equal up to permutation of strategies. > Can you provide more insights on the importance reweighting method mentioned on line 356? Thank you for pointing this out. The importance reweighting method, as proposed in (Yao et al., 2024b), enables the platform to steer creator incentives toward under-served users by modifying the reward structure. Specifically, the platform defines the creator's utility as $$ u_i(s) = \mathbb{E}_{x \in \mathcal{X}} \left[ w(x) \pi(x, s) P_i(s, x) \right], $$ where $w(x)$ represents the *importance weight* of user $x$. When the platform detects that a user is being under-served under the current content distribution, it increases $w(x)$ for that user. This effectively amplifies the reward for creators who target such users, encouraging them to shift their content in that direction. Over time, this reshapes the content distribution and improves overall user welfare. As a concrete example, consider the game instance in Appendix B.3. In that case, we can set the reward as $$u_i(s) = w(x_1) \pi(x_1, s) P_i(s, x_1) + w(x_2) \pi(x_2, s) P_i(s, x_2),$$ and choose $w(x_1)$ to be large and $w(x_2)$ small. This encourages all creators to shift from $s_i = e_2$ to $s_i = e_1$, leading to a new PNE where user welfare is optimal. Once this equilibrium is reached, the platform can reset the weights to $w(x_1) = w(x_2) = 1$, and creators will no longer deviate. We will include a detailed explanation and discussion of this method in revision.
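To make the reweighting mechanism concrete, here is a minimal sketch in Python (our own illustration; the population shares and matching probabilities are made-up numbers, not values from the paper) showing how boosting the importance weight of an under-served user increases the reward a creator earns for serving that user:

```python
# Importance reweighting on a hypothetical two-user instance:
#   u_i(s) = sum_x  w(x) * pi(x, s) * P_i(s, x)
# All numbers below are illustrative, not taken from the paper.
pi = [0.7, 0.3]   # user population shares pi(x_1, s), pi(x_2, s)
P_i = [0.2, 0.5]  # creator i's matching probabilities P_i(s, x)

def utility(w):
    """Reweighted utility of creator i for importance weights w(x)."""
    return sum(wx * px * Px for wx, px, Px in zip(w, pi, P_i))

uniform = utility([1.0, 1.0])  # default reward: w(x) = 1 for every user
boosted = utility([3.0, 1.0])  # platform boosts the under-served user x_1

# Raising w(x_1) makes serving x_1 strictly more rewarding, nudging
# creators to shift their content toward that user.
print(f"uniform: {uniform:.2f}, boosted: {boosted:.2f}")
```

Once creators have shifted and the desired equilibrium is reached, the platform can reset all weights to 1, mirroring the two-step procedure described in the rebuttal.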
Summary: This paper investigates group strategic behaviors among content creators in recommendation systems. Specifically, the authors assume that creators within a group can strategically deviate to maximize their collective reward. Using bandit C3 games, they theoretically demonstrate that user welfare can suffer significant losses due to such group strategic behaviors. In more general cases, they analyze the price of anarchy (PoA) of coarse correlated equilibria. Furthermore, they show that user engagement-based reward mechanisms can mitigate these issues compared to exposure-based mechanisms. Simulations support the theoretical findings. --- I have updated the score to 3 after rebuttal. Claims And Evidence: **(Pro)** The theoretical results are generally sound. **(Con)** One major issue is the definition of group deviations. The formation of such groups is questionable, as some players in the group may be worse off due to their inclusion. Consequently, the stability of these groups is unclear, and some players may have an incentive to deviate from the group strategy. A more reasonable setting would introduce an additional requirement ensuring individual rationality—i.e., players should not be worse off by following the group strategy. **(Con)** Another major issue is the simplification of theoretical results in the bandit C3 game. The theoretical results in Section 4 rely heavily on this specific game structure. For example, when the user population vectors are not orthogonal or the creator strategy set is more general (e.g., continuous), it is unclear how the results would extend. Additionally, in Theorems 4.7 and 4.9, the findings seem highly dependent on the choice of $p_1, p_2, \dots, p_n$. It would be helpful to understand how the results hold for different choices of these parameters. Methods And Evaluation Criteria: N/A Theoretical Claims: I did not check the details of the theoretical claims, but the results appear to be generally sound. 
Experimental Designs Or Analyses: **(Con)** Similar to the claims section, I would expect the authors to conduct experiments under more general settings, such as varying the choice of the vector $p$. Additionally, incorporating a real-world dataset with user features would strengthen the analysis. Supplementary Material: I briefly checked the proofs. Relation To Broader Scientific Literature: **(Pro)** The paper's major contribution is the consideration of group strategic behaviors in recommendation systems. Essential References Not Discussed: No essential references appear to be missing. Other Strengths And Weaknesses: No additional strengths or weaknesses were identified. Other Comments Or Suggestions: 1. What is the optimistic tie-breaking rule in Line 203, and how does it relate to the tie-breaking rule in Theorem 4.6? Some typos were found: 1. The notation $\sigma$ is inconsistent throughout the paper, appearing in three different forms: $\sigma(\cdot, \cdot)$, $\sigma_{\cdot, \cdot}$, and $\sigma_{\cdot}(\cdot)$. 2. Line 172: ${q_j^v}$ and $q_j^{eq}$ are not consistent. 3. Line 214: "for for" → "for" 4. Line 224: One of $s_{1,2,4}$ or $s_{4,5}$ in $S^{II}$ appears to be incorrect. Questions For Authors: See the cons listed above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: >One major issue is the definition of group deviations. As we discussed in the paper (Lines 270-274), suppose that all creators have reached an equilibrium, and then some creators decide to form a group. Joining such a group becomes a dominant strategy because the group utility is at least as high as the individual case—at the very least, it remains unchanged if the creators continue their previous actions. If the group generates additional rewards, the bonus can be allocated among the group members, ensuring that each creator receives a reward greater than or equal to their original reward. Given this allocation rule, the formation of such groups is reasonable. Even if a creator deviates from the group strategy, they have an incentive to re-join the group. Furthermore, in the real world, groups of creators often exist and are typically united by a media company, such as an MCN (Lines 28–33). >When the users are not orthogonal or the creator strategy set is more general, it is unclear how the results would extend. We use this simplified game to better unveil the theoretical insights. In Section 5, we address the general case, where the user population vectors are not orthogonal, and no assumptions are made regarding the parameter $p$. Theorem 5.3 demonstrates that the PoA under exposure reward can be arbitrarily bad, which aligns with the results in Theorem 4.7. Additionally, we provide bounds on the PoA under engagement rewards in the general case. Furthermore, our empirical results in Section 6 and Appendix C further support the theoretical claims in the general cases, where the user population vectors are not orthogonal and the creator strategy set is continuous. >In Theorems 4.7 and 4.9, the findings seem highly dependent on the choice of $p$. It would be helpful to understand how the results hold for different choices of these parameters. Thank you for your valuable suggestions. 
1) To emphasize the potential negative impact of group strategic behavior on user welfare, we consider the TvN game as an example for worst-case analysis (Yao et al., 2024a). The TvN game, although stylized, reflects user distributions in real-world online content platforms, which are often highly skewed and unbalanced. 2) Our choice of this representative $p$ is also primarily for clarity and simplification of presentation. Analogous to the proof of Theorems 4.7 and 4.9, we can extend our results to a more general unbalanced case, where the largest user proportion in the TvN game can vary, and this will provide a smoother transition from the extreme case to the even case. We will incorporate these results into the revision, and we will expand the discussion to explicitly cover how the results extend to general $p$. In particular: - Under unbalanced $p$, similar welfare loss results hold, and group strategic behavior remains impactful. - Under more evenly distributed $p$, the impact of group behavior diminishes, as also implied by Theorem 4.6 and Theorem 4.12. For example, when $p$ is uniform, in Theorem 4.7, the welfare under both the individual case and the $n_c = n$ case equals 1, and $K$ will no longer affect the user welfare in Theorem 4.9. >I would expect the authors to conduct experiments under varying choices of the vector $p$. > Thanks for your thoughtful suggestion. We will further strengthen our results by using a more general $p$ based on a Zipf-like or power-law distribution, where $p_j \propto \frac{1}{j^\alpha}$ with $\alpha > 1$. Such distributions have been widely observed in user preferences across online platforms [1][2][3]. Our current $p$ already follows a similarly skewed pattern, and we will clarify this in our revision. The additional experiment results under this new $p$ yield similar results and insights, further supporting the robustness of our conclusions. Please refer to the empirical results provided in our response to reviewer dTVP.
[1] Chowdhury et al. Popularity growth patterns of youtube videos-a category-based study. [2] Cameron, S. Zipf’s Law across social media. [3] Mosaic Ventures. The creator economy: a power law. >What is the optimistic tie-breaking rule in Line 203, ...? Thank you for pointing this out. It refers to selecting $s_{-c}$ in a way that is most favorable to the group. This rule is specific to the equilibrium introduced in Line 203 and is unrelated to the tie-breaking rule mentioned in Theorem 4.6. In Theorem 4.6, the tie-breaking rule applies to individual creators when their utilities are equal for selecting different strategies. In such cases, a more general and consistent rule is to assume that creators will choose the user with the smaller index, which is generally used in game theory literature. This deterministic rule avoids ambiguity and is suitable for all cases discussed in the paper. In our revision, we will remove the optimistic tie-breaking rule and adopt this general tie-breaking assumption for clarity and consistency. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I would like to raise my score to 3.
Summary: The paper studies how the strategic behavior of groups of creators can affect user welfare, using game-theoretic analysis. In particular, they adopt the content creation competition (CCC) game-theoretic framework introduced in prior works, where users and content creators both derive utilities from the recommender system, with a specific definition of utility for both sides, and they assume there are groups of content creators who collaboratively maximize their group utility. They conduct game-theoretic analysis under various settings. They find that (1) when the group size is small, groups do not affect the equilibrium much; (2) when the group size is large, user utility suffers a lot; (3) the equilibrium can be quite different under different parameters in the CCC framework; (4) the price of anarchy (PoA) can be arbitrarily bad when creators are rewarded for exposure (rewarded when created items are exposed to users), whereas when creators are rewarded for user engagement (rewarded when created items get user engagement), the PoA is bounded, suggesting that we might want to use engagement rewards to improve user welfare. Claims And Evidence: I do not find problems. Methods And Evaluation Criteria: The paper focuses a bit too much on settings that are not aligned with real-world applications. For instance, I think content creators are rewarded for engagement, but the paper focuses a lot on the setting where creators are rewarded for exposure. Another example is that users do have limited attention, but the paper focuses a lot on the case where users have infinite attention. Theoretical Claims: I did not check the proofs. Experimental Designs Or Analyses: I did not find issues. Supplementary Material: No Relation To Broader Scientific Literature: It adopts prior frameworks on game-theoretic analysis in recommender systems with strategic behaviors. This paper specifically focuses on a new aspect where content creators might form groups to compete strategically. 
Essential References Not Discussed: I am not aware of any Other Strengths And Weaknesses: Strengths 1. The paper studies a very interesting problem: the impact of strategic behaviors of content creators on user welfare in recommender systems. It might have many real-world implications. 2. The paper provides theoretical analysis of the game-theoretic dynamics of grouped content creators and user welfare under various scenarios. 3. The paper draws conclusions and insights from these theoretical analyses, which make sense to me. Weaknesses 1. I think the paper's analysis focuses too much on unrealistic settings, as mentioned above. 2. The writing can be improved. See below. Other Comments Or Suggestions: 1. In the introduction, it might be good to talk about the findings of this work earlier. 2. Many notations are used without definition, e.g., n, K, \beta in the introduction. 3. Acronyms are used without explanation, e.g., PNE. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: >The paper focuses a bit too much on settings that are not aligned with real-world applications. For instance, I think content creators are rewarded for engagement, but the paper focuses a lot on the setting where creators are rewarded for exposure. As mentioned in the paper (Lines 144-148), the exposure reward mechanism is widely used in both theoretical and empirical settings (Ben-Porat et al., 2019; Hron et al., 2022; Jagadeesan et al., 2023; Meta, 2022; Savy, 2019), making it an important aspect to study in the context of recommendation systems. Furthermore, both exposure and engagement metrics are used in practice: user engagement tends to be used more often as a reward metric for established creators, while exposure is typically used for new creators (Yao et al., 2023). Thus, our focus on exposure rewards reflects a commonly encountered scenario on many platforms. >Another example is that users do have limited attention, but the paper focuses a lot on the case where users have infinite attention. Thank you for your valuable comment. We would like to clarify that our analysis does not assume infinite user attention. 1) The constant attention model we use does not imply infinite attention. Attention is truncated by the parameter $K$: the model assumes users allocate a fixed amount of attention uniformly across the top-$K$ items they are shown—e.g., $r_1 = r_2 = \dots = r_K = 1$. This setting is relevant in practice, particularly in user interfaces where content is displayed in fixed-size, unordered blocks (e.g., a “For You” page with $K$ equally weighted items). Thus, the model captures a common real-world recommendation scenario without assuming that users consider all available content. 2) Our paper also considers another setting with *diminishing attention*, modeled as $r_1 = \dots = r_\tau = 1$, $r_{\tau+1} = \dots = r_n = 0$, which captures the case where users only pay attention to the top $\tau$ items. 
These two types—constant and diminishing—represent common user behaviors corresponding to slow-decay and rapid drop-off attention curves, respectively. 3) The theoretical results in Section 4 can be extended to more general attention profiles $\{r_i\}_{i=1}^n$. In particular: - Slow-decay attention leads to results akin to those in Theorem 4.7, implying that large-group strategic behavior negatively impacts user welfare. - Under rapid drop-off attention, Theorem 4.9 and Corollary 4.10 yield similar results, demonstrating that tuning $K$ and $\beta$ can help mitigate user welfare loss. 4) In our simulations, we also test *log cutoff attention scores* as an intermediate setting. These results are consistent with our theoretical findings, reinforcing the robustness of the insights under various attention models. We chose to present constant and diminishing attention as representative cases to improve the readability and clarity of the exposition. We will clarify this modeling choice and its implications in our revision.
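The attention profiles discussed in this rebuttal (constant, truncated at $K$; diminishing, truncated at $\tau$; and the intermediate log cutoff) can be written down concretely. A minimal illustrative sketch; the exact scores used in the paper's simulations are not reproduced here, and the log-cutoff form below is one plausible reading:

```python
import numpy as np

def truncated_attention(n, k):
    """Flat attention over the top-k slots: r_1 = ... = r_k = 1, rest 0.
    Covers both the 'constant' profile (k = K) and the 'diminishing'
    profile (k = tau) described in the rebuttal."""
    r = np.zeros(n)
    r[:k] = 1.0
    return r

def log_cutoff_attention(n, k):
    """An intermediate, slowly decaying profile: 1/log2(rank + 1) for the
    top-k slots, 0 afterwards (an assumed reading of 'log cutoff')."""
    r = np.zeros(n)
    r[:k] = 1.0 / np.log2(np.arange(2, k + 2))
    return r
```

The flat profile gives every recommended item equal weight, while the log profile interpolates between slow decay and a hard drop-off, matching the rebuttal's description of it as an intermediate setting.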
Summary: The paper is attempting to answer: How do group strategies among content creators impact recommendation systems, specifically focusing on content distribution and user welfare? The paper examines how content creator groups impact recommendation systems, contrasting with individual creator behavior. Particularly, they show that large groups can significantly harm user welfare, especially with exposure-based rewards. Furthermore, the authors quantify inefficiency, showing the price of anarchy (PoA) can be arbitrarily large with exposure rewards but is bounded with engagement rewards. They argue and demonstrate that engagement-based rewards better mitigate negative group effects and improve user welfare. Empirical results from simulations further support the effectiveness of the user engagement rewarding mechanism. Claims And Evidence: The paper provides theoretical arguments and examples (like the TvN game) to show that group behavior can significantly alter content distribution and user welfare, especially with exposure rewards. This is supported by mathematical formulations and the concept of group equilibria. The argument that engagement rewards are better for user welfare is supported by the PoA analysis and the simulation results, which show higher user welfare under engagement rewards. Methods And Evaluation Criteria: The game-theoretic framework provides a solid foundation for analysis, PoA offers a quantitative measure of inefficiency, and simulations validate the theoretical findings. The focus on user welfare and the comparison of reward mechanisms are central to the research questions. Therefore, the proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: I briefly looked at the formulations in the main paper. I do not see any obvious issues. Experimental Designs Or Analyses: The experimental designs and analyses are generally sound and appropriate. 
The simplifications and assumptions are acknowledged but could be discussed more thoroughly. Supplementary Material: No. Relation To Broader Scientific Literature: The TvN game was introduced by Yao et al. (2024a) to model the dilemma faced by creators in choosing between popular trends and niche topics. The paper uses the TvN game to illustrate the specific impact of group behavior on content distribution and user welfare. It shows how groups can lead to a significant deviation from the individual creator case, especially with exposure rewards. This provides a concrete example of the general theoretical findings. Essential References Not Discussed: The authors discuss related work mostly on game-theoretic aspects of recommendation systems. It would be interesting to also touch on prior empirical studies of creator behavior on content platforms. Other Strengths And Weaknesses: Strengths: * The paper is generally well-structured with examples that help to clarify the concepts and arguments. * The combination of theoretical results and simulations strengthens the paper's claims. * The paper clearly demonstrates the advantages of engagement rewards in mitigating negative impacts of group behavior. * The paper makes a significant contribution by shifting the focus from individual creator strategies to group strategies in recommendation systems. Weaknesses: * The paper primarily uses synthetic data for simulations. Including empirical validation with real-world data from online platforms would further strengthen the claims and demonstrate the practical relevance of the findings. * The model relies on certain simplifications and assumptions (e.g., relevance function, user attention scores, specific game setups). The paper could discuss the limitations of these assumptions and their potential impact on the results. 
Other Comments Or Suggestions: N/A Questions For Authors: In the simulations, were the differences in user welfare between exposure and engagement rewards statistically significant? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: >The paper primarily uses synthetic data for simulations. Including empirical validation with real-world data from online platforms would further strengthen the claims and demonstrate the practical relevance of the findings. Thank you for the thoughtful suggestion. We will further strengthen the practical relevance and credibility of our results by using a more general $p$ based on a Zipf-like or power-law distribution, where $p_j \propto \frac{1}{j^\alpha}$ with $\alpha > 1$. Such distributions are well-documented in real-world platforms and capture the skewed nature of user preferences [1][2][3]. Our current setup already follows a similarly skewed pattern, and we will clarify this in the revision. The additional results under this new $p$ yield similar results and insights, further supporting our conclusions. We present results for two different $\alpha$ values below. [1] Chowdhury et al. Popularity growth patterns of youtube videos-a category-based study. [2] Cameron, S. Zipf’s Law across social media. [3] Mosaic Ventures. The creator economy: a power law. https://www.mosaicventures.com/patterns/the-creator-economy-a-power-law. $\alpha=1.1$ |Group Size|Exp|Eng| |-:|:-:|:-:| |10|4.72±0.04|4.81±0.03| |15|4.65±0.05|4.79±0.04| |20|4.59±0.07|4.81±0.02| |25|4.52±0.08|4.82±0.03| |30|3.26±0.02|4.77±0.01| $\alpha=1.8$ |Group Size|Exp|Eng| |-:|:-:|:-:| |10|4.85±0.02|4.89±0.02| |15|4.84±0.02|4.90±0.01| |20|4.73±0.05|4.89±0.01| |25|4.70±0.03|4.89±0.01| |30|3.75±0.01|4.88±0.01| >The model relies on certain simplifications and assumptions (e.g., relevance function, user attention scores, specific game setups). The paper could discuss the limitations of these assumptions and their potential impact on the results. We appreciate your valuable suggestion. Our model does make several stylized assumptions as many works in this line also do (Jagadeesan et al., 2023, Hu et al., 2023, Yao et al. 2024a;b). 
However, we do not see it as a major weakness but rather a necessary simplification to better unveil the theoretical insights. Below, we clarify the rationale and limitations associated with our assumptions: 1. **Relevance function**: The results in Section 4 are based on the dot-product relevance score, which is for simplicity of presentation; our results also hold for other reasonable relevance functions (e.g., $\sigma$ depending on $||s - x||$). Our general results—Theorems 5.3, 5.4, and 5.5—do not rely on specific assumptions about the form of the relevance function. 2. **User attention scores**: These two types—constant and diminishing—represent common user behaviors corresponding to slow-decay and rapid drop-off attention, respectively. 3. **Specific game setups**: The bandits $C^3$ game simulates the fundamental scenario where every creator must select a topic to create content. The TvN game, where the user distribution is unbalanced, is recognized as a good example for worst-case analysis (Yao et al., 2024a). While generalizing Section 4's results to broader environments is a meaningful and challenging direction for future work, we highlight that Section 5 already addresses the general case: it makes no orthogonality assumptions about user population vectors and does not constrain the parameter $p$. We will include a dedicated discussion about the limitations and potential impact of these assumptions in our revision. >In the simulations, were the differences in user welfare between exposure and engagement rewards statistically significant? Yes, as shown in Figure 1, engagement rewards consistently maintain higher user welfare, while exposure rewards lead to a notable decline. Each experiment is run for 5 trials to reduce randomness. We also emphasize that our simulations do not consider the worst-case group behavior under exposure reward. 
Under such behavior, user welfare could be even lower than reported. Under an alternative initialization where all creators are initialized around users 2–10, the resulting user welfare under exposure rewards is worse. We present the empirical results here: |Group Size|Exp|Eng| |:-:|:-:|:-:| |10|4.20±0.07|4.53±0.21| |15|4.19±0.12|4.54±0.10| |20|4.25±0.17|4.69±0.03| |25|4.13±0.22|4.81±0.10| |30|3.80±0.04|4.95±0.01| Furthermore, it is important to note that even small improvements in user welfare can have meaningful consequences in practice. For example, platforms like TikTok serve billions of content impressions daily. Thus, marginal gains in user welfare—achieved through better reward design—can translate into substantial improvements in user satisfaction, engagement, and platform revenue. --- Rebuttal Comment 1.1: Comment: I will keep my rating. Thanks for the response
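For reference, the Zipf-like preference vector $p_j \propto \frac{1}{j^\alpha}$ used in the additional experiments above is straightforward to construct. A short illustrative sketch (not the authors' code; the normalization is assumed):

```python
import numpy as np

def zipf_preferences(n_users, alpha):
    """Normalized user-preference vector with p_j proportional to 1/j^alpha."""
    raw = 1.0 / np.arange(1, n_users + 1) ** alpha
    return raw / raw.sum()
```

Larger `alpha` concentrates more probability mass on the most popular user type, so `alpha = 1.8` is a more skewed setting than `alpha = 1.1`, consistent with the two result tables reported in the rebuttal.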
Improving Generalization with Flat Hilbert Bayesian Inference
Accept (poster)
Summary: This paper proposes a sharpness-aware version of Stein variational gradient descent and applies it to neural network fine-tuning. The goal is to learn a map (in a RKHS) that transforms from the reference distribution to the true posterior by minimizing the KL divergence. To motivate the sharpness-aware optimization for Bayesian inference, the authors prove a generalization bound, which upper bounds the true KL divergence by a worst-case KL divergence on the training data in a small neighborhood of the RKHS. The algorithm generally resembles the idea of Foret et al. (2021), but the difference is that the optimization is carried out in the RKHS by functional gradient descent. The authors evaluate the proposed method on neural network fine-tuning, where they demonstrate improved accuracy and uncertainty estimates (measured by the ECE score). Claims And Evidence: I find that the claims (both theoretical and empirical) in the paper are generally well supported. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: I skimmed through the proof of Theorem 4.2, but I did not check the proof of Theorem 4.3. The authors claim that these results are non-straightforward extensions of prior results of Foret et al. (2021) because RKHS are typically infinite dimensional. However, the proof seems to be a simple combination of existing results. They first construct a two-layer neural network that approximates the RKHS function, and then the theorem is proved by invoking the results of Foret et al. (2021). Experimental Designs Or Analyses: I read through the experiments in Section 5. Supplementary Material: Yes, but I only skimmed through Section A (the proofs). Relation To Broader Scientific Literature: This paper is a sharpness-aware variant of SVGD (Liu and Wang, 2016) obtained by adapting the idea of SAM (Foret et al., 2021). 
When applied to training neural networks, the method inherits the benefits of both worlds---better generalization and better uncertainty estimates. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: 1. The method has large memory consumption because it needs to maintain several copies of the neural network parameters. Thus, the authors have focused on fine-tuning as opposed to training from scratch. I assume that it is very hard to apply this method to training neural networks from scratch. But that is a common limitation of all particle-based Bayesian inference methods. 1. The writing is of high quality. 1. The experiments are comprehensive across many datasets and baselines. Experiments demonstrate that the proposed method improves not only the accuracy but also uncertainty estimates. Additional ablation experiments demonstrate that the proposed method indeed yields flatter solutions. Other Comments Or Suggestions: 1. The equations in Algorithm 1 are hard to parse because of the line breaks. Maybe use a double-column algorithm environment. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful response. As rightly pointed out, we’ve acknowledged in the *Limitations* section that FHBI, like other particle-based Bayesian inference methods, shares the common drawback of requiring multiple models to be retained during training. This makes it less practical for training from scratch. However, FHBI remains a strong fit for fine-tuning scenarios, where the trainable components are typically lightweight. Looking ahead, an exciting direction for future work is to bring the sharpness concept in RKHS to recent variational inference (VI) methods, which do not suffer from the same memory overhead. This could offer a compelling approach in enhancing performance and robustness, while maintaining memory efficiency.
Summary: The paper introduces a novel Bayesian inference method designed to improve generalization by leveraging functional sharpness-aware particle sampling in RKHS. The key innovation lies in combining sharpness-aware minimization (SAM) with SVGD in infinite-dimensional RKHS to form the proposed FHBI. The authors extend theoretical generalization bounds from finite-dimensional Euclidean spaces to infinite-dimensional functional spaces. Empirical evaluations on the VTAB-1K benchmark show that FHBI outperforms several SAM- and SVGD-based baselines and their combinations. ## update after rebuttal Thanks to the authors for the clarification. After reading the initial response to my questions, as well as the other reviews and replies in general, I will keep my score. Claims And Evidence: I think the claims made in the paper are clear, and supported by both theoretical proofs and experimental results. Methods And Evaluation Criteria: The methodology builds upon: 1) generalization theory in RKHS, extending flatness-based optimization (SAM) beyond Euclidean settings. 2) particle optimization of Stein Variational Gradient Descent (SVGD). The idea of combining SAM and SVGD in RKHS is very clear and makes sense to me. Theoretical Claims: The theoretical contributions are significant: 1. Theorem 4.2 extends Euclidean space generalization bounds to RKHS. 2. Theorem 4.3 bridges functional sharpness minimization with Bayesian inference. This establishes a connection between empirical and population KL loss. I haven’t carefully checked every mathematical detail, but the proof appears correct. Experimental Designs Or Analyses: The experimental design is strong. It evaluates FHBI on the VTAB-1K classification tasks and compares it with several baselines, including SVGD, SAM-based methods, SGLD, SVGD+SAM, SGLD+SAM, and deep ensembles. Moreover, it conducts a wide ablation study about the number of particles, sharpness, and gradient diversity to further demonstrate the validity of FHBI. 
Supplementary Material: The supplementary material includes proof details, experimental setting and additional results. Relation To Broader Scientific Literature: The paper is situated within the fields of Bayesian deep learning (variational methods, particle-based sampling), sharpness-aware optimization (SAM), and kernel methods in function space (RKHS). Essential References Not Discussed: I think the paper sufficiently cites prior related work. Other Strengths And Weaknesses: Strengths: 1. The idea of combining SAM with SVGD in RKHS to improve generalization is novel and generally makes sense. 2. Novel theoretical extension of generalization bounds in Euclidean space to the function space. 3. Strong empirical performance across diverse benchmarks. Weaknesses: 1. I'm a little confused about the relationship between FHBI and SAM. It is not clearly demonstrated in the objective function. 2. Some notation is confusing, e.g., general posterior and population posterior. Are they referring to the same thing? 3. The runtime comparison in Fig. 3 only with SVGD. Other Comments Or Suggestions: There is a typo in the line 204: the change of variable $q(T^{-1}(\theta))$ should be $q(T^{-1}(\vartheta))$ Questions For Authors: 1. How does FHBI compare computationally to other baselines in terms of runtime and memory overhead besides SVGD? 2. As you conduct the ablation study in Sec.6 to compare the particle sharpness of FHBI and SVGD, can FHBI and SAM be compared in detail using some metric to illustrate the advantages of particle-based sampling techniques? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback. We would like to address the concerns as follows: + **Regarding the relationship between FHBI and SAM:** As discussed at the end of Section 4, FHBI is a generalization of SAM to multiple model particles. This property is reflected more clearly in the update rules specified in the pseudocode of Algorithm 1 than in the objective function. In particular, in the case of $m=1$ particles, the kernel terms become constant; hence, the first step of Algorithm 1 becomes the ascent step of SAM, while the second step becomes the descent step of SAM. + **Regarding the notations:** Both terminologies in question refer to population loss. We thank the reviewer for highlighting this inconsistency and will fix this to "population loss" in the final revision to ensure consistency. + **Regarding the runtime comparison with other methods in Figure 5:** Firstly, it appears that the reviewer is referring to Figure 5 instead of Figure 3, since Figure 3 does not contain the comparisons with SVGD. Nevertheless, even though we compared FHBI with many baselines in the main experiments, we decided to ablate the runtime of FHBI by comparing it with SVGD, since SVGD is most directly related to FHBI. Moreover, in Figure 5, we cannot compare with SAM because the figure presents the runtime based on the number of particles, which is not suitable for deterministic, single-particle methods like SAM.
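The single-particle reduction described in this rebuttal can be made concrete: with $m = 1$ the kernel terms are constant and the two steps of Algorithm 1 collapse to the standard SAM ascent/descent update of Foret et al. (2021). A minimal numpy sketch of that special case (illustrative only; not the authors' FHBI implementation):

```python
import numpy as np

def sam_step(theta, grad_fn, rho=0.05, lr=0.1):
    """One SAM update: ascend to the worst-case perturbation on the
    rho-ball, then descend with the gradient taken at the perturbed point."""
    g = grad_fn(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent step
    g_adv = grad_fn(theta + eps)                  # sharpness-aware gradient
    return theta - lr * g_adv                     # descent step
```

With $m > 1$ particles, FHBI additionally weights these gradients by kernel terms between particles; when $m = 1$ those terms are constant and the update above is recovered.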
Summary: The paper introduces Flat Hilbert Bayesian Inference (FHBI), a novel algorithm designed to enhance generalization in Bayesian inference by extending principles from finite-dimensional Euclidean spaces to infinite-dimensional reproducing kernel Hilbert spaces (RKHS). FHBI employs an iterative two-step procedure involving adversarial functional perturbation and functional descent within RKHS, supported by a theoretical framework that analyzes generalization in infinite-dimensional spaces. Empirically, FHBI is evaluated on the VTAB-1K benchmark, which includes 19 diverse datasets across various domains, where it consistently outperforms nine baseline methods by significant margins, demonstrating its practical efficacy and potential for real-world applications. Claims And Evidence: Appropriate Methods And Evaluation Criteria: The evaluation method refers to the work of BayesTune, which I believe is acceptable in this work. Theoretical Claims: All proofs except those in the supplementary materials have been checked. It can be considered that there are no significant errors. Experimental Designs Or Analyses: The experimental designs and analyses are appropriate Supplementary Material: Part B. Additional Experiments and part C. Experimental Details have been reviewed Relation To Broader Scientific Literature: This paper is a combination of work on flat minimizers and particle-based Bayesian methods, and a detailed investigation has been conducted on the historical work of both. Essential References Not Discussed: None Other Strengths And Weaknesses: 1. Although four particles were chosen as the equilibrium point, the actual training time and resource consumption (e.g., GPU memory) have not been quantified. Particularly, the scalability on large models (e.g., ViT-L/16) has not been verified. 2. The sensitivity of the results to the length-scale of the RBF kernel was not analyzed. 3. 
Although the theory proposes "functional sharpness," the experiments only indirectly assess it through empirical metrics, without designing a direct measurement method for the RKHS space. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and would like to address the concerns as follows: + **Memory consumption and scalability with larger models:** All experiments were conducted using a single Tesla V100 GPU with 4 model particles. The memory usage and training time for the CIFAR-100, Caltech-101, and PatchCamelyon datasets are reported in the tables below. In line with the reviewer’s suggestion, we will incorporate this ablation study into the appendix of the final revision. VRAM consumption (MB): | Architecture | CIFAR100 | Caltech101 | Patch Camelyon | | ------------- | -------- | ---------- | -------------- | | ViT-B/16 | 12541 | 12539 | 12535 | | ViT-L/16 | 33426 | 33684 | 32126 | Training time (s/iter): | Architecture | CIFAR100 | Caltech101 | Patch Camelyon | | ------------- | -------- | ---------- | -------------- | | ViT-B/16 | 5.55 | 5.66 | 5.46 | | ViT-L/16 | 17.45 | 17.35 | 17.02 | + **Sensitivity to the RBF kernel length scale:** As noted in both the main paper and the appendix, we initially tune the length scale parameter $\sigma$ from the candidate set $\\{0.7, 1, 1.2\\}$. To further investigate sensitivity to this hyperparameter, we expand the range to $\\{0.1, 0.7, 1, 1.2, 2.5\\}$, where we additionally include a small value of $\sigma=0.1$ and a large value of $\sigma=2.5$. The results on the Natural datasets, reported below, indicate that the performance remains robust within a reasonable range. We also observe a slight degradation when using extremely small values (e.g., $\sigma = 0.1$), where the model tends to overfit, or large values (e.g., $\sigma = 2.5$), where the model tends to underfit. 
| $\sigma$ | CIFAR100 | Caltech101 | DTD | Flowers102 | Pets | SVHN | Sun397 | | -------- | -------- | --------- |-------- |-------- |-------- |-------- |-------- | | 0.1 | 72.1 | 91.6 | 74.0 |97.9 |90.2 |85.3 |52.2 | | 0.7 | 73.8 | 91.8 | 73.3 |98.7 |92.4 |86.7 |56.1 | | 1 | 73.6 | 92.7 |72.7 | **99.1** |91.9 | **87.3** |54.3 | | 1.2 | **74.1** | **93.0** | **74.3** |98.3 | **92.4** |86.4 | **56.5** | | 2.5 | 69.2 | 90.9 |69.4 |97.5 |90.9 |84.6 |52.6 | + **Measurement of sharpness on the RKHS**: Since the transportation function $f$ governs the updates of the particles, its sharpness in the RKHS governs the sharpness of individual particles. For this reason, we decide to report the sharpness of each particle, as it more directly correlates with the model’s predictive behavior and generalization ability. As demonstrated in the ablation studies, reducing the sharpness in the RKHS leads to a corresponding reduction in the sharpness of every particle, thereby improving the generalization ability of the ensemble.
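The length-scale sensitivity reported above has a simple mechanistic reading: with an RBF kernel, a very small $\sigma$ makes the particle interaction matrix nearly diagonal (particles barely interact, risking overfitting), while a very large $\sigma$ makes it nearly constant (particles are over-coupled, risking underfitting). A small sketch, assuming the standard RBF parameterization rather than the paper's exact one:

```python
import numpy as np

def rbf_kernel(particles, sigma):
    """Pairwise RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    diffs = particles[:, None, :] - particles[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

particles = np.array([[0.0], [1.0], [2.0]])  # three 1-D "model particles"
K_small = rbf_kernel(particles, sigma=0.1)   # nearly the identity matrix
K_large = rbf_kernel(particles, sigma=2.5)   # all entries close to 1
```

This matches the rebuttal's observation that mid-range values of $\sigma$ perform best: they keep the kernel matrix neither degenerate-diagonal nor degenerate-constant.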
TimeBridge: Non-Stationarity Matters for Long-term Time Series Forecasting
Accept (poster)
Summary: This paper proposes a novel long-short-term representation modeling approach for handling non-stationarity in multivariate time series. For short-term representation, the integrated attention mechanism is employed to model temporal dependencies. For long-term representation, the cointegrated attention mechanism captures inter-channel dependencies while preserving cointegration relationships. Overall, it provides a novel perspective on modeling non-stationary multivariate time series.

Claims And Evidence: This claim aligns well with classical time series analysis and is supported by existing literature.

Methods And Evaluation Criteria: The experimental setup is comprehensive and fair, covering mainstream datasets and models in multivariate time series forecasting.

Theoretical Claims: No relevant theoretical claims are made in this paper.

Experimental Designs Or Analyses: The experiments are well-designed, covering standard datasets and baselines. However, the choice of loss function (L1 loss) warrants further analysis, as it may involve unfair comparisons.

Supplementary Material: The supplementary materials provide detailed implementation specifics.

Relation To Broader Scientific Literature: The paper's motivation aligns well with classical time series theories. These concepts have been well-established in prior research.

Essential References Not Discussed: The literature review is thorough.

Other Strengths And Weaknesses: **Strengths**: While traditional theories on non-stationarity are well-established, many recent deep learning-based approaches either overlook or only partially incorporate them, limiting their effectiveness. This paper successfully integrates these classical principles into deep learning models, demonstrating strong empirical performance, which provides valuable insights for the field. **Weaknesses**: As mentioned, the loss function design (L1 loss) lacks sufficient discussion on its contribution to performance improvements.

Other Comments Or Suggestions: The font size in Table 17 could be increased for better readability.

Questions For Authors: Could the authors provide further insights into the rationale behind their loss function choice and its comparative effectiveness? If all methods were evaluated under the same loss function, would the proposed approach still achieve significant improvements? Clarifying this point is crucial to distinguishing whether the performance gains stem from handling non-stationarity or from the loss function itself.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your constructive feedback and valuable insights into our work. Below, we address your concerns:

**Q1:** The loss function lacks sufficient discussion on its contribution to performance improvements.

**A1:** We conducted additional experiments where TimeBridge was trained using the **MSE loss**, denoted as "TimeBridge (MSE loss)", and compared the results with the best baseline, DeformableTST. Regardless of the loss function, TimeBridge outperforms DeformableTST in most datasets (See Table A), highlighting the effectiveness of our framework. We will incorporate the full results in the revision of our paper.

**Table A: Average results of different prediction length {$96, 192, 336, 720$} with MSE Loss and Hybrid MAE Loss**

||TimeBridge|TimeBridge (MSE loss)|DeformableTST|
|-|-|-|-|
|ETTm1|**0.344**/**0.379**|0.353/0.388|0.348/0.383|
|ETTm2|**0.246**/**0.310**|**0.246**/**0.310**|0.257/0.319|
|ETTh1|**0.397**/0.424|0.399/**0.420**|0.404/0.423|
|ETTh2|0.341/0.382|0.346/0.394|**0.328**/**0.377**|
|Weather|0.219/**0.249**|**0.218**/0.250|0.222/0.262|
|Electricity|**0.149**/**0.245**|**0.149**/0.246|0.161/0.261|
|Traffic|**0.360**/0.255|**0.360**/**0.252**|0.391/0.278|
|Solar|**0.181**/0.239|**0.181**/**0.238**|0.185/0.254|
|Climate|1.057/0.494|**1.052**/**0.486**|1.060/0.496|

**Q2:** The font size in Table 17 could be increased for better readability.

**A2:** Thank you for the suggestion. We will adjust the font size of the large tables in the revised version to enhance readability.

**Q3:** Could the authors provide further insights into the rationale behind their loss function choice and its comparative effectiveness? If all methods were evaluated under the same loss function, would the proposed approach still achieve significant improvements? Clarifying this point is crucial to distinguishing whether the performance gains stem from handling non-stationarity or from the loss function itself.
**A3:** From Table A above, it is clear that the hybrid MAE loss improves TimeBridge's performance on four ETT datasets. The ETT datasets have relatively few channels ($C=7$), which limits TimeBridge’s ability to utilize the Cointegrated Attention module to capture cointegration information. In fact, we only use the Integrated Attention to model ETT datasets (see Table 8 in our paper). Additionally, the **ETT data exhibits a high degree of random fluctuation, which is better modeled using a loss function that weighs both time and frequency components.** Hence, the hybrid MAE loss strengthens the modeling of short-term dependencies in such datasets. On datasets with more channels (e.g., Electricity, Traffic, Solar), the choice of loss function has minimal impact, as both losses yield similar results. We will incorporate this analysis in the revised version of our manuscript.

We are deeply grateful for your recognition of our work and your constructive feedback. If you have any further questions or suggestions, we would be more than happy to address them.

---

Rebuttal Comment 1.1: Comment: Thanks for your new results. My concerns have been addressed, and I will maintain my original score.

---

Reply to Comment 1.1.1: Comment: Thank you again for your valuable comments and for acknowledging the additional experimental results. We truly appreciate your positive recognition of our work.
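For concreteness, the kind of hybrid time-and-frequency loss described in A3 can be sketched as follows. This is a hypothetical minimal form, not TimeBridge's actual implementation: the weight `lam` and the use of FFT amplitudes are assumptions, since the thread does not state the exact formula.

```python
import numpy as np

def hybrid_mae_loss(pred, true, lam=0.5):
    # Hypothetical hybrid MAE: a time-domain MAE term plus an MAE on FFT
    # amplitudes. `lam` and the amplitude form are illustrative assumptions,
    # not the paper's exact loss.
    time_term = np.mean(np.abs(pred - true))
    freq_term = np.mean(np.abs(np.abs(np.fft.rfft(pred, axis=-1))
                               - np.abs(np.fft.rfft(true, axis=-1))))
    return time_term + lam * freq_term

# Toy check on a sine target: the loss vanishes for a perfect prediction and
# penalises both time-domain and spectral deviations otherwise.
t = np.linspace(0.0, 1.0, 96)
target = np.sin(2 * np.pi * 4 * t)
assert hybrid_mae_loss(target, target) == 0.0
assert hybrid_mae_loss(target + 0.1, target) > 0.0
```

With `lam=0.0` the sketch reduces to a plain L1 loss, which makes the ablation the reviewer asks about straightforward to run.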
Summary: This study focuses on the problem of time series forecasting (TSF) by addressing the dual challenge of integrating both long-term and short-term relationships. The proposed TimeBridge employs a dual-attention framework: initial patch-level integrated attention for short-term pattern analysis and cointegrated attention to model long-term temporal correlations. The practical performance of TimeBridge has been validated across various real-world datasets.

Claims And Evidence: The claim aligns with previous work in TSF.

Methods And Evaluation Criteria: The experimental datasets and evaluation metrics employed in this study strictly follow the most prevalent standards established in the TSF field.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The experimental design adheres to the most widely adopted baselines in TSF, ensuring fairness.

Supplementary Material: The supplementary material provides more detailed content.

Relation To Broader Scientific Literature: Non-stationarity represents a crucial aspect of series modeling, and this article possesses the potential to exert a broader impact on related scientific endeavors.

Essential References Not Discussed: The most relevant works have already been encompassed.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-motivated. It introduces a novel approach and a new perspective from classical time series analysis for addressing non-stationary time series data.
2. This paper is well written, with well-defined mathematical notations.
3. The experiment is comprehensive, featuring extensive ablation studies that systematically validate the contribution of each component.

Weaknesses:
1. As indicated in Table 6, the sequential selection strategy of CI and CD significantly impacts prediction performance on the Electricity and Traffic datasets, yet this phenomenon remains insufficiently explained in the text.
2. The manuscript lacks computational complexity analysis, making it difficult to assess the model's efficiency and memory requirements.

Other Comments Or Suggestions: While the study represents an effort in modeling cointegration through deep learning, the architectural design of the proposed modules appears relatively simplistic and could benefit from further sophistication.

Questions For Authors: I am particularly interested in whether the financial experiments have been implemented in real-world trading scenarios. Could the authors provide some insights into the practical deployment?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your feedback and insightful comments on our work. Below, we address each of your concerns and questions:

**Q1:** As indicated in Table 6, the sequential selection strategy of CI and CD significantly impacts prediction performance on Electricity and Traffic datasets, yet this phenomenon remains insufficiently explained in the text.

**A1:** We believe that modeling short-term dependencies first is crucial because these local temporal features are essential for accurate short-term forecasting. **By using Integrated Attention (CI) to capture short-term dependencies before Cointegrated Attention (CD) models long-term relationships, we ensure that important immediate patterns are not lost, which significantly improves performance.** A similar "local first, global later" modeling strategy is also used in other works [1].

**Q2:** The manuscript lacks computational complexity analysis, making it difficult to assess the model's efficiency and memory requirements.

**A2:** We provide a detailed comparison of the theoretical complexity of our method with other mainstream Transformer-based models. The table below summarizes the theoretical computational complexity, where $C$ is the number of channels, $I$ and $O$ are the input and output lengths, and $S$ is the patch length:

|TimeBridge|iTransformer|PatchTST|Crossformer|FEDformer|
|-|-|-|-|-|
|$O(\max(C\cdot(\frac{I}{S})^2,C^{3/2}))$|$O(C^2)$|$O(C\cdot(\frac{I}{S})^2)$|$O(\frac{C}{S^2}(I+O)^2)$|$O(I/2+O)$|

Moreover, theoretical complexity alone cannot fully capture real-world performance due to implementation differences. We tested on 2 NVIDIA 3090 GPUs, measuring training (1 epoch) and inference times for three datasets of increasing size, with results averaged over 5 runs. The results in the table below indicate that TimeBridge outperforms most Transformer-based models.
|||TimeBridge|iTransformer|PatchTST|Crossformer|FEDformer|
|-|-|-|-|-|-|-|
|ETTm1 ($C=7$)|Training Time|72s|65s|73s|280s|449s|
||Inference Time|33s|31s|39s|48s|62s|
|Electricity ($C=321$)|Training Time|252s|89s|450s|328s|510s|
||Inference Time|125s|78s|141s|116s|130s|
|Traffic ($C=862$)|Training Time|409s|175s|649s|360s|485s|
||Inference Time|175s|154s|207s|171s|196s|

**Q3:** While the study represents an effort in modeling cointegration through deep learning, the architectural design of the proposed modules appears relatively simplistic and could benefit from further sophistication.

**A3:** We would like to clarify that our method was **intentionally designed with a simple, intuitive structure to address the challenges of non-stationarity and dependency modeling effectively**:

- **Integrated Attention** focuses on capturing short-term dependencies while mitigating spurious regressions, using a streamlined approach that normalizes the attention map to remove non-stationary components.
- **Cointegrated Attention** preserves non-stationarity to model long-term cointegration relationships between variables.
- **Patch Downsampling** bridges the two blocks, allowing the Integrated Attention block to process aggregated short-term information, which feeds into the Cointegrated Attention block for long-term modeling.

We further provide a theoretical analysis of the modeling rationale behind these design choices, which can be found in Reviewer 1BTh’s Answer 1 (**A1**).

**Q4:** I am particularly interested in whether the financial experiments have been implemented in real-world trading scenarios? Could the authors provide some insights into the practical deployment?

**A4:** Thank you for your interest in the practical deployment of our model. Our financial experiments were conducted using the **Qlib** [2] platform for historical backtesting, with simulated trading costs to approximate real-world scenarios. While real-world deployment would need to address challenges such as latency and liquidity, we have not yet integrated with live trading systems due to data privacy and regulatory constraints.

[1] MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting (ICLR 2023)
[2] https://github.com/microsoft/qlib

Thank you for your valuable feedback. If you have any further questions or need clarification, feel free to follow up.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors for thoroughly addressing the previous concerns. Given the financial focus of this work, I would like to request further clarification. The paper states that cointegration relationships are modeled across stocks. Could the authors explicitly clarify which aspect of the stocks (e.g., raw price trajectories, log-returns, or other derived signals) the cointegration mechanism primarily operates on?

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s acknowledgment of our previous responses. The core focus of our cointegration modeling is the raw price trajectories of stocks, **as their non-stationary nature aligns with the prerequisites for cointegration analysis. Long-term equilibrium relationships between industries (e.g., the price level connection between the automobile and steel industries in cost-linked price movements) are directly reflected in price trends, rather than in short-term return fluctuations.** By analyzing the raw price series, we can effectively capture the co-movement patterns along the upstream and downstream supply chain, providing a basis for arbitrage strategies. In contrast, stationary return series would not yield cointegration relationships with meaningful economic interpretation. We will include this analysis in the revised version.
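The averaged-over-runs wall-clock protocol from A2 can be sketched as below. The workload here is a stand-in channel-attention computation in numpy (the `O(C^2)` pattern the complexity table attributes to iTransformer), not TimeBridge itself; all names and sizes are illustrative assumptions.

```python
import time
import numpy as np

def timed(fn, repeats=5):
    # Average wall-clock time over `repeats` runs, mirroring the
    # averaged-over-5-runs measurement protocol described in A2.
    elapsed = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        elapsed.append(time.perf_counter() - t0)
    return sum(elapsed) / len(elapsed)

def channel_attention(C=321, d=64, seed=0):
    # Stand-in workload: C x C channel-attention weights with a row softmax.
    # C=321 matches the Electricity channel count quoted in the table.
    rng = np.random.default_rng(seed)
    Q = rng.normal(size=(C, d))
    K = rng.normal(size=(C, d))
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

avg_seconds = timed(channel_attention)
print(f"avg over 5 runs: {avg_seconds:.4f}s")
```

As the rebuttal notes, such wall-clock numbers depend on implementation details, so they complement rather than replace the asymptotic comparison.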
Summary: This paper introduces TimeBridge, a methodological framework addressing non-stationarity in long-term time series forecasting. The proposed approach is structured around two core mechanisms: Integrated Attention, which seeks to mitigate short-term non-stationarity by stabilizing localized variations, and Cointegrated Attention, designed to retain and model long-term non-stationary dependencies. Empirical evaluations across multiple benchmark datasets suggest that TimeBridge performs competitively relative to state-of-the-art (SOTA) forecasting models.

## update after rebuttal

After carefully reading through all the reviewers' comments, I believe the strong performance of this work across multiple dimensions is truly commendable. Accordingly, I have raised my score, with the expectation that the authors will revise and improve the identified concerns in the camera-ready version.

Claims And Evidence: The manuscript presents claims that are only partially substantiated by empirical results, and several key limitations emerge:

- The proposed framework purports to be an effective mechanism for handling non-stationarity; however, it omits direct comparisons with seasonal-trend decomposition methods [1][2], which are widely recognized as robust techniques for handling non-stationary components in time series data. Moreover, the definitions of stationary and non-stationary in the paper seem to be mere rebranding of trend and seasonal components.
- The methodological novelty of the work appears to be constrained. The proposed architecture largely constitutes a reconfiguration of existing methodologies [3][4][5][6], rather than introducing a fundamentally novel paradigm.
- While the reported experimental results indicate performance improvements, the magnitude of these improvements is relatively modest and does not conclusively establish the superiority of TimeBridge over existing approaches.

[1] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting.
[2] First De-Trend then Attend: Rethinking Attention for Time-Series Forecasting.
[3] Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting
[4] Fredformer: Frequency Debiased Transformer for Time Series Forecasting
[5] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
[6] UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting

Methods And Evaluation Criteria: The selection of benchmark datasets and evaluation metrics (MSE, MAE) aligns with standard practices in long-term time series forecasting. However, the study would benefit from:

- **Explicit comparisons with seasonal-trend decomposition methods**, given their proven efficacy in handling non-stationarity.
- A more rigorous ablation study to disentangle the individual contributions of Integrated and Cointegrated Attention, ensuring a precise understanding of their respective roles in mitigating non-stationarity.

Theoretical Claims: The manuscript does not provide formal mathematical proofs. Although it references cointegration principles, it lacks a rigorous theoretical justification for why the proposed attention-based mechanisms offer an optimal means of capturing non-stationary dependencies. The absence of a formal derivation raises concerns regarding the theoretical soundness of the proposed approach.

Experimental Designs Or Analyses: The experimental design and results were examined, revealing the following limitations:

- The reported performance gains are insufficiently substantial to position TimeBridge as a transformative advancement over existing methodologies.

Supplementary Material: Yes, the supplementary materials were reviewed, particularly the dataset descriptions and additional ablation studies. However, the lack of comparative analysis with decomposition-based methods remains a fundamental oversight that weakens the empirical claims.

Relation To Broader Scientific Literature: The study contributes to time series forecasting by integrating non-stationarity mitigation techniques into an attention-based framework. However, several key concerns remain unaddressed:

- Seasonal-trend decomposition methods have long been established as effective solutions for handling non-stationarity, yet the paper does not explicitly contrast its approach with these techniques.
- The methodology and structure of TimeBridge bear strong resemblance to several prior works, including [3][4][5][6]. The extent to which TimeBridge provides a substantive departure from these works remains unclear.

Essential References Not Discussed: Yes, the study should provide an in-depth discussion of:

- **Seasonal-trend decomposition techniques** to contextualize the proposed approach within established non-stationarity handling frameworks.
- **Recent advancements in hybrid modeling approaches**, which also address long-term dependencies in time series forecasting.

Other Strengths And Weaknesses:

- **Strengths**: The paper correctly identifies non-stationarity as a critical issue and attempts to address both short-term and long-term dependencies within a unified forecasting framework.
- **Weaknesses**: The degree of methodological novelty is limited, as the proposed framework appears to primarily constitute a recombination of existing techniques rather than a fundamentally new paradigm. Furthermore, the lack of direct comparisons with seasonal-trend decomposition models significantly undermines the empirical validation of the proposed approach.

Other Comments Or Suggestions:

- The theoretical grounding of Integrated and Cointegrated Attention should be more explicitly justified within the context of existing time series modeling literature.
- The ablation study should provide greater granularity in assessing the distinct contributions of each proposed component.

Questions For Authors:

1. How does TimeBridge substantively differentiate itself from existing attention-based time series forecasting models?
2. Would a decomposition-based or frequency-domain approach offer comparable performance with potentially greater computational efficiency?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and insightful comments on our work. Below, we address your questions:

**Q1:** It omits direct comparisons with seasonal-trend decomposition methods.

**A1:** We add a comparison with seasonal-trend decomposition methods (Autoformer [1] and TDformer [2]). The table below shows the average prediction results across all datasets. Full results will be included in the revised paper.

||TimeBridge|Autoformer|TDformer|
|-|-|-|-|
||MSE/MAE|MSE/MAE|MSE/MAE|
|ETTm1|**0.344**/**0.379**|0.588/0.517|0.380/0.406|
|ETTm2|**0.246**/**0.310**|0.324/0.368|0.267/0.325|
|ETTh1|**0.397**/**0.424**|0.496/0.487|0.486/0.479|
|ETTh2|**0.341**/**0.382**|0.453/0.462|0.378/0.411|
|Weather|**0.219**/**0.249**|0.338/0.382|0.236/0.271|
|Electricity|**0.149**/**0.245**|0.227/0.338|0.177/0.282|
|Traffic|**0.360**/**0.255**|0.628/0.379|0.570/0.322|
|Solar|**0.181**/**0.239**|0.340/0.380|0.208/0.251|
|Climate|**1.057**/**0.494**|1.456/0.584|1.364/0.626|

**Q2:** The definitions of stationary and non-stationary in the paper seem to be mere rebranding of trend and seasonal components.

**A2:** While trend and seasonal components can be viewed as corresponding to non-stationary and stationary elements, previous works have mainly focused on modeling them within individual channels. They haven’t fully addressed the need to preserve non-stationarity across channels to capture long-term cointegration. **Our paper’s core insight is that short-term dependencies can be modeled within stationary channels, while long-term cointegration requires preserving the non-stationary relationships between variables.** Thus, we analyze non-stationarity’s distinct impacts on both intra- and inter-variable modeling, rather than just decomposing the sequence within channels as previous methods [1,2] have done.

**Q3:** The improvements are modest and don't definitively prove TimeBridge's superiority over existing methods.
**A3:** As shown in **Table 1**, TimeBridge consistently achieves superior performance in long-term forecasting, reducing MSE and MAE by 1.85%/2.49%, 5.56%/4.12%, and 13.66%/7.58% compared to DeformableTST, ModernTCN, and TimeMixer, respectively. Additionally, our model demonstrates exceptional performance in financial forecasting (**Table 3**), highlighting its ability to capture complex cointegration relationships in financial markets. These results confirm the effectiveness of our approach.

**Q4:** The methodology and structure of TimeBridge bear strong resemblance to several prior works, including [3,4,5,6].

**A4:** We design our model with a unique focus on non-stationarity in both short-term stability and long-term dependencies. **Integrated Attention stabilizes short-term dependencies by removing non-stationary features from the Query-Key pairs, while Cointegrated Attention emphasizes the necessity of preserving non-stationarity for modeling long-term cointegration relationships between channels.** In contrast, PatchTST [5] does not address non-stationarity and focuses solely on the temporal dimension, missing the rich cointegration relationships between variables. Crossformer [3] and UniTST [6] model short-term dependencies across and within channels but are theoretically susceptible to spurious regressions and lack explicit consideration of long-term cointegration. Fredformer [4], though utilizing frequency-domain attention to capture long-term signals, struggles with fine-grained temporal trends in the time domain, limiting its ability to model short-term dependencies effectively.

**Q5:** A more detailed ablation study is needed to clarify the individual contributions of Integrated and Cointegrated Attention in mitigating non-stationarity.

**A5:** We have conducted an ablation study on the impact and order of Integrated Attention and Cointegrated Attention in **Table 5** in our paper. The results indicate that both attention mechanisms must be used together, with Integrated Attention applied first, followed by Cointegrated Attention.

**Q6:** The theoretical grounding of Integrated and Cointegrated Attention should be more explicitly justified within the context of existing time series modeling literature.

**A6:** We have provided a more explicit theoretical justification for Integrated and Cointegrated Attention in Reviewer 1BTh’s Answer 1 (**A1**).

**Q7:** Would a decomposition-based or frequency-domain approach offer comparable performance with potentially greater computational efficiency?

**A7:** We compared TimeBridge with methods such as TimeMixer (decomposition-based), TimesNet (frequency-domain), and the newly added Autoformer and TDformer in **A1**, all of which demonstrate that TimeBridge consistently outperforms them. While decomposition-based or frequency-domain methods may offer comparable performance, we currently do not have conclusive evidence to support this.

We appreciate your insightful feedback. If you have any further questions or need clarification, feel free to follow up.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. While most of my concerns have been addressed, a few points remain:

1. Regarding Q1: TDformer and Autoformer are not the state-of-the-art STD-based methods. I recommend you compare and include Leddam [1] (ICML 2024), which employs a similar Channel-Independent (CI) and Channel-Dependent (CD) design as your work.
2. Regarding Q3: I noticed that you use a loss function different from other baselines—switching from the conventional MSE to MAE and incorporating both time and frequency domain information. In my reproduction, I observed that this loss function significantly impacts the results.
**However, the paper does not describe this loss function in detail or provide ablation experiments to isolate its effect.** This raises concerns that the observed improvements might largely stem from using a different loss function. I strongly recommend that you compare TimeBridge, PatchTST, ModernTCN, and Leddam under both the traditional MSE loss and the loss function you employ, along with a detailed explanation of its necessity and benefits.
3. Regarding Q6: **I strongly suggest incorporating the theoretical discussion on Integrated and Cointegrated Attention into the main text** to clarify its grounding in existing time series modeling literature.

**I am confident that the authors can address the aforementioned issues in the final version.**

After carefully reading through all the reviewers' comments, I believe the strong performance of this work across multiple dimensions is truly commendable. Accordingly, **I have raised my score, with the expectation that the authors will revise and improve the identified concerns in the camera-ready version.**

*A minor suggestion regarding Reviewer 2KKn's Q.1 on the justification of the CI and CD strategies:* I recommend referring to **“The Capacity and Robustness Trade-off”** [2] (TKDE), which offers valuable theoretical insights relevant to your discussion. One of the key findings of that work is as follows:

***The Channel Dependent (CD) strategy exhibits high capacity but low robustness, whereas the Channel Independent (CI) strategy has lower capacity but higher robustness. In many real-world, non-stationary time series characterized by distribution drifts, robustness tends to be more critical than capacity for achieving reliable forecasting performance.
As a result, the CI strategy often outperforms the CD strategy in practice.*** Incorporating a more detailed discussion of this Trade-off in the main paper could significantly strengthen your justification and highlight the robustness of your method under real-world conditions.

*[1] Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling.*
*[2] The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting.*

---

Reply to Comment 1.1.1: Comment: We are immensely grateful for your positive recognition of our work and the additional experimental results. Below are our responses to your new points:

---

**Q1:** Compare and include Leddam.

**R1:** Thank you for your suggestion. We compared TimeBridge with Leddam across four forecasting horizons, as shown in the table below. **Using the hyperparameter search strategy in Appendix E.1, we found that TimeBridge consistently outperforms Leddam, especially on datasets with more channels.** Additionally, our search results for Leddam are better than those reported in their paper for a fixed input length of 96, which aligns with their observation that extending the look-back window improves forecasting performance. We will include the complete results in the revised version.

||TimeBridge|Leddam|
|-|-|-|
||MSE/MAE|MSE/MAE|
|ETTm1|**0.344**/**0.379**|0.354/0.381|
|ETTm2|**0.246**/**0.310**|0.265/0.320|
|ETTh1|**0.397**/**0.424**|0.415/0.430|
|ETTh2|**0.341**/**0.382**|0.345/0.391|
|Weather|**0.219**/**0.249**|0.226/0.264|
|Electricity|**0.149**/**0.245**|0.162/0.256|
|Traffic|**0.360**/**0.255**|0.452/0.283|
|Solar|**0.181**/**0.239**|0.223/0.264|

---

**Q2:** Discussion of the loss function.

**R2:** We compared TimeBridge with the MSE loss in Reviewer yRZB’s **A1**, showing it still outperforms the best baseline, DeformableTST.
As detailed in Reviewer yRZB’s **A3**, **the hybrid MAE loss primarily enhances performance on the ETT dataset, where the limited channels hinder cointegration modeling.** Given the high degree of random fluctuation in ETT data, using a loss function that combines time and frequency weighting better captures its dynamics. **For datasets with more channels (e.g., Traffic and Solar), the impact of the loss function is minimal.** We will further include results for other baselines using this loss function.

---

**Q3:** Theoretical discussion on Integrated and Cointegrated Attention.

**R3:** We will provide a detailed theoretical discussion of the impact of Integrated and Cointegrated Attention on non-stationarity in the revised version.

---

**Q4:** Refer to “The Capacity and Robustness Trade-off” to justify the CI and CD strategies.

**R4:** We will refer to the findings in “The Capacity and Robustness Trade-off” to enrich the discussion of CI and CD strategies in the subsection on **Ablation on Integrated and Cointegrated Attention impact and order.**

---

Guided by your suggestions, we have deepened our understanding of the critical issues you highlighted, significantly improving the quality of our work. Thank you once again for your valuable feedback.
Summary: This paper introduces a new framework to tackle the challenges posed by non-stationarity in multivariate time series forecasting. It addresses both short-term fluctuations and long-term trends by employing two specialized attention mechanisms, i.e., Integrated Attention and Cointegrated Attention. The framework is validated with extensive experiments on real-world datasets, demonstrating superior performance in both short-term and long-term forecasting.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Not applicable

Experimental Designs Or Analyses: The overall experimental designs look great.

Supplementary Material: Checked the code but did not run it

Relation To Broader Scientific Literature: The authors have discussed the related work on Normalization and Dependency Modeling in time series forecasting.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
1. Addressing the dual challenge of non-stationarity in short-term stability and long-term dependencies is interesting, as traditional methods either neglect or insufficiently manage these aspects.
2. The framework has been rigorously tested across multiple datasets, demonstrating consistent state-of-the-art performance. The inclusion of financial forecasting datasets enhances the credibility of the results.

Weaknesses:
1. While the authors criticize some existing methods for incorporating non-stationary factors without a robust theoretical framework, the proposed method in this paper also lacks a rigorous theoretical foundation. A more detailed theoretical exploration could strengthen the framework's validity.
2. The novelty of the proposed work could be further justified. The use of attention mechanisms to manage correlations between patches or among variates is not entirely novel, as similar approaches have been explored in channel-dependent transformers. Highlighting distinct advantages or improvements over these methods would clarify the contribution of this work.
3. In short-term forecasting, where ARIMA is recognized as a strong baseline method, the authors might benefit from discussing its relative performance. This comparison could provide a clearer benchmark for evaluating the advantages of the proposed framework.

Other Comments Or Suggestions: See above

Questions For Authors: See above

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your insightful comments on our work. Below, we address each of your questions:

**Q1:** A more detailed theoretical exploration could strengthen the framework's validity.

**A1:** Thank you for your valuable comments. Below we provide a concise theoretical explanation grounded in classical stochastic processes, specifically Brownian motion, to clarify the rationale behind our design.

**Proposition 1: Spurious Attention from Non-Stationary Inputs**

Consider a standard Brownian motion $X_t \sim I(1)$, where $X_t = X_{t-1} + u_t,\quad u_t \sim \mathcal{N}(0, \sigma^2)$. Then we have:

$\text{Mean}(X_t)=0, \quad \text{Var}(X_t) = t\sigma^2,\quad \text{Cov}(X_{t_1}, X_{t_2}) = \min(t_1, t_2)\sigma^2$

Let two input patches of length $S$ be:

$p_i = [X_{t+i+1}, \dots, X_{t+i+S}],\quad p_j = [X_{t+j+1}, \dots, X_{t+j+S}]$

Their attention score $\text{score}(p_i, p_j)$ is approximated as:

$\text{score}(p_i, p_j) \propto p_i p_j^T \propto \sum_{s=1}^{S} (X_{t+i+s} X_{t+j+s}) \propto \sum_{s=1}^{S} \text{Cov}(X_{t+i+s}, X_{t+j+s}) \propto \sigma^2 \left(S\min(i, j) + \frac{S^2 + 2St + S}{2}\right)$

This score grows with both the time index $t$ and the square of the patch length $S$, leading to spurious attention caused by global trends rather than genuine short-term dependencies. **As shown in Figure 4(a), many patches tend to exhibit high attention scores.**

To mitigate this, we perform patch-wise detrending:

$p_i' = \text{Detrend}(p_i) = [\Delta X_{t+i}, \dots, \Delta X_{t+i+S}] \sim I(0), \quad \Delta X_t = X_t - X_{t-1}$

In this case, the variance becomes stable:

$\text{Var}(\Delta X_t) = \sigma^2,\quad \text{score}(p_i', p_j') \propto S\sigma^2$

This makes the attention mechanism focus on true short-term patterns, unaffected by long-term drift.
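Proposition 1 can be checked with a short simulation. This is an illustrative sketch (the patch length, time indices, and number of Monte-Carlo paths are arbitrary choices, and patch self-scores stand in for full Query-Key scores):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, S, runs = 1.0, 10_000, 32, 200

# `runs` independent I(1) random-walk paths X_t = X_{t-1} + u_t.
X = np.cumsum(rng.normal(0.0, sigma, size=(runs, T)), axis=1)

def self_score(x, start):
    # Average dot-product score of a length-S patch with itself.
    p = x[:, start:start + S]
    return float(np.mean(np.sum(p * p, axis=1)))

# Raw I(1) patches: the score grows with the time index t (spurious attention).
early_raw, late_raw = self_score(X, 100), self_score(X, 9000)
assert late_raw > 10 * early_raw

# First-differenced (detrended) patches: the score stabilises near S * sigma^2,
# independent of where the patch sits in the sequence.
dX = np.diff(X, axis=1)
early_d, late_d = self_score(dX, 100), self_score(dX, 9000)
assert abs(early_d - S) < 4 and abs(late_d - S) < 4
```

The raw-patch scores grow roughly linearly in the patch's time index, matching the $St$ term in the derivation, while the differenced scores hover around $S\sigma^2$.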
--- **Proposition 2: Importance of Non-Stationarity in Capturing Cointegration** Let $X_t, Y_t \sim I(1)$ be two non-stationary time series with a cointegration relationship: $Z_t = X_t - \beta Y_t \sim I(0)$ where $\beta$ is a constant coefficient. If we remove non-stationarity via detrending: $\Delta X_t = \text{Detrend}(X_t),\quad \Delta Y_t = \text{Detrend}(Y_t)$ Then: $ \Delta Z_t = \Delta X_t - \beta \Delta Y_t = \epsilon_t $ which is merely a **random noise sequence**: the level information carried by $Z_t$ is lost. This destroys the cointegration signal, making it impossible for attention mechanisms to capture long-term equilibrium relationships. **Figure 4(b) illustrates that removing non-stationary components eliminates the vast majority of cointegration information.** --- These two propositions together explain our architectural choice: - For **short-term modeling**, we eliminate non-stationarity to avoid spurious regressions. - For **long-term modeling**, we preserve non-stationarity to retain meaningful cointegration. We will incorporate this theoretical analysis in greater detail in the revision of our paper. **Q2:** Highlighting distinct advantages or improvements over these methods would clarify the contribution of this work. **A2:** **We design our model by focusing on the distinct characteristics of non-stationary data, specifically addressing short-term stability and long-term dependencies.** This insight leads to the adoption of simple yet effective strategies that align with the nature of non-stationary data. Unlike other attention methods that directly model short-term dependencies and are prone to spurious regressions, Integrated Attention stabilizes short-term modeling by removing non-stationary features from the Query-Key pairs. In contrast, Cointegrated Attention emphasizes the necessity of retaining non-stationary features for capturing long-term cointegration relationships, setting it apart from recent channel-dependent Transformer models.
These models either do not explicitly model long-term cointegration [1] or overlook it entirely [2]. [1] Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting [2] Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling **Q3:** ARIMA is a strong baseline for short-term forecasting; the authors should compare its performance to highlight the advantages of their framework. **A3:** Since ARIMA is for univariate forecasting and our PeMS datasets are multivariate, we compared TimeBridge with VARIMA, which is more appropriate for multivariate forecasting. The table below shows that TimeBridge outperforms VARIMA in most cases. This comparison will be updated in the revised paper. |Dataset|TimeBridge (MAE/MAPE/RMSE)|VARIMA (MAE/MAPE/RMSE)| |-|-|-| |PeMS03|14.52/**14.21**/23.10|**13.78**/17.95/**19.95**| |PeMS04|**19.24**/**12.42**/**31.12**|24.87/15.61/36.26| |PeMS07|**20.43**/**8.42**/**33.44**|26.00/11.21/37.67| |PeMS08|**14.98**/**9.56**/**23.77**|19.38/12.49/28.02| We sincerely appreciate your constructive feedback. If you have any further questions or need clarification, please feel free to reach out.
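Proposition 2 from A1 can likewise be checked numerically. The sketch below is our own synthetic illustration (not the paper's code), with $\beta = 1$: the levels of two cointegrated I(1) series stay tightly linked through the shared stochastic trend, while their first differences are only weakly correlated, so differencing discards the long-run equilibrium signal.

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta = 5000, 1.0

# Shared stochastic trend: Y_t is a random walk, i.e. I(1).
y = np.cumsum(rng.normal(size=T))
# X_t is cointegrated with Y_t: the residual Z_t = X_t - beta * Y_t
# is stationary noise, so the pair shares a long-run equilibrium.
z = rng.normal(scale=5.0, size=T)
x = beta * y + z

# Levels are tightly linked through the common trend ...
corr_levels = np.corrcoef(x, y)[0, 1]
# ... but after differencing (detrending) the link largely disappears.
corr_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
```

Here the variance of the cointegration residual $X_t - \beta Y_t$ stays bounded however long the series runs, whereas the variance of $X_t$ itself grows with $T$; differencing both series leaves only weakly correlated noise increments.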
G-Adaptivity: optimised graph-based mesh relocation for finite element methods
Accept (spotlight poster)
Summary: This paper proposes to apply graph neural network (GNN) architectures for mesh relocation in finite element methods (FEM). The technical contributions involve a novel training pipeline and improved network structures together with appropriate loss functions. Experiments demonstrate the effectiveness and potential of the proposed learning paradigm. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Partially (Sec. D). Relation To Broader Scientific Literature: Well related to the GNN and FEM literature. Essential References Not Discussed: No. Other Strengths And Weaknesses: From my point of view, applying GNNs for mesh deformation in FEMs is an interesting and highly promising approach. My major concern is the scalability of GNN architectures when the number of nodes increases significantly. This issue may restrict the applicability of the proposed method in other more complicated computation scenarios. The authors should provide more detailed discussions. Other Comments Or Suggestions: Overall, the technical scope of this paper is quite different from my expertise, making it difficult for me to provide detailed and in-depth evaluations of the specific technical implementations and experimental setups. However, I believe that the approach of applying GNNs for mesh relocation in this paper is rather valuable. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your positive feedback and recognition of the importance and potential impact of our work. Your comments encouraged us to **test our model at significantly larger scales, including 3D simulations**. We hope the [additional experiments](https://imgur.com/a/rOdOAA0) we performed adequately address your concerns, further strengthen the paper, and provide sufficient grounds to upgrade your recommendation to a clear acceptance. We agree that scalability is crucial for the relevance of any adaptive meshing approach. It turns out that our approach scales very well to larger problems, in three ways. ### **Scalability of the GNN Model** **Firstly,** in forward mode, the diffusion deformer scales to very large meshes by design. In particular, the inductive learning property of GNNs allows them to transfer to unseen graphs, meaning we can perform super-resolution to scale to very large meshes. Our experiments demonstrate that our model can efficiently relocate **tens of thousands of nodes in just over two seconds on a standard laptop.** GNNs have been widely observed to scale well. For instance: - "OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs" (Hu, 2021) demonstrates GNNs being scaled to graphs with millions or billions of nodes and edges. - Recent works like GraphCast (Lam, 2023), Aurora (Bodnar, 2024), MeshGraphNets (Pfaff, 2020), and Scalable Universal Physics Transformers (Alkin, 2025) all evidence the ability of transformer and GNN architectures to scale to very large simulations. To substantiate our claims, we conducted **new experiments on a larger 150x150 mesh (22,500 nodes)** for the Poisson problem with 128 sampled Gaussians (see [Figures C](https://imgur.com/a/rOdOAA0)).
Our model **consistently achieved significant mesh adaptation, accuracy improvement, and computational acceleration compared to Monge-Ampere (MA), matching the performance observed on smaller-scale experiments**. |Scale|Model|Error Reduction (%)|Time (ms)| |-|-|-|-| |60x60|MA|11.94 ± 1.50|23,084| ||G-Adapt|**27.47 ± 0.89**|**452**| |150x150|MA|17.96 ± 1.27|115,395| ||G-Adapt|**25.70 ± 1.51**|**2,555**| One design challenge to note is that a naive fully connected transformer encoder would create $22,500^2$ edges, making it computationally prohibitive. We overcame this by using a **sliding window (SWIN) style transformer** to capture the monitor function embedding at the mid-length scales. With this design choice, the architecture can be scaled up much further, providing a method with significant potential for real-life FEM applications. Moreover, for even larger-scale problems, **leveraging Firedrake and mesh hierarchy (multi-grid) techniques** could further enhance scalability, which we plan to explore in future work. ### **Extending to 3D Simulations** **Secondly,** to further assess **scalability for real-world applications**, we expanded our method to 3D domains, prompted by suggestions from reviewers Kufx and MACS. In [Figure A](https://imgur.com/a/rOdOAA0) and the table below we demonstrate that the G-Adaptivity framework and diffusion deformer model are easily adapted to the 3D setting, performing an experiment on a 10x10x10 unit cube for the 3D Poisson problem. These results confirm that our approach leads to **highly competitive error reduction in the 3D setting while maintaining computational efficiency.** |Model|Error Reduction (%)|Time (ms)|Aspect| |-|-|-|-| |MA|12.71 ± 0.00|41,049|2.97 ± 0.00| |G-Adapt|28.08 ± 0.36|494|6.91 ± 0.20| ### **Scalability of the FEM solver** **Finally,** the FEM solver used for the training of our GNN can also be efficiently scaled.
**If a scalable solver is known for a particular PDE, then it can be easily adapted into the G-Adaptivity framework as Firedrake supports “sophisticated, programmable solvers through seamless coupling with PETSc”**. A good reference for this is (Kirby & Mitchell, SINUM, 2018). This allows us to leverage domain expertise and achieve scalability when it is needed. In our experiments, the default PETSc options of the Firedrake class `NonlinearVariationalSolver` suffice for competitive FEM solution times. The same applies to solving the adjoint equations generated with Pyadjoint. Firedrake allows passing the additional key "adj_args" to the state solver parameters to specify which solver should be used for the adjoint equation, thus enabling **scaling in the same manner as the forward solver**. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses and experiments. I have no further questions. I will maintain my rating as weak acceptance.
Summary: This paper presents a novel approach to mesh relocation (r-adaptivity) in finite element methods (FEMs). Traditional r-adaptive methods optimize mesh geometry by solving additional meshing PDEs, which are computationally expensive. Recent machine learning (ML) methods focus on learning surrogates for these classical techniques. In contrast, this work introduces a graph neural network (GNN)-based approach that directly minimizes the FEM solution error rather than relying on surrogate error estimates. The proposed G-Adaptivity framework employs a diffusion-based GNN mesh deformer, which optimizes mesh point locations while enforcing non-tangling constraints. The paper claims that this outperforms both classical PDE-based methods and previous ML approaches in terms of FE solution accuracy while maintaining the computational efficiency of ML-based solutions. Claims And Evidence: The claims made in the submission are generally well-supported by empirical evidence. However, the Firedrake adjoint optimal gradient computation requires clarification. The authors remark that when the exact solution is unavailable, they approximate it using FEM interpolation on a high-resolution mesh. However, running a high-resolution FEM solver is computationally expensive. Is this additional computation included in the reported time comparisons? If not, the time efficiency claims should be adjusted accordingly. Methods And Evaluation Criteria: The proposed method is well-motivated for r-adaptive meshing. It replaces classical PDE-based relocation with a GNN trained via backpropagation through a differentiable FEM solver (Firedrake). The evaluation criteria include: 1. Error reduction: FEM L2-error reduction relative to the baseline mesh. The results show that the proposed method achieves the best error reduction across all baselines. Notably, it succeeds in improving error reduction in the Burgers' Square Rollout, where both UM2N baselines fail. 2. 
Computational time: The time required for mesh relocation. The proposed method outperforms the classical Monge-Ampère (MA) approach but is 2-3× slower than UM2N-G. 3. Mesh quality: Evaluated using the aspect ratio of deformed meshes. The proposed method performs well overall but is outperformed by some baselines in specific cases (e.g., the Poisson Convex Polygon problem). Theoretical Claims: The theoretical claims appear sound and well-supported. The proof of non-tangling (Theorem 4.2) is particularly valuable, as it ensures that mesh deformation does not lead to degenerate elements. No major concerns were identified. One related question: while a sufficiently small pseudo-timestep dτ is theoretically guaranteed to prevent mesh tangling, what is the practical numerical threshold for dτ? Specifically, how is this value determined in experiments, and does it require tuning for different PDEs or mesh resolutions? Experimental Designs Or Analyses: The evaluation consists of three problems: Poisson's equation, Burgers' equation, and the Navier-Stokes equations. The comparison with baselines considers error reduction, computational time, and aspect ratio, which are appropriate metrics. However, there are two notable gaps in the experimental design. First, it lacks 3D evaluation: all of these problems are solved in 2D. Extending the method to 3D surfaces or volumetric meshes would better demonstrate its scalability and generalizability. Moreover, the paper does not analyze the effect of different loss function components (e.g., equidistribution loss) on performance. An ablation study would clarify the contributions of individual components. Supplementary Material: The supplementary sections provide implementation details, derivations, and proofs: Appendix A provides additional implementation details on the diffusion deformer, and Appendix C provides PDE formulations and FEM implementation details. More numerical experiments are provided in Appendix D.
While these are well-structured, a more in-depth discussion of Firedrake’s role (Appendix B) and how it was adapted for this work would be beneficial. Relation To Broader Scientific Literature: This paper is related to classic r-adaptivity, ML-based PDE solvers, and graph-based learning approaches. It highlights the key limitation of prior ML-based r-adaptive meshing (UM2N), which relies on learning surrogates rather than directly optimizing the FEM loss. However, additional discussion on potential limitations of ML-driven meshing (e.g., generalization to higher dimensions) would be beneficial. Essential References Not Discussed: Good to me. Other Strengths And Weaknesses: This method proposes a novel GNN-based approach that directly minimizes the FEM error, avoiding heuristic error estimates. It also provides theoretical guarantees for mesh regularity and non-tangling. The method significantly improves computational efficiency over classical methods, supported by comprehensive experiments on various PDEs and mesh topologies. However, the proposed method is slower than the UM2N baselines, which is not discussed in the paper. Besides, there is no explicit ablation study to quantify the impact of different model components, and limited discussion on generalization to 3D meshes or more complex PDEs. Other Comments Or Suggestions: I would like to see additional visualizations comparing mesh evolution across timesteps and a discussion of failure cases where G-Adaptivity may not perform well. Questions For Authors: 1. How does G-Adaptivity generalize to 3D adaptive meshing? Have you tested it on volumetric meshes or 3D curved surface meshes on complex geometry? 2. How sensitive is the model to changes in hyperparameters (e.g., number of GNN layers, training iterations)? 3. Can the proposed approach be combined with h-adaptivity for further improvements? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review of our manuscript and for your positive comments and questions. Below we provide further clarifications on the **role of Firedrake, hyperparameter choices** and **generalisability of our approach**. We hope the results of the additional experiments and our responses to your questions are sufficient to encourage you to consider raising your score for acceptance. ### **Claims and Evidence** We should clarify that a **fine grid reference solution is only required during training** of the neural network, not during inference. The Firedrake adjoint computations are only invoked during the training phase and **not** during mesh adaptivity. Thus **our method is indeed as efficient as reported** (online mesh movement in a few dozen milliseconds). ### **Theoretical claims** The statement that $d\tau$ needs to be "sufficiently small" means that it **only needs to be smaller than 0.5**. This was shown in the original Appendix F.2 and we have now updated Theorem 4.2 to include this in the main body of the manuscript. ### **Supplementary Material** We have added a more detailed discussion of the role of Firedrake in the supplementary material of the revised manuscript, and include a shortened version of this here. Training the GNN requires computing the derivative of the loss function $E(Z,U_Z)$ with respect to node coordinates $Z$, using adjoint models for efficiency. Automating their derivation is essential for a general $r$-adaptivity methodology that works across different test cases. **Firedrake is ideal for this, as it derives adjoint models and computes these derivatives automatically** (Ham et al., Struct. Multidiscip. Optim., 2019).
Obtaining corresponding formulas by hand is difficult: e.g., for $J(Z,U_Z) = ||U_Z||^2_{L_2(\Omega)}$, which is a simplified version of $E(Z,U_Z)$ in (3), the corresponding derivative takes the form $dJ(Z,U_Z)[T] = \int_\Omega (U_Z^2+\nabla U_Z\cdot \nabla p - 4p) \nabla\cdot T - \nabla U_Z (DT+DT^\top)\nabla p dx$ with $p$ being the (weak) solution of the adjoint equation $\Delta p = 2U_Z$. This automatic approach can then be coupled with PyTorch using Firedrake’s ML integration (Bouziani et al., arXiv:2409.06085) to enable GNN training. ### **Relation to Broader Scientific Literature** **Limitations of ML-based meshing:** The ML-based approach is inherently statistical, meaning that **GNN-based meshing tools are likely to perform worse on out-of-distribution test data.** We observed this in our experiments both with pre-trained UM2N models and our own G-Adaptive approach when applied to PDEs whose solutions featured vastly different scales and features than those on which the models were trained. ### **Questions for Authors** 1. Our method **extends naturally to 3D** adaptive meshing and we have performed an additional experiment on a cubic mesh with a selection of random Gaussians comparable to the experiment from Table 1 in our manuscript. The results are shown in [Figure A](https://imgur.com/a/rOdOAA0), and the following table (note out-of-the-box UM2N cannot be applied to 3D problems): |Model|Error Reduction (%)|Time (ms)|Aspect| |-|-|-|-| |MA|12.71 ± 0.00|41,049|2.97 ± 0.00| |G-Adapt|28.08 ± 0.36|494|6.91 ± 0.20| 2. We have performed extensive ablation studies and found that our approach is **not very sensitive to the choice of hyperparameters**, as long as they remain in a reasonable range. In particular, we emphasize that **all experiments in the paper used identical hyperparameters without fine-tuning** to specific problems or PDEs. #### Table 1: The effect of $d\tau$ and diff. 
timesteps on the Error reduction (%) |$d\tau$\No.-timesteps|2|4|8|16|32|64| |:-:|-|-|-|-|-|-| |0.05|10.41|16.60|18.95|15.85|21.82|22.93| |0.1|12.97|14.52|20.68|15.71|20.27|20.85| | 0.25 | 19.54 | 19.11 |22.30 |19.94 |23.11 |22.09 | | 0.5 | 20.43 | 22.65 |22.14 |22.42 |21.32 |21.92 | | 1 | 20.60 | 21.57 |21.16 |22.10 |19.71 |19.40 | #### Table 2: The effect of $d\tau$ and diff. timesteps on inference time (ms) |$d\tau$\No.-timesteps|2|4|8|16|32|64| |:-:|-|-|-|-|-|-| | 0.05 | 60 | 44 | 46 | 116 | 65 | 247 | | 0.1 | 54 | 42 | 79 | 61 | 86 | 208 | | 0.25 | 41 | 40 | 48 | 56 | 119 | 108 | | 0.5 | 50 | 58 | 49 | 92 | 125 | 158 | | 1 | 52 | 59 | 45 | 69 | 100 | 108 | #### Table 3: The effect of equi-dist loss regularisation on Error reduction (%) |Reg. weight|Error Reduction (%)| |-|-| | 0 (no equi-dist loss) | 22.42 | | 0.5 | 22.95 | | 1 | 23.99 | | 2 | 23.21 | | 4 | 22.14 | | 8 | 20.96 | 3. This is an outstanding suggestion and in fact part of ongoing work by the authors. It is natural to start by relocation to find an optimal meshpoint distribution followed by h-refinement in regions that require particularly close resolution, see (Dobrev et al., Eng. Comput., 2022) and (Piggot et al., Ocean Model., 2005). The flexibility of the current and ML approaches more generally makes them ideal candidates for such an hr-adaptive approach.
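As a self-contained illustration of the r-adaptivity principle discussed in this exchange, relocating a fixed number of nodes to reduce the FE error, the following toy 1D sketch (our own example, not the paper's GNN method, with interpolation error standing in for the FEM error) compares a uniform mesh against a monitor-equidistributed mesh for f(x) = exp(5x):

```python
import numpy as np

f = lambda x: np.exp(5.0 * x)
N = 6                                   # number of mesh nodes on [0, 1]
fine = np.linspace(0.0, 1.0, 20001)     # reference grid for the L2 error
dx = fine[1] - fine[0]

def l2_interp_error(nodes):
    """L2 error of the piecewise-linear interpolant of f on `nodes`."""
    approx = np.interp(fine, nodes, f(nodes))
    return np.sqrt(np.sum((f(fine) - approx) ** 2) * dx)

uniform = np.linspace(0.0, 1.0, N)
# Relocate nodes by equidistributing the monitor density (f'')^(2/5) ~ e^(2x):
# for linear elements the per-element L2 error scales like h^5 (f'')^2, so
# equidistribution balances the error contributions. Inverting the monitor's
# CDF gives the relocated nodes in closed form:
s = np.linspace(0.0, 1.0, N)
adapted = np.log(1.0 + s * (np.exp(2.0) - 1.0)) / 2.0

err_uniform = l2_interp_error(uniform)
err_adapted = l2_interp_error(adapted)
```

With the same node budget, concentrating nodes where the curvature of f is large cuts the L2 error by a substantial factor relative to the uniform mesh, which is the effect the FEM-error loss drives the GNN toward.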
Summary: This paper focuses on using graph neural networks to predict deformation of the computational domain in order to reduce the error of solutions obtained by the finite element (FE) method. The philosophy follows [1]. The contribution of this paper is three-fold. 1. The authors proposed a new design of the model architecture based on a flow/velocity-type method (eq. (7) and (8)). 2. In order to train the flow-type model, training utilized a differentiable solver (Firedrake) and directly minimized the FE solution error. 3. A new regularization term is added to the loss function (eq. (9)). The authors benchmarked on their own dataset. Notably, their dataset features convex domains, e.g., squares and convex polygons. [1] Zhang, M., Wang, C., Kramer, S., Wallwork, J. G., Li, S., Liu, J., Chen, X., and Piggott, M. D. Towards universal mesh movement networks. 2024. ## update after rebuttal The authors supplemented experiments on non-convex domains and I changed my score to 3. However, as I commented in my review, why did the authors not benchmark the dataset from [1] in the first place? I did not see any reason for not doing so. The authors then agreed on the necessity of the comparison and conducted the experiment in the rebuttal. It seems to me that the experimental design in the original version lacked the consideration needed to include such a basic and important benchmark. Without seeing a final version of the paper, the credibility of the supplemented experiments is weakened. Claims And Evidence: See below. Methods And Evaluation Criteria: A more systematic comparison between your work and UM2N [1] should be made: 1. Why not benchmark the same dataset as UM2N [1]? I do not see why their dataset could not have been your choice, or why you had to build your own dataset instead. In particular, the dataset of UM2N includes non-convex domains. In contrast, your dataset contains only convex domains except for a cylinder case for the Navier-Stokes equations. 2.
From your experiments, Tables 1, 2, and 3 all show that the regularization term in the loss function (9) contributes most to your method, since UM2N-G (UM2N + the loss function (9)) also shows great improvements over vanilla UM2N. Essentially, as one of the core novelties of this paper, this regularization term is a density control that encourages equidistribution of mesh nodes. Why this regularization is so effective on your dataset is under-explored. Is it as effective on the dataset of [1]? It is actually quite doubtful, because the density of mesh nodes should increase in areas where the error is high. Equidistribution heuristically contradicts this strategy. Therefore, I strongly encourage the authors to benchmark their method on the dataset of [1]. [1] Zhang, M., Wang, C., Kramer, S., Wallwork, J. G., Li, S., Liu, J., Chen, X., and Piggott, M. D. Towards universal mesh movement networks. 2024. Theoretical Claims: N.A. Experimental Designs Or Analyses: See above. Supplementary Material: N.A. Relation To Broader Scientific Literature: N.A. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: **Novelty** The methodology adopted in this work has a certain novelty, i.e., a flow-based model + training with a differentiable solver. However, there is an apparent **weakness** in your experiments, since your dataset is biased toward convex domains, and therefore it is not so convincing that your method is superior to the baseline UM2N. Other Comments Or Suggestions: There could be a potential to fully justify the advantages of your flow-based model over existing machine learning methods, e.g., UM2N. For example, is your method more data efficient? Since your method directly minimizes the FE solution error with a differentiable solver, what benefit can such a ``hybrid'' strategy provide? Questions For Authors: 1. Is your method only applicable to convex domains? I see you use convex setups in your experiments.
If so, it would be better to state this convex-domain requirement clearly in your methodology as well. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review of our manuscript and for your valuable comments and questions, following which we have performed several additional numerical experiments. **We appreciate your suggestion to benchmark more closely against UM2N, and we have now conducted extensive new experiments, including on non-convex domains.** We hope that we were able to address your concerns and that you can update your score accordingly. ### **Methods and evaluation criteria** 1. While we already benchmarked on examples from UM2N (Zhang et al., 2024), namely the flow past a cylinder, and on two datasets from M2N (Song et al., 2022), namely the Poisson and Burgers' equation on a square domain, **we have now obtained additional domain data from (Zhang et al., 2024).** This allowed us to conduct extensive additional experiments on **five non-convex domains** (four from the paper (Zhang et al., 2024) and an L-shaped domain). On each domain we solve Poisson's equation for randomly sampled Gaussian solutions with 100 training datapoints and 100 unseen test datapoints. The results (see [Figure D](https://imgur.com/a/rOdOAA0) and table below) confirm that our method performs robustly on non-convex geometries, achieving significantly greater error reduction than baselines and generating regular non-tangled meshes on all tested domains, succeeding even when some other approaches fail. Note that the UM2N results reported below were obtained using the pretrained model from the UM2N repository, and we continue to investigate and refine this baseline. #### Table 1: Error reduction (%) for various methods and non-convex domains (n.b. 
negative indicates error increase) |Domain|MA|UM2N|UM2N-G|G-Adapt| |-|-|-|-|-| |Geometry 1|0.23 ± 0.00|-76.85 ± 0.00|1.92 ± 0.02|**7.97 ± 0.04**| |Geometry 2|-1.00 ± 0.00|-83.88 ± 0.00|0.69 ± 0.08|**8.88 ± 0.24**| |Geometry 3|-| -75.82 ± 0.00| -0.96 ± 0.04|**6.62 ± 0.09**| |H-Geometry|-108.31 ± 0.00|-73.59 ± 0.00|-0.92 ± 0.00|**7.51 ± 0.00**| |L-Geometry|-89.40 ± 0.00| -138.43 ± 0.00|13.94 ± 1.18|**16.25 ± 0.25**| Mesh deformation times and aspect ratios can be found in [Figure D](https://imgur.com/a/rOdOAA0). 2. **Loss regularization:** We would like to clarify a key misunderstanding: the primary novelty in our loss is **not just the regularisation term in (9) but also the FEM error term** $E(\mathcal{M}_{\theta})$. Prior works relied on an MSE loss with respect to classically adapted meshes, whereas we directly minimise the FEM error, which is the **key driver of performance improvements**. The equidistribution loss is a secondary regulariser that **further enhances FEM error reduction** by ensuring that the monitor function (not the nodes) is equidistributed. This is in line with the reviewer's own intuition: "the density of mesh nodes should increase in the area where error is high", as the nodes will be distributed to areas where the curvature of the solution is high. Our experiments already highlight the advantages of our FEM loss component (UM2N vs UM2N-G). We further conducted an **ablation study** on the Poisson dataset showing that our choice of **weight 1 is optimal for this regularising loss term**. |Regularization weight|Error Reduction (%)| |-|-| | 0 (no equi-dist loss) | 22.42 | | 0.5 | 22.95 | | 1 | 23.99 | | 2 | 23.21 | | 4 | 22.14 | | 8 | 20.96 | ### **Other strengths and weaknesses** Through our extensive additional experiments we demonstrate that **our method is not restricted to convex domains**.
Across all cases, **our approach significantly outperforms the two main baselines, UM2N (ML-based) and MA (classical method), in terms of FEM error reduction, while achieving comparable computational efficiency to UM2N.** ### **Other comments or suggestions** We appreciate the point raised concerning the motivation and advantages of our approach. 1. A **central novelty and advantage** of our method is that it optimises the FEM solution error directly, which is in contrast to prior work, including UM2N, which had designed **surrogates** to classical meshing approaches (such as MA). These classical methods rely on heuristics and cannot directly minimise the FEM error. 2. The flow-based approach, i.e. the diffusion deformer components in our GNN architecture, is motivated by relaxation-based mesh movement and provides a **clear, quantifiable advantage over GAT-based deformers** used in UM2N, as observed in our experiments (UM2N-G vs G-Adapt). 3. In contrast to prior work, our approach **does not use MA-solutions as a basis for training** and as such can be seen as more data-efficient. It nevertheless requires repeatedly solving the training problems during training (no additional solve is required during inference). ### **Questions for Authors** 1. Thank you for raising this question. Our method is indeed **fully applicable to non-convex domains** as demonstrated by our additional experiments. We have updated our manuscript to explicitly state this and to showcase the above results. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments showing application to non-convex domains, which change my evaluation of the paper. However, this also introduces significant modifications to your paper, which makes it hard to evaluate accurately without seeing a complete version. Thus, I decided to change my score to 3. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s prompt evaluation of our new experiments on non-convex domains.
We would like to clarify that these additions extend our original results without modifying the algorithm, methodology, model, or hyperparameters. The experiments adhere precisely to the framework of our original submission and serve solely to further reinforce our claims. The only manuscript changes consist of the inclusion of additional domain experiments in the main body and discussion on ablation tables in the Appendix. As all ICML 2025 papers permit an extra page for rebuttal feedback, we hope this clarifies that our work remains complete. We greatly value the reviewer’s feedback and welcome any further thoughts, as well as a potential re-evaluation of their score in light of this clarification.
Summary: In this work, a GNN-based mesh relocation method is proposed by directly minimizing the finite element solution error. A diffusion-based GNN deformer is applied, which can reduce mesh tangling. Experiments show the proposed method achieves lower solution error on Poisson's, Burgers', and Navier-Stokes equation problems. ## update after rebuttal The authors have carefully addressed the comments and suggestions from all reviewers and have provided abundant extra experimental results. I will keep my already positive score unchanged. Claims And Evidence: The improvement in model structure and loss design makes sense, and is validated by experiments and ablation studies. Methods And Evaluation Criteria: - Results on larger-scale, 3D, and more geometrically complicated problems would further validate the performance of the proposed method. - In the experiments, aspect ratio is taken as a metric. However, this is not necessarily required for good meshes; especially for this work, where the final target (lower FE error) is taken to supervise the training, we may obtain good meshes that are anisotropic. Therefore, I am not sure if it is appropriate to use aspect ratio as an evaluation metric. Theoretical Claims: I think Theorem 4.2 is solid. Note that some strong assumptions are required, such as sufficiently small timesteps. Hence in practice mesh tangling can still happen in extreme cases. Experimental Designs Or Analyses: The experimental designs and analyses are sound. Supplementary Material: I went through all the appendices. Relation To Broader Scientific Literature: This work can be applied in downstream industrial physical simulations such as fluid, structural, and heat simulations. Essential References Not Discussed: To my knowledge, I don't see any essential references not discussed. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None.
Questions For Authors: - In industry, normally people don't care about guaranteeing the topology of the mesh, as long as efficient and accurate solutions can be provided. Can the authors provide some reasons or scenarios where r-adaptation is mandatory or better than h-adaptation? - Why is the DirectOpt method shown in Fig. 1 not compared in the experiments? - I am not sure if the proposed mesh deformer should be called "diffusion-based" or "neural-ode-based"? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review of our manuscript and for your valuable comments and questions, particularly for encouraging us to perform **experiments on more complex geometries and larger-scale problems** and to provide **clarification on our theoretical results and the use of ML-based r-adaptivity in industry**. We appreciate your positive feedback, hope the following responses answer your questions, and encourage you to consider raising your score for acceptance.

### **Methods and evaluation criteria**

1. We have expanded our experiments to include **more complex, non-convex domains, and high-resolution meshes** (see [Figures C & D](https://imgur.com/a/rOdOAA0)). Additionally, our model **generalises naturally to 3D**, and we tested it on Poisson's equation on the unit cube with Dirichlet BCs and Gaussian solutions. The results presented below and in [Figure A](https://imgur.com/a/rOdOAA0) show that the method outperforms MA significantly (out-of-the-box UM2N does not apply in 3D) and leads to **effective mesh point concentration in regions of interest**.

|Model|Error Red. (%)|Time (ms)|Aspect|
|-|-|-|-|
|MA|12.71 ± 0.00|41049|2.97 ± 0.00|
|G-Adapt|28.08 ± 0.36|494|6.91 ± 0.20|

2. We appreciate the reviewer’s note that aspect ratio is not essential for good meshes, which is a key aspect of our framework that aims to break the existing paradigm of classical meshing techniques: **minimal FEM-error meshes do not necessarily require a small aspect ratio**. While our primary loss term is the FEM error $E(\mathcal{Z},U_\mathcal{Z})$, **mesh quality also affects FEM-stiffness conditioning**, and high aspect ratios can introduce numerical instabilities in FEM solvers. We report the aspect ratio to show that G-Adaptive meshes achieve error reduction while maintaining reasonable conditioning. A clarification has been added in Appendix F.4.
### **Theoretical claims**

The assumption that $d\tau$ needs to be "sufficiently small" to avoid mesh tangling **only requires $d\tau<0.5$**. This was shown in the original Appendix F.2, and we have now updated the statement of Theorem 4.2 to include this in the main body of the manuscript.

### **Questions**

1. r-adaptivity is a newer technology than h-adaptivity and as such is not yet widely adopted in industry. However, it has certain significant advantages over h-adaptivity. In particular, it works with **a constant data structure, is easy to use on parallel architectures, it gives a more regular mesh** (often with guaranteed mesh regularity), it naturally **inherits Lagrangian and scaling structures in a PDE** (which is very useful, for example, in ocean modelling and studying PDEs with singularities), and can be **easily linked to existing external software** designed to solve a PDE on an unstructured mesh (for example a discontinuous Galerkin solver). As a result, **r-adaptive methods have recently been used very successfully**, for example, in the operational data assimilation codes of **national weather forecasting offices**, which, when coupled to the computational dynamical core, have led to very significant increases in computational accuracy, particularly for resolving local weather features such as fog and ice (Piccolo \& Cullen, Q. J. R. Meteorol. Soc., 2012). r-adaptivity has also found natural applications in the **steel industry**, where the Lagrangian nature of the approach is very well suited to the evolving fine structures in the forging process (Uribe et al., Finite Elem. Anal. Des., 2024). **Possible disadvantages of r-adaptivity, such as excessive mesh computation cost and a tendency to mesh tangling, are exactly the issues we address in this paper, proposing a fast and accurate method which avoids tangling.** 2. The **direct optimization method** is used in Fig. 1 **purely for exposition**, showing that MA-meshes are not necessarily optimal.
DirectOpt computes the optimal mesh for a given PDE with known solution, but is extremely slow and relies on data which is not available during inference. In contrast, once trained, **our G-Adaptive approach yields fast online mesh movement** without needing reference solution values. However, inspired by your comment, we have added the DirectOpt results to Table 1:

|Model|Error Reduction (%)|Time (ms)|Aspect|
|-|-|-|-|
|DirectOpt|27.40 ± 0.00|126,028|33.99 ± 0.00|
|MA|12.69 ± 0.00|3,780|2.11 ± 0.00|
|UM2N|6.83 ± 1.10|70|1.99 ± 0.03|
|UM2N-G|16.40 ± 2.65|30|2.61 ± 0.17|
|**G-Adapt**|21.01 ± 0.33|88|2.92 ± 0.03|

3. We agree that our architecture is fundamentally a Neural ODE on a graph. However, the specific form of this differential equation is crucial to the success of our method: the governing equation, $\dot{\mathcal{Z}}(\tau)=(\mathbf{A}_{\theta}(\mathbf{X}^k)-\mathbf{I}) \mathcal{Z}(\tau)$, resembles a discretized learnable diffusion equation, motivating our use of the term "diffusion-based GNN-deformer". This terminology aligns with prior literature on similar architectures (Chamberlain et al., 2021a;b). --- Rebuttal Comment 1.1: Comment: Thanks for your hard work. All my concerns have been well addressed. Extra experiments have been performed to demonstrate the effectiveness of the proposed method in scenarios with larger scales, more complex geometries, and 3D. Overall, I will keep my already positive score unchanged.
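As an aside, the no-tangling behavior for small $d\tau$ can be illustrated numerically. The sketch below assumes, purely for illustration, a fixed row-stochastic matrix standing in for the learned $\mathbf{A}_{\theta}(\mathbf{X}^k)$; under that assumption the explicit Euler step of $\dot{\mathcal{Z}}=(\mathbf{A}-\mathbf{I})\mathcal{Z}$ is a convex combination of the current node positions, so nodes never leave their initial bounding box:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 2

# Hypothetical row-stochastic matrix standing in for the learned A_theta(X^k).
A = rng.random((n, n))
A /= A.sum(axis=1, keepdims=True)

Z = rng.random((n, d))  # initial mesh node coordinates in the unit square
dtau = 0.4              # step size below the d_tau < 0.5 threshold quoted above

# Explicit Euler discretization of Z' = (A - I) Z:
#   Z_next = Z + dtau * (A @ Z - Z) = (1 - dtau) * Z + dtau * (A @ Z),
# which is a convex combination of current positions for 0 < dtau < 1.
for _ in range(50):
    Z = (1 - dtau) * Z + dtau * (A @ Z)

# Nodes stay inside the initial bounding box: no blow-up.
assert Z.min() >= 0.0 and Z.max() <= 1.0
```

Of course, convexity of the update alone does not rule out element inversion in general; the rebuttal's Theorem 4.2 addresses the actual learned, state-dependent $\mathbf{A}_{\theta}$.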
Learning with Selectively Labeled Data from Multiple Decision-makers
Accept (poster)
Summary: The paper tackles the problem of correctly quantifying classification risk in the selectively labeled data setting. In this setting, true outcomes are only observed for samples that receive a certain classification/decision (e.g., default outcomes are only observed for those who are given the loan). To address this selection bias in the observed outcome data, the paper proposes a framework that utilizes decision-maker pool heterogeneity to identify the true classification risk, under the assumption that the decision-maker is selected randomly for each sample. The authors first characterize the requirements on the data-generating process that are necessary to exactly identify the classification risk in this setting. They note that these requirements are overly strong for real-world applications and discuss relaxations under which partial identification is possible. With a cost-sensitive learning approach, they train classifiers using the identified (exact or partial) risk and demonstrate the efficacy on a loan application dataset with synthetic decision-makers. Claims And Evidence: The main contribution of the paper is the framework to identify classification risk using decision-maker assignment as an IV. The authors claim that, in many real-world applications, decision-making usually involves several decision-makers and the choice of decision-maker for any unlabeled sample is often random. While this methodology is creative, I don’t think the paper does a satisfying job of justifying this claim either using prior work or empirical data. I note two concerns below. 1. In terms of prior work, there is very limited discussion in the introduction to support this claim. The authors cite prior works that consider decision-maker heterogeneity in the judicial setting and discuss them in Appendix A.
However, there’s no discussion of whether there's prior evidence for random decision-maker assignment and decision-maker heterogeneity in other domains (including the loan application domain that features prominently across the paper). 2. For empirical analysis, the paper chooses to create synthetic decision-makers, which again makes me question the validity of random (or quasi-random) decision-maker assignments in real-world settings or the availability of data on who made which decision. Methods And Evaluation Criteria: 1. Regarding the methods employed to address the selectively labeled data problem, I like the broad idea the paper utilizes of employing an IV strategy to address selection bias using external factors that control the decision. However, I am not completely convinced that decision-maker assignment offers the best IV pathway, mainly because the paper shows that exact identification requires overly strict assumptions. In light of this restriction, I am curious about the realistic settings (and not just the simple ones noted on Page 4) where the authors think this method can still provide accurate estimates of classification risk.
 2. For empirical analysis, the paper focuses on a loan-approval dataset, computing classification risk using the proposed framework in both exact and partial identification settings. Overall, it does seem to lead to more accurate classifiers than simple baselines. I am curious as to why the paper doesn’t compare their method to Rambachan et al. (even if limited to comparisons of risk evaluation using different methods) since they note that both papers consider the IV strategy to tackle the selectively labeled data problem.
3. Additionally, considering that the empirical analysis is based on semi-synthetic data, it would be good to acknowledge the limitations of this analysis somewhere in the paper. Theoretical Claims: The theoretical claims made in the paper seem correct. Experimental Designs Or Analyses: The experimental analysis presented seems mostly sound, although it would be good to provide comparisons against other related works (e.g., Rambachan et al.) if possible. One point that I think needs more discussion in the empirical analysis is how the $a_k, b_k$ (and correspondingly $l_k, u_k$) are specified/learned in the partial learning experiments. Currently, I don't see any discussion of these parameters in Section 6 (let me know if it's there and I am missing something), and considering their relevance to the partial identification setting, it would be good to discuss how they are set for the experiments. Supplementary Material: The supplemental material contains discussions of related works, proofs of theoretical claims, and additional empirical analysis. I would suggest moving some of the related works discussion to the main body. Relation To Broader Scientific Literature: Overall, the paper definitely tackles an important problem: identification of prediction risk when the available outcome data is selectively labeled. The IV methodology of harnessing decision-maker heterogeneity is interesting and fits well (theoretically) with the decision-making setup of important real-world applications. However, as I noted above, there are doubts about the feasibility and practicality of the proposed framework, which I believe severely limits its impact. Essential References Not Discussed: The paper notes the main related works in this field. However, it's strange to have the entire related works section in the Appendix.
Several citations discussed in the "Selective Labels Problem" setting are crucial for motivating the issues associated with selective label problems in real-world applications and would be helpful in contextualizing the problem setting for readers who are unfamiliar with this domain. I strongly suggest having at least a short related work section in the main body to include a discussion on the primary references. Other Strengths And Weaknesses: The use of decision-maker assignment as an IV is creative and I wonder if it can also be considered as a potential intervention to ensure classification risk identification for future data collection procedures. Other Comments Or Suggestions: None Questions For Authors: 1. Are there prior works in multiple domains that provide evidence for the fact that decision-maker assignment is mostly random and that the decision-maker pool is heterogeneous? 2. Is there real-world data available in any domain where decision-maker assignments are noted and where the proposed framework can be employed? 3. Is it feasible to empirically compare the proposed framework to prior works (e.g., Rambachan et al) and, if so, does it achieve similar/better performance than prior works? 4. How are parameters $(a_k, b_k)$ and $(l_k, u_k)$ specified/learned in the partial learning experiments? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your valuable feedback. We now address each of your comments below. - Evidence of (Quasi-)randomly Assigned Heterogeneous Decision-Makers: The selective label literature has mainly considered two examples of randomly assigned heterogeneous decision-makers: 1. Judicial Decision-Making: In the US, judges are often randomly assigned to cases, yet they exhibit heterogeneous decision-making styles and biases (Kleinberg et al., 2018; Marcondes et al., 2019). This example is widely cited in the selective label literature. 2. Loan Approval: Loan applications are also randomly assigned to different loan officers in some places. For example, there exist platforms, such as OppFi (https://ortooapps.com/case-study/oppfi), that implement random assignment to ensure fairness and equity. Even if loan applications are not randomly assigned, they may be viewed as quasi-random as long as the loan officers roughly face the same distribution of applications given the observed features. Moreover, Bushman et al. (2021) demonstrate that individual loan officers significantly influence loan contract terms and performance, highlighting the heterogeneity across decision-makers. - Real-world Data: Kleinberg et al. (2018) indeed use real-world data on all arrests made in New York City between 2008 and 2013, but the data are not publicly available. The existing selective label literature has also considered a few other datasets, but unfortunately these are also not public. This is why we use synthetic or semi-synthetic data in our paper. However, we hope to clarify that synthetic data actually have an advantage over real-world selective label data: in real data, the outcomes of the unlabeled units are unobservable, so evaluating the results of a given algorithm is not straightforward. By using synthetic or semi-synthetic data, we observe the true outcomes for all units, so we can more easily assess different algorithms.
- Decision-maker Assignment as IV: We understand your concern about the point identification assumption and your questioning of the validity of viewing decision-maker assignment as an IV. We hope to clarify that the key defining characteristics of an IV are exogeneity and the exclusion restriction. Both are satisfied by the decision-maker assignment under our assumptions, and many existing selective label works also leverage these two properties in their heuristic solutions. In contrast, the homogeneity (NUCEM) assumption is not a defining characteristic of an IV. There exist many works on IVs that do not require this assumption, such as the local average treatment effect (LATE) analysis in Angrist et al. (1996) and the Balke-Pearl partial identification bounds. Moreover, one purpose of our point identification analysis is exactly to reveal that it needs a strong assumption, which motivates our IV partial identification. In experiments, we do observe that the partial identification approach tends to perform better. - Comparison with Rambachan et al. (2023): The focus of Rambachan et al. (2023) differs fundamentally from ours. Their work primarily studies evaluating various error measures of a given binary classifier based on IV-based partial bounds, whereas we focus on learning a robust classification rule under both point and partial identification settings. We note that it is not clear how to optimize their error measure estimators to effectively train robust classifiers. In contrast, our work develops a unified cost-sensitive learning (UCL) algorithm for both point and partial identification settings. This requires in-depth analyses of the minimax formulation and cost-sensitive learning problems, so their approach is not directly comparable to ours. We will further clarify this in our revision. - Specification of Range $a_k$ and $b_k$: In our experiments, we set $a_k(x) = 0$ and $b_k(x) = 1$ for all cases.
The partial bounds $l_k(x)$ and $u_k(x)$ are then computed using the estimated nuisance functions along with the specified values of $a_k(x)$ and $b_k(x)$. These nuisance functions are learned from the observed data using the Gradient Boosting algorithm. We will clarify this point in future revisions. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for your responses. The clarification on why DM assignment can serve as a good IV does help better understand the appeal of this approach. I am still not completely convinced of the real-world feasibility since, as the authors note, real-world evaluation is difficult as DM assignments are usually not public. However considering the theoretical and empirical advantages, there might be potential for future work to address the question of feasibility in real-world settings. As such, I am increasing my score to reflect that and would recommend including a robust discussion in the paper on the limitations of the proposed approach. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for increasing the score. We sincerely value your engagement in the review process. In our revision, we will follow your suggestion, discussing the challenges with real-world data evaluations and acknowledging the potential limitations of our experiments with synthetic and semi-synthetic data. Thank you again for your time and consideration.
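To make the nuisance-estimation step above concrete, here is a minimal sketch of cross-fitted estimation of one nuisance function (a decision propensity) with gradient boosting, in the spirit of the rebuttal's description; the data-generating process and all variable names are hypothetical, not the paper's actual setup:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))        # observed features (hypothetical)
Z = rng.integers(0, 3, size=n)     # decision-maker assignment, used as the IV

# Hypothetical selection decision D depending on features and decision-maker.
D = (rng.random(n) < 1.0 / (1.0 + np.exp(-X[:, 0] - 0.3 * Z))).astype(int)

# Cross-fitted estimate of the decision propensity P(D = 1 | X, Z):
# each unit's prediction comes from a model trained on the other folds,
# mirroring the cross-fitting technique mentioned in the rebuttal.
features = np.column_stack([X, Z])
model = GradientBoostingClassifier(random_state=0)
pi_hat = cross_val_predict(model, features, D, cv=5, method="predict_proba")[:, 1]

assert pi_hat.shape == (n,)
assert np.all((pi_hat >= 0.0) & (pi_hat <= 1.0))
```

Estimates like `pi_hat` would then be plugged into the paper's weight formulas; the exact construction of $l_k(x)$ and $u_k(x)$ from the nuisances follows the paper's Assumption 4.1 and is not reproduced here.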
Summary: This paper studies multiclass classification with selectively labeled data, where label distribution is biased due to historical decision-making. By leveraging variations in decision rules across multiple decision-makers, the authors apply an instrumental variable (IV) framework to establish necessary and sufficient conditions for exact classification risk identification. When exact identification is infeasible, they derive sharp partial risk bounds. To mitigate label selection bias, the paper proposes a unified cost-sensitive learning (UCL) approach. Claims And Evidence: Yes, the claims made in the submission supported by evidence. Methods And Evaluation Criteria: Yes, methods and valuation criteria make sense for the problem or application at hand. Theoretical Claims: I did not thoroughly check each proof. However, most of the theoretical claims are not novel and have already been well-established in the literature. Additionally, they do not contradict my prior understanding. **Exact identification under IV**: Cui, Y. and Tchetgen Tchetgen, E., 2021. A semiparametric instrumental variable approach to optimal treatment regimes under endogeneity. Journal of the American Statistical Association, 116(533), pp.162-173. **Partial identification under IV**: Pu, H. and Zhang, B., 2021. Estimating optimal treatment rules with an instrumental variable: A partial identification learning approach. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(2), pp.318-345. **Cost-sensitive classification**: Bietti, A., Agarwal, A., and Langford, J., 2021. A contextual bandit bake-off. Journal of Machine Learning Research, 22.133, pp.1-49. Experimental Designs Or Analyses: Yes, it provides a semi-synthetic example. Supplementary Material: Related work and additional experiment results. 
Relation To Broader Scientific Literature: The contributions of this paper build upon existing work in instrumental variable (IV) methods and cost-sensitive classification. The theoretical foundations for exact identification under IV have been established by Cui and Tchetgen Tchetgen (2021), while partial identification under IV has been explored by Pu and Zhang (2021). The results in this paper can be seen as a special case of these prior works when the outcome is discrete. Additionally, cost-sensitive learning approaches, particularly in the context of contextual bandits, have been well studied, as highlighted by Bietti et al. (2021). While the paper applies these concepts to multiclass classification with selectively labeled data, its theoretical contributions largely align with existing literature rather than introducing fundamentally new insights. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper primarily builds on existing theoretical frameworks for instrumental variable (IV) methods and cost-sensitive classification, with its results being a special case of prior work when the outcome is discrete. While it provides a structured application of these ideas to multiclass classification with selectively labeled data, the theoretical contributions do not introduce fundamentally new insights, as they closely align with well-established results in the literature. To enhance its novelty, the paper could explore deeper theoretical results, such as deriving the efficiency bound for the risk function under exact identification. Other Comments Or Suggestions: No. Questions For Authors: I wonder why the paper focuses only on discrete outcomes. Can the approach be easily extended to continuous variables? If so, why not establish a more general framework for decision-making that encompasses both cases? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and your references. Our work indeed builds on this previous literature and is therefore closely related to it. However, our work is not “a special case of these prior works”. Instead, it significantly generalizes the existing literature and makes contributions beyond it. Below we provide detailed explanations, which we will also clarify in our revised version. - The Exact Identification in Cui and Tchetgen (2021): They study policy learning under unmeasured confounding using an IV, focusing on a binary IV and binary treatment. They aim to learn a confounding-robust treatment rule that maps features to a binary treatment. This is more like a binary classification problem with causal structure. In contrast, our work considers a multi-valued IV and a multi-class label, aiming to learn a classification rule that maps features to a multi-class label. This setting strictly generalizes their framework. Notably, their identifying NUCEM assumption involves the difference of certain conditional expectations given the two different IV values, which heavily relies on the binary nature of the IV. Instead, we consider a more general assumption that can handle a general IV $Z$. Moreover, their learning procedure only uses the hinge loss, whereas we explore more surrogate losses for both binary and multi-class classification. - The Partial Identification in Pu and Zhang (2021): They explore policy learning under unmeasured confounding with a binary-valued IV, binary treatment, and binary outcome, again learning a robust treatment rule that maps features to a binary treatment. In contrast, we consider multi-valued IVs and multi-class labels. Pu and Zhang (2021) directly uses Balke and Pearl’s partial bounds for the binary IV and binary outcome setting. We extend these bounds to accommodate multi-valued IVs and any bounded multi-class outcomes (see Assumption 4.1).
Moreover, we rigorously prove the tightness of our IV-based partial identification bounds (Appendix C.1), which to our knowledge is new to the literature. Pu and Zhang (2021) also considers minimax learning. Their inner maximization problem can be easily solved in closed form for the binary outcome, and their final minimization is based on the hinge surrogate loss. However, we consider multi-class classification, where the inner maximization involves much more complex simplex constraints. We managed to give a closed-form solution by carefully analyzing the problem structure. - Contextual bandits in Bietti et al. (2021): This paper conducts an empirical analysis of several algorithms for online contextual bandits, which is a fundamentally different problem from ours. We acknowledge that cost-sensitive classification and surrogate losses are not new ideas. But we hope to clarify that our contribution lies in in-depth analyses that transform the problems in both point and partial identification settings into a unified cost-sensitive classification form. Under our general framework, we can explore a range of different surrogate losses, while Cui and Tchetgen (2021) and Pu and Zhang (2021) focus only on the hinge loss. We now further respond to your other comments. - Efficiency bound under exact identification: We appreciate your suggestion, but we think this is beyond the scope of the current paper. The main focus of this paper is to provide a unified learning framework for the selective label problem in both point and partial identification settings. As our work is already dense, we leave the efficiency bound for future study. - Extension from Discrete to Continuous Outcomes: Thanks for the great question. Extension from discrete to continuous outcomes is indeed an important problem, but we think it should be left for a separate future study. Notably, all existing selective label literature focuses on binary outcomes (see references in Appendix A).
Our study of multi-class outcomes already constitutes an extension. Moreover, as we discuss above, our work also strictly generalizes Cui and Tchetgen (2021) and Pu and Zhang (2021), instead of being their special case with restricted outcomes. In extending this prior literature, our paper overcomes many new technical challenges, generalizing many of the assumptions, analyses, and results in these works. Finally, we hope to briefly touch on the challenges with continuous outcomes. Our partial identification analysis involves specifying bounds $a_k(X)$ and $b_k(X)$ in Assumption 4.1. For classification problems, these can naturally be set to 0 and 1. But for a continuous outcome, we would need to specify the range of $E[Y^* | X, U]$, which may have no natural range. Moreover, in solving the inner maximization problem, we heavily rely on the simplex structure of the constraints, and such structure no longer applies to continuous outcomes.
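As a generic illustration of why simplex structure helps (a standard fact about linear programs, not the paper's exact inner problem): a linear objective maximized over the probability simplex attains its maximum at a vertex, i.e. by placing all mass on the largest coefficient, which is what makes closed-form inner maximizers possible:

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=6)  # coefficients of a linear objective c^T p (hypothetical)

# Closed form: over the probability simplex {p >= 0, sum(p) = 1},
# max c^T p is attained at the vertex e_k with k = argmax_k c_k.
closed_form = c.max()

# Sanity check against random feasible points: none exceeds the vertex value.
P = rng.random((10000, 6))
P /= P.sum(axis=1, keepdims=True)  # project rows onto the simplex (rescaling)
assert np.all(P @ c <= closed_form + 1e-12)
```

With continuous outcomes the feasible set for $E[Y^* \mid X, U]$ loses this vertex structure, which is one way to see why the closed-form argument above no longer goes through.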
Summary: This paper focuses on the problem setting of classification with selectively labeled data, that is, the labeled data at hand can be biased because of past decision-making. The paper defines the problem mathematically and solves it from the perspective of the instrumental variable (IV) framework. There are two assumption settings: (1) no unmeasured common effect modifiers (NUCEM), a strong assumption that leads to a clean solution, and (2) partial identification, where a reasonable solution can be obtained. Theoretical analyses of both assumption settings are provided. Furthermore, a practical algorithm for both cases is provided, based on weighted empirical risk minimization with a calibration guarantee. Synthetic experiments show that the proposed method outperforms baselines.

## update after rebuttal

After the rebuttal, I still think the idea of this paper is novel. It studies the problem setting extensively, both theoretically and experimentally. Thus, I keep my score (4: accept). The authors clarified in the rebuttal that their work is novel and also acknowledged some current drawbacks of their methods (e.g., computation time). Claims And Evidence: 1. Strong theoretical results for a problem setting that is quite complicated and has practical relevance. 2. Experimental results show the effectiveness of the proposed practical algorithm. Methods And Evaluation Criteria: Since the problem setting is quite complicated, I believe there does not exist a benchmark dataset that directly corresponds to this problem. This justifies the paper's use of synthetic datasets. Theoretical Claims: The proposed method looks reasonable to me. The theoretical claims are sound in my understanding. Experimental Designs Or Analyses: Experimental designs and analyses are valid. Supplementary Material: 1. Much important information is in the appendix. Unfortunately, the whole related work discussion is in the Appendix.
It would be better to have a more detailed related work discussion in the main body. 2. I briefly checked the proofs of calibration and excess risk, but I didn't go through the proofs of the other analyses. Relation To Broader Scientific Literature: The paper is related to selective labels problems, which have been discussed in the appendix. It might also be similar to weakly supervised learning or domain adaptation, in the sense that the observed labeled dataset differs from the test distribution and we must somehow use the information at hand to derive a risk minimizer for the test distribution. Essential References Not Discussed: No additional requests from me. Other Strengths And Weaknesses: Strengths 1. Strong theoretical results that improve the understanding of a complicated yet relevant problem setting. It is praiseworthy that this paper not only focuses on the restrictive NUCEM assumption but also considers the partial identification setting. 2. Practical algorithms with theoretical guarantees are provided, which are relatively easy to implement. 3. Experimental results (although synthetic) show that the proposed method is effective compared with reasonable baselines. Weaknesses 1. The proposed method's weaknesses are not much discussed, in my understanding. One might be that it could be computationally expensive (I'm not sure). Moreover, the weight estimation could be inaccurate, and the experiments do not show whether this can make the proposed method fail. I find the paper's comment on why NUCEM lost to partial learning even under the NUCEM assumption (because NUCEM requires a ratio estimation) quite interesting. Such discussions, or an ablation study of the effect of imprecise weight estimation, could be useful (but I'm also aware that the paper is, unfortunately, already dense).
Other Comments Or Suggestions: The paper is quite dense already, but it would be better to explain more about related work in the main body if possible, to highlight the novelty of the proposed work as well as to review prior work for the reader. Line 110 (left): if8 -> if Line 1523: conseuqnece -> consequence Line 1995: Vairable -> Variable Questions For Authors: 1. Could you please comment on a comparison of the computational cost of the proposed unified cost-sensitive learning (point), (partial), and vanilla training? 2. Since many weights have to be estimated, how important is the accuracy of weight estimation? Is the solution highly sensitive to this? 3. Is this the first work to use the instrumental variable (IV) framework for selectively labeled classification? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your insightful comments and positive feedback on our work. - Computational Cost: The computational cost of our method is higher than that of the vanilla method, which directly learns a classifier from the observed (selectively labeled) data. That is because our method consists of two steps: the first step is estimation of the nuisance functions through a cross-fitting technique, which are then used to construct the classification weights; the second step is a weighted / cost-sensitive classification procedure, which can be solved efficiently. The nuisance estimation indeed entails some additional computational cost, but it typically only involves a series of standard regression/classification fits. This is usually manageable and also widely adopted in the causal inference literature (Chernozhukov et al., 2018). Moreover, it can be accelerated by fitting the nuisances on different folds of data in parallel. We will clarify the computational aspect in our revision. - Weight Estimation Accuracy: The accuracy of weight estimation can directly impact the performance of our method. In fact, our learning guarantee in Appendix E already captures this: the excess risk bound there involves a term capturing the weight estimation error. Notably, the weights to be estimated differ between the point and partial identification settings. Estimating the weight for point identification involves estimating two nuisances and their ratio. In contrast, the weight in partial identification only involves the sum of a series of nuisance functions and does not involve any ratio; the latter is generally less sensitive to nuisance estimation error. During the rebuttal period, we conducted some additional experiments for the simulation setting with confounding strength $\alpha_Y=0.5$ and $\alpha_D = 0.7$ (see setting details in Appendix F) to assess the impact of nuisance estimation error.
We introduced Gaussian noise into the estimates of nuisance functions to inflate their errors, defined as $\tilde{\eta}(X_i) = \hat{\eta}(X_i) \cdot [1 + \sigma^2 \mathcal{N}(0,1)]$ with noise levels $\sigma \in \{0.0, 1.0, 2.0, 3.0, 4.0\}$. The experiment is repeated 10 times. We then computed weights based on these noisy estimates and analyzed their effect on the resulting classification accuracy. Our findings reveal that both partial and point learning remain stable under small perturbations. However, beyond a certain noise threshold, performance degrades significantly. Notably, partial learning exhibits higher tolerance ($\sigma=4.0$) compared to point learning ($\sigma=2.0$). This demonstrates that the partial learning approach is indeed more resilient to nuisance estimation errors. In our revision, we will clarify the impact of nuisance estimation errors and add the extra numerical results. - First Work to Use IV for the Selective Labels Problem: Thank you for your suggestion on emphasizing the novelty of our work. Previous literature on the selective labels problem (SLP) also leverages the random assignment of heterogeneous decision-makers, but these approaches are largely heuristic, as discussed in Appendix A. Our work uses the IV framework to provide principled point and partial identification analyses and derive rigorous learning algorithms. We remark that a closely related work by Rambachan et al. (2023) also considers IV-based partial identification in the context of the selective labels problem. However, our work is substantially different from theirs and makes many unique contributions. Please see our detailed responses to reviewer aAyP. While we already mentioned these in our paper, we will further highlight them in our revision. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for providing detailed responses to my concerns and I believe there is no misunderstanding of my review.
It is great that the theoretical analysis also covers the weight estimation error. I still have a positive impression of this paper, and I will maintain the same score during the rebuttal period. However, I also saw that another reviewer pointed out that the results in this paper are not novel. I still haven't confirmed it myself since it is still unclear which main results have already been proven in which parts of the several papers the reviewer suggested. If it can be verified that the main results found in this paper have already been discovered, I might decrease the score in the final review. --- Reply to Comment 1.1.1: Comment: Thank you for your positive impression of our work and for clearly articulating your current concern. We truly value your active engagement in the review process. We would like to further emphasize that the main results of our work are novel, making multiple contributions beyond the existing literature. 1. Our study is the first to systematically explore both the point and partial identification of selective label classification within a principled instrumental variable (IV) framework. We offer unified cost-sensitive formulations for both scenarios. This not only allows for the use of a variety of surrogate losses but also provides a unified framework for theoretical analysis. In contrast, the existing selective label literature either lacks formal identification analyses or only covers partial identification for classifier evaluation, without considering classifier optimization or learning (Rambachan et al., 2023; see response to Reviewer aAyP). 2. Although our point identification result is related to Cui and Tchetgen (2021), our work differs from theirs in several key aspects. Cui and Tchetgen (2021) focuses on learning the optimal treatment allocation rule with a binary treatment $A$ and a binary IV $Z$. 
Their point identification result relies on the homogeneity assumption that $P(A = 1 \mid Z = 1, X, U) - P(A = 1 \mid Z = 0, X, U)$ is conditionally uncorrelated with $E[Y(1) - Y(0) \mid X, U]$ given $X$, where $Y(1), Y(0)$ are the two potential outcomes that are observed when $A=1, 0$ respectively. Notably, this assumption heavily relies on the binary nature of the IV $Z$. They then use an inverse propensity weighted formulation to transform the problem into a weighted classification problem, with the treatment or IV serving as the binary label. Then the misclassification zero-one loss is replaced by the hinge loss for optimization. In contrast, our paper examines selective label classification with the decision-maker assignment as a multi-class IV. Our point identification Assumption 3.2 must account for a multi-class IV and thus strictly generalizes the assumption in Cui and Tchetgen (2021). In Theorem 3.2, we further show that our assumption is the sufficient and necessary condition for the Wald-style identification in Theorem 3.3. Moreover, our selective classification problem with a multi-class label $Y^{\star}$ is quite different from the classification reformulation of policy learning in Cui and Tchetgen (2021), where $A$ is viewed as the label. While our decision indicator $D$ plays a similar role to the treatment $A$ in Cui and Tchetgen (2021), it only controls the missingness of data. Our real classification target is the multi-class outcome $Y^{\star}$, and we cannot view the decision $D$ as a label. Therefore, we address a general multi-class classification problem, unlike the binary classification problem considered in Cui and Tchetgen (2021). 3. Our partial identification result also makes several contributions beyond Pu and Zhang (2021). Pu and Zhang (2021) studies policy learning under partial identification with a binary treatment, binary IV and binary outcome.
They consider the Balke-Pearl Bound or the Siddique Bounds (with an additional Non-Compliant Decision assumption) for partial identification, both restricted to the binary setting. Our partial identification bound generalizes the Balke-Pearl bound to a setting with a general multi-class IV and multi-class outcome, in the context of selective label classification. We also prove in our paper that this bound is sharp, meaning it provides the tightest bound under our assumptions. To our knowledge, these results are novel and have not been explored in the existing literature. Moreover, although both our work and Pu and Zhang (2021) consider minimax learning, our learning problem is more challenging due to the more complex partial identification bounds. Pu and Zhang (2021) deal with a binary outcome, so their inner maximization problem only involves a simple interval constraint on the one-dimensional conditional average treatment effect function and can be easily solved in closed form. In contrast, we consider a multi-class label, so our inner maximization involves a more complex simplex constraint on a vector of conditional probability functions. We carefully analyze this problem structure and provide a closed-form solution using the concept of “realizable” partial bound (see Theorem 4.3). This enables us to transform the minimax learning problem into a more tractable cost-sensitive learning problem. These results are novel and significantly generalize the findings in Pu and Zhang (2021) through refined analyses. We hope these explanations can address your concern. In our revision, we will clarify these differences more explicitly in both the literature review and the sections on point and partial identification. Thank you!
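The nuisance-perturbation protocol described in the rebuttal above, inflating estimates as $\tilde{\eta}(X_i) = \hat{\eta}(X_i) \cdot [1 + \sigma^2 \mathcal{N}(0,1)]$, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; `eta_hat` stands in for any fitted nuisance estimate.

```python
import numpy as np

def perturb(eta, sigma, rng):
    """Inflate nuisance error as eta_tilde = eta * (1 + sigma^2 * N(0, 1)),
    matching the formula quoted in the rebuttal."""
    return eta * (1.0 + sigma**2 * rng.standard_normal(eta.shape))

# Hypothetical nuisance estimates (probabilities) on 10,000 samples.
rng = np.random.default_rng(0)
eta_hat = rng.uniform(0.2, 0.8, size=10_000)

for sigma in [0.0, 1.0, 2.0, 3.0, 4.0]:
    eta_tilde = perturb(eta_hat, sigma, np.random.default_rng(1))
    rel_err = np.mean(np.abs(eta_tilde - eta_hat) / eta_hat)
    print(f"sigma={sigma}: mean relative error {rel_err:.2f}")
```

The noisy estimates would then feed into the weight construction for the point and partial learners, whose downstream accuracy is what the rebuttal's stress test measures.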
Geometric Median (GM) Matching for Robust k-Subset Selection from Noisy Data
Accept (poster)
Summary: The paper proposes the use of the Geometric Median to robustly identify subsets of a dataset that best represent the full dataset. The main goal is to reduce the sensitivity of the selection algorithm to outliers in the corrupted data, a problem that traditional subset selectors that rely on the empirical mean suffer from. In this regard, the geometric median is proposed as a more robust surrogate. An iterative selection algorithm is proposed that selects subsets that minimize the discrepancy between the subset and the geometric median. Theoretical guarantees are given on the convergence properties of the proposed algorithm. Claims And Evidence: The claims made in the paper are well supported in theoretical and empirical analysis. Methods And Evaluation Criteria: The evaluation criteria are standard for the subset selection problem. Theoretical Claims: I do not find any issues with the proofs of the theoretical claims in the paper, although I should stress that I only did a cursory pass over the proofs. Experimental Designs Or Analyses: The experimental setup appears standard for the problem domain. Supplementary Material: Yes. Mostly the sections on experimental setup details. Relation To Broader Scientific Literature: The paper builds on classical results in robust estimation and, by utilizing the Geometric Median, extends these works into the current subset selection domain. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The main strength of the proposed method is its extension of subset selection methods using the GM, which is well grounded in theory and, in so doing, answers the question of how to obtain subset selectors that are robust to outliers. On the other side, one may point out that the main contribution is the substitution of the empirical mean with the geometric median in an already existing subset selection method. Other Comments Or Suggestions: The captions of Figure 6 are obscured by vspacing.
Questions For Authors: - How well does the proposed method perform when the corruption rate is larger than 20% in Table 2? - The current experiments are performed on small-scale datasets. How does the proposed method scale when applied to larger datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and detailed feedback. We are encouraged that the reviewer finds the theoretical claims well supported, the empirical design appropriate, and the core contribution valuable. Below, we address the specific points raised: **Q1: Performance at Higher Corruption Rates (>20%) in Table 2** Thank you for raising this. We now include experiments with 30% image corruption across two datasets (CIFAR-100 and TinyImageNet), at 20% and 30% selection rates:

| Method / Ratio | C(20%) | C(30%) | T(20%) | T(30%) |
|----------------------|-------------------|--------------------|--------------------|--------------------|
| Random | 24.51 ± 1.34 | 32.26 ± 0.81 | 14.64 ± 0.29 | 19.41 ± 0.45 |
| Herding | 29.42 ± 1.54 | 37.50 ± 2.12 | 15.14 ± 0.45 | 20.19 ± 0.45 |
| Forgetting | 29.48 ± 1.98 | 38.01 ± 2.11 | 11.25 ± 0.90 | 17.07 ± 0.66 |
| GraNd-score | 23.03 ± 1.15 | 34.83 ± 1.22 | 13.68 ± 0.42 | 15.40 ± 0.52 |
| EL2N-score | 21.95 ± 1.08 | 31.63 ± 2.84 | 10.11 ± 0.25 | 13.52 ± 0.32 |
| Optimization-based | 26.77 ± 1.05 | 35.63 ± 0.63 | 13.20 ± 0.56 | 18.52 ± 0.33 |
| Self-sup.-selection | 23.12 ± 1.47 | 34.85 ± 0.68 | 11.23 ± 0.32 | 17.76 ± 0.69 |
| Moderate-DS | 28.45 ± 0.53 | 36.55 ± 1.26 | 15.27 ± 0.38 | 20.33 ± 0.28 |
| **GM Matching** | **40.28 ± 1.02** | **50.71 ± 0.86** | **20.38 ± 1.09** | **25.93 ± 0.98** |

GM Matching significantly outperforms all baselines — the gains are even more pronounced compared to milder corruption (Tables 2, 4, 5), further validating the robustness of our approach. **Q2.
Scaling on larger datasets?** We now include results (Top-5 test accuracy) on ImageNet-1k (1.2M training samples), using ResNet-50 and selection ratios from 60% to 90%:

| Method / Ratio | 60% | 70% | 80% | 90% |
|----------------------|-----------------|------------------|------------------|------------------|
| Random | 87.91 ± 0.37 | 88.63 ± 0.95 | 89.52 ± 0.73 | 89.57 ± 0.60 |
| Herding | 88.25 ± 2.16 | 88.81 ± 1.06 | 89.60 ± 0.58 | 90.41 ± 0.33 |
| Forgetting | 88.83 ± 0.92 | 89.81 ± 0.97 | 89.94 ± 0.26 | 90.41 ± 0.58 |
| GraNd-score | 88.48 ± 1.73 | 89.82 ± 2.07 | 90.24 ± 0.81 | 90.41 ± 0.62 |
| EL2N-score | 88.48 ± 2.81 | 89.82 ± 1.14 | 90.34 ± 0.87 | 90.46 ± 0.96 |
| Self-sup.-selection | 87.59 ± 2.61 | 89.56 ± 1.97 | 90.74 ± 0.27 | 90.49 ± 0.98 |
| Moderate-DS | 89.23 ± 0.96 | 89.94 ± 0.74 | 90.65 ± 0.51 | 90.75 ± 0.35 |
| **GM Matching** | **90.28 ± 0.38** | **90.54 ± 0.19** | **90.72 ± 0.26** | **90.84 ± 0.32** |

GM Matching achieves the best performance across all pruning levels — outperforming all baselines. Beyond empirical scalability, kindly also refer to Section 5.4 and Appendix F, where we break down the time complexity of both GM estimation and greedy selection and motivate the batched variant (Algorithm 1). Figures 9-11 show wall-clock time scaling vs. dataset size, embedding dim, and batch size. **Note:** For the new experiments in Q1 and Q2, we follow the setup and reuse baselines from Moderate Coreset (ICLR 2023). Code: https://github.com/tmllab/2023_ICLR_Moderate-DS **Q3. Contribution** We thank the reviewer for acknowledging the strength of our theoretical grounding and robustness to outliers. We respectfully clarify that our contribution goes well beyond a simple substitution: Fundamentally, we propose a new combinatorial formulation for Robust Moment Matching, enabling systematic study of subset selection in noisy settings.
While we instantiate our method with the GM, the framework supports a wide class of robust estimators (e.g., trimmed means, M-estimators), opening a principled direction in robust subset selection under noise. This formulation is fundamental, as it decouples selection from fragile mean estimation and instead aligns with a robust signal. Moreover, since the framework applies to general Hilbert spaces, it can extend to gradient space, enabling integration with methods like CRAIG that perform gradient matching for coreset selection. In summary, we present a general, theoretically grounded framework for robust coreset selection — not limited to a single estimator or modality — paving the way for future advances in robust data summarization. **Q4. Figure 6 formatting** Thank you for pointing this out. We will correct the spacing issue in the camera-ready version. We are grateful for the reviewer’s insights and look forward to constructive discussion and refining the paper accordingly.
Summary: The paper introduces Geometric Median (GM) Matching, a novel approach for robust k-subset selection from noisy datasets. The key contribution is replacing the empirical mean, which is sensitive to outliers, with the Geometric Median (GM), a robust estimator with an optimal breakdown point of 1/2. The GM Matching algorithm iteratively selects a subset whose mean approximates the GM of the potentially noisy dataset. Theoretical guarantees demonstrate that GM Matching achieves $O(1/k)$ scaling, outperforming traditional $O(1/\sqrt{k})$ scaling of uniform sampling, even under high corruption. Extensive experiments on image classification and generation tasks show that GM Matching significantly outperforms existing pruning methods, particularly in high-corruption settings, making it a strong baseline for robust data pruning. --------------- Updated ------------ Thank you for your response, based on other reviews and response, I feel confident in my rating of accept. Claims And Evidence: The paper makes several claims, which are mostly well-supported by theoretical analysis and empirical validation: * GM Matching is robust under high corruption rates: Theoretical guarantees (Theorem 1) prove that GM Matching remains stable even when up to 50% of data is arbitrarily corrupted. The experiments on CIFAR-100 and Tiny ImageNet confirm that GM Matching consistently outperforms other selection methods in corrupted environments. * O(1/k) convergence rate: The authors provide a mathematical proof that GM Matching converges at a quadratic improvement over uniform sampling. Empirical results (Fig. 3) support this claim, showing GM Matching achieving better moment matching error than herding and random sampling. * Superior performance in real-world scenarios: The experiments across multiple datasets (Tables 1, 2, 3) demonstrate that GM Matching consistently outperforms alternatives in both clean and noisy settings. 
Methods And Evaluation Criteria: The proposed method makes sense given the problem at hand, as it directly addresses the weaknesses of empirical mean-based selection methods in noisy datasets. The evaluation is mostly robust, with well-chosen benchmarks (CIFAR-100, Tiny ImageNet, MNIST), pruning ratios (20%-100%), and corruption scenarios (label noise, feature corruption, adversarial attacks). One minor issue: The baseline comparison might be missing Garg et al. (2023) for core set selection [1]. While Garg et al. is not explicitly designed for noisy datasets, this work could be relevant and should be included in Related Work. [1] Garg, Isha, and Kaushik Roy. "Samples with low loss curvature improve data efficiency." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20290-20300. 2023. Theoretical Claims: I checked the correctness of the theoretical proofs, and they seem sound. The key theoretical claim (GM Matching achieves O(1/k) convergence under gross corruption) is well-supported by rigorous mathematical derivations and empirical validation. While low-level details may have been missed, the results seem correct at the high level. Experimental Designs Or Analyses: The experimental design is solid, with extensive comparisons across different datasets, architectures, and corruption settings. The main concern is the use of FID for diffusion model evaluation. While acceptable in this case, the authors should consider moving to more robust generative evaluation metrics such as those proposed by Stein et al. (2023) and Jayasumana et al. (2024). [2] Stein, George, Jesse Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L. Caterini, Eric Taylor, and Gabriel Loaiza-Ganem. "Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models." Advances in Neural Information Processing Systems 36 (2023): 3732-3784. 
[3] Jayasumana, Sadeep, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti, and Sanjiv Kumar. "Rethinking fid: Towards a better evaluation metric for image generation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9307-9315. 2024. Supplementary Material: No supplementary material was provided. Relation To Broader Scientific Literature: The work builds upon robust statistics and subset selection techniques, integrating Geometric Median estimation into a scalable data pruning method. The key contribution is applying GM to data selection for deep learning robustness, which is novel and well-grounded in theoretical and empirical research. GM Matching provides a formal and solid approach to robustness against label noise and feature corruption. Essential References Not Discussed: Garg et al. (2023): This work discusses low-loss curvature and its impact on data efficiency, which could strengthen the theoretical backing of GM Matching. While not directly related to robustness, it is worth including in Related Work. [1] Garg, Isha, and Kaushik Roy. "Samples with low loss curvature improve data efficiency." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20290-20300. 2023. Other Strengths And Weaknesses: Strengths: * Strong theoretical guarantees: The breakdown point of 1/2 makes GM Matching highly robust compared to mean-based approaches. * Extensive experiments: The authors test multiple datasets, architectures, and corruption types, making the results more generalizable. * Practical relevance: Given the increasing interest in data pruning and robust training, GM Matching is a strong baseline for future research. Weaknesses: * Forgetting method performance needs more discussion: The results indicate that Forgetting performs better at higher corruption rates, but the paper does not fully explain why. A more detailed discussion of when to use GM Matching vs. 
other approaches would be useful. * Under feature corruption, when should GM Matching be used versus other methods? This should be more explicitly stated. * Adversarial attack settings are unclear: The paper does not provide details on PGD attack parameters—e.g., what was the epsilon bound? How many iterations were used? This should be stated in the main paper or appendix. Other Comments Or Suggestions: * Add adversarial attack parameters: Without this information, it is hard to replicate the results. * Clarify when to use GM Matching: The paper should discuss specific conditions under which GM Matching is preferable over other methods (e.g., what levels of corruption?). Questions For Authors: 1. What were the specific attack parameters? Please provide details on the epsilon bound and number of iterations. 2. Why does Forgetting perform better at higher corruption rates? The paper suggests GM Matching is robust, yet Forgetting seems to outperform it at low pruning rates. Why is this the case? Some discussion would be helpful. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed, thoughtful, and encouraging feedback. We are pleased that you found our theoretical results, empirical validation, and overall framing of GM Matching to be strong contributions. We respond to your key points below: **Q1. Missing Related Work** Thank you for pointing this out. Garg et al. (2023) presents an interesting and well-motivated approach based on low-loss curvature, which offers valuable insights into data efficiency and sample selection. While it is not explicitly designed for noisy settings, we agree that it is a promising direction for future work, particularly in exploring whether curvature-based selection methods can be made robust to corruption. We will cite and briefly discuss this work in the revised Related Work section. **Q2. Evaluation Metrics for Diffusion** We agree that FID has known limitations, especially when evaluating modern diffusion models. In our case, we used FID following standard practice for proof-of-concept (PoC) experiments on simple generative setups, and we appreciate that the reviewer finds this acceptable. That said, we thank the reviewer for pointing us to Stein et al. (2023) and Jayasumana et al. (2024), and we will reference both in the revised manuscript. **Q3. Adversarial attack parameters:** We appreciate the reviewer’s careful attention to reproducibility and thank you for this valuable feedback. Clarifying attack settings is indeed essential for replicating adversarial corruption experiments. As mentioned in the Experiments Section, our setup is identical to Moderate Coreset [ICLR 23]. The code is available at: https://github.com/tmllab/2023_ICLR_Moderate-DS . Specifically, in the adversarial corruption experiments, we used Projected Gradient Descent (PGD) as implemented in torchattacks, with the following parameters: $\epsilon$ = 8/255, step size = 2/255, number of iterations = 10, and random start = True.
These are standard settings in the robust training literature (e.g., Madry et al., 2018). Additionally, we used Gradient Sign Attack (GSA) (Goodfellow et al., 2014) via advertorch, with the same $\epsilon$ = 8/255. Adversarial examples were generated using a pretrained ResNet-50 on CIFAR-100 with standard normalization. We will ensure these details are clearly stated in the revised manuscript's Appendix. Furthermore, we will release the code of GM Matching for reproducibility. **Q4. Discussion on Forgetting** We appreciate the reviewer highlighting this nuanced behavior. Forgetting-based methods, which leverage early training dynamics, can perform well in clean or mildly corrupted settings at high retain ratios (e.g., 80–90%), where noise has limited impact. In such regimes, GM Matching may be modestly disadvantaged due to the bias introduced by the geometric median, while Forgetting can surface informative, hard-to-learn examples. However, Forgetting is known to be sensitive to corruption, unstable across architectures, and often requires careful tuning and access to full training traces. In contrast, GM Matching is simple, tuning-free, and inherently robust—making it particularly effective in severe or mixed corruption scenarios. We will expand on this comparison in the revised manuscript and include a discussion highlighting when each method is most appropriate. **Q5. Settings where GM Matching is preferable.** We thank the reviewer for this thoughtful question. Theoretically, in clean settings at low pruning rates, the GM is a biased estimator of the uncorrupted mean; as such, GM Matching may perform similarly to or slightly worse than methods that leverage informative or hard examples—e.g., those with high loss, large gradient norms, or that are frequently forgotten—making dynamic or score-based methods (e.g., Forgetting, EL2N) highly competitive.
However, GM Matching has a strong advantage in two key regimes: (1) Clean settings with high pruning rates, where its $\mathcal{O}(1/k)$ convergence offers a sharp advantage over random sampling (a very strong baseline); and (2) Noisy settings, where its robustness to outliers leads to consistent improvements over dynamic or score-based methods, which often degrade under corruption. Even on real-world datasets without synthetic noise (e.g., CIFAR-100, TinyImageNet, ImageNet-1k), we observe consistent gains, underscoring its practical relevance—making it a robust, tuning-free default across varied conditions. We will include this discussion in our revised manuscript. We appreciate the thoughtful comments and suggestions, and we look forward to continued dialogue to further strengthen the work. --- Rebuttal Comment 1.1: Comment: Thank you for your response; based on the other reviews and responses, I feel confident in my rating of accept. --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful review and for actively and promptly engaging with our rebuttal. Your feedback helped us strengthen the paper, and we appreciate your support and confidence in our work.
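For concreteness, the PGD recipe quoted in the rebuttal above ($\epsilon$ = 8/255, step size 2/255, 10 iterations, random start) can be sketched without any deep-learning framework. The linear "model" and its gradient below are toy stand-ins, not the actual torchattacks setup used by the authors; the point is only the sign-step-then-project loop.

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=8/255, alpha=2/255, steps=10,
               random_start=True, rng=None):
    """Untargeted L-infinity PGD on inputs in [0, 1]:
    gradient-sign ascent steps, projected back into the eps-ball around x0."""
    rng = rng or np.random.default_rng()
    x = x0 + (rng.uniform(-eps, eps, x0.shape) if random_start else 0.0)
    x = np.clip(x, 0.0, 1.0)
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))   # ascend the loss
        x = np.clip(x, x0 - eps, x0 + eps)    # project into the eps-ball
        x = np.clip(x, 0.0, 1.0)              # keep a valid image
    return x

# Toy stand-in for a network: loss(x) = w . x, so the input gradient is w.
rng = np.random.default_rng(0)
w = rng.standard_normal(3 * 32 * 32)
x0 = rng.uniform(0.2, 0.8, size=w.shape)      # a fake image in [0, 1]
x_adv = pgd_attack(x0, grad_fn=lambda x: w, rng=np.random.default_rng(1))
print(float(np.max(np.abs(x_adv - x0))))      # never exceeds eps = 8/255
```

With a real network, `grad_fn` would be the gradient of the classification loss with respect to the input, which is what torchattacks computes internally.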
Summary: This paper proposes a dataset pruning method with subset selection. The proposed method utilizes geometric median moment matching, allowing a small amount of approximation error. Also, the authors provide a theoretical guarantee that the proposed geometric median moment matching leads to a good approximation of the mean. Finally, several experiments on benchmark datasets are conducted to demonstrate the efficacy of the proposed method on clean as well as noisy data. Claims And Evidence: The author's claims are clear. Methods And Evaluation Criteria: The proposed methods seem to make sense. Theoretical Claims: I haven't read the proof of the proposed theorem, but it seems to be straightforward maths. Experimental Designs Or Analyses: The experiments are conducted based on the previous literature. Supplementary Material: No. Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: - The paper is clearly written, and easy to follow. - Extensive experiments are conducted to demonstrate the superiority of the proposed method. - The DDPM data generation experiment on the MNIST dataset is too simple. I am aware that training the diffusion model requires a long training time, but it would be more persuasive if the authors utilized complex datasets for training the generative model. Other Comments Or Suggestions: - The terminology $k$-subset selection might be misunderstood as choosing only a handful of data instances, while the authors are selecting a portion of the dataset. - Relatedly, what happens to the experimental results if the authors set $k=10, 100, 1000$, for example, in the CIFAR case? Questions For Authors: - What was the intuition behind utilizing geometric median moment matching? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Q1. The terminology k-subset selection.** We appreciate the reviewer's observation regarding potential ambiguity in the term k-subset selection. Our usage follows common conventions in the subset selection and coreset literature, where k typically denotes the size of the selected subset—either as an absolute number or, interchangeably, as a fraction of the dataset, i.e., $k = \rho n$ for some $\rho \in (0,1)$. Importantly, this terminology is not only standard but also theoretically grounded in our work. Our results are stated in terms of selecting k samples from a dataset of size n, and the error bounds naturally depend on k (e.g., Theorem 1). Thus, we feel our formulation justifies the use of k-subset selection both terminologically and theoretically. However, based on your suggestion, we will clarify this distinction explicitly in the revised manuscript to avoid any confusion. **Q2. Performance for small fixed subset sizes (k=10, 100, 1000)** We thank the reviewer for this valuable suggestion. We conduct additional experiments on CIFAR-10 using fixed subset sizes of 10, 100, and 1000 samples per class under three settings: clean, 20% label noise, and 40% label noise. A CLIP ViT-B/32 proxy encoder is used to select subsets, and a ResNet-18 is trained from scratch on the pruned data.
| **Method / Subset** | **10** | **100** | **1000** | **Mean ↑** |
|---------------------|--------|---------|----------|------------|
| **No Corruption** |||||
| Random | 23.2 ± 3.9 | 42.4 ± 2.3 | 86.8 ± 0.7 | 50.8 |
| Easy | 27.4 ± 0.4 | 42.5 ± 0.5 | 84.2 ± 0.3 | 51.4 |
| Hard | 19.8 ± 1.7 | 39.7 ± 1.0 | 86.1 ± 0.2 | 48.5 |
| Herding | 24.9 ± 1.6 | 45.7 ± 0.6 | 86.8 ± 0.4 | 52.5 |
| Moderate | 24.0 ± 1.8 | 44.5 ± 2.7 | 86.1 ± 1.3 | 51.5 |
| **GM Matching** | **25.6 ± 0.2** | **47.6 ± 1.9** | **86.9 ± 0.3** | **53.4** |
| **20% Label Noise** |||||
| Random | 18.0 ± 2.4 | 36.4 ± 0.9 | 75.5 ± 0.7 | 43.3 |
| Easy | 24.2 ± 0.6 | 40.7 ± 1.1 | 76.5 ± 1.9 | 47.1 |
| Hard | 13.1 ± 1.9 | 22.7 ± 0.7 | 67.2 ± 0.5 | 34.3 |
| Herding | 22.7 ± 0.3 | 38.5 ± 1.5 | 76.6 ± 1.3 | 45.9 |
| Moderate | 23.0 ± 1.3 | 39.8 ± 1.3 | 75.9 ± 1.3 | 46.2 |
| **GM Matching** | **26.0 ± 0.9** | **41.1 ± 1.8** | **77.8 ± 0.4** | **48.3** |
| **40% Label Noise** |||||
| Random | 16.8 ± 2.0 | 28.3 ± 2.2 | 66.2 ± 0.8 | 37.1 |
| Easy | 22.5 ± 1.5 | 34.1 ± 1.5 | 70.5 ± 1.1 | 42.4 |
| Hard | 12.8 ± 1.3 | 16.5 ± 1.6 | 51.4 ± 1.9 | 26.9 |
| Herding | 18.0 ± 1.4 | 30.1 ± 0.9 | 65.1 ± 1.4 | 37.7 |
| Moderate | 20.2 ± 1.3 | 34.0 ± 1.7 | 67.8 ± 1.5 | 40.7 |
| **GM Matching** | **23.3 ± 1.8** | **36.8 ± 1.4** | **71.0 ± 1.3** | **43.7** |

As is evident, GM Matching remains consistently strong even under extreme data reduction, particularly in noisy settings. This further supports the versatility of the method across the spectrum of subset sizes. **Q3. Simplicity of the DDPM experiment** We agree that MNIST is a relatively simple dataset. Our goal with this experiment was to provide a proof of concept (PoC) demonstrating the applicability of GM Matching to generative modeling — specifically, to show that high-quality subsets selected via our method improve generation quality, even under severe corruption.
This controlled setup was intentionally chosen to isolate the effect of subset selection, avoiding confounding factors from architectural or optimization complexity. We believe this early result is a meaningful step in connecting robust selection to generative tasks. A thorough investigation of GM Matching in diffusion models on more complex datasets (e.g., CIFAR-10, CelebA) and with modern backbones (e.g., ADM) is an exciting direction for future work, as is exploring its applicability to LLMs. **Q4. Intuition behind GM** Great question — thank you for the opportunity to clarify. At the core of our approach is the observation that the empirical mean is highly sensitive to outliers and corrupted data, which can significantly distort subset selection. In contrast, the geometric median has a breakdown point of 50%, making it a far more robust estimate of central tendency in noisy settings. This intuition leads naturally to our Robust Moment Matching formulation: instead of chasing the (fragile) dataset mean, we align with a robust estimator that resists the influence of corrupted or adversarial data points. This makes the selection process resilient across a variety of noise settings — as our theory and experiments confirm. We thank the reviewer once again and look forward to engaging in further discussion to strengthen the paper.
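The intuition in Q4 (the geometric median's 50% breakdown point versus the fragile empirical mean) is easy to see numerically. Below is a minimal Weiszfeld-style sketch, an illustrative toy rather than the paper's implementation, contrasting the two estimators under gross corruption:

```python
import numpy as np

def geometric_median(X, iters=200, tol=1e-9):
    """Weiszfeld fixed-point iteration for the geometric median of the rows of X."""
    y = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - y, axis=1), tol)  # avoid division by zero
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 0.1, size=(50, 2))   # clean points near the origin
outliers = np.full((10, 2), 50.0)              # gross corruption (~17% of the data)
X = np.vstack([inliers, outliers])

print("mean:", X.mean(axis=0))                 # dragged far toward the outliers
print("GM:  ", geometric_median(X))            # stays near the inlier cluster
```

Matching a subset's mean to this robust target, rather than to the corrupted dataset mean, is the essence of the rebuttal's Robust Moment Matching framing.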
Inductive Moment Matching
Accept (oral)
Summary:

# Update

I gave the score of 4. My complaints are minor, and the authors have addressed them in the rebuttal. I'm comfortable with the paper being published in a form close to what it is right now. So, I decided not to change my evaluation.

# Old Summary

This paper presents moment matching self distillation (MMSD), an algorithm for training a generative model from scratch that is based on the stochastic interpolant framework. Given a data distribution $q(x)$ and a noise distribution $p(\epsilon)$, the stochastic interpolant framework constructs a stochastic process $\\{ q_t(x_t) : 0 \leq t \leq 1\\}$ such that $q_0(x_0) = q(x_0)$ and $q_1(x_1) = p(x_1)$. The generative model trained by MMSD is of the form $f^{\theta}(x_t, s, t)$ where $\theta$ denotes the model parameter. The specification is that, for any $0 \leq s \leq t \leq 1$, if $x_t \sim q_t$, then $f^{\theta}(x_t, s, t) \sim q_s$. A well-trained model can thus be used to generate a data sample from a noise sample in 1 evaluation of $f^\theta$ or any other number of function evaluations.

The proposed algorithm is as follows. In each training iteration:

1. We sample two times $s$ and $t$ such that $0 \leq s < t \leq 1$. We also compute an intermediate time $r = r(s,t)$ such that $s < r < t$.
2. We then sample $M$ pairs of data items and noise vectors $(x_1, \epsilon_1)$, $(x_2, \epsilon_2)$, $\dotsc$, $(x_M, \epsilon_M)$ where $x_i \sim q$ and $\epsilon_i \sim p$.
3. For each $1 \leq i \leq M$, we compute $x_{t,i} = \alpha_t x_i + \sigma_t \epsilon_i$ and $x_{r,i} = \alpha_r x_i + \sigma_r \epsilon_i$ where $\alpha_t$ and $\sigma_t$ are the noise schedule functions used in the formulation of $q_t$.
4. Compute two sets of samples $\\{ y_{s,t,i}: y_{s,t,i} = f^{\theta}(x_{t,i}, s, t) \\}$ and $\\{ y_{s,r,i} : y_{s,r,i} = f^{\theta^-}(x_{r,i}, s, r) \\}$ where $\theta^-$ is the gradient-stop version of $\theta$.
5.
Compute the loss $\mathcal{L}$ as the distance between the two sets of samples $\\{ y_{s,t,i} \\}$ and $\\{ y_{s,r,i} \\}$ using the maximum mean discrepancy (MMD).
6. Update $\theta$ with the gradient of $\mathcal{L}$.

The algorithm is strikingly similar to a previous algorithm, consistency training (CT) [1]. The main difference is that MMSD uses MMD to compute the loss, whereas CT simply computes the loss as $\sum_i d(y_{s,t,i}, y_{s,r,i})$ where $d$ is a distance metric. Another difference is that, in CT, the time $s$ is always zero, but it is variable in MMSD. The paper provides a theoretical basis for the MMSD algorithm and shows that CT is one of its special cases. It demonstrates that MMSD is effective via the competitive FID scores achieved by MMSD-trained models on the CIFAR-10 and ImageNet 256x256 datasets. It also claims that MMSD training is more stable than some other algorithms, and that MMSD does not require extensive hyperparameter tuning.

*Citation*
* [1] Song et al. "Consistency Models." ICML 2023.

Claims And Evidence: The paper makes several claims.
1. MMSD is theoretically sound.
2. MMSD generalizes CT.
3. MMSD results in competitive and fast generative models.
4. MMSD training is stable.
5. The abstract implies that MMSD does not require extensive tuning.

I do not have particular problems with (1), (2), and (3). The FID scores on CIFAR-10 and ImageNet 256x256 are good given that the models were trained from scratch. However, (4) and (5) are not so evident. MMSD introduces quite a number of hyperparameters: the kernel function to use for MMD, the group size $M$ inside a minibatch, how to compute $r(s,t)$, and the weighting function $w(s,t)$. It also inherits conditioning parameters from the EDM and EDM2 papers. These hyperparameters have to be picked carefully in order for the model to achieve good scores. (Some hyperparameters, such as the weighting function $w(s,t)$, are derived from previous works, and I believe this is a form of extensive tuning.)
Moreover, the paper indicates that hyperparameters have an impact on training stability. For example, picking $M \leq 2$ or picking $r$ too close to $t$ causes training to become unstable.

Methods And Evaluation Criteria: The paper uses CIFAR-10 and ImageNet 256x256, which are standard datasets for benchmarking generative models. It also uses the standard metric, the FID. There are no issues with these choices. The experiments are quite straightforward as well: training models with MMSD and comparing their FID scores to prior works. The paper includes scores up to NFE = 2 for CIFAR-10 and NFE = 8 for ImageNet 256x256 (along with the guidance scale used), which makes comparison easy.

Theoretical Claims: I skimmed through most proofs, but did not check all the calculations. The proofs were generally easy to follow and seem to be sound.

Experimental Designs Or Analyses: The experiments are simple: training models with MMSD on benchmark datasets using well-known architectures and comparing the resulting FID scores to previous works. I find no particular problems.

Supplementary Material: I skimmed through the proofs and derivations in Sections A, B, C, and G.1. I also read the algorithms in Sections D, E, and F in order to understand the algorithm better, and I also briefly looked at Section J. Moving the algorithm in Section D to the main paper would make the paper easier to read.

Relation To Broader Scientific Literature: Training a fast generative model in a stable and reproducible manner from scratch is an important problem. This paper contributes a novel and relatively simple algorithm that works well on widely used benchmarks, and I think this is an important contribution.

Essential References Not Discussed: The authors cite Tee et al.'s physics-informed distillation (PID) paper but did not discuss its approach. PID trains the network so that its velocity field conforms to the ODE that defines the sampling trajectory of a diffusion model.
This approach is quite different from the approach taken by the paper under review or other papers, and so a discussion can be illuminating. There are several other papers that take this approach that the authors might consider citing.

*Citation*
* [1] Boffi et al. Flow map matching. 2024.
* [2] Yang et al. Consistency Flow Matching: Defining Straight Flows with Velocity Consistency. 2024.

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: In Section 7.2, "we haver a fixed batch size $B$ in which the samples are grouped into $B/M$ groups of $M$ where each group shares the same $t$." Shouldn't this be "where each group shares the same $s$ and $t$"? How are $s$ and $t$ sampled during training? Am I missing something?

Questions For Authors: I would like to know how much wall-clock time training with MMSD takes with different group sizes M but the same batch size B. Including this information in the appendix would be quite helpful for other practitioners.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
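For concreteness, the MMD loss in step 5 of my summary can be sketched as follows (my own illustrative sketch using a biased V-statistic estimate with an RBF kernel; the paper's actual kernel and bandwidth choices may differ):

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)) for all pairs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples X and Y."""
    kxx = rbf_kernel(X, X, bandwidth).mean()
    kyy = rbf_kernel(Y, Y, bandwidth).mean()
    kxy = rbf_kernel(X, Y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
# Two samples from the same distribution: MMD^2 is near zero.
same = mmd2(rng.normal(0, 1, (256, 2)), rng.normal(0, 1, (256, 2)))
# Samples from shifted distributions: MMD^2 is clearly larger.
diff = mmd2(rng.normal(0, 1, (256, 2)), rng.normal(3, 1, (256, 2)))
```

In MMSD, `X` would be the student samples $\{y_{s,t,i}\}$ and `Y` the gradient-stopped target samples $\{y_{s,r,i}\}$, with gradients flowing only through `X`.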
Rebuttal 1: Rebuttal: We thank the reviewer for the insights and suggestions. We would like to first refer the reviewer to the **Overall Update** section in our rebuttal for reviewer SxrL for important updates to the paper. We address the concerns below.

> Claims on (4) training stability and (5) not requiring extensive tuning.

We stress that our abstract only claims that our method, similar to diffusion models, remains stable in the sense that common parameter choices can lead to a stable training process and decent quality, without the model becoming degenerate. We do not claim that they all obtain “good scores”, which may have been misinterpreted by the reviewer. In fact, even diffusion models require specific designs in the architecture to reach optimal scores – for example, the DiT paper found that AdaLN gives the best performance with timestep conditioning, and the EDM paper shows that variance preserving is much less sensitive to hyperparameter choices than variance exploding (see config A and config B in the paper). For best performance in IMM, similar to diffusion and Flow Matching, we should expect some degree of tuning and different hyperparameter choices.

The stability is substantiated in Figures 4, 5, 6, and 7, which show that, as claimed in the abstract, our method converges across different time embeddings, time conditioning functions, $M$, $r(s,t)$, and $w(s,t)$ parameters. We introduced variability in these choices precisely to show our method’s stability across them. For $M$, we emphasize that our convergence stability is not sensitive to $M$ as long as $M\ge 4$, which is a reasonable range for practical purposes. No deep learning method can train stably for *all* parameter choices (e.g., even for diffusion, too large a learning rate or incorrect weighting for some $t$ can lead to degeneracy). We can only safely guarantee the stability exists for a large range of choices, which is sufficient for practical success.
Similarly for $r$, as long as $r$ and $t$ have a reasonable finite gap (i.e. $k\le 12$ in our case), we can achieve good performance. We also argue that $w(s,t)$ does not require extensive tuning and is simply carried over from VDM and Flow Matching. As motivated in Appendix C.9, since no weighting can be derived from MMD itself, we resort to the closest diffusion counterpart as in VDM [1], from which the terms $\frac{1}{2}\sigma(b-\lambda_t) (-\frac{d}{dt}\lambda_t)$ are carried over. The term $\alpha_t$ is also theoretically motivated to follow VDM gradients. These terms may look complex at first, suggesting extensive tuning, but they come as a group and do not require ablation of each individual subterm. The only additional $\alpha_t^2 + \sigma_t^2$ term is simply motivated by [2] for the Flow Matching schedule, which upweights middle timesteps. The same effect can be achieved by sampling more middle timesteps instead.

We also use EDM conditioning as a simple unifying notation for different parameterization choices (e.g. DDIM and Euler sampler can both be unified under this one notation, see Appendix C.5). We do not reuse the complex conditioning introduced in EDM. In fact, we find the simplest Euler parameterization $f_{s,t}(x_t) = x_t + (s-t)G_\theta(x_t,s,t)$ to work the best for ImageNet-256x256 (see Appendix C.5 and Table 4) and do not suggest deviating from this standard choice.

> Discussion of Tee et al.'s physics-informed distillation (PID)

PID is related as a distillation technique that explicitly matches the network’s own velocity field with that of diffusion. Different from other distillation methods, PID inputs noise $z$ and outputs $x_t$ whose derivative is matched with the pretrained diffusion velocity field. The skip connection shares similarity with CM but requires $c_\text{out}(T)=1$ instead of $c_\text{out}(T)=0$, and different from distribution-matching distillation, it does not need to jointly train two networks.
In addition to PID, we will discuss the two additional works in our revision.

> Shouldn't this be "where each group shares the same $t$ and $s$"?

Yes. Each group should share the same $t$ and $s$.

> Wall-clock time training with different group sizes M but the same batch size B.

With $B=4096$ and the DiT-XL architecture, we experimented across $M=2,4,8,16$ and find that all choices have a per-step wall-clock time of 0.53-0.55 seconds (one step here means a full optimization step accounting for both forward and backward passes). This is consistent with our analysis in Appendix C.4 that the forward/backward pass is the computational bottleneck and the time for computing the $M\times M$ matrix is negligible. \ &nbsp; [1] Kingma, Diederik, and Ruiqi Gao. "Understanding diffusion objectives as the elbo with simple data augmentation." Advances in Neural Information Processing Systems 36 (2023): 65484-65516. [2] Esser, Patrick, et al. "Scaling rectified flow transformers for high-resolution image synthesis." Forty-first international conference on machine learning. 2024.
Summary: The paper introduces Moment Matching Self-Distillation (MMSD), a novel framework for training few-step generative models from scratch. MMSD offers a single-stage training procedure that avoids the need for pre-training or optimizing two networks. It leverages self-consistent interpolants to match the moments of its distribution to that of the data, ensuring distribution-level convergence. ## update after rebuttal I thank the authors for their detailed response, which clarified my concerns about the use of classifier-free guidance in distillation models and the relationship between their method and Consistency Training. Given the authors' clarifications and commitments to improving notation and readability, I am happy to maintain my Accept recommendation. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors demonstrate the effectiveness of MMSD through extensive experiments on CIFAR10 and ImageNet 256×256. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem and application at hand. The use of benchmark datasets such as CIFAR-10 and ImageNet 256×256 is appropriate, as these are widely recognized and challenging benchmarks for evaluating generative models. The evaluation metric, Fréchet Inception Distance (FID), is a standard and reliable measure for assessing the quality and diversity of generated images. Theoretical Claims: I have gone through the theoretical proofs in the paper to a reasonable extent, and while I did not perform a 100% detailed verification, the proofs appear to be correct and well-justified. Experimental Designs Or Analyses: The experimental setups are well-structured and comprehensive. The authors evaluate MMSD on standard image benchmarks, including CIFAR-10, and ImageNet 256×256, which are widely used and respected in the field of generative modeling. 
Supplementary Material: I have reviewed most theoretical proofs in the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are related to diffusion/flow matching and distribution matching training/distillation, such as MMD-GAN, consistency model, and adversarial training. Essential References Not Discussed: After reviewing the paper, I did not find any particularly critical or essential references that were missing from the discussion. Other Strengths And Weaknesses: Strength: 1. A key strength of this paper is its novel and efficient approach to training few-step generative models from scratch. 2. By leveraging moment matching and self-consistent interpolants, MMSD guarantees distribution-level convergence and maintains stability across various settings, making it both robust and practical. 3. The theoretical formulations are rigorous. Weakness: 1. The rationality of classifier-free guidance for distillation models is questionable, particularly for one-step models. The experiment also demonstrates that cfg = 1.5 yields inferior results compared to cfg = 1.25 when step = 1. 2. The writing and organization of this paper need further refinement to improve readability. Some proofs, such as the proof of Theorem 1, which I believe uses induction to extend from an infinitesimal step to a long-range step, are somewhat obscured by overly complex notation. Other Comments Or Suggestions: I do not have any additional comments or suggestions for the paper. Questions For Authors: I don't think consistency models can be entirely considered a special case of this paper's framework. While consistency models can be understood from a distribution matching perspective, they are theoretically guaranteed to be trajectory-based distillation. 
Based on my understanding, this paper's method doesn't theoretically guarantee that the learned deterministic mapping is consistent with the PF-ODE mapping, since the framework is at the distribution level. And re-using $x_t$ for $x_r$ is aligned with consistency training only if r is very close to t. I'm not sure if this understanding is correct. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insights and suggestions. We would like to first refer the reviewer to the **Overall Update** section in our rebuttal for reviewer SxrL for important updates to the paper. And we would like to address the concerns below. > The rationality of classifier-free guidance for distillation models is questionable, particularly for one-step models. The experiment also demonstrates that cfg = 1.5 yields inferior results compared to cfg = 1.25 when step = 1. It is true that theoretically CFG in the diffusion/Flow Matching context does not work well in 1-step context because we are no longer modeling the velocity field at any $t$. In addition, the transition kernel defined by this 1-step CFG should be subject to an acceptance rate as in Metropolis adjusted MCMC (see [3] for details) without which the generated distribution can deviate from data distribution. In practice, since we cannot evaluate the likelihood, we always accept the samples. In general, however, we empirically find that adding some CFG values indeed helps with quality. We find that without CFG, 1-step sampler yields 12.21 FID on ImageNet-256x256, which is significantly higher than the results using CFG (7.97 and 7.12). We hypothesize that with 1 step, the linear combination of conditional and unconditional branches can still effectively correct visual details that are inaccurately modeled using either branch. Our conclusion is that CFG is helpful for 1-step sampler, but CFG values that achieve superior results in multi-step regime may not necessarily transfer to 1 step because 1-step CFG is not as well theoretically motivated as its multi-step counterparts. We call for more studies on this phenomenon in future works. > The writing and organization of this paper need further refinement to improve readability. 
Some proofs, such as the proof of Theorem 1, which I believe uses induction to extend from an infinitesimal step to a long-range step, are somewhat obscured by overly complex notation. We will streamline our notation in proofs in our revision. > Questions regarding CT as a special case. We stress an important point: minimizer of Consistency Distillation (CD) loss is the PF-ODE of the pretrained diffusion but minimizer of Consistency Training (CT) loss is NOT the PF-ODE. Minimizer of CT coincides with PF-ODE only when data distribution is delta distribution, as assumed by many proofs in the original CM paper [1] (Appendix B.1 Remark 4) and iCT [2] (Sec 3.2). To see CT does not learn PF-ODE, recall CM loss $$\mathbb{E}\_{x_t,x,t}[w(t) || g_\theta(x_t,t) - g_{\theta^-}(x_r, r) ||^2]$$, where $x_r$ is ODE solution from $x_t$. Assume $w(t)=1$, the minimizer of this loss is $$ g_{\theta^*}(x_t,t) = \mathbb{E}\_{x|x_t}[g_{\theta^-}(x_r, r) ] $$. For CD, $x_r$ is ODE solution using pretrained score so $x_r$ does not depend on $x$, i.e. the conditional expectation can be dropped and $$ \mathbb{E}\_{x|x_t}[g_{\theta^-}(x_r, r) ] = g_{\theta^-}(x_r, r)$$. However, for CT, $x_r$ depends on $x$, and the conditional expectation is irreducible. If we assume $g_{\theta^-}(x_r, r) $ is a PF-ODE result, this new minimizer at $t$ deviates from any single PF-ODE due to the expectation. The original CM paper [1] and iCT [2] have a delta-distribution assumption that similarly allow elimination of the conditional expectation, which downplays the aforementioned problem in the general case. We therefore call for understanding its loss at a distribution level, and find that it is a special case from this moment-matching perspective. > Our method does not guarantee learning of PF-ODE. We do not guarantee our method learns PF-ODE. However, our method can converge to a different and equally valid solution whose *distribution* matches the data distribution. 
Consider a toy 2D Gaussian distribution $\mathcal{N}(0,I)$ as data, and the same Gaussian $\mathcal{N}(0,I)$ as prior. The PF-ODE is an identity function mapping any point $x$ to itself. However, consider another function $$f_{s,t}(x_t) = \text{rotate}(x_t, 2\pi*(t-s))$$ where $\text{rotate}(x, \phi)$ is a rotation operation around the origin by angle $\phi$. Since Gaussian is rotation-invariant, distribution of $f_{s,t}(x_t)$ stays $\mathcal{N}(0,I)$. This is another valid solution under distribution matching objective which our method can also possibly learn. \ &nbsp; [1] Song, Yang, et al. "Consistency models." (2023). [2] Song, Yang, and Prafulla Dhariwal. "Improved techniques for training consistency models." arXiv preprint arXiv:2310.14189 (2023). [3] Du, Yilun, et al. "Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc." International conference on machine learning. PMLR, 2023.
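The rotation example above can be verified numerically with a short sketch (illustrative only; the choice of $s=0.3$, $t=0.9$ is arbitrary). The map is clearly not the identity, yet the pushforward distribution still matches $\mathcal{N}(0,I)$:

```python
import numpy as np

def rotate(x, phi):
    """Rotate 2D points around the origin by angle phi."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    return x @ R.T

rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 2))        # samples from N(0, I)
y = rotate(x, 2 * np.pi * (0.9 - 0.3))   # f_{s,t} with s = 0.3, t = 0.9

# y differs pointwise from x, but its mean stays ~0 and covariance ~I,
# so the distribution is unchanged even though the map is not the PF-ODE.
```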
Summary: This paper introduces a novel generative model enabling the few-step generation of high-quality photorealistic images. The approach builds on prior work in consistency trajectory models [1] and flow map matching [2], where a generator learns to move a noisy image from one timestep to another. However, unlike [1,2], which focus on pointwise matching along the Probability Flow ODE (PF-ODE), the proposed method instead matches the marginal distribution. The training follows a bootstrapping approach similar to consistency models, while distribution matching is achieved via Maximum Mean Discrepancy (MMD), eliminating the need for auxiliary networks required in prior methods like DMD [3] and MMD [4]. Beyond theoretical contributions, the paper introduces practical improvements for stable training and demonstrates strong performance on ImageNet with models trained from scratch. [1] Kim, Dongjun, et al. "Consistency trajectory models: Learning probability flow ode trajectory of diffusion." arXiv preprint arXiv:2310.02279 (2023). [2] Boffi, Nicholas M., Michael S. Albergo, and Eric Vanden-Eijnden. "Flow Map Matching." arXiv preprint arXiv:2406.07507 (2024). [3] Yin, Tianwei, et al. "One-step diffusion with distribution matching distillation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2024. [4] Salimans, Tim, et al. "Multistep distillation of diffusion models via moment matching." Advances in Neural Information Processing Systems 37 (2024): 36046-36070. Claims And Evidence: Yes, they are well-supported by theoretical proofs and strong empirical results. Methods And Evaluation Criteria: This paper is well-motivated, and the proposed method is intuitively sound. It follows a recent trend in multi-step generation, where the generator is trained to produce samples resembling those encountered at intermediate noisy distributions rather than directly targeting the clean distribution. 
This approach aligns with methods such as Piecewise Rectified Flow [1], the consistency trajectory model [2], and MMD [3], among others. It is exciting to see a novel method emerge along the same lines but driven by a different distribution matching objective. Furthermore, the use of Maximum Mean Discrepancy (MMD) as a training signal is well-justified. It eliminates the need for auxiliary network training, simplifying the approach while enabling stable training and strong performance. Beyond these methodological innovations, the overall results are solid, with rigorous benchmarking, systematic comparisons, and convincing ablation studies. [1] Yan, Hanshu, et al. "Perflow: Piecewise rectified flow as universal plug-and-play accelerator." arXiv preprint arXiv:2405.07510 (2024). [2] Kim, Dongjun, et al. "Consistency trajectory models: Learning probability flow ode trajectory of diffusion." arXiv preprint arXiv:2310.02279 (2023). [3] Salimans, Tim, et al. "Multistep distillation of diffusion models via moment matching." Advances in Neural Information Processing Systems 37 (2024): 36046-36070. Theoretical Claims: I didn't check the full proofs. Experimental Designs Or Analyses: Yes, the design is sound. Supplementary Material: I reviewed all experimental details. Relation To Broader Scientific Literature: It has broader implications for the generative modeling community, enabling end-to-end training of models that support both few-step and many-step inference. By eliminating the need for an additional distillation step during deployment, this approach streamlines the training-to-inference pipeline. Essential References Not Discussed: The discussion on related works are adequate. Other Strengths And Weaknesses: I would like to see additional results on T2I or T2V applications. These experiments should be relatively easy to do if initialized from a pretrained diffusion model and could greatly enhance the case for broader adoption. 
Other Comments Or Suggestions: n/a Questions For Authors: Q1: Would increasing the number of steps beyond 8 consistently improve performance? Is this improvement dependent on the training strategy, such as the choice of r(s,t)? Q2: In MMD [1], a DDIM-style sampler is also used during inference. Could this be applied to the proposed method? Currently, it appears that only the consistency sampler is utilized. [1] Salimans, Tim, et al. "Multistep distillation of diffusion models via moment matching." Advances in Neural Information Processing Systems 37 (2024): 36046-36070. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Overall Update:

We thank all reviewers for their insights and helpful suggestions. We would like to announce several important updates.

- To distinguish our method from other distillation-based post-training techniques, we change our title and model name from **“Moment Matching Self-Distillation”** to **“Inductive Moment Matching”**.
- Better experimental results that outperform those reported in the submission.

| | CIFAR-10 FID |
| -------- | :-------: |
| 1-step | 3.20 |
| 2-step | **1.98** |

| | ImageNet-256x256 FID |
| -------- | :-------: |
| 1-step (w=1.25) | 7.77 |
| 2-step (w=1.25) | 5.33 |
| 4-step (w=1.25) | 3.66 |
| 8-step (w=1.25) | 2.77 |
| 1-step (w=1.5) | 8.05 |
| 2-step (w=1.5) | 3.99 |
| 4-step (w=1.5) | 2.51 |
| 8-step (w=1.5) | **1.99** |

- Scaling beyond 8 steps for ImageNet-256x256

| | ImageNet-256x256 FID |
| -------- | :-------: |
| 10-step (w=1.5) | 1.98 |
| 16-step (w=1.5) | **1.90** |
| 32-step (w=1.5) | **1.89** |

The results continue to improve beyond 8 steps. We see that at 16 steps it achieves 1.90 FID and already outperforms the 2B parameter VAR baseline (1.92 FID). We see saturation beyond 16 steps, with marginal improvement at 32 steps (1.89 FID). \ &nbsp;

------------------------------------------

## For reviewer SxrL:

Thank you for your comments and suggestions. We would like to address your concerns below.

> I would like to see additional results on T2I or T2V applications. These experiments should be relatively easy to do if initialized from a pretrained diffusion model and could greatly enhance the case for broader adoption.

We test our algorithm on a text-to-image model trained on Datacomp [*] at 512x512 resolution. Samples using 8 steps can be found at [this link](https://drive.google.com/file/d/1e_2d0S1g4TStIkNtWE8UqGOO-BZZc9r0/view?usp=drive_link). While this is a preliminary result, we can see that our algorithm can easily transfer to T2I settings.
> Q1: Would increasing the number of steps beyond 8 consistently improve performance? Is this improvement dependent on the training strategy, such as the choice of r(s,t)?

Yes, the results continue to improve beyond 8 steps. See the results above in “Overall Update”: 16 steps attain 1.90 FID and performance tends to saturate beyond that, with 32 steps attaining 1.89 FID. Notably, this outperforms the VAR variant with 2B parameters. We additionally show a comparison between 8-step generation and 16-step generation in [this link](https://drive.google.com/file/d/1T8h-k7IW4b4srOMZbBppuwFT-LRk1CLP/view?usp=drive_link). We notice that extending to 16 steps gives only minor shifts in visual content in most cases. This demonstrates that 8 steps already give near-optimal solutions -- our method scales *efficiently* with sampling compute. With low probability, there can be a major shift in content although the low-frequency components look similar (i.e. they look alike from afar) (see row 2 col 3). Additionally, the relative improvement in FID should be similar across different $r(s,t)$, although their convergence rates can differ significantly. This is evident during training (see Figure 6), where constant decrement in $t$ noticeably lags behind constant decrement in $\eta_t$.

> Q2: In MMD [1], a DDIM-style sampler is also used during inference. Could this be applied to the proposed method? Currently, it appears that only the consistency sampler is utilized.

We actually investigate both the DDIM sampler (i.e. pushforward sampler) and the consistency sampler (i.e. restart sampler) (see Sec 4.3). The pushforward sampler is equivalent to DDIM (while additionally injecting $s$ as conditioning) because $f_{s,t}^\theta(x_t)$ is defined as a DDIM step from $t$ to $s$ with $g_\theta(x_t, s, t)$ as the $x$-prediction network (see line 193-194). In fact, we also show that the DDIM sampler is better than the consistency sampler in Sec 7.3 and Figure 8. \ &nbsp; [*] Gadre, Samir Yitzhak, et al.
"Datacomp: In search of the next generation of multimodal datasets." Advances in Neural Information Processing Systems 36 (2023): 27092-27112.
Deep Reinforcement Learning from Hierarchical Preference Design
Accept (poster)
Summary: This paper proposes HERON, a novel hierarchical reward design framework for reinforcement learning (RL) that leverages the hierarchical structures of feedback signals to ease the reward design process. HERON constructs a decision tree based on the importance ranking of feedback signals to compare RL trajectories and trains a reward model using these comparisons. The authors demonstrate HERON's effectiveness across various RL applications, including traffic light control, code generation, language model alignment, and robotic control. In traffic light control, HERON outperforms reward engineering techniques and achieves higher performance than the ground-truth reward. In code generation, HERON surpasses state-of-the-art methods using handcrafted piece-wise reward functions, showing improved sample efficiency and robustness. Claims And Evidence: The claims made in the submission are generally supported by evidence. The authors provide extensive experimental results across multiple domains (traffic light control, code generation, language model alignment, and robotic control) to demonstrate the effectiveness of HERON compared to existing methods. However, the claim that HERON can achieve decent performance even in environments with unclear hierarchy (robotic control) is less convincing, as the results show mixed performance and the environments tested may not fully represent the complexity of real-world scenarios where hierarchy is unclear. Methods And Evaluation Criteria: The proposed methods in the paper are well-suited for the problem of reward design in reinforcement learning, especially in scenarios where feedback signals have a natural hierarchy or where rewards are sparse. The evaluation criteria and benchmark datasets used, such as traffic light control, code generation, and robotic control tasks, are relevant and effectively demonstrate the versatility and effectiveness of the HERON framework. 
However, the evaluation could be further strengthened by including additional real-world applications with more complex hierarchical structures to better validate the robustness of HERON in diverse environments. Theoretical Claims: The paper does not present any formal proofs for theoretical claims. Instead, it focuses on empirical validation through extensive experiments across various applications. Therefore, there are no proofs to verify for correctness in this submission. Experimental Designs Or Analyses: The experimental designs and analyses presented in the paper are generally sound and well-structured. The authors conducted extensive experiments across multiple domains, including traffic light control, code generation, language model alignment, and robotic control, to validate the effectiveness of the HERON framework. However, the experiments could benefit from additional ablation studies to further isolate the impact of specific components of the HERON framework, such as the hierarchical decision tree and the preference-based reward model, on the overall performance. Supplementary Material: Yes, I reviewed the supplementary material. It includes detailed experiment settings, additional results, and explanations of the methods used in the main paper, which help to provide a more comprehensive understanding of the research. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature on reinforcement learning (RL) and reward design. Specifically, the proposed HERON framework builds upon prior work in hierarchical reward modeling and preference-based learning, offering a novel approach to leverage hierarchical structures in feedback signals to simplify reward design. 
This work also connects to recent advancements in deep RL, particularly in applications like traffic light control and code generation, where it demonstrates significant improvements over existing methods, highlighting its relevance and potential impact in the field. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper's strengths include its original approach to reward design through hierarchical structures, which is a creative combination of existing ideas in preference learning and RL. The application of HERON to real-world use cases like traffic light control and code generation demonstrates its practical significance and potential impact. The clarity of the paper is also commendable, with well-organized sections and clear explanations of the methodology and results. However, a potential weakness is the limited exploration of scenarios with unclear hierarchical structures, which could restrict the framework's applicability in more complex real-world environments. Other Comments Or Suggestions: No. Questions For Authors: 1. Could the authors provide more details on how HERON handles scenarios where the hierarchy of feedback signals is not clear or where signals have equal importance? This would help in understanding the robustness of the framework in more complex real-world applications. 2. How does the computational cost of HERON compare to other state-of-the-art methods, especially in terms of training time and resource requirements? This information is crucial for assessing the practical feasibility of deploying HERON in resource-constrained environments. 3. Can the authors discuss the potential impact of HERON on multi-objective RL problems, where objectives may conflict? Understanding this could highlight the broader applicability of HERON beyond the current scope of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer Q7xT, Thank you for the insightful review! We are glad you appreciate our work, and present our rebuttal to your review below. ## **Experimental Designs Or Analyses** **The experiments could benefit from additional ablation studies to further isolate the impact of specific components of the HERON framework, such as the hierarchical decision tree and the preference-based reward model, on the overall performance.** - Note that we conduct an extensive ablation study in section 4.5. We find that the margin parameter is quite important. - Furthermore, we also investigate the flexibility and robustness of HERON, finding that HERON is flexible and provides increased robustness. ## **Other Strengths And Weaknesses** **A potential weakness is the limited exploration of scenarios with unclear hierarchical structures, which could restrict the framework's applicability in more complex real-world environments** - First, we remark that HERON is designed for settings where there is a clear hierarchy (see our discussion on suitable scenarios) and performance in other settings is a “bonus.” In settings with unclear hierarchies like robotic control (see section 4.4), we find HERON can still outperform reward engineering. Our intuition is that in these settings a roughly correct hierarchy provides sufficient information for good performance. If the signals have equal importance, we propose in line 402 to randomly flip the ranking of equal feedback signals. However, we leave this study for future work. ## **Questions** **Could the authors provide more details on how HERON handles scenarios where the hierarchy of feedback signals is not clear or where signals have equal importance?** - See our above response to **Other Strengths And Weaknesses**. 
**How does the computational cost of HERON compare to other state-of-the-art methods, especially in terms of training time and resource requirements?** - We show the training time of HERON versus baselines in Figure 6a. HERON is around 25% slower than reward engineering, but greatly reduces tuning cost (Figure 6c). HERON is faster than other baselines such as the ensemble baselines as well. **Can the authors discuss the potential impact of HERON on multi-objective RL problems, where objectives may conflict?** - Thank you for the suggestion. As we mention in Line 420, HERON is not designed for multi-objective RL (MORL). In fact, HERON and MORL are solving different problems. MORL tries to train a policy on the pareto frontier among several reward factors. In contrast, HERON tries to find a way to combine feedback signals into a reward in a user-friendly way, such that the user can easily guide the agent’s behavior. This is useful in many real-world tasks—like code generation, aligning language models, or traffic light control—where some goals are naturally more important than others. To make this clearer, we’ll move the MORL discussion to the related work section and explain how it differs from our approach. Thank you again for the insightful review, and please let us know if you have any other questions. We look forward to further discussion!
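The rebuttal's suggestion to "randomly flip the ranking of equal feedback signals" could be realized, for instance, by grouping signals into importance tiers and shuffling within each tier before building the decision tree. A minimal sketch under that assumption (the function and signal names are illustrative, not from the paper):

```python
import random

def sample_ranking(tiers, rng=random):
    """Turn importance *tiers* (most important first) into a full
    ranking by randomly ordering the signals within each tier.
    Signals whose relative importance is unclear share a tier."""
    ranking = []
    for tier in tiers:
        tier = list(tier)
        rng.shuffle(tier)
        ranking.extend(tier)
    return ranking

# e.g. throughput is clearly most important; the other two are tied
ranking = sample_ranking([["throughput"], ["wait_time", "queue_length"]])
```

Re-sampling such rankings across tuning iterations matches the rebuttal's description of trying shuffled orderings of roughly equal signals.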
Summary: This paper introduces HERON, a decision-tree-based approach for reward design in reinforcement learning. In particular, the authors leverage human expertise to define a hierarchy of feedback signals. The authors employ that hierarchy to compare trajectories, collecting a dataset (with a policy model) and learning a reward model (similar to the standard procedure in preference-based RL methods). The reward model is then used to train the policy, and the whole procedure can be repeated to improve the reward model. The authors extensively evaluate their approach in diverse scenarios (multi-agent traffic light control, code generation, language model alignment and standard control environments), highlighting its versatility and performance. Finally, the authors present an ablation study, focused on the training time, hyperparameter sensitivity and tuning cost. Claims And Evidence: The authors mostly position their claims of HERON over: (i) the overall performance of the method in comparison with baselines, (ii) the robustness of the method to changes in magnitude of the underlying reward signals, and (iii) the adaptability of the method across multiple domains. The empirical evidence presented in Section 4 supports many of these claims: the authors show that HERON outperforms relevant baselines across multiple settings. In Section 4.1, the authors also show how the performance of HERON is less sensitive to changes in the dynamics of the environment. The authors also present an extensive ablation study in Section 4.5, which provides additional insights on the sensitivity of the model to hyperparameters, as well as the training and computational cost of training HERON. I would like to point out that in Section 4.2 (Results), the authors hypothesize that HERON's reward function "may be more conducive to learning". However, no empirical support for this claim is provided, such as learning curves showing faster convergence. 
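The decision-tree comparison described in the summary can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: trajectories are represented as dicts of aggregate signal values, a single shared `margin` is assumed (the paper's margin parameter may be per-level), and higher signal values are assumed better.

```python
def compare_trajectories(traj_a, traj_b, ranked_signals, margin=0.1):
    """Compare two trajectories by walking the importance-ranked
    signals like a decision tree: the first signal whose values
    differ by more than `margin` decides the preference; near-ties
    fall through to the next, less important signal.
    Returns +1 if traj_a is preferred, -1 if traj_b, 0 if tied."""
    for signal in ranked_signals:
        diff = traj_a[signal] - traj_b[signal]
        if abs(diff) > margin:
            return 1 if diff > 0 else -1
    return 0
```

Preference labels produced this way would then form the comparison dataset on which the reward model is trained.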
Methods And Evaluation Criteria: The proposed method is sound and addresses the real-world RL challenge of designing effective reward functions. The use of hierarchical preferences over reward features, instead of linear combinations, is an interesting contribution and is well motivated and explained in the paper. The evaluation criteria are also appropriate for each domain: reward of the agents in the evaluation environment (given by a ground-truth reward function), win rates for language model alignment and pass@K metrics for code generation. Theoretical Claims: There are very few theoretical claims in this paper. The loss function of HERON (Equation 2) follows standard preference-based RL formulations and appears correct. There are no proofs in the paper. Experimental Designs Or Analyses: The experimental design and analysis appear to be sound. In particular, I would like to highlight the impressive range of evaluation scenarios employed in this work: from multi-agent systems, to code generation, LLM alignment and control tasks. Across all of them, HERON either outperforms or performs on par with the baseline methods. However, I would like to single out HERON’s post-training reward scaling procedure in Section 4.2. This method appears to be an additional form of reward shaping, but it is not really motivated in the paper. The authors should experimentally evaluate if this extra step actually contributes to the performance of HERON, with an ablated version of the model. Supplementary Material: I briefly reviewed the supplementary material, in particular Appendix C to understand the baselines employed in the paper (which are quite unclear in the main paper). Relation To Broader Scientific Literature: The paper positions HERON within the area of preference-based RL. The authors discuss in Section 2 the connections and differences between the proposed method and others in RLHF, reward shaping, and inverse RL. 
The authors propose a decision-tree-based approach to compare agent trajectories to build a reward function. Tree-based structures for the reward function have been explored previously [1], but the use of a hierarchy of feedback signals appears to be novel. There is also a significant connection to multi-objective RL (MORL) literature, yet the authors only mention this at the end of the paper. I believe it deserves a section in related work, especially as the feedback signals employed by the authors can be considered different objectives in MORL. [1] Bewley, Tom, and Freddy Lecue. "Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions." Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems. 2022. Essential References Not Discussed: A major missing reference and baseline is PEBBLE [1], a literature-standard method in preference-based RL. A direct comparison, in particular for the control experiments of Section 4.4, would clarify whether HERON’s benefits stem from its hierarchical structure or simply from being a preference-based reward modeling approach. [1] Lee, Kimin, Laura Smith, and Pieter Abbeel. "PEBBLE: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training." arXiv preprint arXiv:2106.05091 (2021). Other Strengths And Weaknesses: One of the main strengths of this paper is that it presents an easily interpretable approach to reward design: the authors already explore some interpretability in Section 4.1 (Signal Utilization). The extensive evaluation suite is also to be commended. However, a significant limitation of HERON is its assumption of an underlying ranking of feedback signals. While the authors evaluate HERON on control tasks where it is not clear what the ranking should be, it still remains unclear if the level of performance observed would generalize to other tasks. 
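For context on the "standard preference-based RL formulation" referenced under Theoretical Claims above, the usual Bradley-Terry-style pairwise loss has the form below. This is reconstructed from the preference-based RL literature; the paper's actual Equation 2 may differ, e.g. by incorporating the margin parameter the rebuttal mentions.

```latex
\mathcal{L}(\theta) \;=\; -\,\mathbb{E}_{(\sigma^{+},\,\sigma^{-}) \sim \mathcal{D}}
\left[ \log \frac{\exp\!\big(r_{\theta}(\sigma^{+})\big)}
{\exp\!\big(r_{\theta}(\sigma^{+})\big) + \exp\!\big(r_{\theta}(\sigma^{-})\big)} \right]
```

Here $\sigma^{+}$ is the trajectory preferred by the hierarchical decision tree, $\sigma^{-}$ the other, $\mathcal{D}$ the collected comparison dataset, and $r_{\theta}$ the learned reward model.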
Other Comments Or Suggestions: There is a typo in Page 2: "humans typically start with the the most important factor" Questions For Authors: 1 - Can the authors justify better the need for the post-training reward scaling (presented in Section 4.2)? Shouldn't the original decision tree already learn to differentiate between these different feedback signals? Does HERON still outperform baselines without this heuristic adjustment? 2- How does HERON handle cases where the importance ranking of feedback signals is uncertain (for example in the experiments of Section 4.4)? Would its performance degrade significantly if the ranking were shuffled? 3 - Can you provide some empirical support to the claim that HERON's reward function "may be more conducive to learning" in Section 4.2? For example, showing the learning curves of HERON against the baselines. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer zX5S, Thank you for your detailed review and insightful suggestions! We are glad you think our work is novel. We provide our response to your review below. We shorten some questions to save space. ## **Claims and Evidence** **The authors hypothesize that HERON's reward function "may be more conducive to learning". However no empirical support to this claim is provided.** - We would like to clarify that this is an intuition rather than a definitive claim. Our hypothesis—that HERON's reward function may be more conducive to learning—was meant as a possible explanation for HERON's empirical performance advantage, not a conclusive statement. This hypothesis stems from the fact that many raw feedback signals are discrete or sparse, and reward models can serve as a way to smooth out this sparsity. - We do find some supporting evidence in Figure 14, where HERON exhibits significantly smoother and more stable learning curves compared to reward engineering. This suggests that HERON's reward function facilitates more stable training dynamics, which aligns with our intuition. ## **Experimental Designs Or Analyses** **However, I would like to single out HERON’s post-training reward scaling procedure in Section 4.2. This method appears to be an additional form of reward shaping, but it is not really motivated in the paper.** - Our post-training scaling is motivated by Figure 13, in which we found that reward modeling alone does not perfectly separate correct code from incorrect code at a global level. In order to further separate the two cases, we introduce a scaling parameter. Note that we only have a single parameter to tune, compared to the 4 used in CodeRL. - The pass@50 results without reward scaling can be found below. HERON still performs decently (outperforming all baselines, which are carefully tuned in prior works), but we find that post-scaling can further improve performance. We will include the results in the appendix. 
### Table: Pass@50 Scores

| Model     | Pass@50 Score |
|-----------|---------------|
| HERON     | 9.85          |
| CodeRL    | 9.81          |
| HERON+PTS | 10.19         |
| PPOCoder  | 7.62          |
| BC        | 6.74          |

## **Relation To Broader Scientific Literature** **There is also a significant connection to multi-objective RL (MORL) literature...** - Thank you for pointing out reference [1]; we will include it in our related work. As for MORL, we remark that MORL tries to find the Pareto frontier among several reward factors. In contrast, HERON tries to find a way to combine feedback signals into a reward in a user-friendly way, such that the user can easily guide the agent’s behavior. We believe these are two orthogonal directions. We will move discussion of MORL to the related work section. ## **Essential References Not Discussed** **A major missing reference and baseline is PEBBLE**. - PEBBLE is not directly comparable to HERON, as PEBBLE relies on human preference data for learning. HERON on the other hand is meant for settings where we try to design a reward function from some freely available feedback signals. Therefore PEBBLE cannot be used as a baseline in our experiments. - Moreover, PEBBLE’s contributions (unsupervised pre-training and off-policy learning) are orthogonal to HERON’s contributions. Therefore we do not include PEBBLE as a reference. ## **Weaknesses** **However, a significant limitation of HERON is its assumption of an underlying ranking of feedback signals.** - While HERON assumes an underlying ranking of feedback signals, this assumption is well-motivated by many real-world scenarios where feedback signals naturally have a hierarchy—for example, in traffic light control, code generation, and LLM alignment. In such contexts, ranking the relative importance of feedback signals is valid. We believe good results in these environments already represent a significant contribution. 
- Moreover, as demonstrated in Section 4.4, HERON performs well even in the absence of a strict feedback hierarchy. This empirical resilience highlights its practical utility and broad applicability, even when the ranking structure is noisy or only partially specified. ## **Questions** **Q1:** See above response to **Experimental Designs Or Analyses**. **Q2:** We find HERON can still perform well in these settings, outperforming reward engineering baselines (Table 6). HERON is relatively robust to shuffled rankings. In Figure 6, we show how HERON performs with inexact domain knowledge, i.e. only knowing which factors fall in the top 3 and which ones fall in the bottom 3. By one tuning iteration (i.e. the best of two shuffled rankings out of a possible 36), HERON can already outperform the best-case performance of reward engineering. **Q3**: See above response to **Claims and Evidence**. Thank you again for your thoughtful review, and please let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my questions. For this reason, I increase my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer zX5S, Thank you for the quick response and willingness to increase your score. We will be sure to include the discussed modifications into the next version of our paper.
Summary: The paper proposes HERON, a hierarchical preference-based RL framework that leverages the hierarchical importance of feedback signals to design reward functions. HERON constructs a decision tree based on human-provided importance rankings of feedback signals to compare trajectories and train a preference-based reward model. The authors claim HERON improves sample efficiency, robustness, and applicability to sparse reward settings. Experiments are conducted on traffic control, code generation, and other tasks to validate these claims. Claims And Evidence: The claim that HERON universally eases reward design is problematic since many real-world tasks do not naturally offer clearly prioritized feedback signals. More comprehensive experimental validation and discussion are needed to convincingly support the broad claims. Methods And Evaluation Criteria: **Methods**: The use of pairwise comparisons to train a reward model is innovative, yet its dependency on accurate, human-specified importance rankings may limit its scalability and applicability in domains lacking clear hierarchy. **Evaluation Criteria**: The benchmarks and tasks selected for evaluation appear to showcase improved sample efficiency and robustness. However, the evaluation does not sufficiently address scenarios where the hierarchical structure is ambiguous or entirely absent, raising concerns about the method’s generalizability. Theoretical Claims: **Theoretical Analysis**: The paper does not provide a rigorous theoretical analysis or proofs ensuring that the learned reward function preserves the optimal policy. **Issues**: Without theoretical guarantees, there is a risk that the policy may converge to a suboptimal local optimum, particularly in sparse reward environments where reward signals are less distinct. 
Experimental Designs Or Analyses: The choice of baselines is relatively outdated, lacking comparisons with recent advancements such as inverse preference learning approaches (e.g., Hejna & Sadigh, 2023) and Hindsight PRIORs (e.g., Verma & Metcalf, 2024). It remains unclear if the experiments cover a sufficiently diverse range of scenarios, especially those where feedback signals do not exhibit a clear hierarchical structure. Supplementary Material: Yes Relation To Broader Scientific Literature: The work builds upon prior research in preference-based reinforcement learning and reward shaping by introducing a hierarchical structure that mimics human decision-making. A more comprehensive discussion of how HERON fits into and advances the current state-of-the-art would be beneficial. Essential References Not Discussed: Recent studies such as [1] and [2] are not discussed. [1] Hejna, J., & Sadigh, D. (2023). Inverse preference learning: Preference-based RL without a reward function. Advances in Neural Information Processing Systems, 36, 18806-18827. [2] Verma, M., & Metcalf, K. (2024). Hindsight PRIORs for Reward Learning from Human Preferences. In The Twelfth International Conference on Learning Representations. Other Strengths And Weaknesses: **Strengths**: Innovative hierarchical framework that intuitively aligns with human decision processes. Empirical results indicating improvements in sample efficiency and robustness on the tested tasks. **Weaknesses**: Reliance on a clear hierarchy in feedback signals, which may not be present in many practical applications. Lack of theoretical analysis to ensure that the learned reward function leads to optimal or near-optimal policies. Experimental comparisons are limited to older baselines, missing insights from more recent literature. Other Comments Or Suggestions: No Questions For Authors: 1. How does HERON perform in environments where feedback signals do not have an inherent hierarchical structure? 2. 
Have you considered leveraging neural network architectures to automatically learn or approximate the hierarchical priorities when explicit human rankings are unavailable? 3. Can you provide any theoretical analysis or formal guarantees regarding the convergence and optimality of the policies derived from the HERON framework, especially in sparse reward settings? 4. What is the rationale behind the selection of baseline methods, and how do you expect HERON to compare against more recent state-of-the-art methods like those proposed by Hejna & Sadigh (2023) and Verma & Metcalf (2024)? 5. How sensitive is HERON to noisy or suboptimal human rankings of feedback signals? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 7gbW, Thank you for your detailed review and suggestions to improve our work. First and foremost we would like to address your review of our claims, in which you say we claim “HERON universally eases reward design”. Nowhere in the paper do we make this claim, and in fact we consistently state that our scope is limited to problems with hierarchical structure (see line 51 right, line 62 left, and line 436 left). We present our rebuttal of the remainder of your review below. We shorten some questions to save space. ## **Experimental Design or Analyses** **The choice of baselines is relatively outdated, lacking comparisons with [1] and [2]** - We believe there is a misunderstanding here. HERON is not comparable to preference learning methods. HERON assumes online access to an environment which gives the agent several feedback signals. We seek to design a reward function from these feedback signals, which is a classical setting in RL. On the other hand, [1] requires access to an offline preference dataset, while [2] requires access to online preference annotation. In contrast, HERON asks a human annotator to rank the importance of feedback signals (usually there are 3-7 such signals) one time. This is completely separate from the recommended papers. We include a detailed discussion on suitable scenarios in line 436. **It remains unclear if the experiments cover a sufficiently diverse range of scenarios...** - We conduct extensive experiments in 7 diverse environments, which we believe is sufficient to show the benefit of HERON (multi-agent traffic light control, code generation, LLM alignment, and four robotic control experiments). In robotic control experiments the feedback signals do not all have a clear hierarchy, yet we are able to beat or match the baselines in all environments. These results demonstrate that our method performs well in a diverse set of hierarchical or roughly hierarchical environments. 
## **Relation To Broader Scientific Literature** **A more comprehensive discussion of how HERON fits into and advances the current state-of-the-art would be beneficial.** - Thank you for the suggestion. We will add more works on SOTA approaches in our related works section, including [1] and [2]. Please note we already have a discussion on suitable settings for HERON (line 436). ## **Weaknesses** **Weakness 1** - First, we want to point out that HERON is designed for settings where there is a clear hierarchy (see our discussion on suitable scenarios) and performance in other settings is a “bonus.” However, in environments without clear hierarchy, we find HERON can still perform decently, outperforming reward engineering baselines. See section 4.4 of our paper for more details. **Weakness 2** - Theoretical analysis is challenging, as it is difficult to precisely characterize the relationship between the feedback signals and the ground truth reward. For example, in traffic light control, defining an optimal combination of all relevant factors is inherently ambiguous. - That said, we hypothesize that if theoretical guarantees were to be established, the convergence of the policy would likely depend on two key factors: (1) the statistical error in the learned reward, and (2) the distribution shift between the state visitation distributions of the sampling policy and the optimal policy. ## **Questions** **Q1:** See our response to weakness 1. **Q2:** One approach is to learn a decision tree based on human preference data. This may work when the feedback signals have hierarchical structure and we only have a limited amount of human preference data. However, this would be a separate setting and we leave it for future work. **Q3:** See response to weakness 2. **Q4:** The main baseline we compare against is reward engineering, which has been the most popular approach for reward design over the past 20 years. 
We also compare against two baselines that have been used in MORL literature, the ensemble baselines. In relevant settings such as code generation, we compare to carefully designed and publicized reward functions like that of CodeRL. Finally, when available, we also compare training directly on the ground-truth reward. Again we note that those SOTA methods of [1] and [2] are not relevant in our setting, as they assume access to a large set of human preference data (either online or offline) which HERON is not designed to use. **Q5:** HERON is relatively robust to suboptimal rankings. In Figure 6, we show how HERON performs with inexact domain knowledge, i.e. only knowing which factors fall in the top 3 and which ones fall in the bottom 3. By one tuning iteration, HERON can already outperform the best-case performance of reward engineering. This indicates HERON can perform well even with slightly noisy rankings. Thank you for the detailed review, and please let us know if you have any further questions or need any clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the response that addresses my concerns. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 7gbW, Thank you for the rebuttal acknowledgment and the willingness to raise your score. We will include the modifications we discussed in the next version of our paper.
Summary: In this work, the authors propose a novel hierarchical reward design framework tailored for environments that require the integration of multiple feedback signals. The framework is motivated by the observation that these signals often contribute unequally to the overall reward and that there is a hierarchy in how much each feedback contributes. The paper includes extensive empirical evaluations across four different applications and compares the proposed method with alternative reward design approaches. --- ## update after rebuttal Thank you to the authors for their detailed response. After reading the rebuttal and considering the other reviews, my main concerns were addressed, and I changed my original score. --- Claims And Evidence: The paper’s claims are primarily empirical—for instance, the authors claim improvements in robustness and overall performance compared to existing techniques in specific scenarios. Although the results appear to support these claims, crucial details are missing from the empirical evaluation section, making it difficult to fully confirm that the evidence supports the claims. Please refer to the detailed comments and questions in the Empirical Designs or Analyses section of the review. Methods And Evaluation Criteria: The method is generally well-described and motivated. The evaluation is also thoughtfully executed: the authors selected a variety of environments with distinct characteristics to showcase the different properties of the proposed method. For example, they demonstrate how the method performs when there is a clear hierarchy in the feedback signals versus when such a hierarchy is absent. Theoretical Claims: The paper does not present any theoretical claims or proofs. Experimental Designs Or Analyses: The experimental design is thorough, but several details require further clarification: - For the traffic light control experiment: - How many samples were used? 
The plots show considerable variance, and the sample size is important for drawing reliable conclusions. - What is the definition of the ground truth reward, and why does it show more variance compared to the other methods? Additionally, why do the other reward design methods appear to perform better than it? - Could the authors clarify why it is important that a relatively small proportion of decisions are made at each level (as suggested by the results in Figure 3), and how these proportion values were computed? - In the experiment presented in Figure 4, why was only the second feedback signal varied? What difference would it make if the number of cars passed (the first feedback) was not kept constant? - Could the authors elaborate on the setup for the robustness experiment detailed in Figure 5? For instance, what was the original training speed, to which speed was it changed afterward, and how many trials were used? - For the code generation experiments, what is the number of trials used, and what is the variance in the results? While the paper claims that the proposed method “significantly outperforms the baselines,” some results (e.g., those in Tables 1-4 for HERON and CodeRL) are fairly close, which might challenge this claim if the variance is high. - In the LLM alignment experiments, why was HERON-DPO compared with REINFORCE? Could the reward engineering baseline potentially perform better if coupled with a different algorithm (e.g., PPO or TRPO)? - Regarding the robotics experiment, could the authors clarify the details of Table 6? For example, what do the numbers represent, how were they generated (e.g., the number of trials), what is the final hierarchy used for the feedback signals, and how was this hierarchy determined? Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: This paper introduces a novel method for combining multiple feedback signals into a single reward measure. 
The approach offers advantages over traditional techniques, such as linear combinations with engineered weights, particularly in scenarios where the feedback signals have a clear hierarchy. Although the method addresses a somewhat specific problem, it appears to perform well in that context. I believe that the community could benefit from this approach. Essential References Not Discussed: The reference list appears comprehensive, and I did not identify any essential references that were missing. Other Strengths And Weaknesses: The method is clearly described and straightforward, making it easy to follow. The motivation is also well-articulated and intuitive. However, as noted in earlier sections of the review, there are important aspects of the empirical evaluation that should be further clarified in the paper. Other Comments Or Suggestions: Figure 6 is missing axis information. Questions For Authors: 1. On line 176, the authors mention that "it is possible to introduce pre-trained knowledge into the reward model." Could the authors provide concrete examples of this? 2. On line 198, the authors state that "in appropriate settings, we can use DPO …". Could the authors clarify what constitutes an appropriate setting in this case? For instance, are there particular types of tasks or domain characteristics that make DPO especially suitable? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer uDMx, Thank you for your thoughtful and detailed review of our paper. We are glad you appreciate the novelty of our approach as well as our experimental analysis. We provide our rebuttal to your critiques below. We enumerate your questions and comments to save characters. ## **Experimental Design and Analysis:** ### **1. Traffic Light Control:** **1.1: How many samples were used?** - We present results over five random seeds. Indeed we find that the baselines have high variance, but HERON exhibits very low variance. To further validate the efficacy of our method, we have run a t-test for the last 1000 evaluation time steps in the traffic light control environment. Our method is significantly better than the reward engineering baseline, with p=0.003. This gain justifies the use of our algorithm. **1.2: Ground truth reward.** - The ground truth reward can be found on line 693, and it has been developed over several papers. It was finalized in [Zhang 2019]. Other baselines sometimes outperform it, as we extensively tune all baselines. This tuning can be seen as a type of reward shaping, which can improve performance. **1.3: Proportion of decisions made at different levels.** - If too many decisions are made at a single level, that effectively means that information from other levels of the decision tree are being completely ignored, which is not ideal. Decisions being made at all levels indicate information from all feedback signals is being incorporated into the reward, which is desirable and indicates the efficacy of our reward design. We compute the proportions in traffic light control by recording which level of the tree each decision was made at throughout training. **1.4: Figure 4.** - We only vary the second feedback signal to keep the reward realistic, as the first feedback (traffic throughput) is almost always viewed as most important in traffic light control. 
We show the policy can still achieve good performance with other hierarchies in Figure 10.

**1.5: Figure 5.**

- The original speed is 35; we then change it to 25, 30, 40, and 45 (see Figure 5). We conduct 5 independent trials for each experiment and report the mean and standard deviation over the runs.

### **2. For the code generation experiments, what is the number of trials used, and what is the variance in the results?**

- We conduct training over 1 seed due to the large expense of these training runs, which is in line with prior works. However, when evaluating a policy on APPS, we evaluate each policy with 1 million generated programs (there are 5000 test problems, and we generate 200 for each problem) and 100000 on MBPP. Treating each generated program as an independent Bernoulli trial, we can conduct a t-test. When considering the largest value of K in pass@k in Tables 1, 2, 3, and 4, we find that a t-test comparing HERON and the best baseline has p-values < 0.05, indicating that HERON indeed outperforms the baselines in a statistically significant manner.

### **3. In the LLM alignment experiments, why was HERON-DPO compared with REINFORCE?**

- We compare with REINFORCE as it is one of the most popular and high-performing approaches for LLM alignment these days [1]. It is possible the baseline could do better with PPO, but this would require extensive tuning, and PPO has been shown to underperform REINFORCE for LLM alignment [1].

[1] Ahmadian, Arash, et al. "Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms."

### **4. Regarding the robotics experiment, could the authors clarify the details of Table 6?**

- In Table 6 we show the average ground truth reward obtained by each algorithm over the final 1000 iterations of training (training is 2 million steps total). We conduct experiments over 5 random seeds, and use the PPO algorithm to optimize the policies.
The reward hierarchy is generally: is-alive > moving forward > control cost > contact cost. We will make this more clear in the paper.

## **Questions**

**Q1**: In code generation, we use a pre-trained language model as the initialization for the reward model. This is advantageous as it means reward training is faster and more accurate. Similarly, for LLM alignment we could use a pre-trained LM as a reward, but opt for DPO in our experiments due to its simplicity.

**Q2**: DPO is most useful when the horizon length of the task is one, as in that case we can directly train the policy on preference comparisons, without having to train a reward model. This is the case in LLM alignment, which is where DPO is mainly used.

Thank you again for the detailed review. We are eager to know whether your questions about our empirical evaluation have been addressed, or if there are any more details we can give you.
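The significance test described in answer 2 above — treating each generated program as an independent pass/fail (Bernoulli) trial and comparing pass rates between two methods — can be sketched as a two-proportion z-test. The function and the pass counts below are illustrative assumptions, not the authors' actual code or results:

```python
from statistics import NormalDist

def two_proportion_z_test(pass_a, n_a, pass_b, n_b):
    """Two-sided z-test for a difference in pass rates, treating each
    generated program as an independent Bernoulli (pass/fail) trial."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p_pool = (pass_a + pass_b) / (n_a + n_b)      # pooled pass rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Hypothetical pass counts out of 1 million generated programs per method:
z, p = two_proportion_z_test(152_000, 1_000_000, 150_000, 1_000_000)
print(f"z = {z:.2f}, p = {p:.1e}")
```

With samples this large, even a small gap in pass rate yields a very small p-value, which is consistent with the p < 0.05 claim even when headline pass@k numbers look close.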
Steering Protein Language Models
Accept (poster)
Summary: This paper presents a method to control the output of protein language models, inspired by the Activation Steering approach in the LLM domain, allowing the generated sequences to exhibit a given property. This method does not require retraining the protein language model and can be directly applied during the inference stage. The paper validates the effectiveness of this approach by manipulating properties such as thermostability and solubility. Claims And Evidence: The paper experimentally demonstrates that the proposed method can generate sequences with improved performance on a given property, supporting its claim. Methods And Evaluation Criteria: The method proposed in this paper is reasonable. However, I believe the evaluation criteria used in this paper have potential issues. During the generation process of ESM-2, the model's behavior (such as activation steering or certain activation parameters) directly depends on the structure and inference process of ESM-2, and the generated protein is then evaluated for properties such as thermostability using ESM-2. If the activation of the model is guided or modified during the generation process, then the evaluation may exhibit a certain level of circular dependency with the generation process, potentially causing the generated protein to perform exceptionally well in the ESM-2 evaluation, because the model has already been adjusted during the generation. This could potentially be considered an attack on ESM-2. In this part of the experiment (Table 1), we also observe that ESM-2 shows the best performance. Therefore, the paper needs to clarify whether this good performance is due to such an attack. Theoretical Claims: This paper is more "application-oriented," and therefore does not include many theoretical claims. Experimental Designs Or Analyses: I have reviewed the experimental design section of the paper, and I believe more testing metrics need to be included. 
The paper primarily tests how well the generated sequences from the protein language model align with the target properties and measures diversity and novelty, but I believe an important metric for evaluating PLM-generated results is testing the authenticity of these sequences, which the authors have overlooked. It is necessary for the paper to include metrics that assess the authenticity of the generated sequences, such as pLDDT and sequence likelihood. Supplementary Material: This paper does not provide supplementary materials. Relation To Broader Scientific Literature: The paper proposes a more novel method for controlling PLM output, which is insightful for protein design. Essential References Not Discussed: I did not find any important missing references. Other Strengths And Weaknesses: I am curious whether the paper's method performs well for modeling more niche properties. The model's performance was primarily validated in terms of thermostability, solubility, and fluorescence brightness, but these are already well-studied properties, and many protein sequences with these properties are included in the PLM training data. However, for protein design tasks involving more niche properties, I am uncertain whether this method would still perform well, as it fundamentally relies on the internal knowledge of PLM. Other Comments Or Suggestions: I don't have other comments. Questions For Authors: I don't have other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for providing valuable feedback. We detail our response below point by point. Please kindly let us know whether you have any further concerns.

----

**Q1**: "the evaluation may exhibit a certain level of **circular dependency**. ... This could potentially be considered an attack on ESM-2. In this part of the experiment (Table 1), we also observe that ESM-2 shows the best performance."

We thank the reviewer for raising the important issue of potential circular dependency in our evaluation. Using ESM-2 both as the base generative model and as part of the downstream predictor could potentially introduce biases. However, we emphasize the following points to address this concern:

- While using ESM-2 as a base model achieves superior performance to other base models in Table 1, by comparing the generation performance with Activation Steering and without Activation Steering (Original Model), we can observe a significant performance gain. This validates that improvements are attributable to the steering mechanism rather than inherent biases in ESM-2.
- Besides, our proposed Activation Steering method demonstrates consistent and significant improvements over both Fine-tuning and the Original Model across all three base models (ESM-2, ESM-3, and ProLLaMA). This cross-model consistency suggests the improvements stem from the method itself rather than inherent biases in ESM-2.

-----

**Q2**: "It is necessary for the paper to include metrics that assess the authenticity of the generated sequences, such as pLDDT and sequence likelihood."

To address the concern about assessing the authenticity of generated sequences, we incorporate the pLDDT metric, evaluated using ESMFold, to measure the structural integrity of both initial and optimized protein sequences. The results, presented in the tables below, demonstrate that the pLDDTs remain consistent before and after applying activation steering.
This consistency confirms that our method effectively guides protein generation towards desired functionalities while maintaining the authenticity of the sequences.

**Table 1**: pLDDT for protein generation.

| Base Model | Method | Thermostability | Solubility |
| -- | -- | -- | -- |
| ProLLaMA | Original Model | 40.02 | 40.02 |
| ProLLaMA | Fine-tuning | 37.79 | 37.68 |
| ProLLaMA | Activation Steering | 39.51 | 39.52 |
| ESM2 | Original Model | 74.09 | 69.17 |
| ESM2 | Fine-tuning | 72.62 | 68.16 |
| ESM2 | Activation Steering | 73.02 | 68.36 |
| ESM3 | Original Model | 76.14 | 73.54 |
| ESM3 | Fine-tuning | 75.50 | 72.71 |
| ESM3 | Activation Steering | 76.18 | 72.61 |

**Table 2**: pLDDT for protein optimization regarding thermostability.

| | Medium difficulty | Hard difficulty |
| -- | -- | -- |
| Before Optimization | 79.75 | 79.99 |
| AdaLead | 51.06 | 31.15 |
| ESM2 + ASPO | 76.75 | 79.59 |
| ESM3 + ASPO | 81.00 | 78.91 |

**Table 3**: pLDDT for protein optimization regarding solubility.

| | Medium difficulty | Hard difficulty |
| -- | -- | -- |
| Before Optimization | 77.92 | 75.94 |
| AdaLead | 36.87 | 35.89 |
| ESM2 + ASPO | 78.38 | 77.13 |
| ESM3 + ASPO | 78.15 | 77.98 |

----

**Q3:** "I am curious whether the paper's method performs well for modeling more **niche properties**... as it fundamentally relies on the internal knowledge of PLM."

- We appreciate the reviewer's interest in the applicability of our method to more niche properties. We would first note that the challenge with niche properties is the scarcity of labeled data, which can hinder the training of effective predictors to estimate the performance.
- We conducted an additional experiment focusing on the niche property of hydrolysis activity at 30°C for polyethylene terephthalate (PET). We utilized the dataset from (Seo et al. 2025), comprising 184 annotated samples.
- To estimate the performance, we train a predictor on this dataset, achieving a Pearson correlation coefficient (R) of 0.57 in 5-fold cross-validation, indicating a reasonable prediction capability under data constraints.
- We design optimization tasks of varying difficulty:
  - Medium Difficulty Task: Optimizing 69 samples with an initial average predicted activity of 1.61.
  - Hard Difficulty Task: Optimizing 67 samples with an initial average predicted activity of 175.62.
- The results post-optimization using our ASPO method combined with ESM3 are as follows:

| Task Difficulty | Initial Average Activity | Post-Optimization Average Activity |
| - | - | - |
| Medium | 1.61 | 5.00 |
| Hard | 175.62 | 236.91 |

- These results demonstrate that our ASPO method can effectively optimize even niche protein properties, such as hydrolysis activity in PET, underlining the versatility and potential of our method in broader protein design applications.

> Reference: Seo, Hogyun, et al. Landscape profiling of PET depolymerases using a natural sequence cluster framework. Science 2025.

---

Rebuttal Comment 1.1: Comment: Thanks for your rebuttal and I have raised my score to 3.
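As context for the 5-fold cross-validated Pearson R reported for the PET-activity predictor above, here is a minimal sketch of that evaluation protocol; the least-squares model and the synthetic 184-sample dataset are illustrative stand-ins, not the authors' actual predictor or data:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1-D arrays."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_val_pearson(X, y, fit, predict, k=5, seed=0):
    """Average Pearson R of held-out predictions over k random folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(pearson_r(predict(model, X[test]), y[test]))
    return float(np.mean(scores))

# Toy 184-sample dataset with a least-squares "predictor" as a stand-in.
rng = np.random.default_rng(1)
X = rng.normal(size=(184, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=184)
fit = lambda Xt, yt: np.linalg.lstsq(Xt, yt, rcond=None)[0]
predict = lambda w, Xv: Xv @ w

r = cross_val_pearson(X, y, fit, predict)
assert r > 0.9  # near-perfect fit on this low-noise toy data
```

On real assay data with only 184 labels, an R around 0.57 (as reported) is plausible; the point of the sketch is only the fold-wise protocol.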
Summary: This paper introduces a method for steering Protein Language Models (PLMs) towards outputs with desirable properties. It is based on a technique called 'Activation Steering' used for LLMs, where internal activation vectors of LLMs are modified to shift them towards desired behaviour. The authors show how this method can be applied to both Autoencoder PLMs like ESM2 and ESM3 as well as Autoregressive PLMs like ProLLAMA to produce proteins with higher ‘fitness’, i.e. higher values of desired properties like thermostability and solubility. In line with previous work in this field, oracle models are created by training CNN models based on relatively large datasets. The proposed method shows performance improvements over existing methods, while maintaining diversity and novelty of generated proteins. Claims And Evidence: 1. The paper claims to enable property-specific protein generation without the requirement for retraining models, thus providing a scalable alternative to other resource intensive techniques. 2. The paper introduces a new iterative optimization method which involves choosing which tokens to mutate based on an estimate of their individual fitness score. 3. An empirical evaluation is performed on models with 3 different architectures to show the universal effectiveness of the method and its performance in comparison to some existing methods in the field. Claim (1) is supported by empirical evidence, at least for the models included in the paper. Activation steering is indeed less resource intensive than directed evolution/reinforcement learning based methods as well as fine-tuning, though it is unclear if it is so in comparison with other benchmarks used in the paper. Some of the technical claims made in the paper are unconvincing to me, such as “magnitude of a representation’s projection onto [the direction of the steering vector] can indicate its relatedness to the property”, which I have detailed in the methods/questions section. 
The method itself also makes certain assumptions which I don’t think are true. Again, I have detailed these in later sections. With respect to claim (2) – the paper does introduce a new optimization procedure to choose the tokens most weakly associated with the property of interest and update the internal representations of those tokens. It is unclear to me if the method of determining if a token contributes positively or negatively to the fitness function is completely correct, which follows from the point made above about projection onto the steering vector. Moreover, it is not clear to me from a theoretical standpoint how simply adding a common steering vector to the layer representation of any token, no matter where it is in relation to the regions of positive and negative fitness values, would improve overall fitness of the protein. Again, I have elaborated on this in the question section. For claim (3), empirical evidence is shown on 3 different architectures. With the lack of any theoretical discussion or explanation, I’m unconvinced as to why this method is universally applicable to all PLMs. Is there a fundamental property of models on which this method is based? Latent spaces of different models behave differently – some may be amenable to post-training modifications while some may not. Overall, I think this paper may still be valuable since it details a method which empirically shows better results than existing methods for a real-world task with meaningful utility. However, there are multiple methodological leaps and assumptions which mean that it doesn’t convince me in terms of being an actual novel technical contribution. Methods And Evaluation Criteria: Methods: I’m unsure about certain parts of the method, which I have detailed below: 1. 
Firstly, the assumption that “PLMs inherently encapsulate intrinsic knowledge about specific protein properties” is only true if the training dataset has sufficiently large numbers of samples exhibiting the negative to positive range of values you want to generate at inference time. Without a meaningfully high number of samples, the model would not gain a semantic understanding of the required property and would not encode this in the structure of its latent space. This is not true for fine-tuning based approaches, where you can fine-tune on an additional dataset with relatively few samples and get desired behaviour. Thus, this method is an alternative to fine-tuning based methods only when the property of interest is already in the dataset the model is trained on. 2. Secondly, I’m unsure whether just adding a vector to the latent representation of a deep learning model will always lead to meaningful results. For example, adding a random vector to the latent representation of the model is likely to lead to gibberish outputs, as demonstrated in many papers on adversarial attacks. What if adding the steering vector to the latent representation takes the model to an unseen region, i.e. a region which doesn’t exist in the training set? 3. Even if you assume that the model is well behaved under the steering transformation, it is not strictly true that adding the steering vector to the latent representation of a particular sample will move it towards the desired property. Concepts like functional continuity, linearity, and other properties necessary for this to be true are not necessarily naturally developed by the model. As a simple thought experiment, assume that there is a spherical region in the middle of the latent space that corresponds to positive fitness. 
Then, in order to transform samples outside the positive space to lie within the positive space, I need to add vectors to all samples pointing inward towards the positive region, whereas the steering vector would always point in the same direction. You are assuming that a translational transform in the latent space of each layer can shift any token towards the desired property, which is not true. 4. “the magnitude of a representation’s projection onto [the direction of the steering vector] can indicate its relatedness to the property”. This is not strictly true. For example, you can infinitely scale a vector and its projection onto the steering vector will keep increasing. That doesn’t necessarily mean its relatedness to the property will keep increasing. 5. One question I have with this method is how you maintain existing properties of proteins not necessarily related to the fitness property, especially for protein optimization. This is especially possible with the iterative optimization where you repeatedly replace the tokens least associated with the property to optimize for. For example, say I have a protein with all the required properties but solubility. In optimizing for solubility using this method, do I lose other properties? This is important for protein optimization because I would generally start with a protein with certain properties and try to optimize for others. If the optimized protein is unrecognizable from the protein I start from, then this is just a conditional protein generation method, not an optimization method. Evaluation: 1. In line with previous works, this paper uses a trained model as an oracle. This model is trained on thousands (~40000) labeled examples and shows medium to high accuracy/correlation on a test set. While this is not optimal, it is in line with previous work in the area published at similar venues. 
In the absence of actual ground-truth values for the properties of interest, these evaluation criteria, though not perfect, are the best that is possible. 2. Apart from fitness, evaluation criteria such as diversity and novelty are well-chosen to assess the effectiveness of the generated sequences. One thing that is missing, as mentioned previously, is maintenance of existing properties apart from the property to optimize. 3. I’m unclear on the Dist_init and Dist_high metrics – are large or small values of these metrics favourable? I’m not sure how these metrics contribute to telling us about the quality of the output. There’s also no clear pattern in Tables 2, 3, 4 that tells me which models are better. Theoretical Claims: No major theoretical claims are made; the paper is mostly based on empirical results. It would be good to see some theory on major claims, such as whether adding steering vectors to intermediate layers always moves the output closer to the high fitness region, and whether this happens in a linear or smooth manner from the initial point to the final point as you perturb the activations more and more. Experimental Designs Or Analyses: Experiments are well designed to support the claims of the paper. Results are shown on major PLM structures (though only 3) and for three different properties. Supplementary Material: I reviewed the parts about how the oracle that gets solubility/thermostability values for optimized proteins is trained. While not optimal, I don't see any other way short of wet lab experiments to get ground truth values of fitness for the optimized proteins. I also reviewed the description of measures used to evaluate the models and have mentioned some concerns in the methods and evaluation section. Relation To Broader Scientific Literature: The paper is in line with previous work on protein fitness optimization. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. 
The paper addresses a topic of importance in protein engineering. 2. The method is conceptually simple and seemingly shows good results despite its simplicity. 3. The method is demonstrated on both AE-PLMs and AR-PLMs, implying some sort of generalizability. However, this is not surprising given that the method is based on a fundamental concept of semantic organization in neural network latent spaces, which could be counted as a strength of the method. Weaknesses: I have outlined multiple weaknesses related to the method in the 'Methods and Evaluation' section. I will briefly summarize them here: 1. Optimizing for properties which are not reflected in the original training set. This is only a criticism because the method is compared to fine-tuning, which can be used to generate proteins with properties not in the original set using a small, new dataset. 2. Assumptions about the steering transformation being a coherent transformation path in the feature space of the model, in terms of semantic continuity of the space, linearity, etc. Theoretical grounding would be useful here. 3. Concerns about the magnitude of the projection of a vector reflecting its relatedness to the property. Counterexample given in the 'Methods and Evaluation' section (point 4). 4. Questions about maintaining existing properties unrelated to the property to optimize for. 5. Model evaluation is based on oracle classification models without actual ground truth. However, as I mentioned, there's no real way to get around this short of actual experimental validation. Other Comments Or Suggestions: I will rate this paper as a 'weak accept' but I think certain issues need to be addressed, mainly points 2 and 3 listed in the 'Methods and Evaluation' section above. Questions For Authors: Most of my questions are listed in the sections above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and positive feedback. We have provided a detailed explanation for your concerns as follows. Please feel free to let us know if you have any additional concerns or questions. **W2: Assumptions .. continuity, linearity...Theoretical grounding** - Our method is based on two hypotheses: the **linear representation hypothesis** and the **superposition hypothesis** (T. Adly 2024). The linear representation hypothesis suggests that neural networks encode meaningful concepts as directions in their activation spaces, while the superposition hypothesis extends this by suggesting that networks utilize almost-orthogonal directions in high-dimensional spaces to represent features, embodying properties of additivity and homogeneity. These hypotheses underpin the design of many existing algorithms. >Ref: T. Adly. Scaling monosemanticity: Extracting interpretable features from Claude 3 sonnet. Anthropic, 2024. - However, the theoretical validation of these hypotheses is ongoing. As such, the theoretical grounding of our method remains a subject for future research. **M1: the assumption .. only true if dataset has large numbers of samples** - Our method assumes that the pretrained PLM has already encapsulated comprehensive knowledge of protein properties, given its pretraining on a massive number of both naturally occurring and synthetically designed protein sequences. This extensive pretraining dataset supports our assumption that the PLM possesses a universal understanding of various protein properties, making it suitable for steering the PLM to explore specific properties. - However, we acknowledge that if the PLM lacks prior knowledge of a target property, our method may not be effective, which is a limitation. - Regarding the reviewer's point on fine-tuning, it is important to clarify that FT also relies on the pretrained model having some foundational knowledge of the desired properties. 
Without this, FT on a small dataset risks overfitting and poor generalization, similar to the limitations faced by our proposed method.

**M2: unsure whether just adding a vector**

- We appreciate the reviewer's concern regarding the potential risks of modifying latent representations. Indeed, adding arbitrary vectors can disrupt model outputs, akin to adversarial attacks. However, our method carefully defines the steering vector to ensure meaningful modifications aligned with desired properties.
- To illustrate, consider a simple case with a positive sample ($z_p$) and a negative sample ($z_n$). To shift $z_n$ towards $z_p$, the intuitive direction for modification is $v=z_p-z_n$. Extending this, if we have sets of positive and negative samples, the steering vector can be defined as the mean difference between these sets, aligning changes with the observed data distribution.
- Our method assumes that the latent space adheres to the linear representation hypothesis and superposition hypothesis, allowing these mean-based modifications to navigate towards desired properties effectively. Empirical results confirm that this method of steering the latent representation leads to the desired enhancements.

**M3: not strictly true that adding the steering vector .. move it towards the desired property**

- The effectiveness of the proposed method relies on the linear representation hypothesis and superposition hypothesis. The thought experiment designed by the reviewer, where the latent space exhibits non-linear characteristics, poses a challenge to our assumptions. In this case, the proposed method does not work.

**M4&W3: Concerns about the magnitude .. reflecting relatedness to the property**

- Normalization techniques like LayerNorm and RMSNorm in transformers constrain vector magnitudes, ensuring they don’t scale infinitely. This addresses the counterexample.
- Additionally, the magnitude is indicative of the importance of the token.
Our relatedness score takes the magnitude of the token representations into consideration, following the attention mechanisms in transformers, which compute attention scores via dot products between queries and tokens.

**M5&W4: maintain unrelated properties**

- We evaluated both thermostability and solubility to ensure our method maintains unrelated properties during optimization. Due to length limitations, we only present solubility experiments. As shown in the following tables, steering for solubility has minimal impact on the unrelated property thermostability.

Tab: Protein generation

|Base Model|Method|sol(target)|therm(unrelated)|
|-|-|-|-|
|ProLLaMA|Original|.23|56|
|ProLLaMA|FT|.24|57|
|ProLLaMA|AS|.28|56|
|ESM2|Original|.33|57|
|ESM2|FT|.41|56|
|ESM2|AS|.44|57|
|ESM3|Original|.32|54|
|ESM3|FT|.39|55|
|ESM3|AS|.49|54|

Tab: Protein optimization

||Medium|Medium|Hard|Hard|
|-|-|-|-|-|
||**sol**(target)|**therm**(unrelated)|**sol**|**therm**|
|Before Opt|.28|54|.09|54|
|AdaLead|.62|50|.53|51|
|ESM2+ASPO|.51|53|.35|54|
|ESM3+ASPO|.65|53|.4|53|
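The mean-difference construction from the M2 reply (steering vector = average representation of the positive set minus that of the negative set, added to a hidden state with a scale α) can be sketched as follows; the hidden size and toy activations are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def steering_vector(pos_reps, neg_reps):
    """Mean difference between positive-set and negative-set representations."""
    return np.mean(pos_reps, axis=0) - np.mean(neg_reps, axis=0)

def steer(hidden, v, alpha=1.0):
    """Shift a token's hidden state along the steering direction."""
    return hidden + alpha * v

rng = np.random.default_rng(0)
d = 8                                      # toy hidden size
pos = rng.normal(1.0, 0.1, size=(50, d))   # toy "high-property" activations
neg = rng.normal(0.0, 0.1, size=(50, d))   # toy "low-property" activations

v = steering_vector(pos, neg)
h = steer(neg[0], v, alpha=1.0)

# Steering strictly increases the projection onto the steering direction:
# (x + v) @ u - x @ u = ||v|| for the unit direction u = v / ||v||.
u = v / np.linalg.norm(v)
assert h @ u > neg[0] @ u
```

This also makes the reviewer's M3 concern concrete: the shift is the same translation for every token, which only helps under the linear-representation assumptions the rebuttal invokes.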
Summary: This paper adapts activation steering techniques from LLMs to Protein Language Models (PLMs) to guide protein generation toward desired properties without retraining. It introduces an Activation Steering-based Protein Optimization (ASPO) framework that outperforms existing methods on thermostability, solubility, and GFP brightness tasks. Claims And Evidence: The key claims are well-supported by experimental evidence across multiple PLM architectures. Results show significant improvements in target properties while maintaining diversity and novelty compared to fine-tuning and other baselines. However, the worse fine-tuning results might be due to insufficient data and large numbers of learnable parameters. Do you use parameter-efficient fine-tuning, like LoRA? Methods And Evaluation Criteria: The methodology is sound, with appropriate evaluation metrics (fitness, diversity, novelty, distance measures) and comprehensive comparison against established baselines on relevant protein engineering tasks. Theoretical Claims: N/A Experimental Designs Or Analyses: The number of use cases is small, though. In the ablation studies, it's better to show the trend of both properties and basic protein generation qualities with respect to the ablated variables. Supplementary Material: All. Relation To Broader Scientific Literature: It should be easy for people in the science community to adapt this method with pre-trained large models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: The single-objective design is quite limited. How do you deal with multi-property optimization? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate very much your constructive comments on our paper. We have provided a detailed explanation for your questions as follows. Please feel free to let us know if you have any additional concerns or questions.

----

**Q1: "How do you deal with multi-property optimization?"**

Thank you for your insightful question regarding our method for multi-property optimization.

- We propose a simple solution to perform steering on multiple properties by computing a composite steering vector that incorporates the steering vectors of individual properties. Specifically, if $v_\ell^{therm}$ represents the steering vector for thermostability and $v^{sol}_\ell$ for solubility, the combined steering vector used for multi-property optimization is given by $v_\ell = v_\ell^{therm} + v^{sol}_{\ell}$.
- The remaining settings are the same as single-property optimization.
- We empirically test this method in scenarios involving both protein generation and protein optimization. The results, as detailed in the tables below, demonstrate that our method effectively improves multiple properties, albeit with a slight trade-off compared to optimizing a single property.
- We plan to further refine our method to better manage these trade-offs and explore more sophisticated methods for steering vector combination and weighting for multi-property optimization in future work.
**Table 1**: Multi-Property Steering for Protein Generation

| Base Model | Method | Thermostability | Solubility |
|--|--|--|--|
| ESM2 | Original Model | 56.77 | 0.327 |
| ESM2 | Activation Steering | 75.29 | 0.409 |
| ESM3 | Original Model | 60.70 | 0.312 |
| ESM3 | Activation Steering | 67.82 | 0.449 |

**Table 2**: Multi-Property Steering for Protein Optimization on Thermostability, Medium Difficulty Task

| Method | Thermostability | Solubility |
|--|--|--|
| Before Optimization | 59.78 | 0.299 |
| ESM2 + ASPO | 76.19 | 0.401 |
| ESM3 + ASPO | 76.64 | 0.332 |

**Table 3**: Multi-Property Steering for Protein Optimization on Thermostability, Hard Difficulty Task

| Method | Thermostability | Solubility |
|--|--|--|
| Before Optimization | 46.38 | 0.237 |
| ESM2 + ASPO | 78.11 | 0.278 |
| ESM3 + ASPO | 74.99 | 0.311 |

----

**Q2: "In the ablation studies, it's better to show the trend of both properties and basic protein generation qualities with respect to the ablated variables."**

Thank you for your valuable suggestion. We include trends for diversity, distance to the initial set, and distance to the high-fitness set in the tables below, and will reflect these trends in the revised figures to provide a clearer visualization of how the ablated variables impact both properties and basic protein generation qualities.

**Table 4**: Sensitivity to the number of samples for steering vector extraction in protein thermostability optimization (Fig. 4(a))

| Number of samples | 10 | 25 | 50 | 100 | 250 |
|--|--|--|--|--|--|
| ESM2-Medium-Diversity | 6.21 | 7.69 | 7.86 | 7.71 | 7.71 |
| ESM2-Medium-Dist_init | 6.15 | 6.75 | 7.49 | 7.709 | 8.15 |
| ESM2-Medium-Dist_high | 10.87 | 10.77 | 10.54 | 10.63 | 10.51 |
| ESM2-Hard-Diversity | 4.82 | 6.04 | 5.97 | 6.09 | 6.13 |
| ESM2-Hard-Dist_init | 6.66 | 7.21 | 7.02 | 7.29 | 7.30 |
| ESM2-Hard-Dist_high | 10.21 | 10.44 | 9.91 | 8.825 | 9.10 |
| ESM3-Medium-Diversity | 6.96 | 6.96 | 6.92 | 6.94 | 6.94 |
| ESM3-Medium-Dist_init | 6.08 | 6.09 | 6.09 | 6.00 | 6.00 |
| ESM3-Medium-Dist_high | 10.53 | 10.31 | 10.08 | 9.71 | 10.12 |
| ESM3-Hard-Diversity | 6.93 | 6.95 | 6.91 | 6.92 | 6.932 |
| ESM3-Hard-Dist_init | 7.71 | 7.60 | 7.60 | 7.57 | 7.68 |
| ESM3-Hard-Dist_high | 9.94 | 9.74 | 9.46 | 9.25 | 9.27 |

**Table 5**: Sensitivity of $\alpha$ in protein thermostability optimization (Fig. 6(a))

| $\alpha$ | 0.05 | 0.25 | 0.5 | 1 | 2 | 5 | 20 |
|--|--|--|--|--|--|--|--|
| ESM2-Medium-Diversity | 7.01 | 7.68 | 7.70 | 7.71 | 7.29 | 7.79 | 7.74 |
| ESM2-Medium-Dist_init | 7.67 | 7.68 | 7.71 | 7.71 | 8.00 | 7.88 | 7.89 |
| ESM2-Medium-Dist_high | 10.43 | 10.45 | 10.52 | 10.63 | 11.42 | 11.46 | 11.39 |
| ESM2-Hard-Diversity | 6.83 | 6.05 | 6.07 | 6.09 | 6.348 | 6.370 | 6.348 |
| ESM2-Hard-Dist_init | 6.43 | 7.68 | 7.71 | 7.29 | 7.37 | 7.36 | 7.36 |
| ESM2-Hard-Dist_high | 8.88 | 8.88 | 8.86 | 8.83 | 9.19 | 9.15 | 9.19 |
| ESM3-Medium-Diversity | 7.01 | 7.25 | 7.14 | 6.94 | 6.91 | 6.91 | 6.91 |
| ESM3-Medium-Dist_init | 6.00 | 6.00 | 6.00 | 6.00 | 6.01 | 6.01 | 6.01 |
| ESM3-Medium-Dist_high | 9.66 | 9.63 | 9.66 | 9.71 | 10.06 | 10.06 | 10.06 |
| ESM3-Hard-Diversity | 7.15 | 7.14 | 7.02 | 6.92 | 6.83 | 6.83 | 6.83 |
| ESM3-Hard-Dist_init | 7.59 | 7.58 | 7.58 | 7.57 | 7.57 | 7.57 | 7.57 |
| ESM3-Hard-Dist_high | 9.38 | 9.32 | 9.28 | 9.25 | 9.84 | 9.84 | 9.84 |

----

**Q3: "the worse fine-tuning results might be due to insufficient data and large numbers of learnable parameters. Do you use parameter-efficient fine-tuning, like LoRA?"**

- We appreciate your comment regarding parameter efficiency in fine-tuning. Indeed, we employ LoRA, specifically with a rank of 8, which we found optimal in our experiments after evaluating various ranks [2, 4, 8, 12, 16].
The rank of 8 outperformed others, with rank 4 being the next best. Our choice of hyperparameter alpha is 16.
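For reference, with the stated hyperparameters (rank $r=8$, $\alpha=16$) the LoRA update amounts to a low-rank additive correction to a frozen weight matrix. The sketch below only illustrates that arithmetic (the matrix sizes are invented, and this is not the experiment's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, lora_alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection (zero-initialized)

# LoRA replaces W @ x with (W + (alpha/r) * B @ A) @ x, so only about
# (d_out + d_in) * r parameters are trained instead of d_out * d_in.
W_adapted = W + (lora_alpha / r) * B @ A

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapted model starts identical to the
# pretrained one, which is why LoRA fine-tuning is stable from step 0.
assert np.allclose(W_adapted @ x, W @ x)
```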
Summary: This paper introduces activation steering, a technique adapted from large language models, to control protein language models for generating and optimizing protein sequences with targeted properties (e.g., thermostability, solubility, fluorescence). The method modifies internal model activations using steering vectors derived from contrastive representations of proteins with desired and undesired properties. For optimization tasks, the authors propose ASPO (activation steering-based protein optimization) and identify critical mutation sites via projection onto steering vectors. Experiments across autoencoder (i.e., ESM2, ESM3) and autoregressive (i.e., ProLLaMA) protein LMs demonstrate significant improvements in target properties without model retraining, outperforming fine-tuning and traditional optimization baselines. Claims And Evidence: Mostly supported. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable, since there are no theoretical contributions. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, experiment details. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: [PPLM] Plug and Play Language Models: A Simple Approach to Controlled Text Generation. In ICLR 2020. Other Strengths And Weaknesses: **Strengths** 1. The proposed activation steering is a training-free yet powerful method to generate protein sequences with target properties. 2. The authors also propose a novel Activation Steering-based Protein Optimization (ASPO) framework to improve protein optimization performance and identify mutation sites. 3. Experimental results demonstrate that the proposed approach significantly improves steering generation performance on the target property across various protein language models (ESM2, ESM3, ProLLaMA). **Weaknesses** 1. In Section 4.1, the ESM3 model is able to generate a full sequence from all-mask states, while ESM2 cannot.
Therefore, for the ESM3 model, authors should also provide the results of directly generating full sequences in addition to revising based on a reference sequence. 2. Authors should provide more details about experiments, such as the number of iterations T, length distribution of generated sequences. 3. Clarity issue. Could you please further explain the lines 189-194: "For practical implementation, [...], A linear classifier to distinguish the representations from the desired and undersired sets."? Does this mean that you only steer the activation of the layer with the highest validation accuracy, instead of all layers, during inference? 4. Missing discussion of PPLM, an important related work on steering pre-trained LMs for conditional generation. Other Comments Or Suggestions: **Minor:** Typo: line 361, "Sensitivityive". Questions For Authors: 1. ESM2/3 models both demonstrate the scalability that a larger scale model is able to obtain better performance in various tasks in their paper. Therefore, could the performance of steering be further boosted by enlarging the model scale (such as, using ESM2 3B/15B or ESM3 7B/98B)? 2. In Figure 3, for AR-PLM, as the number of samples for extracting steering vectors increases, the relevant property values consistently decline. The line graph in the figure starts at 10 samples, and I am curious about the performance when the number of samples is further reduced (e.g., 1 sample). If the final trend shows that performance is always decreasing as the number of samples increases, does this mean that for AR-PLMs, users should use as few samples as possible for steering to obtain the best result? 3. Considering that the proposed method is training-free, will the proposed steering approach be compatible with other protein steering or optimization methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for providing valuable feedback. We detail our response below point by point.

**W1: the ESM3 model ... generate a full sequence from all-mask states**

We conduct experiments on full sequence generation using ESM3, maintaining the same settings as described in Section 4.1. The results are presented in the table below, demonstrating that Activation Steering significantly outperforms the baseline methods, thereby confirming its effectiveness.

| | Thermostability | Solubility |
|--|--|--|
| Original Model | 52.8 | 0.376 |
| Fine-tuning | 65.5 | 0.412 |
| Activation Steering | 79.6 | 0.466 |

**W2: more details about experiments**

- The number of iterations T and the value of K:
  - Protein optimization: thermostability: T=8, K=4; solubility: T=4, K=2; GFP: T=4, K=2
- Length distribution of generated sequences:
  - Protein generation:
    - thermostability: 60~256, mean: 194.1
    - solubility: 47~256, mean: 168.5
  - Protein optimization:
    - thermostability: medium difficulty: 60~256, mean: 183.9; hard: 102~256, mean: 208.2
    - solubility: medium: 71~256, mean: 180.4; hard: 47~253, mean: 158.9
  - GFP: length of all sequences is 237

**W3: further explain lines 189-194**

Our method first involves computing a relatedness score for each token to identify which should be mutated, as discussed prior to the mentioned lines. In lines 189-194, we propose to determine the layer for computing the relatedness score as the one with the highest validation accuracy. This ensures that the most informative layer is utilized for token selection. After determining the tokens for mutation, we replace these tokens with a mask token. Importantly, activation steering is applied during inference across all layers, except the input layer, to steer the model's prediction on the masked tokens towards the desired property. We will revise lines 189-194 to enhance clarity on these points.

**W4: Missing discussion of PPLM**

Thank you for pointing out the omission of PPLM.
PPLM indeed pioneers the concept of steering by modifying key-value pairs in the model's attention mechanism, guided by gradients from an attribute model. In contrast, our method employs a simpler activation steering approach that directly manipulates activations and does not require training an additional attribute model or updating steering vectors. We will include a discussion of PPLM in the related work to highlight these distinctions.

**Q1: Could the performance of steering be further boosted by enlarging the model scale?**

- We conducted experiments using ESM2-3B for protein sequence generation, maintaining the same settings as in Sec. 4.1. The results are summarized in the table below.
- Compared to ESM2-650M, the proposed Activation Steering method shows similar performance in thermostability and significantly improves solubility.
- In contrast, fine-tuning performance decreases with the larger ESM2, likely due to the need for more data to achieve optimal results.

| | Thermostability | Solubility |
|--|--|--|
| Original Model | 56.1 | 0.298 |
| Fine-tuning (FT) | 64.2 | 0.385 |
| Activation Steering (AS) | 80.5 | 0.631 |

**Q2: the performance when the number of samples is further reduced**

- We appreciate the reviewer's interest in the performance of AR-PLM with fewer samples. To address this, we conducted additional experiments using ProLLaMA with sample sizes ranging from 1 to 10. Each configuration was tested 10 times to mitigate randomness. The results, summarized in the table below, indicate that the optimal number of samples varies depending on the property being optimized. For thermostability, the best performance occurs with 8 samples, while for solubility, it peaks at 3 samples. Notably, the performance does not consistently improve with fewer samples; the lowest number of samples (1 sample) did not yield the best results.
- This suggests that while reducing the number of samples can sometimes enhance performance, likely by focusing the generation on a narrower subcluster of proteins with desired properties, there is a trade-off in terms of robustness. Performance becomes less predictable and can vary significantly depending on the specific samples used to compute steering vectors.
- In conclusion, while fewer samples can sometimes be beneficial, the optimal number of samples depends on the specific application and desired property, balancing performance with robustness.

| Samples | 1 | 2 | 3 | 5 | 8 | 10 |
|--|--|--|--|--|--|--|
| Thermostability | 64.3 | 61.8 | 57.4 | 71.8 | 74.5 | 73.5 |
| Solubility | 0.344 | 0.491 | 0.507 | 0.492 | 0.446 | 0.302 |

**Q3: compatible with other protein steering or optimization methods?**

In this paper, we focus on studying steering for PLMs, and we apply PLM steering exclusively within the proposed ASPO method for protein optimization, positioning ASPO as a competitor to, rather than a complement of, existing methods. Integration with other protein optimization strategies remains an interesting direction for future research.

---

Rebuttal Comment 1.1: Comment: I really appreciate the authors' efforts in addressing my concerns. I accordingly raise my score to 3. Please do incorporate all the discussion above in the final version.
Prediction models that learn to avoid missing values
Accept (spotlight poster)
Summary: The authors introduce missingness-avoiding (MA) machine learning, a framework for altering model training to avoid reliance on missing features. Through experiments on decision trees, lasso, and tree ensembles, they show that MA can reduce reliance on missing-value features with only a minor hit to predictive performance. This may yield improved interpretability of the model. Claims And Evidence: Yes, the experiments clearly show that MA estimators can mostly maintain performance while substantially reducing reliance on missing features. Methods And Evaluation Criteria: Yes, the proposed methods are clearly explained and the evaluation criteria make sense. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, the experimental designs seem sound. One arbitrary choice seems to be the selection of alpha, which is chosen as the candidate model with the lowest ρ among those achieving at least 95 percent of the maximum AUROC. This seems sensible, but I am curious whether there are other reasonable choices, particularly whether any of them can lead to improved test AUROC (regardless of missingness reliance). Supplementary Material: Yes, I read through it. Relation To Broader Scientific Literature: The results generalize those of Stempfle & Johansson 2024, which are specific to generalized linear rule models. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: The paper addresses an interesting problem and offers a clear and well-explained solution. Some of the novelty may be slightly limited in light of Stempfle & Johansson 2024, but the extension to trees seems to be quite useful in practice. Other Comments Or Suggestions: N/A Questions For Authors: Can the authors show, even qualitatively, examples where the decrease in missingness reliance translates to real-world importance, e.g. improved interpretability on one of the datasets? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and appreciate their recognition of the clear explanations and sensible evaluation criteria. We respond to specific comments below. **Re: real-world importance of reducing missingness reliance** Thanks for raising this point—we understand its importance in demonstrating the improved interpretability of our MA method. In the last paragraph of Section 6.2, we refer to MA trees fitted to the Life dataset, shown in Figure 6 in the appendix. This example demonstrates how MA-DT behaves under different values of the missingness regularization parameter $\alpha$. For instance, when $\alpha$ is tuned to balance accuracy and reliance on missing values, MA-DT reduces reliance on missing data by 33% without sacrificing accuracy compared to a standard decision tree. We will consider including these figures in the main text in a potential camera-ready version. Additionally, we will consider adding examples of MA trees fitted to the ADNI dataset. These examples illustrate how MA learning avoids splitting on features that are likely to be missing, instead favoring more complete features that maintain predictive performance. At test time, such trees are more interpretable since they do not rely on features with missing values. **Re: different strategies for choosing $\alpha$** In the submitted version of the work, we primarily considered two strategies for selecting $\alpha$. We chose $\alpha=\alpha^{*}$ as part of the overall model selection process, selecting the candidate model with the lowest missingness reliance among those achieving at least 95% of the maximum AUC. If predictive performance is a priority, this percentage could be increased (at the potential cost of increased missingness reliance). Conversely, we chose $\alpha=\infty$ by selecting the model with the highest AUC among those achieving near-zero missingness reliance (ensuring that the range of $\alpha$ included sufficiently high values). 
A more general approach could involve setting a threshold for missingness reliance, such as 20%, and then maximizing AUC among models meeting this threshold. Another alternative is to maximize $\mathrm{AUC} - \gamma\cdot\hat{\rho}$, where $\gamma$ controls the trade-off between AUC and missingness reliance and needs to be set for the specific problem at hand. We thank the reviewer for raising this point and we have added the following discussion to the revised paper: Alternative strategies for selecting $\alpha^*$ should consider the specific application needs. For instance, rather than focusing solely on the acceptable trade-off in predictive performance for achieving the lowest possible $\alpha$, we might prioritize limiting the number of missing features per individual. Relying on an average measure across the dataset may not provide sufficient insight. Additionally, domain knowledge can help define an acceptable level of missingness, but quantifying this threshold is often challenging and subjective.
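The selection strategies discussed in this thread can be made concrete with a small sketch (the candidate AUROC and reliance values below are invented for illustration):

```python
# Each candidate model: (alpha, test AUROC, missingness reliance rho).
candidates = [
    (0.0, 0.84, 0.40),
    (0.1, 0.83, 0.25),
    (1.0, 0.82, 0.10),
    (10.0, 0.75, 0.01),
]

def select_alpha_star(cands, frac=0.95):
    """Lowest reliance among models within frac of the best AUROC."""
    best_auc = max(auc for _, auc, _ in cands)
    eligible = [c for c in cands if c[1] >= frac * best_auc]
    return min(eligible, key=lambda c: c[2])

def select_tradeoff(cands, gamma=0.5):
    """Directly maximize AUROC - gamma * rho."""
    return max(cands, key=lambda c: c[1] - gamma * c[2])

alpha_star = select_alpha_star(candidates)  # -> (1.0, 0.82, 0.10)
```

Raising `frac` prioritizes predictive performance at the cost of reliance, matching the trade-off described in the rebuttal.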
Summary: This manuscript focuses on improving the reliability of machine learning methods when encountering missing values at inference time. A novel framework termed Missingness-Avoiding (MA) learning is proposed to reduce models' reliance on missing values for decision trees, sparse linear models and ensemble methods. Specifically, a classifier-specific regularization technique is proposed to alleviate the models' dependency on missing values. Experiments on six datasets are carried out to validate the proposed method. Claims And Evidence: First of all, I would like to acknowledge that I am not familiar with the related works. I tried my best to understand this work and do the review. It is highly possible that I miss some important aspects of this submission. - The authors claim that the proposed method is a general framework for training models with missing values. However, the proposed MA is only implemented with three types of classic machine learning methods, i.e., decision trees, linear models and ensembles. Is MA applicable to more general models like neural networks? Methods And Evaluation Criteria: - Most of the datasets used in the experiments are small-scale tabular data, which may be insufficient to well support the proposed method. - The compared baselines may be insufficient. For example, the proposed MA is only compared with zero-imputation and MICE. Other advanced imputation-based methods such as [1, 2] are missing. [1] GAIN: Missing Data Imputation using Generative Adversarial Nets, NIPS'18 [2] TabDDPM: Modelling Tabular Data with Diffusion Models, ICML'23 Theoretical Claims: There are no proofs in this manuscript that need to be checked. Experimental Designs Or Analyses: - The experimental designs make sense to me. I have no further comments on this point. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None.
Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback and appreciate their recognition of our work addressing a practically important yet underexplored challenge in the healthcare domain—enhancing model reliability under test-time missingness. We address the specific comments below. **Re: MA learning in neural networks** Standard feed-forward neural networks typically rely on all input features to a non-zero degree unless the learning objective is regularized to yield sparse weights. Even then, the reliance is generally not contextual but remains the same for every input, regardless of the observed features. In principle, contextual reliance could be achieved by combining the MA penalty with attention modules that attend only to observed features. However, our focus is on tabular data, where neural networks tend to underperform. We instead target model classes like decision trees and linear models, which not only perform well on tabular data but also offer interpretability—an essential requirement in our primary application domain, healthcare. Exploring MA learning with attention modules for tabular representations is an interesting direction for future work. **Re: dataset sizes** We thank the reviewer for their observation and agree that the number of features in our datasets is relatively small (ranging from 16 to 42). However, we include a diverse set of datasets in terms of sample size—from a few hundred (e.g., Pharyngitis: 676, ADNI: 1,337, Breast Cancer 1,756, LIFE 2,864) to over 10,000 samples (e.g., FICO: 10,549, NHANES: 10,000). These dataset sizes are typical in the tabular ML literature and reflect realistic constraints in many application domains. **Re: advanced imputation methods as baselines** Thank you for sharing the two works. We agree that the methods could be compared to more advanced imputation approaches. 
However, our goal for the baselines was to include representative methods from different strategies for handling missing data: impute-then-predict, model-specific handling of missingness, and treating missingness as an informative signal. Each of these serves a slightly different purpose compared to the MA approach. We chose zero imputation and MICE because they are both widely studied in the literature and commonly used in practice, making them strong reference points for comparison. More advanced imputation methods are likely to yield similar reliance on missing values when combined with similar regressions or classifiers. Thus, the choice of imputation method is secondary in our study. To enable a fair comparison with our method, we also calculate the missingness reliance for the baselines. Since the baseline methods were not designed with the goal of learning to avoid missing values, but rather to provide accurate imputations, they may be incentivized to rely on well-imputed values—potentially leading to a higher missingness reliance metric.
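One plausible operationalization of the missingness-reliance metric for tree models, under the reading that a prediction "relies" on missingness when its decision path evaluates a feature that is missing for that sample (the toy tree and data below are invented; this is a sketch, not the paper's exact definition):

```python
import math

# A toy tree as nested dicts: internal nodes split on a feature index "f"
# at threshold "t"; leaves are class labels. Missing values would normally
# be routed by a default rule, but here we only detect that they were hit.
tree = {"f": 0, "t": 0.5,
        "left": {"f": 1, "t": 0.2, "left": "A", "right": "B"},
        "right": "C"}

def path_uses_missing(node, x):
    # Walk the decision path; report True if any split feature is missing.
    while isinstance(node, dict):
        v = x[node["f"]]
        if v is None or (isinstance(v, float) and math.isnan(v)):
            return True
        node = node["left"] if v <= node["t"] else node["right"]
    return False

def reliance(tree, X):
    # Fraction of samples whose path touches a missing feature.
    return sum(path_uses_missing(tree, x) for x in X) / len(X)

X_test = [[0.1, None], [0.9, 0.3], [None, 0.1], [0.2, 0.5]]
rho = reliance(tree, X_test)  # -> 0.5
```

Note the contextual nature of the metric: a sample like `[0.9, None]` has a missing feature, but its path never evaluates feature 1, so it contributes nothing to the reliance score.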
Summary: This work introduces a generic framework for encouraging models to avoid accessing missing values through regularization. Specific implementations of this framework for Lasso, greedy decision trees, and tree ensembles are introduced. A thorough discussion of the settings in which missingness can and cannot safely be avoided is included, followed by experiments showing that the level of reliance on missing features can be substantially reduced without sacrificing much predictive performance on several real world datasets. Claims And Evidence: Yes. The clear discussion of settings in which the MA framework is expected to fail (Section 5.2) is particularly appreciated, and helps clarify when MA can safely be used. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes; I read all three proofs in Appendix B. The proofs are correct, although I have a minor suggestion for Corollary 1. (w1) The given proof is valid, but does not specify the hypothesis class from which h* is drawn. From the notation used in the paper, it is implied that h* must be a member of the hypothesis class H we are currently considering. This is benign for universal approximators like decision trees, but may be misleading when H is the class of linear models (there may be no linear model h st h(x) = E[Y | x] ). For clarity, I recommend specifying that h* may not be a member of H where appropriate. Experimental Designs Or Analyses: I did not check the code, but the description of the experimental setup is clear and thorough, and the described experimental setup makes sense. Supplementary Material: Yes, I reviewed appendices B, C, and F, and skimmed the other sections. Relation To Broader Scientific Literature: This work takes a different angle on the well-studied problem of handling missing data. It appropriately considers impute-then-predict, model-specific missingness handling, and missingness as a value baselines, which each have slightly different goals than the MA approach. 
Essential References Not Discussed: (w2) One notable omission from the set of baselines considered is [1]. Similar to this work, [1] introduces an approach that aims to avoid including missing values in the splits used in a decision tree. This is particularly relevant because [1] aims to split on x_j only in the subspace where x_j is not missing; this is an approach that would perform particularly well in the ODDC rule setting considered in the present work. [1] Beaulac, Cédric, and Jeffrey S. Rosenthal. "BEST: A decision tree algorithm that handles missing values." Computational Statistics 35.3 (2020): 1001-1026. Other Strengths And Weaknesses: Strengths - The paper is quite well written, with clean and clear notation used throughout. - The proposed methods are elegant, and empirically shown to be effective. - This work provides a clear discussion of settings in which we expect MA regularization to be viable in general (5.1) and in which we don't expect MA regularization to be viable (5.2). This is very helpful in understanding the method, and will support future research. Weaknesses - (w1) See "Theoretical Claims" - (w2) See "Essential References Not Discussed" - (w3) An (arguably) less restrictive setting than MNAR in which this strategy may be expected to fail is the case of informative missingness. This setting is discussed in Appendix C, but is not connected with the main paper in any way. I recommend adding some discussion of this point to the end of 5.2, since, in this case, we would expect MA to find inferior accuracy to missingness indicator approaches. - (w4) See "Questions For Authors" <- This is my primary concern, and I am inclined to increase my score if it can be addressed in a compelling way. - (w5) See "Questions For Authors" Other Comments Or Suggestions: To help respond to this review, I've labelled each point on which a response would be appreciated with (w#). 
Some additional suggestions to which I do not expect a response: - It is somewhat confusing to introduce sigma_ij in section 4.1 — I would recommend holding off on adding this term until 4.3, and simply defining an updated splitting rule there. - 4.2 describes an L1-based regularization, but the target problem seems to suggest an L0 regularization (i.e., we want to guide coefficients to exactly 0). It may be interesting to consider methods that optimize L0-regularized classification directly (e.g., [2, 3]) rather than Lasso. I do not expect this experiment to be done for the rebuttal, and it does not affect my rating. - Throughout the paper there is discussion using the language of regression (i.e., “impute-then-regress”). However, the experiments focus on classification. Unifying the language would improve readability. - Typos - Line 162, right column — “we propose medications” should be “we propose modifications” - Line 216 left — it would be more accurate given equation 5 to say that Lasso uses a parameter lambda > 0, rather than alpha > 0 - Line 194 right — “individual trees ar fit” should be “individual trees are fit” [2] Liu, Jiachang, et al. "Fast sparse classification for generalized linear and additive models." Proceedings of Machine Learning Research 151 (2022): 9304. [3] Dedieu, Antoine, Hussein Hazimeh, and Rahul Mazumder. "Learning sparse classifiers: Continuous and mixed integer optimization perspectives." Journal of Machine Learning Research 22.135 (2021): 1-47.
- (w5) Corollary 1 speaks to the existence of a Bayes-optimal model with 0 reliance on missing features, but it says nothing about convergence/relevance of the given loss functions to finding this h*. Is it possible to say anything stronger/connect the optimization problems described in Section 4 to this Corollary? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and valuable feedback, including their appreciation of our methods, empirical effectiveness, and the helpful discussion. We have addressed all suggestions to improve clarity and respond to specific comments below. **Re: motivation of avoiding missing values (w4)** MA learning is designed for prediction tasks with missing data where interpretability matters—clinical risk scores are a key example. Standard approaches (imputation, missingness indicators, default rules) often yield models that rely on unavailable inputs, as shown by their high missingness reliance in our experiments, which can reduce interpretability. For example, it may be difficult to justify that a physician or model should guess the value of a missing test (if using imputation) or that a patient’s risk of readmission should go down because the test result is missing (if using indicators or default rules in trees)—especially if the test is irrelevant based on the values of other features, such as a diagnosis. Though such tests may be predictive in other contexts, many learning tasks are underspecified, and models with similar accuracy may differ in missingness reliance. MA learning favors models that use information that is both predictive *and* available. Finally, we argue that MA learning and reasoning about missingness patterns go hand in hand. Increasing the penalty $\alpha$ encourages trees to split first on commonly observed features (e.g., demographics), and then on contextually available ones (e.g., tests for certain conditions). As discussed in Section 5 on ODDC rules, these observation patterns may be correlated with predictiveness, e.g., when the outcome reflects a diagnosis based on observed information. **Re: relevant literature (w2)** We thank the reviewer for highlighting the work by Beaulac & Rosenthal and have added it to the related work section. 
As noted, BEST suits ODDC settings with clear knowledge of the data-generating process, but defining gating variables is less straightforward with limited knowledge. In this sense, we view the MA framework as more general. A key difference between our methods is that MA trees do not explicitly split on the missingness mask. On the one hand, BEST can explicitly build trees using mask splits—often at the top—approaching the pattern submodel (PSM) of Mercaldo & Blume. On the other hand, MA trees cannot split based on non-MAR patterns, which may limit their expressiveness in such cases. However, avoiding explicit mask splits improves interpretability by preventing the tree structure from being dominated by missingness logic. Comparing MA-DT to BEST on the dataset from Beaulac and Rosenthal would be interesting, but the dataset does not appear to be publicly available. **Re: informative missingness (w3)** We introduce informative missingness in Section 3 but agree that Appendix C felt somewhat isolated. The reviewer is correct: when missingness is informative, adding missingness indicators is generally preferable to using an MA approach. For example, Beaulac & Rosenthal show that when missingness depends on the label, indicators can be predictive. In such cases, MA trees cannot leverage this information and may underperform, depending on the imputation. We appreciate the feedback and address this in the revised manuscript (Section 5.2). **Re: convergence (w5)** The reviewer raises an important question: if an optimal model $h^*$ exists, can our algorithm recover it with enough data? This depends on the model class, optimization, and data distribution. For example, if $h^*$ is a sparse logistic model, our L1 objective can likely recover it—similar to LASSO. The optimization in Section 4 supports Corollary 1: under ODDC rules, we can learn models with low missingness reliance and strong performance. We will add this discussion to the final paper. 
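The general shape of an MA splitting rule, as described in this thread (splits on features that are often missing at the node are penalized), can be sketched as follows; the Gini-based gain, the handling of missing rows, and the exact form of the penalty are assumptions for illustration, not the paper's criterion:

```python
def gini(labels):
    # Binary Gini impurity of a list of 0/1 labels.
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def ma_split_score(X, y, j, thresh, alpha):
    """Impurity reduction of splitting on feature j at thresh, penalized by
    the fraction of samples at this node for which feature j is missing.
    Parent impurity is computed on all node samples for simplicity."""
    miss = [x[j] is None for x in X]
    obs = [(x, yi) for x, yi, m in zip(X, y, miss) if not m]
    left = [yi for x, yi in obs if x[j] <= thresh]
    right = [yi for x, yi in obs if x[j] > thresh]
    n = len(left) + len(right)
    if n == 0:
        return -alpha
    gain = gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / n
    return gain - alpha * (sum(miss) / len(X))

X = [[0.1], [0.2], [None], [0.9], [0.8], [None]]
y = [0, 0, 0, 1, 1, 1]
# With alpha = 0 the informative split looks attractive; increasing alpha
# makes the often-missing feature progressively less attractive.
assert ma_split_score(X, y, 0, 0.5, alpha=0.0) > ma_split_score(X, y, 0, 0.5, alpha=1.0)
```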
**Re: clarity (w1), notation, and regularization** - We agree it's important to clarify the relationship between $h^*$ and $\mathcal{H}$ and now state explicitly that $h^*(x) = \mathbb{E}[Y \mid x]$ may lie outside $\mathcal{H}$. This is benign for universal approximators (e.g., decision trees) but more significant for restricted classes like linear models. - We agree that introducing $\sigma_{i,j}$ in Section 4.1 may be premature; we now assume equal feature contribution initially and introduce $\sigma_{i,j} \in \{0,1\}$ later in Section 4.3, where it supports the generalization to tree ensembles. - We also acknowledge that starting with L0 regularization to penalize missing features would have been natural; however, we focused on L1 (LASSO) due to its compatibility with widely used implementations (e.g., scikit-learn, statsmodels). We will clarify this choice, along with a discussion of L0-based alternatives. Lastly, we thank the reviewer for their valuable feedback and excellent suggestions, and for noting inconsistencies and typos, which have been corrected in the revision. --- Rebuttal Comment 1.1: Comment: I thank the authors for their comprehensive response, and I am generally quite satisfied with the points raised by the authors. In particular, I found the clinical risk score example compelling -- a test might be given only to patients that a clinician considers high risk, so it does not help practitioners to say "this patient is low risk because they haven't had this test". I hope the authors will include this, and all other points raised in the rebuttal, in the final manuscript. I enjoyed reading this paper, and hope it will be accepted. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and kind words. We're glad the clinical risk score example resonated with you and will ensure all points from the rebuttal are reflected in the final manuscript. We truly appreciate your support and updated score!
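As a concrete illustration of the L1-based penalty discussed in the rebuttal above, here is a minimal sketch of a missingness-weighted LASSO using the standard rescaling trick: penalizing $w_j|\beta_j|$ is equivalent to fitting a plain Lasso on $X_j / w_j$ and rescaling the coefficients back. The toy data, zero imputation, and the weight form $w_j = 1 + \alpha \cdot p_j$ (with $p_j$ the per-feature missingness rate) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data with feature-wise missingness (hypothetical setup, not the paper's datasets).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)
X[rng.random((n, d)) < np.array([0.0, 0.6, 0.0, 0.3, 0.0])] = np.nan

p_miss = np.isnan(X).mean(axis=0)   # per-feature missingness rate
alpha_ma = 2.0                      # reliance penalty strength (hypothetical)
w = 1.0 + alpha_ma * p_miss         # heavier L1 weight on often-missing features

X_imp = np.where(np.isnan(X), 0.0, X)  # zero imputation, as in the paper's baseline
# Weighted L1 via rescaling: fit plain Lasso on X_j / w_j, then divide coef_ by w.
model = Lasso(alpha=0.05).fit(X_imp / w, y)
beta = model.coef_ / w
print(beta)
```

Features that are frequently missing receive a heavier L1 weight, so the fit is pushed toward coefficients that rely less on them, mirroring the accuracy-reliance trade-off controlled by $\alpha$.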
Summary: The paper proposes a novel framework – “missingness-avoiding” (MA) machine learning – designed to mitigate the impact of missing data during prediction. The core idea is to train models that inherently minimize reliance on features with missing values at test time. This is achieved by incorporating classifier-specific regularization terms into the learning objectives for decision trees, sparse linear models, and ensemble methods. The authors demonstrate this approach on several datasets, showing improved performance compared to standard imputation techniques while maintaining interpretability. Claims And Evidence: The central claim – that models can be trained to actively avoid using missing values – is clearly articulated in the abstract and introduction. However, the level of detail regarding when this avoidance is feasible and effective isn’t consistently clear throughout the paper. The evidence supporting this claim is primarily demonstrated through experimental results across multiple datasets. While the quantitative performance improvements are notable (especially with MA-DT), the paper could benefit from a more rigorous discussion of the conditions under which these gains are most pronounced. Methods And Evaluation Criteria: Methods: The authors’ method is well-defined, outlining the specific regularization terms applied to different model types (decision trees, sparse linear models, ensemble methods). The incorporation of classifier-specific regularization is a clever and theoretically sound approach. The use of MA-DT, MA-LASSO, MA-RF, and MA-GBT provides a good range of model implementations for comparison. Evaluation Criteria: The evaluation primarily focuses on AUROC (Area Under the Receiver Operating Characteristic curve) as a measure of predictive performance and missingness reliance. The use of cross-validation to assess model stability is appropriate. 
However, the paper could benefit from exploring other relevant metrics beyond AUROC, such as precision, recall, or F1-score, particularly when considering the implications for clinical applications where false positives and negatives have different costs. Theoretical Claims: The theoretical underpinning – that models can learn to exploit missingness patterns – is rooted in ideas related to Bayesian inference and the concept of minimizing expected risk. The paper does a reasonable job of connecting this with the idea of avoiding reliance on features with high missingness rates, but it could be strengthened by explicitly stating how the regularization terms are mathematically derived from these principles. For example, it might be useful to compare this approach to simpler stacked models and use this to situate where MA-learning could be useful. Experimental Designs Or Analyses: Experimental Design: The experimental design is commendable, utilizing multiple datasets (NHANES, LIFE, ADNI, Breast Cancer, Pharyngitis) with varying characteristics to demonstrate the generalizability of the MA approach. The inclusion of zero imputation as a baseline comparison is also sensible. Analysis: The analysis effectively presents the quantitative results, clearly showing the performance gains achieved by the MA models compared to standard methods. However, the paper could benefit from a more in-depth discussion of why these improvements occur. For example, are the MA models better at capturing non-linear relationships that are obscured when missingness is treated as simple imputation? A qualitative analysis of the decision trees generated by the MA-DT model would also be valuable. 
The paper could also benefit from a more thorough comparison to stacked models trained on complete data. Supplementary Material: N/A Relation To Broader Scientific Literature: Overall, the paper has a sufficient literature review. Essential References Not Discussed: Most essential references are in the paper. Other Strengths And Weaknesses: The paper presents an interesting and potentially valuable concept with promising experimental results. However, the lack of clear guidance on when this approach is most effective, coupled with a somewhat superficial theoretical justification, warrants a "weak accept" recommendation. Further work is needed to solidify the theoretical foundations, provide more robust comparisons against alternative methods (particularly stacked models), and offer practical guidelines for practitioners on how to best apply this framework. Other Comments Or Suggestions: Please consider expanding on the theoretical framework, especially around when MA-learning is suitable. Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback and appreciate their recognition of our well-motivated and novel MA framework. We have addressed all suggestions to improve clarity and respond to specific comments below. **Re: relevance of MA learning** We would like to direct the reviewer to Sec. 5, where we outline when MA learning is expected to perform well (Sec. 5.1) and when it may face challenges (Sec. 5.2)—a discussion also highlighted as a strength by Reviewer go58. MA learning is expected to work well when the missingness mask follows a clear structure, such as when ODDC rules govern the data-generating process. When there is no structure in the missingness patterns related to observed features, such as in an MCAR or a fully MNAR setting, we expect it to be harder for MA learning to achieve low missingness reliance with maintained accuracy. As the reviewer noted, our experiments show that MA learning outperforms baselines across datasets while greatly reducing reliance on missing values. Finally, MA learning may benefit from being combined with missingness indicators when missingness itself is highly informative—the reliance penalty will just make sure to keep such features to a minimum. **Re: guidelines for practitioners** To apply the MA framework in practice, we recommend starting by selecting a model class that aligns with the interpretability requirements of the application. Linear models and decision trees offer greater transparency, while ensembles may yield better performance with reduced interpretability. Once the model type is chosen, the trade-off between predictive performance and reliance on missing features can be adjusted through the regularization parameter $\alpha$. A higher $\alpha$ encourages the model to avoid relying on features that are often missing at test time, while a lower $\alpha$ prioritizes performance. 
The appropriate choice of $\alpha$ depends on how much missingness reliance is acceptable in the given context. We describe different strategies in the response to Reviewer eoMs. To evaluate this, practitioners can compute the reliance score ($\hat{\rho}$), which quantifies how frequently a model depends on missing features. Importantly, MA learning requires no assumptions about the underlying missingness mechanism (although the achievable accuracy-reliance tradeoff will be affected by its structure), making it a practical and robust approach across a wide range of real-world settings. **Re: qualitative analysis** We agree that qualitative analysis of MA trees is valuable. In Section 6.2, we refer to Figure 6 (appendix), which shows how MA-DT behavior varies with different values of the missingness regularization parameter $\alpha$ on the Life dataset. For instance, when $\alpha$ is tuned to balance accuracy and reliance on missing values, MA-DT reduces reliance on missing data by 33% without sacrificing accuracy compared to a standard decision tree. For the camera-ready version, we will consider including these figures in the main text and adding MA tree examples from the ADNI dataset. These illustrate how MA learning favors complete features over those with missingness, enhancing interpretability without compromising accuracy at test time. **Re: theoretical foundations and stacked models** Thank you for the insightful comment. We agree that strengthening the theoretical foundations is important. As for stacked models, we're unsure how they would address the challenges in our setting. Stacking typically involves training a meta-model on the outputs of several base models (e.g., decision trees, logistic regression). The reviewer suggests that these may be trained using complete data, but this is not generally available without imputation, which is what we try to avoid in the first place. 
Alternatively, we could train base models on different data subsets based on their missingness pattern, but this poses the same problem—it seems to us that the stacking question is orthogonal to reducing missingness reliance, but could be an alternative to boosting. We’re very open to considering this direction and would greatly appreciate any additional insights the reviewer can share on their suggestion. **Re: clarification on MA learning** Many model classes have large Rashomon sets—sets of near-optimal models that have similar predictive performance but differ on other properties. By adding the missingness reliance penalty, we prioritize models within this set that rely less on missing values. This can exploit nonlinear patterns (as with MA trees, where reliance is contextual) or linear ones (as with variable selection in MA-LASSO). We will include this in our Section 5 discussion. **Re: performance metrics** We agree that multiple metrics are important to capture different aspects of performance. Due to space constraints, we focused on AUROC and missingness reliance in the main text and will consider including additional metrics (e.g., F1 score) in the appendix.
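The reliance score $\hat{\rho}$ mentioned in the rebuttal above can be estimated for a fitted decision tree by checking, per sample, whether the decision path tests any feature that was missing for that sample. Below is a hypothetical sketch using scikit-learn with zero imputation; the toy data and thresholds are assumptions, not the authors' code:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data with a 20% missingness mask (True where a value is missing).
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
mask = rng.random(X.shape) < 0.2
X_imp = np.where(mask, 0.0, X)   # zero imputation before fitting

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_imp, y)

def reliance(tree, X_imp, mask):
    """Fraction of samples whose decision path tests a missing feature."""
    paths = tree.decision_path(X_imp)   # sparse (n_samples, n_nodes) indicator
    feat = tree.tree_.feature           # split feature per node (-2 at leaves)
    relies = np.zeros(len(X_imp), dtype=bool)
    for i in range(len(X_imp)):
        nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
        used = feat[nodes]
        used = used[used >= 0]          # drop leaf markers
        relies[i] = mask[i, used].any()
    return relies.mean()

print(f"estimated missingness reliance: {reliance(tree, X_imp, mask):.2f}")
```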
Information Bottleneck-guided MLPs for Robust Spatial-temporal Forecasting
Accept (poster)
Summary: This paper proposes the Robust Spatio-Temporal Information Bottleneck (RSTIB) principle to enhance the robustness of spatio-temporal prediction models to noise interference. The authors introduce RSTIB-MLP, a multi-layer perceptron (MLP) based implementation that achieves state-of-the-art performance in the face of noise interference. Extensive experiments on multiple datasets demonstrate the robustness and efficiency of the proposed model. ## update after rebuttal I have read the overall feedback and the author has addressed my concerns, but some sections need to be rewritten and the final version needs to be carefully revised. Claims And Evidence: The claims are generally supported by clear evidence. The authors provide a theoretical framework for RSTIB and demonstrate its effectiveness through experiments on noisy datasets. However, the assumption of additive white Gaussian noise (AWGN) is restrictive and may not generalize to other noise types (I understand that this is for the sake of good relaxation of the theoretical derivation). In particular, real-world data may be noisy in nature with spatio-temporal entanglement, or simply missing. Methods And Evaluation Criteria: The proposed RSTIB-MLP method and evaluation criteria are applicable to the problem. The authors use standard metrics (MAE, RMSE, MAPE) and benchmark datasets for spatio-temporal prediction. The inclusion of the knowledge distillation module is innovative, but would be better if its role in enhancing feature diversity was explained more clearly. Theoretical Claims: Although I have not examined them carefully, I believe the theoretical claims are sound. The authors correctly derive the RSTIB principle by lifting the Markov assumption and provide proofs for the key propositions. Experimental Designs Or Analyses: The experimental design is comprehensive but has some limitations. The authors demonstrate that RSTIB-MLP outperforms state-of-the-art methods under noisy conditions. 
However, the experiments mainly use artificially added noise, which may not fully represent real-world scenarios. Supplementary Material: The supplementary materials are extensive and provide more details on the theoretical proofs, hyperparameter tuning, and experimental results. However, some sections could benefit from clearer explanations, especially on the role of knowledge distillation and the impact of different regularization terms. Relation To Broader Scientific Literature: This paper builds on the existing information bottleneck (IB) method and extends it to handle the dual noise effect in spatiotemporal prediction. This work is related to recent advances in robust representation learning and applies these ideas to real-world problems. However, the authors should do a better job of distinguishing their contribution from existing works, especially those cited in the paper. Essential References Not Discussed: This paper cites related works, but could benefit from discussing recent advances in robust machine learning and spatiotemporal forecasting. For example, recent works on adversarial training of time series data or methods for handling missing data in spatiotemporal models could provide additional context [1]. In addition, feature variance is used to quantitatively analyze the diversity among learned features, which can also be interpreted as spatiotemporal heterogeneity in the spatiotemporal forecasting scenario, as shown by a recent paper [2]. [1] Cheng, Hao, et al. "RobustTSF: Towards theory and design of robust time series forecasting with anomalies." ICLR 2024. [2] Chen, Wei, and Yuxuan Liang. "Expand and Compress: Exploring Tuning Principles for Continual Spatio-Temporal Graph Forecasting." ICLR 2025. Other Strengths And Weaknesses: ### **Strengths:** **S1.** The paper introduces a novel theoretical framework (RSTIB) that extends the information bottleneck principle to handle dual noise effects for spatiotemporal forecasting applications. 
**S2.** The experiments are comprehensive, covering multiple benchmark datasets and comparison baselines. **Weaknesses:** **W1.** Overall, my biggest concern is that its narrow setting affects the broad interest of the community. Specifically, the assumptions of AWGN are restrictive and may not generalize to other noise types. **W2.** The experiments mainly use artificially added noise, which may not fully represent real-world noisy scenes. **W3.** The role of knowledge distillation in enhancing feature diversity has not been fully explored. Other Comments Or Suggestions: **C1.** Basically, I think this is a good paper in the field of spatio-temporal forecasting, but its limitation lies in its diverse settings and motivations, which reduces its audience. I actually suggest that the authors change it to robust spatio-temporal forecasting under extreme noise conditions (use this as motivation to revise the writing story and introduce the theoretical framework), which I believe will win wider attention. **C2.** Therefore, the authors can include more real-world noise experiments (e.g., the spatiotemporal data missing setting of the reference article [3]) to verify the robustness claims. [3] Cini, Andrea, Ivan Marisca, and Cesare Alippi. "Filling the g_ap_s: Multivariate time series imputation by graph neural networks." ICLR 2022. Questions For Authors: **Q1.** How does the RSTIB principle handle noise types other than AWGN? Can the authors provide insights into robustness under different noise models? **Q2.** Can the authors elaborate on the role of knowledge distillation in enhancing feature diversity and its impact on robustness? Ethical Review Concerns: not applicable Code Of Conduct: Affirmed. Overall Recommendation: 3
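For reference, the two corruption settings raised in this review (artificial AWGN injection, as in Q1, and the data-missing setting suggested in C2) can be mimicked on a toy spatio-temporal array as follows; the shapes and ratios are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

# Toy spatio-temporal data: T timesteps x N sensors (illustrative sizes).
rng = np.random.default_rng(0)
T, N = 12, 30
X = rng.normal(size=(T, N))

noise_ratio = 0.3   # fraction of entries corrupted (assumption)
sigma = 0.5         # AWGN standard deviation (assumption)

# Setting 1: additive white Gaussian noise on a random subset of entries.
awgn_mask = rng.random(X.shape) < noise_ratio
X_awgn = X + awgn_mask * rng.normal(scale=sigma, size=X.shape)

# Setting 2: random missing values, marked as NaN.
miss_mask = rng.random(X.shape) < noise_ratio
X_missing = np.where(miss_mask, np.nan, X)

print(awgn_mask.mean(), np.isnan(X_missing).mean())
```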
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive comments. We provide a point-by-point response as follows. > ### **Re: Claims And Evidence & Experimental Designs Or Analyses & W1 & W2 & C2 & Q1** Due to space limits, please refer to **Re: W1** in the Rebuttal for Reviewer **ZfWm**. > ### **Re: Methods And Evaluation Criteria & Supplementary Material & W3 & Q2** - **KD's role in enhancing feature diversity:** Due to space limits, please refer to **Re: W2** in the Rebuttal for Reviewer **ZfWm**. - **KD's impact on robustness:** We observed performance gains by incorporating the KD module. The reason could be that KD dynamically tunes the balance across different time series, enabling MLPs with limited capacity to favor information containing less noise. However, the improvement in predictive performance is less prominent compared to the robustness enhancements brought by the RSTIB, because in all situations we still need to balance the preservation of the target and the compression of all the reparameterized representations. Intrinsically, although KD can tune the relative ratio, the improvement in predictive performance is still constrained by the objective itself. - **The impact of different regularization terms:** We provided further ablation studies and discussions on the respective roles of the regularization terms in Appendix K.2. Please refer to that section for more details. > ### **Re: Relation To Broader Scientific Literature** Appendix I.2 discusses the distinctions between RSTIB/RSTIB-MLP and existing IB methods. Please refer to that section for more details. > ### **Re: Essential References Not Discussed** We highly appreciate the suggested papers! 
Below we provide a detailed discussion on the relation between our submission and the cited works: - We are inspired to include an additional part introducing robust spatial-temporal forecasting: RobustTSF ([1]) considers time series forecasting with anomalies, while our work similarly considers spatial-temporal forecasting with noise perturbation. We share a similar experimental setup: artificially introducing noise and missing data (specifically mentioned in **Essential References Not Discussed**). Including this paper can enhance the credibility of our experimental settings and results. - Linking feature diversity with spatiotemporal heterogeneity from [2] is really intriguing. Both works consider robustness in spatial-temporal forecasting. Our work attempts to enhance feature diversity, while [2] aims to capture heterogeneity. Our work quantifies feature diversity using feature variance (*Var*); [2] quantifies heterogeneity using the Average Node Deviation (AND) metric: Given the feature matrix $X \in \mathbb{R}^{n \times d}$, the AND metric is defined as: $$ D(X) = \frac{1}{n^2}\sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^d (x_{ik}-x_{jk})^2 $$ To link it with *Var*, we further derive $D(X)$ as below: $$ \begin{aligned} D(X) &= \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^d (x_{ik} - x_{jk})^2 \\\\ &= \frac{1}{n^2} \sum_{i,j,k} \left( x_{ik}^2 + x_{jk}^2 - 2x_{ik}x_{jk} \right) \\\\ &= \frac{1}{n^2} \left( 2n \sum_{k,i} x_{ik}^2 - 2 \sum_{k}\left( \sum_{i} x_{ik} \right)\left( \sum_{j} x_{jk} \right) \right) \\\\ &= \frac{2}{n^2} \left( n \sum_{k,i} x_{ik}^2 - n^2 \sum_{k} \bar{x}_k^2 \right) \\\\ &= \frac{2(n-1)}{n} \sum _ {k=1}^d \left( \frac{1}{n-1} \left( \sum _ {i=1}^n x _ {ik}^2 - n\bar{x} _ k^2 \right) \right) \\\\ &= \frac{2(n-1)}{n} \cdot \text{tr}(\mathbf{Cov}), \end{aligned} $$ where $ \bar{x} _ k = \frac{1}{n} \sum _ {i=1}^n x _ {ik} $ and $ \text{tr}(\mathbf{Cov}) = \sum_{k=1}^d \frac{1}{n-1} \left( \sum_{i=1}^n x_{ik}^2 - n\bar{x}_k^2 \right) $. Thus, the AND metric is proportional to the trace of the covariance matrix $\mathbf{Cov}$: $$ D(X) \propto \text{tr}(\mathbf{Cov}) $$ The AND metric emphasizes overall variance, whereas our *Var* metric emphasizes balanced standard deviations. We'll extend the above in our final version. > ### **Re: C1** Many thanks for this constructive advice! We suspect that the mentioned "extreme noise conditions" refer to real-world noise experiments (as mentioned in **C2**), such as data-missing scenarios. We have actually provided such evaluations and discussions in Appendix K.3. Your suggestion is helpful as it guides us towards more focused and clear settings. Nevertheless, by including robustness studies under diverse noise conditions, we have also demonstrated the consistent performance of our method. We hope this addresses all your concerns. Thank you very much! --- Rebuttal Comment 1.1: Comment: I've read through the overall feedback, and it looks good. Good luck. Just to add, seriously—I’d recommend the authors check out my suggestion C1-C2 in the final version. It’ll help the paper reach a broader audience. Also, I believe that adding all above discussions will make the paper better. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Mrf2, Thank you very much for your positive feedback on our rebuttal. We greatly appreciate your valuable suggestions and will carefully consider how to reach a broader audience in future revisions. Besides, we will add the discussions to refine our submission. Wishing you the best of luck as well! Best regards, Authors
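The identity derived above, $D(X) = \frac{2(n-1)}{n} \cdot \text{tr}(\mathbf{Cov})$, is easy to verify numerically; a quick sketch with random data (array sizes arbitrary):

```python
import numpy as np

# Numerical check: the Average Node Deviation D(X) equals
# 2(n-1)/n times the trace of the unbiased covariance of X.
rng = np.random.default_rng(0)
n, d = 50, 8
X = rng.normal(size=(n, d))

# Direct O(n^2 d) evaluation of D(X) from its definition.
diff = X[:, None, :] - X[None, :, :]
D = (diff ** 2).sum() / n**2

trace_cov = np.trace(np.cov(X, rowvar=False))  # np.cov divides by n-1
print(D, 2 * (n - 1) / n * trace_cov)
```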
Summary: This paper introduces a novel MLP training method based on the Information Bottleneck principle, termed RSTIB-MLP, designed to address the balance between model efficiency and robustness. By analyzing the dual noise effect in ST graph data, the authors propose the Robust Spatiotemporal Information Bottleneck (RSTIB) principle. This principle relaxes the Markov assumption of IB and explicitly minimizes the impact of noisy information in its implementation. Experimental results demonstrate that RSTIB-MLP outperforms its counterparts in robustness against noise interference across multiple spatiotemporal benchmarks, while maintaining higher computational efficiency. Claims And Evidence: Yes. The authors use clear algorithms and experiments to support their claims. Methods And Evaluation Criteria: Yes, they use classic ST benchmarks to formulate their experiments. Theoretical Claims: Yes, I have checked the correctness. Experimental Designs Or Analyses: The experimental design is sound. However, there are uncertainties regarding the correspondence between simulated noise scenarios and real-world conditions, and substantial challenges exist in validating the model's noise resistance capabilities. Supplementary Material: Yes. Many hyperlinks in the appendix are broken (for example, the "Fig. ??" on page 16), and the authors should correct these parts. Relation To Broader Scientific Literature: This study addresses an intriguing research question regarding model robustness in noisy scenarios, a topic that has received limited attention in prior research. Essential References Not Discussed: How do these recent spatio-temporal prediction methods perform under this paper's experimental settings? [1] PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for Traffic Flow Prediction. AAAI 2023. [2] UrbanGPT: Spatio-Temporal Large Language Models. KDD 2024. [3] UniST: A Prompt-Empowered Universal Model for Urban Spatio-Temporal Prediction. KDD 2024. 
Other Strengths And Weaknesses: **Strengths** 1. The study employs a diverse and extensive set of experimental datasets. 2. Experimental results demonstrate the proposed method's effectiveness in handling data noise under specific scenarios. **Weakness** 1. The study lacks assessment of computational efficiency. How does the proposed method's computational performance compare with the baselines? 2. While the proposed methodology validates noise resistance by artificially introducing noise into the original data, several critical considerations warrant discussion: (i) Real-world scenarios present significant challenges in identifying noise occurrence patterns and frequencies, making noise characterization inherently difficult. (ii) Both training and testing datasets contain noise, which seemingly complicates the validation of the model's 'noise resistance capability'. How do the authors interpret and address these methodological challenges? 3. The assumption for relaxing the Z−X−Y restriction is vital for authors' RSTIB-MLP, but the discussion part of it is only in appendix. Could you please provide more details of it in the main text? Other Comments Or Suggestions: Please refer to Supplementary Material Questions For Authors: 1. The knowledge distillation module dynamically adjusts the regularization strength through the noise influence index (Definition 4.9). This index is calculated based on the prediction error of the teacher model. However, will the choice of the teacher model (such as STGCN) lead to significant deviations in the results? 2. Have unsupervised or self-supervised methods been attempted to replace the teacher model? The process by which the authors derived their algorithm seems to have nothing to do with Knowledge Distillation (KD). Can the final loss function be directly applied to unsupervised learning? 3. The assumption for relaxing the Z−X−Y restriction is vital for authors' RSTIB-MLP, but the discussion part of it is only in appendix. 
Could you please provide more details of it in the main text? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > ### **Re: Supplementary Material** We are sincerely sorry for the typos, and will correct them in our final version. P16: "illustrated in Fig.7(a)"; "represented in Fig.7(b)". > ### **Re: Essential References Not Discussed** Appendix K.7 has examined such scenarios, where transformer-based models with a very large number of parameters (**PDFormer** and **STAEformer**) are compared. Please note that achieving new **SOTA performance** is not our claim. Instead, we are dedicated to achieving a good trade-off between robustness and efficiency, which is demonstrated with SOTA STGNNs ((**iii**) above **Related Work**). We will include and discuss these three papers and further clarify this point in our final version. > ### **Re: W1** It is evaluated theoretically (Appendix H) and empirically (Section 5.3). Below we additionally evaluate the overall training-to-convergence time (TTCT) on PEMS04 and the training time per epoch (TTPE) on the large Weather2K-R dataset: |Method|TTPE(s)|TTCT(s)| |-|-|-| |RSTIB-MLP|67.2|2842.3| |DSTAGNN|1050.8|9283.7| |Graph-WaveNet|556.2|7308.6| |STG-NCDE|1436.1|9238.7| |STExplainer|1747.6|24514.0| It is clear that our full training is much faster; e.g., RSTIB-MLP reduces TTCT by up to 88.42% compared to STExplainer. Besides, the superior efficiency of RSTIB-MLP is consistent on the large dataset. These results will be included to ensure comprehensiveness. > ### **Re: W2** (i) Please refer to **Re: W1** in the Rebuttal for Reviewer **ZfWm**. (ii) We follow the setting of RGIB as follows: - The training set is corrupted with both input noise and target noise. - The validation and test sets are corrupted with the same input noise, while their targets remain clean (unchanged) to ensure accurate evaluation. All methods follow these settings. > ### **Re: W3 & Q3** Yes! 
we'll leave room in the main text for the following: - **Assumption 4.1:** The sliding window mechanism is a technique for processing ST data, extracting fixed-length subsequences from raw data by progressively sliding a window over temporal or spatial-temporal dimensions. A critical feature is that the same data window can flexibly serve as either input or target, creating a "*dual noise effect*"—when a noisy sequence serves as both the input \(X\) in one window and the target \(Y\) in another, noise propagates bidirectionally. If \(Z - X - Y\) holds, then \(I(Z; Y|X) = 0\): the noisy information behind \(I(Z; Y|X)\) is directly ignored. Ignoring noise in \(Y\) is therefore problematic. The dual noise effect allows noise to influence both input and target across overlapping windows, necessitating relaxation of \(Z - X - Y\). - **Assumption 4.2:** ST graphs exhibit invariant patterns (generalizable across time) and variant patterns (node-specific, time-varying dynamics). Invariant patterns might represent structural dependencies (e.g., road connectivity in traffic prediction), whereas variant patterns could reflect transient events (e.g., traffic congestion due to accidents). Data dynamics thus also depend on the current window's characteristics, meaning the prediction for \(Y\) is not entirely determined by \(X\), but also by \(Y\)'s unique dynamics. Therefore, the assumption \(Z - X - Y\) requires relaxation. > ### **Re: Q1** Please refer to **Re: Experimental Designs or Analyses** in the rebuttal for Reviewer **WNiV**. > ### **Re: Q2** To achieve this, a method calculating the noise impact indicators without supervised signals is required. Below we provide one possible alternative unsupervised solution to our KD approach: Firstly, perturb the noisy \\(X\\) with noise to generate \\(X_{\\text{perturb}}\\). 
Then: $$ \hat{\alpha}_i = \frac{\exp\left(D\left(X_{\text{perturb}, i}, X_i\right)\right)}{\sum_{j=1}^{N} \exp\left(D\left(X_{\text{perturb}, j}, X_j\right)\right)}, \quad \forall i \in \{1, \ldots, N\} $$ The subsequent training process is similar to RSTIB-MLP with (w/) KD. We investigate perturbation ratios of 0.1, 0.3, 0.5, and noise ratios of 0.1, 0.3, 0.5 to compare against RSTIB-MLP w/ KD. Best results for the unsupervised method (RSTIB-MLP w/ Aug) across all perturbation ratios are reported below: |Noise Ratio|RSTIB-MLP w/|MAE|RMSE|MAPE(%)| |-|-|-|-|-| |0.1|KD|23.64|36.44|15.22| ||Aug|24.02|36.76|15.55| |0.3|KD|27.15|42.85|17.19| ||Aug|27.62|43.73|17.69| |0.5|KD|27.16|43.43|17.76| ||Aug|27.86|44.54|18.32| Results demonstrate that RSTIB-MLP w/ Aug cannot outperform RSTIB-MLP w/ KD. Potential reasons (risks): - The noise ratio is difficult to characterize precisely. Fixed perturbation ratios and augmented \\(X_{\\text{perturb}}\\) samples may not align with real scenarios. Even adjustable ratios still pose this risk. - Using only information from \(X\) neglects dynamics unique to \(Y\) (Assumption 4.2). Calculating the noise impact indicators solely from \(X\) thus yields relatively inaccurate quantification. This underscores the importance of our KD approach. We hope this addresses all your concerns. Thank you very much! --- Rebuttal Comment 1.1: Comment: Thank you for your response. The explanation provided by the authors is reasonable and convincing. Although the proposed method may not achieve state-of-the-art performance, the model demonstrates strong resistance to adversarial noise, which is important for practical traffic prediction scenarios. Therefore, I am willing to increase my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer qjmA, Thank you very much for your positive feedback and acknowledgment. We will add the discussions in our final version. We greatly appreciate your recognition of our submission and your efforts in reviewing our work. 
Please feel free to reach us if you have any further questions or suggestions. Best regards, Authors
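As a rough sketch of the softmax-normalized noise impact indicator proposed in Re: Q2 above (the shapes, the perturbation scheme, and the Euclidean choice of the distance \(D\) are illustrative assumptions, not the exact setup from the paper):

```python
import numpy as np

# Hypothetical setup: N time series, each a noisy window of length T.
rng = np.random.default_rng(0)
N, T = 4, 12
X = rng.normal(size=(N, T))                   # noisy input windows

# Step 1: perturb X with additional noise at a chosen perturbation ratio.
perturb_ratio = 0.3                           # one of the ratios tried above
X_perturb = X + perturb_ratio * rng.normal(size=X.shape)

# Step 2: softmax-normalize the per-series distance D(X_perturb_i, X_i),
# yielding the unsupervised noise impact indicators alpha_hat.
D = np.linalg.norm(X_perturb - X, axis=1)
alpha_hat = np.exp(D) / np.exp(D).sum()
```

The indicators sum to one and reflect only relative relationships among series, which is consistent with the stated risk: they are computed solely from \(X\) and ignore \(Y\)'s unique dynamics.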
Summary: The authors disclose the dual noise effect behind spatial-temporal data noise and propose a theoretically grounded principle, termed the Robust Spatial-Temporal Information Bottleneck (RSTIB) principle, which holds wide potential for enhancing the robustness of different types of models. Comprehensive experimental results show that an excellent trade-off between robustness and efficiency can be achieved by RSTIB-MLP compared to state-of-the-art STGNNs and MLP models. ## update after rebuttal The authors' detailed response has largely addressed my initial concerns, so I will maintain my positive assessment. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I have reviewed all the supplementary material provided by the authors. Relation To Broader Scientific Literature: The authors summarize previous work in spatial-temporal prediction and point out the existing problems, which are also the challenges that this paper aims to solve. Essential References Not Discussed: The references are comprehensive. Other Strengths And Weaknesses: Strengths: 1. The RSTIB principle is based on information theory and provides a solid theoretical foundation for model design. 2. In the experimental evaluation, the baselines and datasets used are relatively comprehensive, especially some large-scale datasets. 3. The proposed RSTIB-MLP achieves SOTA predictive performance while requiring fewer computing resources. Weaknesses: 1. The noise in the data is assumed to be additive white Gaussian noise (AWGN). Does the derivation hold if the noise type is not AWGN? 2. How does knowledge distillation enhance feature diversity? The authors should further clarify its principle. Other Comments Or Suggestions: See Weaknesses. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive comments. We provide a point-by-point response as follows. > ### **Re: W1** Please note that the derivation of the RSTIB principle and the implementation of RSTIB-MLP do not make assumptions regarding the noise type. AWGN is employed in our experiments primarily because it is commonly adopted as a noise model in information theory and in experimental measurements within time series forecasting ([Ref1]). To better address this concern, we have provided a further empirical study in Appendix K.3. This section details how RSTIB-MLP handles missing-data scenarios compared with selected baselines, as well as the contribution of each module under such noisy conditions. > ### **Re: W2** In our settings, the Lagrange multipliers are set to constants. This means that the balance between preserving the target and applying regularization is fixed across different time series, which may not be optimal. Moreover, the regularization further limits the capacity of the MLP, preventing the model from fully learning complex features in the data. Therefore, we aim to better balance the preservation of the target and the regularization terms across different time series. Inspired by the observation that the feature variance largely decreases as the noise ratio increases, we leverage the KD module, based on the noise impact indicator information, to achieve what is described in the **Training Regime**: when the noise impact is low, we relax the KD-based regularization; when there is a significant noise impact, we intensify the KD-based regularization. As a result, the feature variance can be increased by KD in cases where the noise impact indicator shows high noise impact. Please note that the noise impact indicators quantify the noise impact on each time series, and this noise impact information is used to dynamically tune/adjust the optimization of our model. Hope that we have addressed all your concerns.
Thank you very much! [Ref1] Teck Por Lim and Sadasivan Puthusserypady. Chaotic time series prediction and additive white Gaussian noise. *Physics Letters A*, 365(4):309–314, 2007.
Summary: This paper theoretically motivates and implements a novel regularization technique for training MLP-based models in spatiotemporal forecasting. The MLP models are distilled from state-of-the-art architectures that leverage graph neural networks. KL divergence terms between the data (input, output, encoded) and assumed Gaussian noise levels are used in the proposed loss function. Extensive experiments demonstrate the effectiveness of the proposed approach, highlighting its potential to enhance forecasting performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Comparison with a wide range of spatio-temporal algorithms demonstrates the advantages of the proposed method across different settings. Theoretical Claims: No Experimental Designs Or Analyses: Experimental results show the robustness of the proposed method across several noise levels and also in the case of clean signals (clean signals normally have a small amount of noise in the measurements, and the proposed method was able to improve performance in these cases too). However, I'm not clear on the details of the distillation process used, and this is a critical point in this paper. What are the teacher model(s) used in the experiments in Table 2? How would the results differ if other teacher models were used? What is the performance of the teacher model before distillation? Was this a distillation using only the output of the network, or were intermediate features used? Supplementary Material: No Relation To Broader Scientific Literature: The proposed loss function can have implications that go beyond weather forecasting and extend to applications using sensor-based measurements in general. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is missing important implementation details: 1. What loss function is used in the MLPs? 2. Were the teacher models pretrained, or did you train them from scratch?
In general, more details on the distillation procedure should be given. Other Comments Or Suggestions: I suggest changing the paper title, as I think it should have an emphasis on efficiency. How about using the answer to the question you posed in the abstract as the title: "can simple neural networks such as.." or something similar? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your positive and constructive comments. We provide a point-by-point response as follows. > ### **Re: Experimental Designs Or Analyses** - **"What are the teacher model(s) used in the experiments in Table 2?"** Teacher model selection settings are described below Figure 4: "Our method is teacher model agnostic (Appendix. K.10), where we set the default teacher model to STGCN." - **"How will the results differ if other teacher models were used?"** This is discussed in Appendix K.10, where we directly compare empirical results obtained using different teacher models. Table 18 provides empirical details, and the process of preparing the teacher model is also described in Appendix K.10. We demonstrate that the superior performance of RSTIB-MLP is independent of the teacher model choice by showing that RSTIB-MLP also achieves strong robustness even when selecting MLP as the teacher model—outperforming or performing comparably to some state-of-the-art (SOTA) STGNNs. The potential reason behind this is elaborated in Appendix K.10. We will clarify this further in our final version. - **"What is the performance of the teacher model before distillation?"** Consistent with the previous response, we present an empirical study in Appendix K.10 to demonstrate that RSTIB-MLP’s performance is independent of the original teacher model's performance. Even when using a basic MLP as the teacher model, RSTIB-MLP can achieve better or comparably good robustness relative to some SOTA STGNNs. - **"Was this a distillation using only the output of the network, or were intermediate features used?"** Only the output of the network is utilized. Specifically, **Definition 4.9** clarifies that the teacher model's output is used exclusively to compute the noise impact indicator. 
> ### **Re: Other Strengths And Weaknesses** - **"What loss function is used in the MLPs?"** Below is our loss function: $$ \mathcal{L}_{RSTIB\text{-}MLP} = \sum _ {i=1}^{N}\left[-\mathcal{L} _ {\text{reg}}(Y_i^S, \tilde{Y}_i)\right] + \sum _ {i=1}^{N}(1 + \hat{\alpha} _ i)(\lambda _ x \mathcal{L} _ {x,i} + \lambda _ y \mathcal{L} _ {y,i} + \lambda _ z \mathcal{L} _ {z,i}) $$ This is also described in Eq.(6). Specifically, $\mathcal{L} _ {\text{reg}}(Y _ i^S, \tilde{Y} _ i)$ is the lower bound of $I(Z;\tilde{Y})$. Please refer to Proposition 4.8 for implementation details and proofs. Additionally, $\mathcal{L} _ {x,i}$, $\mathcal{L} _ {y,i}$, and $\mathcal{L} _ {z,i}$ represent the corresponding regularization terms applied to the input, target, and representation regions respectively. For their analytical calculations, please refer to Proposition 4.6 and Proposition 4.7. Descriptions of how we implement these regularizations are provided above Proposition 4.6 (for input and target) and Proposition 4.7 (for representation regularization). Moreover, $\hat{\alpha}$ is the noise impact indicator defined in Definition 4.9, serving to dynamically adjust the training of RSTIB-MLP when incorporating these regularization techniques. - **"Were the teacher models pretrained or trained from scratch? In general, more details on the distillation procedure should be given."** Relevant details are included in Appendix K.10 due to space limitations. We apologize for the unclear statement in the main text. The teacher model is trained from scratch by ourselves. The knowledge distillation procedure is implemented as follows: 1). Train the teacher model from scratch, following the same procedure as the student model. 2). Freeze the teacher model's parameters and start training RSTIB-MLP. 3). Only the output of the teacher model is leveraged to calculate the noise impact indicator as defined in Definition 4.9. 
According to Definition 4.9, the noise impact indicator is normalized using the Softmax function, reflecting the relative relationships among time series. > ### **Re: Other Comments Or Suggestions** Many thanks, and we really appreciate the suggested idea! Yes, efficiency is one key advantage that we aim to demonstrate. Therefore, we include "MLPs" in our title to stress the point of our method. We will consider this constructive suggestion in our final version. We hope we have addressed all your concerns. Thank you very much!
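As a toy numeric illustration of the Eq.(6) aggregation quoted in Re: Other Strengths And Weaknesses above (all per-series values and the λ settings here are made-up stand-ins, not numbers from the paper):

```python
import numpy as np

def rstib_mlp_loss(l_reg, l_x, l_y, l_z, alpha_hat,
                   lam_x=0.1, lam_y=0.1, lam_z=0.1):
    """Toy aggregation of Eq.(6): the lower bound of I(Z; Y~) is
    maximized (hence the minus sign), while the noise impact
    indicators alpha_hat scale the three regularization terms on
    the input, target, and representation regions per series."""
    l_reg, l_x, l_y, l_z, alpha_hat = map(
        np.asarray, (l_reg, l_x, l_y, l_z, alpha_hat))
    reg = (1.0 + alpha_hat) * (lam_x * l_x + lam_y * l_y + lam_z * l_z)
    return float(np.sum(-l_reg) + np.sum(reg))

# Two toy series: the noisier series (larger alpha_hat) is regularized harder.
loss = rstib_mlp_loss(l_reg=[1.0, 2.0],
                      l_x=[0.5, 0.5], l_y=[0.5, 0.5], l_z=[0.5, 0.5],
                      alpha_hat=[0.4, 0.6])
```

This mirrors the dynamic training regime described in the rebuttal: a larger noise impact indicator intensifies the regularization on that series.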
On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention for Long-Context LLM Serving
Accept (poster)
Summary: This work intends to reduce LLM inference complexity without unacceptable serving-quality degradation. The contributions are two-fold: (1) dual-state linear attention (DSLA): a variant of gated linear attention with two hidden states for history and recency; (2) DSLA-Serve: an online adaptive distillation framework that uses an offline chained fine-tuning recipe and an inference-time Transformer-to-DSLA layer replacement strategy guided by a sensitivity-based layer ordering. Evaluations on commonsense reasoning, long-context QA, and text summarization demonstrate that DSLA-Serve yields 2.3× faster inference than Llama2-7B and 3.0× faster than the hybrid Zamba-7B, while retaining comparable performance across downstream tasks. Claims And Evidence: 1. Claim: DSLA can preserve historical context, which overcomes the short-range bias observed in single-state GLA. Evidence: Figure 5 shows that the recency state and history state in DSLA are both effective. 2. Claim: Attention entropy is an effective sensitivity metric. Evidence: Figure 7 shows a strong correlation between the attention entropy of each layer and its performance impact. Appendix F also compares other metrics. 3. Claim: DSLA-Serve achieves significant inference speed-ups while maintaining competitive task performance compared to both full Transformer and hybrid models. Evidence: Section 5 (supported by Tables 1–3 and Figure 4) reports that the adaptive model runs 2.3× faster than Llama2-7B and 3.0× faster than Zamba-7B. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-tailored to the problem. The authors use a broad range of benchmark datasets (e.g., Multi-Document QA, Code Understanding, WikiText-2, Lambada, and summarization tasks like CNN/DailyMail and XSum) that stress both long-context understanding and short-context performance. The hidden-state weighting factor clearly shows the trade-off between history and recency.
LLM serving metrics such as memory usage and latency show the system advantage. Theoretical Claims: There are no theoretical claims to check. Experimental Designs Or Analyses: I have checked the soundness of the experimental designs and analyses. 1. Section 5.1 evaluates the DSLA models of two conversion rates on perplexity and the performance on long-context understanding. 2. Section 5.2 evaluates the DSLA models on short-context benchmarks 3. Section 5.3 evaluates the inference latency and memory usage during prefill and decoding stages. However, it is unclear what the prompt length is for the decoding latency experiment. 4. Section 5.4 evaluates the end-to-end system performance on open-source LLM serving traces. 5. Section 5.5 does ablation study on the ratio between history and recency, number of states, efficiency compared with Zamba-7B, and the effectiveness of sensitivity metric. Supplementary Material: There is no supplementary material to review Relation To Broader Scientific Literature: DSLA provides new insights to modeling memory and forget mechanisms. DSLA-Serve provides a new recipe for adaptive inference, which is key to understanding the extremely efficient human reasoning process. Essential References Not Discussed: I didn’t identify essential references not discussed in the paper. Other Strengths And Weaknesses: The experiments are comprehensive. However, the effectiveness of sensitivity is unclear after all the post-training stages including RL. Other Comments Or Suggestions: I don’t have other comments or suggestions. Questions For Authors: 1. What LLM serving system are you using? Could you explain why shorter generation length can take longer time in the autoregressive process? GPU memory allocation can explain the burst latency but cannot explain my concern. 2. If I understand correctly, the KV cache of the replaced layer is removed without being incorporated into the hidden states of the new DSLA layer. 
Have you tried to use the KV cache to further improve the model quality? Code Of Conduct: Affirmed. Overall Recommendation: 3
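The entropy-based sensitivity metric this review discusses (used to decide which layers to convert first) can be sketched generically as follows; the exact aggregation over heads, queries, and calibration data in the paper is an assumption here:

```python
import numpy as np

def attention_entropy(attn):
    """Mean Shannon entropy of attention rows.
    attn: (heads, queries, keys), each row nonnegative and summing to 1.
    Lower entropy suggests peakier, more easily approximated attention."""
    p = np.clip(attn, 1e-12, 1.0)          # guard against log(0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

# Toy layer: attention from a softmax over random scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=(2, 3, 5))
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
h = attention_entropy(attn)
```

Ranking layers by such a per-layer score, lowest first, gives one plausible conversion order of the kind the sensitivity metric is meant to provide.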
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive comments! Please see below. **A1. Serving system details** We implemented our inference system on top of the DeepSpeed inference serving framework (MII). In the autoregressive process, for relatively short prefill lengths, our method may be slower than Transformers due to the additional linear projection layers. In addition to the standard K, Q, V, and O projections, DSLA introduces two extra layers to compute G1 and G2 (see Eq. 4 in our paper). This overhead impacts latency when memory is not the bottleneck. However, as we generate more tokens, GPU memory becomes the main bottleneck, and our method begins to outperform Transformers in terms of latency. **A2. KV cache** Thank you for the question. To clarify, the KV cache is not entirely "removed" but rather "implicitly integrated" into the hidden states of the DSLA layers. In the original attention layers, the full KV cache is retained, where each KV pair can be considered a hidden state, and the output depends on all such states. In contrast, the DSLA layer maintains only two hidden states, into which the information from all KV pairs is incrementally aggregated (please check Eq. 4). This design significantly reduces the memory footprint, thereby improving efficiency. **A3. Effectiveness on post-trained models** Thanks for the interesting question! To evaluate our method's effectiveness on reasoning tasks, we tested it on the Qwen/Qwen2.5-Math-1.5B-Instruct model, which is fine-tuned with SFT followed by RL using GRPO. During distillation, we used open-source reasoning datasets [1,2], distilled 10% of the layers, and evaluated on GSM8K and AIME24. Our model matched the teacher's accuracy on GSM8K, showing that our method works on post-trained models.

Table A.

| Model | GSM8K |
|---|---|
| Qwen/Qwen2.5-Math-1.5B-Instruct | 85.0 |
| DSLA-Qwen2.5-Math-1.5B-Instruct | 84.9 |

Reference.
[1] https://github.com/huggingface/open-r1 [2] https://huggingface.co/datasets/HuggingFaceH4/numina-deepseek-r1-qwen-7b [3] Muennighoff, Niklas, et al. "s1: Simple test-time scaling." arXiv preprint arXiv:2501.19393 (2025).
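As a rough numeric illustration of the two-hidden-state aggregation described in A2 above (scalar gates and a constant token stream are illustrative stand-ins; the actual G1/G2 in Eq. 4 are learned, data-dependent projections, as is the interpolation weight):

```python
import numpy as np

d = 4
S1 = np.zeros((d, d))   # recency state: fast-forgetting gate
S2 = np.zeros((d, d))   # history state: gate kept near 1
g1, g2 = 0.5, 0.99      # assumed scalar stand-ins for the G1/G2 gates

for _ in range(16):
    k = v = np.ones(d)                  # toy constant token stream
    S1 = g1 * S1 + np.outer(k, v)       # each KV pair is folded into
    S2 = g2 * S2 + np.outer(k, v)       # both states, then decayed

q = np.ones(d)
w = 0.3                                  # assumed interpolation weight
y = w * (q @ S1) + (1.0 - w) * (q @ S2)  # blended read-out
```

Regardless of sequence length, only the two fixed-size states are kept, which is the source of the memory savings over a growing KV cache; the slower-decaying state retains more of the early stream.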
Summary: This work presents a robust approach to deploying gated linear attention (GLA) in practical production environments, addressing key limitations through two primary innovations. First, the authors address GLA’s strong recency bias, which impairs long-context performance. To mitigate this, they propose a dual-state mechanism. One hidden state models local context with a randomly initialized forget gate, while the other models long-term dependencies with a forget gate initialized closer to one (preserving earlier information). By introducing data-dependent dynamic interpolation and a contrastive penalty loss, the model effectively balances reliance between these two states. Second, the paper proposes DSLA-Serve, an adaptive distillation framework that selectively converts Transformer layers into DSLA layers during inference. To guide this conversion, the authors introduce an entropy-based sensitivity metric that identifies less critical layers. The framework dynamically substitutes these layers based on system conditions, ensuring performance stability. The chained fine-tuning strategy is introduced to maintain consistency after layer conversion. The experimental results demonstrate that DSLA effectively mitigates recency bias and achieves competitive performance on benchmarks, offering improved efficiency compared to other hybrid models. Additional discussions cover batched inference strategies for further deployment optimization. Claims And Evidence: Figures 3 and 5 provide evidence that the dual-state mechanism effectively enhances long-context capabilities. Table 4 demonstrates that converting multiple layers to linear attention can maintain performance. Regarding the efficiency claim, the evaluation is limited to comparisons with Llama2-7B and Zamba-7B, omitting a comparison with other hybrid architectures such as NVIDIA's 7B hybrid Mamba2. 
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem at hand. The paper evaluates performance on both short- and long-context benchmarks, complemented by efficiency comparisons, aligning well with the intended application. Theoretical Claims: This is not a theoretical paper. Experimental Designs Or Analyses: Yes, the experimental design and analyses appear generally sound. Ablation studies are relatively thorough, but the impact of chaining order during fine-tuning is not fully explored. Supplementary Material: No Relation To Broader Scientific Literature: This paper leans more toward an industrial focus; however, the proposed dual-state mechanism presents a meaningful methodological contribution to long-context modeling in linear attention. The dual-state design effectively addresses recency bias, a known limitation in linear attention models, by combining specialized hidden states for local and long-term dependencies. This innovation aligns with broader research on improving linear attention mechanisms and contributes to the ongoing effort to enhance efficiency in hybrid softmax/linear attention architectures. By enabling more effective deployment of such hybrids in long-sequence settings, this approach opens new opportunities for scalable long-context modeling. Essential References Not Discussed: No Other Strengths And Weaknesses: Overall, I believe this is a solid industrial-style paper that aims to improve efficient long-context modeling. If ICML is receptive to such contributions, I would recommend acceptance. However, the paper lacks comparisons with key baselines, particularly other distilled hybrid models such as LOLCATs, Transformers to SSMs, and Mamba-in-LLaMA. Including these comparisons would provide a more comprehensive evaluation of the proposed method’s effectiveness. 
Other Comments Or Suggestions: No Questions For Authors: Have you explored more recent linear attention variants, such as Gated DeltaNet [ICLR '25]? It would be interesting to investigate whether the proposed dual-state technique is applicable to other types of linear attention. Demonstrating its generalizability could significantly strengthen the paper’s contribution. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive comments! Please see below. **A1. Comparison to Other Baselines** Thanks for the great suggestions. We compare our 1.5B-scale model with Phi-Mamba-1.5B [1] and our 7B-scale model with distilled Mamba [2]. As demonstrated in Tables A and B, our method consistently outperforms these baselines as well. For the 1.5B-scale model, we used the same teacher model, Microsoft's phi-1_5, and used 800M tokens. We used the same environment setting and dataset reported in the main draft.

Table A.

| Method | Token | Winogrande | ARC-e | ARC-c | PIQA | Hellaswag |
|---|---|---|---|---|---|---|
| Transformers to SSMs [1] | 3.0B | 71.7 | 74.0 | 44.1 | 75.5 | 60.2 |
| DSLA (Ours) | 800M | **72.93** | **74.49** | **44.45** | **75.90** | **60.48** |

For the 7B-scale model, note that we used Llama2-7B, while [2] started from Llama3.

Table B.

| Method | PiQA | ARC-c | Hellaswag | MMLU | Winogrande |
|---|---|---|---|---|---|
| distilled Mamba [2] | **78.7** | **52.4** | **77.7** | 42.4 | 64.8 |
| DSLA (Ours) | **78.7** | 42.8 | 74.1 | **48.5** | **67.8** |

Also, thanks for bringing up Nvidia's 8B Mamba Hybrid model. It is an interesting baseline for hybrid architectures. We chose Zamba-7B as one of the representative hybrid models because it is an open-source model of similar scale (7B) that can run on our machines. While both Zamba-7B and Nvidia's Hybrid Mamba share architectural similarities, both are limited by their fixed architectures, whereas DSLA can dynamically adapt to load variability during inference. **A2. Generalization: Application to a different linear architecture** Thanks for the great suggestion! Unfortunately, we were not able to obtain the parameters for GatedDeltaNet, as they were not public. Instead, we applied the proposed method to an alternative sub-quadratic model, Mamba, to demonstrate its generalizability.
Mamba shares similarities with the GLA variant we picked in the paper due to the presence of a "selective gate". We extended Mamba's selective gate to a dual gate and fine-tuned the gate parameters with 400M tokens using the same setting as in our paper. Due to time constraints, we were only able to convert 25% of the layers. Table C shows the zero-shot performance of single-state Mamba and dual-state Mamba.

Table C.

| Model | Winogrande | ARC-e | ARC-c | PIQA | Hellaswag |
|---|---|---|---|---|---|
| Mamba-1.3B | 54.1 | 59.0 | 28.2 | 72.2 | 40.1 |
| Dual-state Mamba [25%] | **54.4** | **60.1** | **28.7** | **72.2** | **47.5** |

This highlights the generalizability of the DSLA method across various architectures.

Reference.

[1] Bick, Aviv, et al. "Transformers to SSMs: Distilling quadratic knowledge to subquadratic models." Advances in Neural Information Processing Systems 37 (2024): 31788-31812.
[2] Wang, Junxiong, et al. "The mamba in the llama: Distilling and accelerating hybrid models." Advances in Neural Information Processing Systems 37 (2024): 62432-62457.
Summary: This paper introduces an online distillation framework that dynamically converts Transformer layers to dual-state linear attention (DSLA) during inference to improve efficiency for long-context LLM serving. DSLA uses two specialized hidden states to better preserve both historical context and recent information, addressing the short-range bias typical of linear attention architectures. Through a sensitivity-based layer conversion ordering and a chained fine-tuning strategy, DSLA-Serve achieves 2.3-3.0× faster inference compared to baselines while maintaining comparable accuracy across various downstream tasks. Claims And Evidence: Well supported claims: 1. *Dual-state architecture*: The claim that it maintains both historical and recent context is supported through ablation analysis against a single-state architecture and attention/weighting-factor visualizations. 2. *Framework adaptability*: The framework is shown to be adaptive on a real-world workload, demonstrating that the conversion rate is adapted to the prompt length, which leads to about a 2.27x reduction in latency. 3. *Inference speed*: The authors provide evidence for the greater-than-2x speed claims using two different base large language models, the Llama2 and Zamba 7B models. In addition, they show latency improvements in a real-world workload scenario. Partially supported claims: 4. *Comparable quality to baselines*: The results do support that quality is comparable on average, but the individual results are sometimes worse and are dependent on the conversion rate. This could be problematic, as it may be non-trivial to find a conversion rate that works well across all datasets. Methods And Evaluation Criteria: Overall, the methods and evaluation criteria are well-chosen and appropriate for validating the paper's claims about improving LLM serving efficiency while maintaining performance. However, I have the following concerns: 1.
The authors compare only against a single type of single-state baseline, which is quite simplistic. Gating-based attention models, for example, even though they are single-state, could be capturing the historical context better. Ideally, the paper would be stronger if it provided evidence that the dual-state approach is useful for different types of recurrent attention mechanisms. 2. A study of the scaling effect on the performance of the proposed method is missing. There is no evidence that the approach works as the model size increases; the evaluation should demonstrate the relative effect even with a limited budget (e.g. 1B, 4B, 7B, 12B). 3. Performing a qualitative analysis would be useful to better understand why there is degradation or improvement on certain datasets. It is somewhat surprising that the proposed method works better than full attention, especially on code understanding and multi-doc QA. Theoretical Claims: The paper is empirical in nature and doesn't make any explicit theoretical claims. It only uses prior theoretical findings in the complexity analysis part. There are no formal proofs or theoretical arguments regarding the effect of conversion on model capacity or the optimality of dual states. Experimental Designs Or Analyses: The experimental design is sound for the most part: baseline comparisons, evaluation task diversity, ablation studies, and real-world workload results. There are a few parts that can be improved in the exposition and analysis: 1. Adding some discussion of the training overhead that is introduced when training the dual-state attention. 2. It would be useful to experiment with different deployment scenarios to identify in which ones the proposed approach is most beneficial. 3. A few technical details are missing from the experiment section, such as what inference optimization techniques were used and the details of the inference workflow used by each method. 4.
The experimental analysis would benefit from showing some statistical error information to better understand the performance differences and how consistent they are. Supplementary Material: Read up to section F. Relation To Broader Scientific Literature: C1: Dual-state linear attention. The literature in this area has focused mainly on single-state mechanisms (e.g. Mamba, GLA). Using a dual state seems to be novel in this area and should be applicable to different types of methods in principle. Other than that, this work makes use of well-established techniques such as GLA and theoretical (complexity) or empirical (recency bias) findings from prior work. C2: On-the-fly layer conversion and efficient serving. There are offline methods focusing on reducing the complexity of attention through fine-tuning or layer-wise training (e.g. https://arxiv.org/abs/2103.13076). The on-the-fly conversion idea is interesting and is adaptive to different workloads, but it is not compared to and positioned well with respect to other distillation methods (https://arxiv.org/abs/2305.02301), model compression methods (https://openreview.net/pdf?id=MxF0IKJtKW), or other inference optimization methods (flash attention, paged attention, KV caching). Essential References Not Discussed: See my reply above. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above Questions For Authors: 1. What is the training overhead cost of the proposed approach? Please mention how long it takes to train each layer and the total time it would take to convert the whole model. 2. Could the authors elaborate on the intuition behind the dual attention state? Why is it sufficient, and how does layer conversion affect model capacity? 3. After which level of conversion does the quality of the model start to deteriorate? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive comments! Please see below. **A1. Intuition behind dual state and model capacity** This work stems from our observation (Fig 1) that a single-state linear attention struggles to capture the full range of contextual information handled by self-attention. To address this, we introduce a dual-state mechanism that separately captures historical and recent context (Fig 5). Similar observations on foundation models' memorization were also made in contemporary pre-training works (e.g. [6]). Our ablation (Tab 5) shows that more than two states yield limited gains. Despite the shift from self-attention, model capacity remains largely intact due to the dual states' expressiveness, as evidenced by strong results across LongBench (Tab 1), summarization (Tab 2), and Harness (Tab 3). **A2. Generalization: Application to a different linear architecture** Due to the space limit, please check A2 and Table C in our response to Reviewer a2iq. **A3. Scalability: Application to different model sizes** As requested, we evaluated our method on both 1.5B and 7B models. Results for the 7B model are in Tables 1–3 of the paper. Table A shows 1.5B results, where our distilled hybrid model outperforms the baseline Transformer (microsoft/phi-1_5) and the 1.5B SSM model. Similar trends are observed at the 7B scale.

Table A.

| Model | Token | Wino | ARC-e | ARC-c | PiQA | HS |
|---|---|---|---|---|---|---|
| Transformer | 150B | 73.4 | 75.0 | 48.0 | 76.6 | 62.6 |
| SSM [1] | 3.0B | 71.7 | 74.0 | 44.1 | 75.5 | 60.2 |
| DSLA | 800M | **72.93** ± 0.01 | **74.49** ± 0.08 | **44.45** ± 0.01 | **75.90** ± 0.01 | **60.48** ± 0.0049 |

**A4. On performance gains on Code Understanding and Multi-doc QA** Tasks where DSLA performs better: Our method outperforms Zamba on code understanding, likely due to the strong code understanding of our teacher model (LLaMA-2-7B).
For Multi-doc QA, we even surpass the full-attention teacher, possibly because dual-state linear attention alleviates the "lost in the middle" issue. To support this, we ran a needle-in-a-haystack task and observed better retrieval when answers appeared between positions 1k–2k (e.g., DSLA retrieved 80% of answers at depth=19%, compared to 40% for the Transformer). Since Multi-doc QA often involves mid-context retrieval, DSLA is particularly effective. Tasks where DSLA performs worse: The performance drop may stem from approximation limitations in certain layers. We view deeper task-specific analysis as promising future work and will highlight these findings in the paper. **A5. Different deployment scenarios** While our method targets large-scale serving with multiple users, it also benefits single-user settings like PCs or edge devices. Whereas the need for adaptiveness in large-scale systems arises from temporal variability and spatial imbalance, as explained in Sec. 3.2, the need for adaptiveness in edge devices stems from dynamic changes in power requirements [4] while meeting a specific SLO. Static attention models may underperform under such variability, but DSLA adapts to changing hardware limits. To simulate an edge scenario, we ran a 340M model on a P100 GPU (12GB) and saw a 1.2× latency improvement over full attention when generating 1024 tokens from a 128-token prompt. **A6. Discussion on conversion rate** Importantly, our method highlights *progressive and selective conversion* to minimize accuracy degradation. Additionally, our framework allows setting a maximum conversion threshold to ensure accuracy is not significantly impacted. **A7. On performance deterioration** Empirically, we observed that performance begins to deteriorate when more than 75% of the model is converted, similar to [2]. However, this degradation can be mitigated through additional instruction tuning or LoRA, as in [3]. **A8.
Training overhead details** Training a single layer of a 7B model on 1B tokens takes ~5 hours on 4×A100 GPUs (80GB). Compared to full pretraining (858 days [5]), our fine-tuning costs just ~0.07%, a one-time expense. **A9. Statistical error information** We report the statistical error of end-to-end latency in Table B and of the LM Harness results in Table A. We will update the other experiments in the final version. Table B. | Model | Latency | |---|---| | Llama2-7B | 93.64 ± 5.8 | | Zamba-7B | 122.52 ± 8.6 | | DSLA-7B | 40.83 ± 2.2 | [1] Transformers to ssms: Distilling quadratic knowledge to subquadratic models [2] The mamba in the llama: Distilling and accelerating hybrid models [3] LoLCATs: On Low-Rank Linearizing of Large Language Models [4] Dynamic-ofa: Runtime dnn architecture switching for performance scaling on heterogeneous embedded platforms [5] Llama: Open and efficient foundation language models [6] Leave no context behind: Efficient infinite context transformers with infini-attention --- Rebuttal Comment 1.1: Comment: Thank you for the replies. The rebuttal addressed most of my concerns: generalization to different architectures (via Mamba extension), varying model sizes (1.5B and 7B evaluations), insights about model improvement on code and long-doc tasks, statistical error reporting, and clarification on training overhead (minimal at ~0.07% of pretraining cost). For this reason, I decided to improve my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for taking the time to review our rebuttal and for your thoughtful follow-up. We really appreciate your feedback and are glad to hear that our clarifications addressed your concerns. Best regards, Authors
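For readers unfamiliar with the mechanism debated in A1 above, the dual-state idea can be sketched as a pair of linear-attention recurrences with different decay rates. Everything below (function name, decay values, the 50/50 output mix) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

def dual_state_linear_attention(Q, K, V, decay_hist=0.99, decay_recent=0.5):
    """Toy single-head dual-state recurrence (illustrative only).

    S_hist decays slowly and accumulates historical context, while
    S_recent decays quickly and tracks recency; their readouts are mixed.
    """
    d = K.shape[1]
    S_hist = np.zeros((d, d))
    S_recent = np.zeros((d, d))
    outputs = []
    for q, k, v in zip(Q, K, V):
        kv = np.outer(k, v)                      # rank-1 state update
        S_hist = decay_hist * S_hist + kv        # slow forgetting: history
        S_recent = decay_recent * S_recent + kv  # fast forgetting: recency
        outputs.append(0.5 * (q @ S_hist) + 0.5 * (q @ S_recent))
    return np.stack(outputs)

rng = np.random.default_rng(0)
T, d = 16, 4
Q, K, V = rng.normal(size=(3, T, d))
out = dual_state_linear_attention(Q, K, V)
print(out.shape)  # prints (16, 4)
```

Because each state is updated with a constant-size outer product, the cost per token is O(d^2) regardless of sequence length, which is the efficiency property the latency numbers in this thread rely on.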
Summary: This paper introduces Dual-State Linear Attention (DSLA), a novel attention mechanism designed to mitigate the short-range bias of traditional linear attention methods while maintaining the efficiency benefits necessary for long-context LLM serving. The key idea is to maintain two specialized hidden states, one for capturing historical context and another for tracking recency, allowing DSLA to balance global and local dependencies better than prior linear attention methods. To efficiently integrate DSLA into Transformer architectures, the paper proposes DSLA-Serve, an adaptive on-the-fly distillation framework that selectively replaces Transformer layers with DSLA layers during inference. This conversion is guided by sensitivity-based layer ordering, ensuring that DSLA layers are introduced progressively without degrading performance. The chained fine-tuning strategy ensures that each newly converted layer remains compatible with the previously replaced ones, preserving model consistency across partial conversions. Experiments demonstrate the effectiveness of DSLA. Claims And Evidence: Claim: Canonical single-state linear attention overemphasizes recent tokens. Evidence: observation in Fig. 1. Claim: DSLA enables the model to flexibly trade off accuracy and efficiency. Evidence: Tab. 1, 2, 3, 4. Methods And Evaluation Criteria: The standard benchmarks for LLMs are adopted. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: I haven't spotted any problems with the experiment designs. The paper uses a standard LLM setting and compares with well-known baselines like GLA and Zamba. Supplementary Material: Yes, I checked the supplementary materials. Relation To Broader Scientific Literature: In contrast to previous single-state linear attention, this paper proposes a dual-state solution that is flexible to use and efficient. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1.
Novelty: the method is novel compared with existing linear attention works. 2. Practicality: since the method does not require re-training, it can be adapted to existing Transformers. 3. Clarity: charts and plots demonstrate the ideas clearly. Weaknesses: The authors are advised to add more baselines for comparison, including RetNet and Mamba. Other Comments Or Suggestions: The system overview in the appendix should be moved to the main paper for better understanding. Questions For Authors: Apart from latency, is it possible to report and compare FLOPs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive comments! Please see below. **A1. Comparison with RetNet and Mamba** We added more baselines, including RetNet [1] and Mamba [2], and measured zero-shot performance on challenging tasks including PiQA, ARC-challenge, Hellaswag (HS), MMLU, and Winogrande. On average, DSLA (ours) outperforms both RetNet and Mamba. Table A. | Model | PiQA | ARC | HS | MMLU | WG | Avg | |:---:|:---:|:---:|:---:|:---:|:---:|:---:| | RetNet 6.7B | 77.8 | 39.9 | 72.9 | 26.1 | 66.1 | 56.56 | | Mamba 7B | 78.3 | 42.8 | **77.8** | 33.0 | **71.9** | 60.76 | | Ours | **78.7** | **43.2** | 75.4 | **46.1** | 69.9 | **62.66** | **A2. FLOPs Report** Thanks for the great suggestion. Please note that we primarily reported end-to-end latency in our paper, as FLOPs reflect only the total computation per inference and do not account for dynamic load, memory bandwidth, or parallelism. The following table shows FLOPs measured with torchprofile [3], obtained by feeding a fixed input of length 12 and performing a single-token generation. Since FLOPs scale linearly with the number of activated parameters, Zamba-7B exhibits higher FLOPs due to its repeated activation of shared attention layers. Table B. | Model | Architecture | GFLOPs | |---|---|---| | Llama-2-7B | Transformer | 158.6 | | Mamba 7B | SSM | 125.6 | | Zamba 7B | Hybrid | 277.7 | | DSLA 7B (100% converted) | Dual-state Linear Attention | 163.5 | We will reorganize the figure’s location in our revised version. Thanks for your thoughtful comments! **Reference** [1] Sun, Yutao, et al. "Retentive network: A successor to transformer for large language models." arXiv preprint arXiv:2307.08621 (2023). [2] Gu, Albert, and Tri Dao. "Mamba: Linear-time sequence modeling with selective state spaces." arXiv preprint arXiv:2312.00752 (2023). [3] https://github.com/zhijian-liu/torchprofile
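As a sanity check on the Table B figures in A2, the Transformer number is consistent with the common back-of-the-envelope rule of roughly 2 FLOPs per parameter per token for a dense decoder. A minimal sketch (the 6.74B parameter count for Llama-2-7B is an outside assumption taken from public model descriptions, not a number from this thread):

```python
def approx_gflops(n_params, n_tokens):
    """Back-of-the-envelope dense-decoder cost: ~2 FLOPs per parameter per token."""
    return 2 * n_params * n_tokens / 1e9

# 12-token input, as in the torchprofile setup described in A2.
est = approx_gflops(n_params=6.74e9, n_tokens=12)
print(round(est, 1))  # 161.8, close to the 158.6 GFLOPs reported for Llama-2-7B
```

The rule also explains the rebuttal's point that FLOPs scale with the number of activated parameters, which is why Zamba's shared-layer reuse inflates its count.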
Rethinking Time Encoding via Learnable Transformation Functions
Accept (poster)
Summary: This paper proposes a learnable time representation framework—referred to as LeTE—that aims to improve upon prior time encoding methods which rely on fixed or narrow inductive biases (e.g., purely sinusoidal functions). The authors introduce two learnable approaches for modeling time: one based on Fourier series expansions and another on B-splines. Additionally, they propose a combined version that leverages both. By making the transformation functions learnable, the approach can, in principle, encompass various existing time encodings (like Time2Vec) as special cases. The authors further claim invariance to time rescaling and better interpretability over prior methods. Claims And Evidence: **Time Rescaling Invariance** The authors assert that the proposed method is invariant to time rescaling (e.g., changes in units from days to hours). While they provide a theoretical argument, the empirical validation of this property is less explicit in their experiments. **Enhanced Interpretability** The paper claims better interpretability over previous methods by allowing direct reconstruction of the learned transformation functions. However, the evidence provided (largely visualizations in the appendix) may not conclusively show that these representations are more interpretable than, for example, a straightforward sinusoidal basis. Additional qualitative or quantitative assessments would strengthen this claim. Methods And Evaluation Criteria: Yes. The paper’s chosen methods and evaluation criteria appear well-matched to the goal of improving time encodings and testing them in realistic contexts. Theoretical Claims: I checked the correctness of their equations in the main text. Experimental Designs Or Analyses: Overall, the experimental designs are broadly sound, using appropriate metrics and mainstream baselines for time-series and dynamic graph tasks. 
Supplementary Material: I checked the KAN-related section to figure out the relationship between the two, but the supplementary material does not demonstrate it. Relation To Broader Scientific Literature: Overall, the paper’s key contributions fit naturally into, and extend, existing streams of research on time-encoding and functional approximation within neural models, offering a flexible drop-in alternative that maintains or improves upon the advantages of earlier fixed-basis (sinusoidal) approaches. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: - Strength **Comprehensive Literature Review** The authors provide a clear, concise overview of earlier time encoding methods, illustrating how LeTE builds upon and generalizes them. **Well-Organized Composition** The paper’s structure, with clear figures, tables, and references, helps the reader grasp the proposed approach and its variants (Fourier-based, B-spline-based, combined). **Generalization to Prior Methods** By demonstrating that prior time-encoding approaches (like Time2Vec) are particular cases of LeTE, the authors show strong potential for “plug-and-play” deployment. This could be attractive for practitioners seeking simple drop-in enhancements. - Weakness **Overstated Claims** While the paper provides theoretical proofs for invariance and interpretability, the experimental demonstrations of these claims are not as strong. For instance, readers would benefit from direct empirical evidence or metrics that validate rescaling invariance. **Clarity on How Time Embeddings Are Used** It would help to explain more concretely how these learned embeddings tie into the final predictions, and to clarify why this learnable embedding is superior to previous ones. **Interpretability Remains Nuanced** Although LeTE can reconstruct learned transformations, “interpretability” is not necessarily obvious. Additional evidence (beyond raw visualizations) would make the case more convincing.
Other Comments Or Suggestions: Please see weaknesses. Questions For Authors: **Nature of the Learned Curve** From Figure 2, it looks like the time embedding is effectively learning some function of t through various basis functions. Beyond the direct representation of time, do you see these learned curves capturing other hidden phenomena (e.g., seasonalities, abrupt events)? How should readers interpret or use these curves in practice? **Role of the Scaling Factor $s$** You introduce a learnable scaling factor after the layer normalization in each dimension. Given that you already learn coefficients within the B-spline or Fourier expansions, can you clarify why this additional scaling is necessary? **Connection to the Kolmogorov–Arnold Theorem** The use of B-splines and references to function superposition remind me of current lines of work involving the Kolmogorov–Arnold theorem, such as Kolmogorov–Arnold Networks (KANs). Could you elaborate on whether LeTE is conceptually related to KAN-based approaches? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Rescaling** Thank you for the insightful comment. We agree that empirically demonstrating rescaling invariance is important. To directly support this, we conducted an additional tiny experiment by applying the Combined LeTE to Wikipedia/TGN, using two different time input scales: t=t/60 (interpreted as minutes), and t=t/3600 (interpreted as hours), whereas previous experiments used Unix timestamps (in seconds). As shown in the results of this experiment (please see the [anony. repo. Rescaling](https://anonymous.4open.science/r/LeTE_ICML25_Rebuttal/Rescaling.pdf)), both versions achieve similar performance and outperform the baselines. Minor differences may be attributed to other factors in the training environment. While we provide proofs of rescaling invariance in Appendix C.3, we would also like to clarify that our existing experimental settings indirectly verify this property: In Sec. 4.2 (Time-series), we use absolute Unix timestamps, which are large-scale values. In Sec. 4.3 (Dynamic Graph), we use relative time differences, typically much smaller in scale. Despite the difference in time magnitude across these settings, LeTE consistently outperforms the baselines, illustrating strong robustness to changes in time scale. This further supports the claim that LeTE is inherently invariant to time rescaling, thanks to its learnable transformations. **Interpretability** Please refer to our responses to **Reviewer 1gph** and **Reviewer ozTs.** We made a detailed analysis of the learned curves and demonstrate how these curves can be interpreted in practice. **How TEs Are Used** In practice, the learned LeTE are used in one of two ways: Added to feature embeddings (in time series models): $\mathbf{x} = \text{TokenEncode}(x) + \text{LeTE}(t)$; or concated with node and edge features (in dynamic graphs): $\mathbf{x} = [\text{Node Features} | \text{Edge Features} | \text{LeTE}(t)]$. 
In this way, LeTE provides temporal signals to the model, enabling it to modulate attention weights or node interactions based on temporal context. By contrast, prior time encoding methods either use hand-crafted encodings or fixed sine functions, limiting their ability to represent complex time patterns (periodic, non-periodic, and mixed). Moreover, LeTE leverages learnable non-linear transformations, both Fourier-based and spline-based, allowing it to learn time patterns directly from data in a flexible, data-driven manner and to capture richer time patterns. As shown in our experiments (Sec. 4), replacing prior TEs with LeTE consistently improves performance across a diverse set of downstream tasks, demonstrating that the learned embeddings are not only more expressive, but also more generalizable. **Scaling weight** We added the learnable scaling weight $s_i$ after LayerNorm for the following reasons. Initially, we introduced it empirically: comparing the performance of models trained with and without these scaling weights, we found that the version with scaling performs better. Upon further reflection, we believe the performance gain can be attributed to the following: We apply LayerNorm to each dimension of the transformed signal to stabilize optimization and ensure comparable scales across dimensions. However, this normalization step removes the original scale information that might have been encoded by the learned function coefficients. The scaling factor $s_i$ reintroduces flexible amplitude control after normalization. In practice, adding the scaling leads to slightly better performance, as it allows each dimension to adjust its impact during learning. Without this factor, some dimensions may become under- or over-represented. **Relationship with KAN** You are correct that the use of B-splines in LeTE is conceptually connected to the ideas behind the Kolmogorov–Arnold theorem (KAT).
The motivation behind LeTE arises from the limitation observed in existing time encoding methods: they typically employ fixed non-linear functions, which constrain their ability to model diverse time patterns. To overcome this, we adopt a deep function learning perspective, introducing learnable transformations through either Fourier series or B-splines. This enables LeTE to flexibly encode complex mixed time patterns. While this design philosophy aligns with the spirit of KANs, there are some differences: 1. LeTE is designed specifically to address limitations in time encoding, serving as a lightweight, plug-in module for downstream tasks. In contrast, KANs are proposed as general network architectures. 2. LeTE includes not only spline-based functions but also introduces a Fourier-based one, particularly suited for modeling periodic patterns, whereas KANs primarily rely on splines. 3. KANs are typically layered architectures, while LeTE acts as a plug-and-play TE that maps time to vector embeddings, which are then fed into larger models. --- Rebuttal Comment 1.1: Comment: I thank the authors for the comprehensive reply. Most of my concerns were addressed, and I realized I had a misunderstanding about rescaling invariance. I will adjust my rating accordingly. --- Reply to Comment 1.1.1: Comment: We are sincerely grateful for your positive assessment and for raising your score! Thank you for your thoughtful recognition and constructive feedback. We will carefully revise our paper based on your suggestions to improve its clarity. Once again, thank you for your valuable insights. Your detailed comments have been incredibly helpful to us!
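The rescaling argument from the **Rescaling** reply above admits a one-line numerical check: when the time unit changes by a factor c, a learnable frequency can simply absorb the factor, leaving the encoding unchanged. The numbers below are arbitrary illustrations:

```python
import math

c = 60.0          # unit change: seconds -> minutes, t' = t / c
w = 0.01          # a learned frequency on the original scale
t = 12_345.0

enc_original = math.sin(w * t)
enc_rescaled = math.sin((w * c) * (t / c))   # rescaled frequency absorbs c

assert abs(enc_original - enc_rescaled) < 1e-9
print("identical encodings under time rescaling")
```

The same cancellation applies to any learnable input transformation of the form w*t, which is why both the Fourier-based and spline-based variants can adapt to a change of units during training.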
Summary: The paper proposes a time encoding method that can work as a plug-and-play module to capture diverse patterns in the real world. The method is motivated by the observation that existing time encoding approaches struggle to capture non-periodic and mixed patterns. To capture such complex patterns, the paper proposes to transform timestamps into representations via a combination of Fourier series and spline functions. Owing to their inherent inductive biases, Fourier series are better at capturing periodic patterns, whereas spline functions excel at modeling non-periodic patterns. The paper assesses the effectiveness of the proposed method by replacing the time encoding modules with the proposed one in various time series tasks. The experimental results confirm the superiority of the proposed model. ## update after rebuttal The authors have partially addressed my concerns, and I am now leaning toward acceptance. I suggest incorporating the newly conducted experiments and the comparison with prior work (TIDER) in the revised version to better highlight the novelty. Additionally, the current experiments only focus on methods modeling temporal patterns; assessing LeTE’s efficacy in time series forecasting would be more comprehensive if it were also compared with approaches designed to capture spatial correlations, such as iTransformer [2] and Sumba [3]. [2] iTransformer: Inverted transformers are effective for time series forecasting, ICLR 2024. [3] Structured matrix basis for multivariate time series forecasting with interpretable dynamics, NeurIPS 2024. Claims And Evidence: Most of the claims are supported by experiments. In the introduction, the paper argues that existing time encoding methods cannot capture the complex patterns caused by holidays, but no evidence is provided to verify whether the proposed method can capture such patterns. It would be better to provide experiments demonstrating this claim.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem reasonable to me. Theoretical Claims: The proofs seem correct. Experimental Designs Or Analyses: In the method part, the authors suggest setting the hyperparameter $p$ to 0.5, but in the experiments, they adjust it across different datasets and methods. The reviewer is curious whether the efficacy of the proposed method is sensitive to this hyperparameter, because its applicability would be significantly limited in practice if the performance largely hinges on careful tuning of this hyperparameter. The paper should present the results in Table 1 by setting $p$ to its default value of $0.5$. Supplementary Material: I checked the appendix and the code repository. The code files in the repository are invalid, and the page shows "The requested file is not found" when clicking on them. Relation To Broader Scientific Literature: The proposed method can work as a plug-and-play module for various time series modeling approaches. Essential References Not Discussed: The idea of adopting Fourier series to learn representations for time series has been explored in [1]. The paper should discuss its connection to, and distinction from, this prior work to clarify its novelty. [1] Multivariate Time-series Imputation with Disentangled Temporal Representations, ICLR 2023. Other Strengths And Weaknesses: Strengths: * The proposed method is invariant to time rescaling. * The paper is well-written and easy to follow. Weaknesses: * It is not clear if the performance gain is sensitive to the hyperparameter $p$. * The provided code repository is invalid, which raises a concern regarding reproducibility. * The novelty of the proposed method should be further clarified by comparing it with prior work [1]. [1] Multivariate Time-series Imputation with Disentangled Temporal Representations, ICLR 2023.
Other Comments Or Suggestions: * The paper should present the results in Table 1 by setting $p$ to its default value of $0.5$. * It is better to provide the experiments to support the claim of capturing the complex patterns caused by holidays. * Please update the code repository to eliminate the concern regarding reproducibility. Questions For Authors: Please refer to the questions listed above regarding hyperparameter sensitivity, reproducibility, and novelty clarification. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Sensitivity to the hyperparameter** The performance gain can indeed be influenced by the hyperparameter $p$. As we analyze in the experiments on dynamic graphs (please refer to Appendix G.2, “Comparative Analysis of Different Variants of LeTE,” and Table 9 for details), the performance remains robust across different values of $p$. Specifically, as shown in Table 9, regardless of whether $p$ is set to 0, 0.5, or 1, our method almost consistently outperforms the baselines. However, the downstream results are affected by $p$. For the time series forecasting experiments, we also choose $p$ only from the set {0, 0.5, 1}, so there is no concern regarding careful tuning of this hyperparameter in practice. We have also uploaded a result table where $p$ is set to 0.5 (Tab. 1 in the [new anony. repo., especially for the rebuttal](https://anonymous.4open.science/r/LeTE_ICML25_Rebuttal/Table%201%20and%205.pdf)). As seen in the newly uploaded tables, our method still achieves high win rates for both MSE and MAE. Intuitively, slight tuning of $p$ within the set {0, 0.5, 1} can lead to even better results. **Code repository** We have updated the [original anony. repo.](https://anonymous.4open.science/r/LeTE); we hope it works now. We also newly added the running logs and the requirements file to the repository. If you are still facing the "The requested file is not found" problem, you may try downloading the repository and checking it locally. Alternatively, you may also download our code from the OpenReview Supplementary Material; we uploaded a copy of the same code when we submitted our manuscript. **Comparison with [1]** Thank you for pointing this out. We acknowledge that [1] (TIDER) also incorporates Fourier series to model temporal patterns in multivariate time-series data. However, our work differs from TIDER in terms of motivation, model design, and application scope.
**Motivation:** TIDER employs Fourier series to model the seasonal component of time-series data as part of a decomposed temporal structure (trend + seasonality + local bias) within a matrix factorization framework, specifically for the task of missing value imputation. In contrast, LeTE is a task-agnostic, general-purpose time encoding module, designed to replace previous temporal encodings with fully learnable transformations. Our goal is to provide a unified and flexible time encoding mechanism that can be applied across different temporal modeling scenarios. **Model Design:** TIDER is specialized for a specific task and evaluated exclusively on imputation. Its architecture is tightly coupled with that objective. In TIDER, Fourier series are used internally to model a latent factor $V_s$, and only for capturing periodicity, within a low-rank matrix factorization design. In LeTE, Fourier-based functions are used to encode timestamps into embeddings, which are then used across multiple downstream tasks. The encoding serves as a plug-and-play module and is jointly trained with downstream objectives. **Application scope:** Our experiments cover time-series, dynamic graph, event-based tasks, and real-world applications, demonstrating that LeTE is not only expressive but also highly adaptable across domains. **Moreover,** LeTE introduces a unified framework for time encoding via deep function learning, encompassing not only Fourier-based functions but also spline-based functions, and even hybrid combinations (Combined LeTE). **In summary,** while both works utilize Fourier series, TIDER focuses on a specific modeling component for a single task, whereas LeTE introduces a general-purpose, extensible, and theoretically grounded time encoding framework that supports a wide range of tasks in temporal modeling. **Holidays** From the perspective of a long time window, holidays can be regarded as a type of periodic pattern. 
As we demonstrate that our method effectively captures periodic patterns, the patterns caused by holidays can also be captured. Please also refer to Appendix G.4: Capturing Periodic, Non-Periodic, and Mixed Patterns in Data, and Fig. 11 and 12 for more details. As a brief recap, we demonstrate that our method enables models to capture periodic, non-periodic, and mixed time patterns. Specifically, the holiday-related periodicity can be simulated by the low-frequency signal shown in Fig. 11 (synthetic periodic data). We kindly invite you to review our responses to **Reviewer 1gph** and **Reviewer ozTs**, where we illustrate the interpretability of our proposed method and explain how it effectively captures different temporal patterns.
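To complement the discussion of periodic vs. non-periodic patterns, here is a minimal Cox-de Boor sketch of the spline side of such an encoding: a weighted combination of local B-spline basis functions can bend the encoding in one region of time without affecting others. The knot vector, degree, and coefficients below are illustrative, not values from the paper:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree k."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

knots = list(range(8))             # uniform knot vector
degree = 2
n_basis = len(knots) - degree - 1  # 5 basis functions

# A "learned" combination (coefficients are made up for illustration):
coef = [0.2, -0.5, 1.0, 0.3, -0.1]
def spline_encoding(t):
    return sum(c * bspline_basis(i, degree, t, knots) for i, c in enumerate(coef))

t = 3.5  # inside the valid interval [degree, n_basis] = [2, 5]
total = sum(bspline_basis(i, degree, t, knots) for i in range(n_basis))
print(round(total, 6), round(spline_encoding(t), 6))  # the basis functions sum to 1.0
```

Each basis function is non-zero only on a few adjacent knot spans, which is why spline-based dimensions are good at the kind of local, non-periodic adjustments discussed in this reply.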
Summary: This paper proposes LeTE (Learnable Transformation-based Generalized Time Encoding), a flexible and learnable time encoding framework that generalizes existing methods (e.g., Time2Vec, Functional Time Encoding). By parameterizing nonlinear transformations via Fourier series and B-spline functions, LeTE provides a more expressive representation of temporal information, capable of modeling periodic, non-periodic, and mixed time patterns. Extensive experiments on time series forecasting, dynamic graph representation learning, and real-world applications demonstrate its superior performance and generalizability. Claims And Evidence: Most of the key claims in the submission are well supported by empirical evidence and theoretical analysis. Supported Claims: 1. LeTE is a generalization of existing time encoding methods (e.g., Time2Vec, FTE): This claim is backed by formal derivations and proofs (e.g., Proposition 3.1), showing how specific parameter settings in LeTE reduce to previous methods. The argument is mathematically sound and clearly presented. 2. LeTE is capable of capturing a wider range of time patterns (periodic, non-periodic, mixed) The authors support this through both the construction of learnable nonlinear transformations (Fourier and spline-based), and comprehensive empirical evaluations across tasks that exhibit different temporal dynamics. The wide range of tasks (forecasting, dynamic graphs, financial modeling) provides convincing evidence that LeTE handles complex patterns beyond those captured by fixed-function encodings. 3. LeTE achieves better performance with fewer dimensions (higher dimensional efficiency): This is substantiated via ablation experiments (Section 4.5, Figure 5–7), showing that LeTE with 2/8/16 dimensions can outperform traditional FTE with 100 dimensions—a strong empirical support for the efficiency claim. 4. 
LeTE is invariant to time rescaling: This is theoretically demonstrated (Proposition 3.2), similar to prior work on Time2Vec and FTE. The formulation and proof are clear and align with expectations for such encoding schemes. Questionable Claims: 1. LeTE offers enhanced interpretability: While the authors argue that the learned functions are interpretable (due to their basis in Fourier/spline components), interpretability is only briefly mentioned and weakly demonstrated via visualizations (Appendix G.3). The paper could be strengthened by providing concrete examples of how the learned functions reflect real-world temporal patterns, perhaps via visualization or case studies. Methods And Evaluation Criteria: Yes, both the proposed method and the evaluation protocol are appropriate and well-aligned with the problem of time representation learning in machine learning models. Theoretical Claims: The theoretical claims are fairly trivial, so there was no need to verify their correctness in detail. Experimental Designs Or Analyses: Yes, I have reviewed the experimental design and find it generally sound and reasonable. The authors evaluate their proposed time encoding method on a wide set of tasks that represent standard and widely accepted benchmarks in the field, including time series forecasting, dynamic graph learning, and a real-world classification application. Supplementary Material: Yes, I reviewed part of the appendix, mainly Appendix G.3. Relation To Broader Scientific Literature: The paper’s contributions build meaningfully on a well-established line of research in time encoding for temporal machine learning tasks, such as time series analysis and temporal graphs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. As Reviewer 1gph also raised similar concerns, and due to space limitations, **we have addressed some of these points in our response to Reviewer 1gph. Could you kindly review the first part of our reply there? Below is the continuation of our response following the reply to Reviewer 1gph.** **Different Datasets** We reconstruct and plot the non-linear functions for a 4-dim LeTE trained on MOOC/TGN (shown in Fig. 3, in [anony. repo.](https://anonymous.4open.science/r/LeTE_ICML25_Rebuttal/Figure%201-6.pdf)). By comparing these results to those from the Wikipedia dataset (Fig. 1), it can be seen that dim 0 exhibits a lack of periodicity. From the reconstructed equations of dim 0, the higher-frequency terms are generally small. For instance, the coefficients of $\cos(5x')$ and $\sin(5x')$ are relatively small (e.g., -0.0134 and -0.0136), suggesting that their contribution is minimal and insufficient to generate significant fluctuations. As a result, the overall function primarily exhibits slow oscillations, making the plot appear mainly non-periodic within a certain input window. This observation aligns with the findings in our original paper (Appendix G.1 and Fig. 8; this refers to the figure in the original paper), where the spectral entropy statistics also show that the Wikipedia dataset exhibits stronger periodicity than MOOC. Thus, by comparing the plots of LeTE across different datasets, we can indirectly explore the periodic or non-periodic nature of the underlying data. **Different Backbones** We provide plots for the same dataset trained with TGN and DyGFormer, shown in Fig. 4 and Fig. 5, with the y-axes set to the same level for each backbone to facilitate a direct comparison. As the figures show, despite using different backbone models, the learned functions exhibit similar trends and shapes for each dimension.
This illustrates the stability of our TE and makes the interpretability process more reliable. Of course, there may be some detailed differences between LeTEs trained on different models. This is intuitively due to the presence of various influencing factors, such as the model architecture, the interaction of the TE with other modules, the optimization process, etc. However, we can validate the idea by inspecting the plot in a simplified manner. **Comparing lower- and higher-dim LeTE:** We further compare the lower- and higher-dim LeTE by reconstructing the non-linear functions (please refer to and compare Fig. 1 and 6 in the repo.). Intuitively, the higher-dim representation will provide more information. As seen from the plots, dim 2 in Fig. 6 is dominated by the basis function, partially losing the information captured by dim 3 in Fig. 1. From the perspective of the reconstructed functions, for the Fourier-based dims, the LeTE with only 1 Fourier-based dim has a single input transformation, $x'$, and all frequency components are computed based on this transformation. This means the LeTE encodes on a broader time scale (reminder: we use Wikipedia here) and models the time difference variations of editing activities without distinguishing patterns at different scales. Since there is only 1 dim, it is harder for the TE to interpret editing patterns at different time scales. In contrast, for the LeTE with 2 Fourier-based dims, each dim has a different input transformation ($x'_0$ and $x'_1$), enabling the model to capture more detailed editing behaviors at different scales. For example, dim 0 may rely more on $x'_0$ (with a larger scaling factor), focusing on short-term fluctuations (high-frequency), while dim 1 may rely more on $x'_1$ (with a smaller scaling factor), focusing more on long-term trends (low-frequency). Thus, higher dims allow the model to handle behaviors at different time scales, providing higher interpretability.
Similarly, for the LeTE with 1 Spline-based dim, it primarily focuses on adjusting a single level, potentially describing how time affects behaviors. However, relying on just 1 dim makes it difficult to capture more complex time dynamics. For the LeTE with 2 Spline-based dims, the weights of the coefficients are more distributed, granting the overall LeTE stronger local adjustment capabilities. Moreover, since a dim may be dominated by the basis function or the spline functions, higher dims naturally have stronger expressive power. Although higher-dim LeTEs offer stronger performance and better explain the information captured by the model, the interpretability analysis of higher-dim LeTEs becomes more complex and may require a dim-by-dim analysis. **Summary** We thank the reviewer for the suggestions on the interpretability of our method. We will consider adding this discussion to the appendix. Additionally, we have prepared code to process the learned LeTE parameters, reconstruct key non-linear transformations, and visualize them. This code will be updated in our publicly available repo after the review process is complete.
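To make the reconstruction procedure concrete, here is a minimal sketch of how one Fourier-based dimension could be rebuilt from learned parameters (the `scale`, `shift`, and coefficient values are illustrative placeholders, not the trained parameters):

```python
import numpy as np

def fourier_dim(t, scale, shift, cos_coefs, sin_coefs):
    """Reconstruct one Fourier-based encoding dimension from learned
    parameters: an affine input transformation x' = scale*t + shift,
    followed by a truncated Fourier series over harmonics k = 1..K."""
    x = scale * np.asarray(t, dtype=float) + shift
    out = np.zeros_like(x)
    for k, (a, b) in enumerate(zip(cos_coefs, sin_coefs), start=1):
        out += a * np.cos(k * x) + b * np.sin(k * x)
    return out

# Hypothetical coefficients echoing the discussion above (placeholders)
cos_coefs = [0.0, 0.0, 0.29, -0.16, -0.42]  # coefficients of cos(1x')..cos(5x')
sin_coefs = [0.0, 0.0, 0.0, 0.0, -0.0136]   # coefficients of sin(1x')..sin(5x')
t = np.linspace(0.0, 10.0, 200)
values = fourier_dim(t, scale=1.0, shift=0.0,
                     cos_coefs=cos_coefs, sin_coefs=sin_coefs)
```

Plotting `values` against `t` (and inspecting which harmonics carry large coefficients) is the kind of function inspection referred to above.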
Summary: This paper proposes Learnable Transformation-based Generalized Time Encoding, a new approach to encoding time in machine learning tasks. LeTE generalizes popular functional time encoding strategies and makes the non-linear transformation fully learnable. The authors use techniques from Fourier expansions and spline functions to parameterize these transformations, and this flexibility allows for better modeling of periodic, non-periodic, and complex mixed temporal patterns compared to methods that rely on a fixed function. They demonstrate LeTE’s effectiveness on a variety of tasks: event-based image classification, time-series forecasting, dynamic graph link prediction, and a real-world financial risk control application. Claims And Evidence: The claims are supported by the experiments on various tasks: event-based image classification, time-series forecasting, dynamic graph link prediction, and a real-world financial risk control application. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs are sound. Supplementary Material: Yes Relation To Broader Scientific Literature: Time encoding is very important for time-series analysis, sequence modeling, and graph representation learning. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths 1. Clear motivation and novelty: The paper addresses a known limitation in existing time-encoding methods: most rely on fixed periodic assumptions (e.g., sine or cosine), making them less effective for mixed or non-periodic time dynamics. 2. Comprehensive experimental evaluation: The experiments span multiple domains. Weaknesses / Concerns: Interpretability discussion could be expanded: detailed demonstrations of how domain experts might interpret the learned curves (especially in high-dimensional time encodings) could strengthen the real-world applicability argument.
More ablations or visualizations of learned functions in real data scenarios would further highlight interpretability. Other Comments Or Suggestions: N/A Questions For Authors: Are the learned time encoding functions interpretable in real data scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. We are happy to further discuss the interpretability of our method in real data scenarios. We would also like to clarify that demonstrating interpretability for very high-dim encodings is challenging. We choose to use a 4-dim Combined LeTE to present our analysis. The training process and settings are consistent with those used in the main experiments of our paper. We will demonstrate the interpretability of our model from the following perspectives: 1. Reconstructing the learned non-linear transformation functions and plotting them to provide an intuitive analysis. 2. Analyzing each dim to interpret what information it represents. 3. Comparing different datasets under the same backbone. 4. Comparing different backbones' LeTE under the same dataset. 5. Comparing the low- vs. high-dim LeTE to assess the impact of dimensionality on interpretability. **Reconstructing** As discussed in the paper, the previous TEs exhibit a degree of interpretability by using fixed sine functions, which reflect periodic patterns. However, this strong inductive bias also limits their expressiveness and generalization to complex patterns or non-periodicity. In contrast, the LeTE is fully learnable, and the learnable non-linear functions can still be reconstructed and visualized from learned parameters, allowing for interpretability through function inspection. We demonstrate this interpretability using a 4-dim Combined LeTE, trained on Wikipedia/TGN. Fig. 1 (see [anony. repo.](https://anonymous.4open.science/r/LeTE_ICML25_Rebuttal)) shows the learned functions for each dim. The first 2 dims are Fourier-based, and the last 2 are Spline-based. **Analyzing each dim** **Fourier-based:** The Fourier coefficients explicitly encode frequency components, offering an intuitive view of the captured patterns.
Compared to fixed sine functions, our learnable Fourier-based TE captures periodic patterns with finer granularity and greater flexibility, enabling the representation of both periodicity and subtle non-periodicity within specific ranges. For a single dim, low-frequency components capture long-term trends, while high-frequency components focus on short-term fluctuations. As an example, consider the Wikipedia dataset, which records editing activities, where nodes represent users or pages, and edges with timestamps capture editing events (frequency magnitude spectra are shown in Fig. 2; note the inputs are time differences in this case): Dim 0 shows a strong high-frequency response. The learned coefficients include $\cos(3x'): 0.29, \cos(4x'): -0.16, \cos(5x'): -0.42$. This suggests that dim 0 is sensitive to short-term repetitive edits, i.e., high-frequency editing behavior. Dim 1 captures low- to mid-frequency patterns, with large coefficients: $\sin(1x'): 0.96, \sin(4x'): 0.61, \cos(4x'): 0.29$. These reflect longer-term periodic behaviors. For example, frequency-1 may correspond to daily or weekly editing cycles, while frequency-4 may capture sub-daily repeated interactions. This dim may reflect user habits or regular community editing patterns. Thus, Fourier-based dims not only retain the periodic interpretability of sine functions but also exhibit richer frequency composition, allowing them to simultaneously capture both short-term bursts and long-term rhythms. Moreover, this approach could be extended to analyze more complex patterns. As our goal here is to present the underlying idea, we will not go deeper here. **Spline-based:** The Spline functions offer complementary advantages, particularly for non-periodicity. In Spline-based dims, where we applied a basis function (Tanh), if its weight is higher, it may dominate a specific dim—such as dim 2 in Fig. 1. However, there are other dims where Splines dominate, e.g., dim 3.
We illustrate with the specific case of Wikipedia: Dim 2: The output increases monotonically with time difference, indicating a time-decay-like effect — the longer the time since the last edit, the stronger the encoding response. This may suggest the TE has learned that re-activation after long inactivity is a significant event in this specific case. Dim 3: The function exhibits sharp peaks and local bumps, indicating that the TE assigns particular importance to certain time intervals. These may correspond to known active editing windows or reaction delays. The sharpness of some coefficients suggests the TE has captured rare but important temporal phenomena, such as one-off campaigns or anomaly spikes. The Splines inherently capture local time features, indicating specific time intervals that the model considers critical. Sharp peaks in the coefficient curves suggest the occurrence of sudden events or anomalies. This local characteristic is advantageous for identifying rare phenomena. **Due to space limitations and similar concerns raised by Reviewer ozTs, please kindly check the remaining parts in our response to Reviewer ozTs.**
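As a toy illustration of such a spline-based dimension (a global Tanh basis term plus local components), the following sketch uses Gaussian bumps as a stand-in for the actual B-spline basis; all parameter values are hypothetical:

```python
import numpy as np

def spline_dim(t, w_base, centers, widths, weights):
    """One spline-like dimension: a global, monotone Tanh basis term plus
    a weighted sum of local bumps (a stand-in for a B-spline basis), so
    the output can mix a time-decay-like trend with sharp local peaks."""
    t = np.asarray(t, dtype=float)
    out = w_base * np.tanh(t)  # monotone trend, like the dim-2 behavior above
    for c, s, w in zip(centers, widths, weights):
        out += w * np.exp(-((t - c) / s) ** 2)  # local bump near c, like dim 3
    return out

# Hypothetical parameters: a saturating trend plus one sharp peak near t = 2
t = np.linspace(0.0, 5.0, 100)
y = spline_dim(t, w_base=1.0, centers=[2.0], widths=[0.2], weights=[0.8])
```

Inspecting where the bump weights are large is one way to read off the "critical time intervals" described above.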
ASCENSION: Autoencoder-Based Latent Space Class Expansion for Time Series Data Augmentation
Reject
Summary: A summary of the paper: This paper presents ASCENSION, a VAE-based data augmentation framework designed to address distribution discrepancies in class expansion. It uses the latent space to improve the applicability of data augmentation and evaluates ASCENSION’s impact on classification performance across various time-series datasets. Key weaknesses: 1. The claimed novelty of the paper lies in the latent space and the use of VAE for data augmentation in time-series, but these techniques are not fundamentally new. Latent space clustering, in particular, is well-documented and widely used in existing literature. Additionally, the paper lacks a detailed discussion of the specific limitations in time-series scenarios and how ASCENSION addresses class expansion in these contexts. The contributions seem to be a combination of existing methods, tested in a specific setting, without introducing significant new innovations. 2. The motivation of the paper is unclear. In the introduction, the authors mention issues like the lack of research on periodicity in data augmentation, VAE’s limitations in class expansion, and the divergence between training and operational performance, but the specific problem the paper aims to address remains unclear. A clearer definition of the problem would strengthen the paper’s focus. 3. The paper lacks detailed justification for its proposed method, which does not fully align with the stated contributions. For instance, the introduction mentions controlled and progressive expansion of class probability densities and boundaries, as well as preventing harmful overlap, but these concepts are not adequately explored or mathematically developed in the method section. This lack of detail diminishes the clarity and impact of the paper’s contributions.
4. The experimental section does not provide enough detail to demonstrate how ASCENSION effectively expands class probability densities and boundaries as claimed. 5. The paper lacks clarity and coherence, making it difficult to follow. For example, the phrase “demonstrate potential” (Line 17) is uncommon. In Line 42, the term “distribution discrepancy ratio” used to explain "when training and operational data distributions diverge" is redundant and still fails to pinpoint the underlying causes of the limitations. Additionally, key details are scattered across various sections, including the appendix, making it hard to follow the argument. Claims And Evidence: The claims of novelty are problematic because the use of latent space and VAE is already well-documented. The paper also doesn’t clearly explain the specific limitations in time-series and how ASCENSION addresses class expansion. Methods And Evaluation Criteria: The evaluation method in the paper focuses only on overall classification accuracy and does not provide enough detail to demonstrate how ASCENSION effectively expands class probability densities and boundaries as claimed. Additionally, the baseline models used for comparison are not the most recent. Theoretical Claims: The proposed method is relatively simple and primarily descriptive, lacking detailed proofs or a deeper theoretical explanation to support its claims. Experimental Designs Or Analyses: The evaluation method in the paper focuses only on overall classification accuracy and does not provide enough detail to demonstrate how ASCENSION effectively expands class probability densities and boundaries as claimed. Supplementary Material: No. I didn’t find the materials. Relation To Broader Scientific Literature: I believe the paper’s contributions have limited relation to the broader scientific literature. The techniques used, such as latent space clustering and VAE for data augmentation, are not new and have been widely explored in prior work.
Essential References Not Discussed: The paper cites relevant related work on data augmentation but fails to clearly explain the limitations of these methods and how the proposed approach addresses them. The references are not fully used to frame the contributions or justify the motivation. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the review. Below, we clarify the novelty and motivation of our work and provide new results supporting our key hypotheses. ## Summary "**C4.1** *Claimed novelty lies in latent space and use of VAE for DA in time-series, but these are not fundamentally new. **C4.2** Motivation of the paper is unclear. **C4.3** Paper doesn’t clearly explain the limitations in time-series and how ASCENSION addresses class expansion.*" This work is motivated by the fact that most state-of-the-art DA methods for time series focus on intra-class generation. We hypothesize that controlled, progressive class boundary expansion in latent space can boost classification. While prior works (e.g., Modals, [Wa24, Wa24b]) explore class expansion, they do not include gradual control. Our key contribution, the α-scaling mechanism, enables this, going beyond clustering loss combination. Experiments show ASCENSION consistently outperforms baselines: with ResNet, gains of 2.8%–8.5%, and up to 30.2% vs. KoVAE; with FCN, 1.3%–13.2%, and up to 31.7%. A new ablation study (added to rebuttal) attributes 7%–61% of the gains to α-scaling. Due to space limits, we refer the reviewer to our response to Reviewer h2Y9 (C3.5) for more details about this ablation study. These new results will be included in the revised paper. [Wa24] Wang, T. et al. (2024) Fine-grained Control of Generative Data Augmentation in IoT Sensing. Advances in Neural Information Processing Systems, 37, 32787-32812. [Wa24b] Wang, T. et al. (2024) Data augmentation for human activity recognition via condition space interpolation within a generative model. ICCCN (2024). "**C4.4** *Paper lacks clarity and coherence; Key details are scattered across sections, making it hard to follow...*" Thank you for pointing out the language issues - we’ve corrected them. We’re also open to any specific suggestions the reviewer may have regarding content organization.
## Methods And Evaluation Criteria "**C4.5** *Evaluation method focuses only on overall classification accuracy and does not provide enough detail to demonstrate how ASCENSION effectively expands class probability densities/boundaries. Baseline models are not the most recent*" Section 4.2.6 and Fig. 5 qualitatively assess class expansion risks, with quantitative details in Appendix D. Based on reviewer suggestions, we added ImagenTime, Diffusion-TS, and KoVAE to our baselines. ASCENSION outperforms all: for FCN, they rank 3rd, 5th, and 9th in total accuracy; for ResNet, 4th, 6th, and 9th. Summary results appear below; full results available at: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/comparison_results

- ResNet

| Method | ↑Nb Augmented | ↑Augmented mean acc | Nb Unchanged | Unchanged mean acc | ↓Nb Worsened | ↑Worsened mean acc | Nb Total | ↑Total mean acc |
|--------------|--------|---------|----|-----|--------|----------|-----|----------|
| ASCENSION    | **56** | **4.0** | 16 | 0.0 | **30** | **-1.7** | 102 | **1.7**  |
| ImagenTime   | 26     | 1.8     | 17 | 0.0 | 59     | -6.2     | 102 | -3.1     |
| Diffusion-TS | 30     | 1.3     | 6  | 0.0 | 66     | -9.2     | 102 | -5.5     |
| KoVAE        | 1      | 0.7     | 6  | 0.0 | 95     | -30.6    | 102 | -28.5    |

- FCN

| Method | ↑Nb Augmented | ↑Augmented mean acc | Nb Unchanged | Unchanged mean acc | ↓Nb Worsened | ↑Worsened mean acc | Nb Total | ↑Total mean acc |
|--------------|--------|---------|----|-----|--------|----------|-----|----------|
| ASCENSION    | **50** | **3.0** | 13 | 0.0 | **39** | **-1.4** | 102 | **1.0**  |
| ImagenTime   | 25     | 2.8     | 13 | 0.0 | 64     | -3.0     | 102 | -1.2     |
| Diffusion-TS | 37     | **3.0** | 7  | 0.0 | 58     | -14.8    | 102 | -7.3     |
| KoVAE        | 3      | 7.1     | 2  | 0.0 | 97     | -32.5    | 102 | -30.7    |

## Theoretical Claims "**C4.6** *The method lacks detailed proofs or a deeper theoretical explanation to support its claims.*" Despite its
simplicity, extensive empirical comparisons substantiate our method’s effectiveness against SoTA alternatives. The new ablation study quantifies the impact of the α-scaling mechanism and the contrastive loss (cf. response to Reviewer h2Y9 - C3.5). Although the contrastive loss benefits are established (>2% accuracy increase), notably, the α-scaling mechanism contributes a significant share of the accuracy improvement (7 to 44% over ResNet and 10 to 61% over FCN). Existing sections will be refined to enhance clarity of the justification. ## Supplementary Material "**C4.7** *No. I didn't find the materials.*" Supplementary materials are available on our anonymous GitHub (see Sec. 6): https://github.com/ASCENSION-PAPER ## Relation To Broader Scientific Literature "**C4.9** *Paper fails to explain limitations of SoTA methods and how ASCENSION addresses them.*" Appendix A outlines key limitations (e.g., temporal distortion, GAN instability) and explains how ASCENSION advances the state of the art. Figure 6 gives a timeline of all baselines.
Summary: This paper introduces a VAE-based generative data augmentation approach for time-series data called ASCENSION. This work aims at progressively expanding inter-class boundary during the generation, enabling the exploration of underrepresented or unseen latent distribution in the training data. The major technical innovation lies in the design of clustering loss and iterative training process. Comprehensive experiments are conducted to demonstrate the effectiveness of ASCENSION. Claims And Evidence: The experimental results presented in this paper provide strong support for the claims made. However, the assertion that "To our knowledge, no state-of-the-art DA method for time-series classification enables progressive (iterative) and meaningful class boundary expansion during synthetic data generation" may require reconsideration. The work presented in [1] also appears to propose a progressive class boundary expansion for time-series generative data generation. While the methodologies differ—ASCENSION manipulates features in the latent space, whereas [1] focuses on controlling conditions—it would be beneficial to compare these two approaches and revise the original claim accordingly. This comparison would provide a more comprehensive and accurate representation of the current state of the art in this field. [1] Fine-grained Control of Generative Data Augmentation in IoT Sensing, NeurIPS 2024 Methods And Evaluation Criteria: The proposed approach of incorporating clustering loss into VAE training appears intuitive and straightforward. However, the implementation details are not clearly outlined. Specifically, how is the clustering loss computed during training? Is the distance loss calculated for each data point within a batch? How does this approach compare to or differ from contrastive learning? Additionally, how does the weighting of the loss terms impact the final performance? 
The paper also introduces an iterative training approach, but the rationale behind this choice is not clearly explained. A more straightforward alternative could be to control class expansion by simply adjusting the clustering loss. The authors' own analysis at Line 651 suggests that the iterative training method is prone to instability and may lead to error accumulation over time. This instability raises concerns about the practical applicability of the technique. To strengthen their case, the authors should provide a more comprehensive justification for the iterative training approach. Specifically, they need to explain its fundamental advantages over other potential methods. A comparative analysis demonstrating why this approach was selected over seemingly simpler and potentially more stable alternatives would greatly enhance the credibility and value of their proposed method. Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: The experimental work presented in this paper is comprehensive. However, a significant issue is the absence of an ablation study. Such a study would be particularly valuable in demonstrating the individual impacts of two key components: the clustering loss and the iterative training approach. Specifically, it would be beneficial to understand how each of these elements independently contributes to the overall performance of the proposed method. Supplementary Material: Related work, additional experiments and implementation details are described in the supplementary material. Relation To Broader Scientific Literature: As mentioned in **Claims And Evidence**, prior research has explored the potential of generating inter class synthetic samples as a data augmentation approach [1][2]. It would be beneficial to do a wider survey and incorporate the findings into the related work. 
[1] Fine-grained Control of Generative Data Augmentation in IoT Sensing [2] Data augmentation for human activity recognition via condition space interpolation within a generative model Essential References Not Discussed: Please see **Relation To Broader Scientific Literature**. Other Strengths And Weaknesses: This paper proposed an intuitive improvement over VAE for time-series data augmentation. However, there remains ambiguity regarding the implementation of the method that requires further clarification. The motivation of iterative training also stays unclear. A large amount of experiments are conducted to show that ASCENSION's comparative performance over the baselines in various TSC tasks. But ablation studies are required in order to prove the validity of the design choices. Other Comments Or Suggestions: The paper is well-written and easy to follow. There is no apparent typo or formatting issue that I noticed. Questions For Authors: The paper exhibits strong merit through its thorough comparative experiments. Its findings will likely serve as valuable reference points for researchers in the field. While I have raised several questions in my previous comments, particularly regarding the iterative training justification, progressive boundary expansion claims, and the need for ablation studies, I remain positive about the overall contribution. I would be inclined to provide a higher rating if the authors can address these concerns with reasonable explanations. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s detailed and constructive comments. Below, we address each of the raised concerns, clarify the novelty and rationale of ASCENSION compared to the referenced works, and provide ablation study results. ## Claims And Evidence & Relation To Broader Scientific Literature "**C3.1** *the assertion that 'no SoTa DA method for time-series classification enables progressive (iterative) and meaningful class boundary expansion...' may require reconsideration. [1] also appears to propose a progressive class boundary expansion... prior research has explored the potential of generating inter class synthetic samples [1,2]. It would be beneficial to do a wider survey and incorporate the findings into the related work.*" References [1, 2] explore class boundary expansion via inter-class interpolation, but our approach differs in two key ways: (i) we extrapolate within a single class for controlled expansion, and (ii) the process is iterative and progressive—*starting small and expanding gradually (cf. Fig. 1)*—while assessing risk during sample generation, eliminating the need to assign class labels. We thank the reviewer for pointing out these works and will include them in the revised Related Work (App. A). Unfortunately, as public implementations are not yet available, we could not include them in our comparison. Nonetheless, we added three new DA baseline methods—ImagenTime, DiffusionTS, and KoVAE—as suggested by Reviewer SfpM. The full version of the results is available at: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/comparison_results ## Methods And Evaluation Criteria "**C3.2** *how is the clustering loss computed during training? Is the distance loss calculated for each data point within a batch? how does the weighting of the loss terms impact the final performance?*" $\mathcal{L}_{cluster}$ is a contrastive loss computed per batch and back-propagated with the total loss via SGD.
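As an illustration of what such a per-batch contrastive clustering loss can look like, the following sketch pulls same-class latent codes together and pushes different-class codes apart up to a margin (a generic contrastive form, not necessarily our exact implementation or weighting):

```python
import numpy as np

def cluster_loss(z, labels, margin=1.0):
    """Generic per-batch contrastive clustering loss: squared distance
    for same-class pairs (attraction), hinged margin term for
    different-class pairs (repulsion), averaged over all batch pairs."""
    z = np.asarray(z, dtype=float)
    labels = np.asarray(labels)
    loss, n_pairs = 0.0, 0
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d = np.linalg.norm(z[i] - z[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # pull same-class codes together
            else:
                loss += max(0.0, margin - d) ** 2   # push different-class codes apart
            n_pairs += 1
    return loss / max(n_pairs, 1)

# Two tight, well-separated class clusters give a small loss;
# interleaved labels on the same codes give a large one.
z_batch = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
well_separated = cluster_loss(z_batch, [0, 0, 1, 1])
mixed = cluster_loss(z_batch, [0, 1, 0, 1])
```

In training, this term would be added to the VAE reconstruction and KL terms and back-propagated jointly, as described above.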
We initially used uniform weights, but following the reviewer’s suggestion, we are running a grid search over 256 combinations. So far, 2 have been tested, showing only a 0.02% accuracy variation—too early for conclusions, but more results will be included in the revised version and Supplementary Material. "**C3.3** *How does this approach compare to or differ from contrastive learning?*" We thank the reviewer for noting the imprecise description, $\mathcal{L}_{cluster}$ is in fact a contrastive loss and this will be clarified in the revised version. "**C3.4** *The rationale behind this (iterative training) choice is not clearly explained. A more straightforward alternative could be to control class expansion by simply adjusting the clustering loss. The authors' own analysis at Line 651 suggests that the iterative training method is prone to instability ... they need to explain its fundamental advantages over other potential methods*" The rationale for iterative training stems from our hypothesis that progressively and controllably expanding class boundaries in latent space improves classification. While prior works (e.g., Modals and [1]) have explored class expansion, they lack mechanisms for gradual control. Our α-scaling mechanism preserves clear class boundaries before extrapolating outside a class space—unlike approaches that simply reduce clustering loss early on. Despite some instability, ASCENSION shows consistent improvements across all UCR 102 datasets: with ResNet, gains range from 2.8%–8.5% (Table 1) and up to 30.2% vs. KoVAE; with FCN, 1.3%–13.2%, and up to 31.7% vs. KoVAE. Due to space limits, we refer the reviewer to the new results table in our response to Reviewer SfpM (see C2.2). An ablation study (detailed below) attributes 7%–61% of these improvements to progressive class expansion. These findings will be included in the revised version. 
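The gradual-control idea behind α-scaling can be sketched as follows: fit a Gaussian to one class's latent codes, then sample with the standard deviation inflated by a growing α, so each augmentation step extends slightly further past the original class boundary (an illustrative reading of the mechanism, not the paper's exact procedure):

```python
import numpy as np

def alpha_expand(z_class, alpha, n_samples, rng):
    """One expansion step: sample around the class's latent mean with the
    per-dimension std scaled by alpha; alpha slightly above 1 stays close
    to the class, while larger alpha probes further outside its boundary."""
    mu = z_class.mean(axis=0)
    sigma = z_class.std(axis=0)
    return rng.normal(mu, alpha * sigma, size=(n_samples, z_class.shape[1]))

rng = np.random.default_rng(0)
z_class = rng.normal(0.0, 1.0, size=(200, 8))  # stand-in latent codes of one class
step1 = alpha_expand(z_class, alpha=1.1, n_samples=500, rng=rng)  # cautious first step
step2 = alpha_expand(z_class, alpha=1.5, n_samples=500, rng=rng)  # later, wider step
```

Generated samples from each step would then be decoded and risk-checked before being added to the training set.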
## Experimental Designs Or Analyses "**C3.5** *A significant issue is the absence of an ablation study; it would be beneficial to understand how each of these elements independently contributes to the overall performance.*" An ablation study was conducted for the rebuttal to assess the individual roles of the clustering loss and α-scaling mechanism - full results at: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/ablation_study/ + the α-scaling mechanism significantly improves performance, with median gains of 0.3–0.5% and top-quartile gains over 1.9%, accounting for 7–44% (ResNet) and 10–61% (FCN) of the accuracy gap with baselines. On 102 datasets, the performance delta (mean accuracy step>1 – step=1) is: | Classifier | Q1 | Median | Q3 | IQR | |------------|--------|--------|--------|--------| | ResNet | 0.00000 | 0.00502 | 0.01923 | 0.01923 | | FCN | 0.00000 | 0.00333 | 0.01346 | 0.01346 | + removing the clustering loss yields only a 1.3% average gain (vs. 4% with the full method), with benefits limited to the first augmentation step. Subsequent degradation highlights its key role in supporting progressive expansion. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' reply. I would like to raise my rating.
Summary: This paper introduces ASCENSION, a VAE-based data augmentation (DA) technique tailored for time series classification (TSC). The core idea centers on a controllable and progressive latent space class expansion mechanism, leveraging the structured latent space of VAEs. ASCENSION aims to overcome the limitations of traditional and generative DA methods—particularly class boundary rigidity and overfitting to the training distribution. The model introduces a clustering loss to ensure intra-class compactness and inter-class separability and iteratively expands latent distributions with a tunable α-scaling factor. Empirical evaluations are conducted across 102 datasets from the UCR archive, comparing ASCENSION against DA baselines. Results show that ASCENSION achieves the most consistent classification gains with fewer instances of performance degradation. Claims And Evidence: The paper’s claims regarding robust performance gains, latent class expansion, and distributional generalization are somewhat supported by empirical evidence. However, there are concerns: - Several notations and definitions (e.g., $L_{class}$, $K_y$) are poorly explained or introduced late, reducing clarity around the contribution and optimization objectives. - The absence of fair comparisons with strong generative baselines (e.g., DiffusionTS, KoVAE, ImagenTime) limits the strength of the benchmarking. Methods And Evaluation Criteria: The evaluation across 102 datasets is impressive in scope. However, evaluation fairness is questionable, as there are state-of-the-art time series generation methods that are not compared, such as ImagenTime [1], DiffusionTS [2], KoVAE [3], and GT-GAN [4]. [1] Naiman, Ilan, et al. "Utilizing image transforms and diffusion models for generative modeling of short and long time series." [2] Yuan, Xinyu, and Yan Qiao. "Diffusion-ts: Interpretable diffusion for general time series generation." [3] Naiman, Ilan, et al. 
"Generative modeling of regular and irregular time series data via Koopman VAEs." [4] Jeon, Jinsung, et al. "GT-GAN: General purpose time series synthesis with generative adversarial networks." Theoretical Claims: No Experimental Designs Or Analyses: The experimental setup is mainly sound, with multiple classifiers (ResNet/FCN) and diverse datasets. However: - The use of the $L_{class}$ loss is unexplained in Section 3 before its inclusion in the final loss formulation. - It is unclear what $K_y$ represents in Equation (3)—the number of mixture components? - Fairness of experimental baselines is questionable due to conditional generation mismatch. - No ablations are presented to show the necessity of the clustering loss or the $\alpha$-scaling expansion mechanism independently. Supplementary Material: N/A Relation To Broader Scientific Literature: It aims to provide a robust data augmentation method. Essential References Not Discussed: The paper omits several recent generative time series models, DiffusionTS, GT-GAN, KoVAE, and ImagenTime, which offer stronger baselines for time series generation. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: None Questions For Authors: Why is $\mathcal{L}_{cluster}$ used and not simply a contrastive loss? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and constructive critique. We provide the requested clarifications below and report comparison results with the suggested baselines. We will include all this in our revised manuscript. ## Claims And Evidence & Methods And Evaluation Criteria "**C2.1** *Several notations and definitions ($\mathcal{L}_{class}$, $K_y$) are poorly explained, reducing clarity around the contribution*" To enhance clarity, these notations will be further explained in the revision. Specifically, $\mathcal{L}_{class}$ denotes the contrastive loss, while $K_y$ represents the number of points for class $y$ at the current augmentation step. The paper will be reorganized so that $\mathcal{L}_{class}$ and $K_y$ are defined as soon as they become relevant. "**C2.2** *The absence of fair comparisons with strong generative baselines such as ImagenTime [1], DiffusionTS [2], KoVAE [3], GT-GAN [4]*" We have conducted additional experimental comparison studies for the rebuttal, incorporating the suggested methods. These new results will be incorporated into Section 4 (Results). Unfortunately, we were unable to reproduce GT-GAN, as the available code contains hard-coded parameters and would require substantial modifications. As it stands, the implementation appears difficult to reproduce without further clarification or updates. A summary of the new results is provided below, demonstrating that ASCENSION significantly outperforms the evaluated baselines. ImagenTime, Diffusion-TS, and KoVAE are respectively ranked -- in terms of "total" accuracy performance -- 3rd, 5th and 9th for FCN (among all the evaluated baselines), and 4th, 6th and 9th for ResNet.
Full version of the new experiments (for all datasets) is available at: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/comparison_results - ResNet | Method | ↑Nb Augmented | ↑Augmented mean acc | Nb Unchanged | Unchanged mean acc | ↓Nb Worsened | ↑Worsened mean acc | Nb Total | ↑Total mean acc | |--------------|---------------|---------------------|--------------|--------------------|--------------|--------------------|----------|-----------------| | ASCENSION | **56** | **4.0** | 16 | 0.0 | **30** | **-1.7** | 102 | **1.7** | | ImagenTime | 26 | 1.8 | 17 | 0.0 | 59 | -6.2 | 102 | -3.1 | | Diffusion-TS | 30 | 1.3 | 6 | 0.0 | 66 | -9.2 | 102 | -5.5 | | KoVAE | 1 | 0.7 | 6 | 0.0 | 95 | -30.6 | 102 | -28.5 | - FCN | Method | ↑Nb Augmented | ↑Augmented mean acc | Nb Unchanged | Unchanged mean acc | ↓Nb Worsened | ↑Worsened mean acc | Nb Total | ↑Total mean acc | |--------------|---------------|---------------------|--------------|--------------------|--------------|--------------------|----------|-----------------| | ASCENSION | **50** | **3.0** | 13 | 0.0 | **39** | **-1.4** | 102 | **1.0** | | ImagenTime | 25 | 2.8 | 13 | 0.0 | 64 | -3.0 | 102 | -1.2 | | Diffusion-TS | 37 | **3.0** | 7 | 0.0 | 58 | -14.8 | 102 | -7.3 | | KoVAE | 3 | 7.1 | 2 | 0.0 | 97 | -32.5 | 102 | -30.7 | ## Experimental Designs Or Analyses "**C2.3** *Fairness of experimental baselines is questionable due to conditional generation mismatch.*" We are not sure we understand what the reviewer means by "conditional generation mismatch". We would be happy to follow up on this topic if the reviewer can clarify. "**C2.4** *No ablations are presented to show the necessity of the clustering loss or the α-scaling expansion mechanism independently*" Two ablation studies were conducted to assess the individual roles of the clustering loss and α-scaling mechanism. These new studies will be added to the revised version.
Full results: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/ablation_study The results and findings of these two new studies are presented below: + the α-scaling mechanism significantly improves performance, with median gains of 0.3–0.5% and top-quartile gains over 1.9%, accounting for 7–44% (ResNet) and 10–61% (FCN) of the accuracy gap with baselines. On 102 datasets, the performance delta (mean accuracy step>1 – step=1) is: | Classifier | Q1 | Median | Q3 | IQR | |------------|--------|--------|--------|--------| | ResNet | 0.00000 | 0.00502 | 0.01923 | 0.01923 | | FCN | 0.00000 | 0.00333 | 0.01346 | 0.01346 | + Removing the clustering loss yields only a modest average accuracy gain of 1.3%, compared to 4% with the full method. Notably, improvements are mostly observed in the initial augmentation step, while later progressive steps lead to rapid accuracy degradation without the clustering loss, thus highlighting its essential role in sustaining performance throughout the augmentation process. "**C2.5** *Questions For Authors - Why $\mathcal{L}_{cluster}$ is used and not simply contrastive loss?*" The term $\mathcal{L}_{cluster}$ in our paper refers to a contrastive loss. We thank the reviewer for highlighting the lack of clarity and will revise the manuscript accordingly.
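As background for the two ablated components, here is a minimal sketch of what a progressive α-scaling expansion of one class's latent distribution could look like (NumPy; the latent dimension, α value, and function name are our illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def expand_and_sample(mu, sigma, alpha, step, n_samples, rng):
    """Sample synthetic latents from a per-class Gaussian whose spread is
    progressively enlarged by the alpha-scaling factor at each step."""
    scaled_sigma = sigma * (alpha ** step)  # step 0 = original distribution
    return rng.normal(mu, scaled_sigma, size=(n_samples, mu.shape[0]))

rng = np.random.default_rng(0)
mu, sigma = np.zeros(8), np.ones(8)  # latent stats of one class (toy values)
base = expand_and_sample(mu, sigma, alpha=1.5, step=0, n_samples=5000, rng=rng)
expanded = expand_and_sample(mu, sigma, alpha=1.5, step=2, n_samples=5000, rng=rng)
# Spread grows geometrically with the augmentation step (1.5**2 = 2.25x here),
# pushing synthetic samples beyond the original class boundary.
print(base.std(), expanded.std())
```

In this picture, the clustering loss is what keeps the expanded class distributions compact and mutually separable as α pushes samples outward; without it, later expansion steps blur class boundaries, which is consistent with the degradation the ablation reports.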
Summary: This work introduces a data augmentation method for time series data called ASCENSION. The method is based on the classic VAE-based training and sampling process, but the authors incorporated a clustering loss into training for better classification performance. The proposed method iteratively augments the dataset by repeating the training and sampling process multiple times. The proposed method is systematically evaluated and applied to the UCR datasets, and shows stable performance improvements over benchmarks. Claims And Evidence: The authors systematically evaluated their method on all UCR datasets, and demonstrated clear performance improvement over baseline methods, which supports their claims. The authors also provided sufficient ablation analysis to understand the effective components of the proposed method, showing how the hyperparameters can affect performance. Methods And Evaluation Criteria: The proposed method, especially the iterative training of the VAE and sampling of its latent space, makes a lot of sense for UCR datasets, as such datasets are often of small scale. The authors evaluated their method on all 102 UCR datasets, providing a systematic evaluation of the proposed method. It would make more sense if the authors could also apply their method on UEA datasets for multivariate time series classification tasks. Theoretical Claims: There are no theoretical claims. All analytical results seem proper to me. Experimental Designs Or Analyses: The authors evaluated their method systematically on the UCR Benchmark. The authors also carefully selected the baseline methods, and provided systematic evaluation of the baseline methods on all datasets. It is indeed a common problem that data augmentation methods provide varying performance boosts/degradation on diverse sets of datasets, and the authors provided experiments to look at this problem carefully, which makes the provided results sound.
One issue is that the ablation experiments are conducted on very specific datasets. Can the authors provide additional rationale on using certain datasets (instead of using other ones) for ablation analysis? Supplementary Material: I did not review the supp materials. Relation To Broader Scientific Literature: The authors nicely summarized the broader scientific literature in the related work section. Essential References Not Discussed: VAE-based augmentation method is classical and has been widely investigated, and the authors can provide slightly more comprehensive literature search to include more related work. Yet the current references are sufficient for reader to understand the presented work. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our work and offer such constructive feedback. We appreciate the recognition of the systematic nature of our experiments, as well as the suggestions to discuss the application of our method to multivariate data and to expand our ablation analyses. ## Methods And Evaluation Criteria "**C1.1** *It would make more sense if the authors can also apply their method on UEA datasets for multivariate time series classification tasks.*" We agree with the reviewer that our method can, in theory, be applied to multivariate time series classification tasks. However, this would require changing the encoder to account for the multiple dimensions, as well as specific parameter tuning. This is why we focus our extensive study on univariate time series. We will include a sentence discussing the potential extension to multivariate data, as the rebuttal period is (unfortunately) too short to conduct such adaptations and experiments. ## Experimental Designs Or Analyses "**C1.2** *One issue is that the ablation experiments are conducted on very specific datasets. Can the authors provide additional rationale on using certain datasets (instead of using other ones) for ablation analysis?*" Regarding the study associated with Figures 7 to 16 (impact of the number of iterations on classification performance), we chose -- for conciseness -- to present results for one representative dataset per UCR category, e.g., one from the 6 ECG Signal datasets, one from the 8 Device datasets, etc. Full results are available in the supplementary material at: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/alpha_study We believe this subset sufficiently captures the overall trends, which align with the discussion in Appendix C. If the remaining graphs are deemed essential, we are happy to generate and include them in the Supplementary Material (given this requires significant computation time).
We thank the reviewer for highlighting this and will add a clarifying note in the revised version for transparency. ## Essential References Not Discussed "**C1.3** *VAE-based augmentation method is classical and has been widely investigated, and the authors can provide slightly more comprehensive literature search to include more related work.*" In the initial version of the paper, we did include a comparison with VaDE, a seminal VAE-based data augmentation technique. In addition to VaDE, we incorporated a recent method, KoVAE [Na23], in our new experiments (conducted for the rebuttal), as well as two additional diffusion model-based DA methods, Diffusion-TS [Yu24] and ImagenTime [Na24], in order to strengthen the benchmark study. A summary of the new results is provided below, demonstrating that ASCENSION significantly outperforms all the evaluated baselines (incl. KoVAE). Full version of the new experiments (for all datasets) is available at: https://github.com/ASCENSION-PAPER/ASCENSION/tree/main/comparison_results These new results will be added to the result section (Sec. 4), and the methods discussed in the Related Work (Appendix A), which covers the history of VAE-, diffusion model- and GAN-based data augmentation methods. We are happy to discuss additional references that the reviewer would provide.
- ResNet | Method | ↑Nb Augmented | ↑Augmented mean acc | Nb Unchanged | Unchanged mean acc | ↓Nb Worsened | ↑Worsened mean acc | Nb Total | ↑Total mean acc | |--------------|---------------|---------------------|--------------|--------------------|--------------|--------------------|----------|-----------------| | ASCENSION | **56** | **4.0** | 16 | 0.0 | **30** | **-1.7** | 102 | **1.7** | | ImagenTime | 26 | 1.8 | 17 | 0.0 | 59 | -6.2 | 102 | -3.1 | | Diffusion-TS | 30 | 1.3 | 6 | 0.0 | 66 | -9.2 | 102 | -5.5 | | KoVAE | 1 | 0.7 | 6 | 0.0 | 95 | -30.6 | 102 | -28.5 | - FCN | Method | ↑Nb Augmented | ↑Augmented mean acc | Nb Unchanged | Unchanged mean acc | ↓Nb Worsened | ↑Worsened mean acc | Nb Total | ↑Total mean acc | |--------------|---------------|---------------------|--------------|--------------------|--------------|--------------------|----------|-----------------| | ASCENSION | **50** | **3.0** | 13 | 0.0 | **39** | **-1.4** | 102 | **1.0** | | ImagenTime | 25 | 2.8 | 13 | 0.0 | 64 | -3.0 | 102 | -1.2 | | Diffusion-TS | 37 | **3.0** | 7 | 0.0 | 58 | -14.8 | 102 | -7.3 | | KoVAE | 3 | 7.1 | 2 | 0.0 | 97 | -32.5 | 102 | -30.7 | [Na23] Naiman, I. et al. (2023). Generative modeling of regular and irregular time series data via Koopman VAEs. arXiv preprint arXiv:2310.02619. [Na24] Naiman, I., Berman, N. et al. (2024). Utilizing image transforms and diffusion models for generative modeling of short and long time series. Advances in Neural Information Processing Systems, 37, 121699-121730. [Yu24] Yuan, X., & Qiao, Y. (2024). Diffusion-ts: Interpretable diffusion for general time series generation. arXiv preprint arXiv:2403.01742.
GANQ: GPU-Adaptive Non-Uniform Quantization for Large Language Models
Accept (poster)
Summary: The paper proposes GANQ, a GPU-Adaptive Non-Uniform Quantization framework leveraging lookup table (LUT)-based mixed-precision GEMM for efficient deployment of Large Language Models (LLMs). GANQ introduces a post-training quantization optimization algorithm to effectively solve LUT-based quantization objectives, significantly reducing both quantization cost and quantization errors. Experiments show GANQ outperforms existing methods in reducing perplexity gaps at 3-bit and 4-bit quantization, achieving up to 2.57× inference speedup on an NVIDIA RTX 4090 GPU. Claims And Evidence: Please refer to "*Methods And Evaluation Criteria*". Methods And Evaluation Criteria: Strengths: + Introduces GANQ, an innovative GPU-adaptive, LUT-based non-uniform quantization method, demonstrating clear perplexity improvements for quantized OPT and Llama models. + Provides promising inference efficiency improvements (up to 2.57× speedup) on real GPU hardware (NVIDIA RTX 4090). Weaknesses: - Accuracy evaluations are limited, lacking tests on more recent models (e.g., Llama-3.1), long-context scenarios (LongBench), and reasoning-intensive benchmarks (e.g., GSM8K). - Efficiency benchmarks only compare against GPTQ, with reported performance gains notably inconsistent (lower) compared to prior literature where GPTQ can gain 4x higher throughput. - Experimental setups for latency measurement lack critical details such as batch size and input token length, hindering clarity, reproducibility, and proper interpretation of the results. Theoretical Claims: Please refer to "*Methods And Evaluation Criteria*". Experimental Designs Or Analyses: Please refer to "*Methods And Evaluation Criteria*". Supplementary Material: Yes. Supplementary materials provided extra accuracy evaluation results. Relation To Broader Scientific Literature: This paper contributes to LLM quantization for efficient LLM inference. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please refer to "*Methods And Evaluation Criteria*". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the feedback. Below, we address your concerns one by one. **Response to Weakness 1** We acknowledge the value of broader evaluation and have conducted additional experiments. Below are WikiText PPL results for **LLaMA-3.2** models, using the same settings as in the paper. ||1B|3B|1B-Instruct|3B-Instruct |-|:-:|:-:|:-:|:-: |FP16|9.76|7.81|13.16|11.05 |(4-bit / 3-bit) |RTN|18.08 / 2.6e3|10.53 / 4.8e2|22.91 / 7.0e3|15.58 / 5.7e2 |GPTQ|24.07 / 1.7e2|6.0e3 / 6.3e3|18.40 / 1.1e2|4.5e3 / 3.3e3 |OmniQuant|12.90 / 4.3e2|8.87 / 14.82|16.31 / 34.89|12.33 / 19.55 |GANQ|**10.78** / **15.91**|**8.35** / **10.85**|**14.36** / **21.32**|**11.99** / **15.84** These results confirm that GANQ consistently outperforms baselines with superior and more stable performance. While OmniQuant learns the quantization factors and is relatively robust, it still fails in some cases (e.g., 3-bit for LLaMA-3.2-1B). RTN and GPTQ show significant sensitivity to weight distribution and degradation, with GPTQ notably unstable on 3B models. To explore this, we test GPTQ with a group size of 128 to mitigate outlier effects (same table head): |||||| |-|:-:|:-:|:-:|:-: |GPTQ (g128)|15.37 / 72.00|1.1e2 / 1.1e3|16.03 / 28.31|49.14 / 6.9e2 Using a group size of 128 improves GPTQ's performance on 1B models, but it still performs poorly on 3B models, highlighting its sensitivity to weight distribution. We initially did not include long-context or reasoning-intensive tasks as we evaluated base models without task-specific fine-tuning (e.g., LLaMA-2-7B). Following standard practice [1,2], we used Table 3 tasks to assess general capabilities. We test LongBench and GSM8K on LLaMA-3.2 1B/3B-Instruct and their quantized versions.
||1B-Instruct||3B-Instruct|| |-|-|-|-|-| ||LongBench|GSM8K (%)|LongBench|GSM8K (%) |FP16|11.5|32.90|12.7|64.97 |RTN|0.2|4.17|12.1|37.68 |GPTQ|error|11.14|error|~0.0 |OmniQuant|8.9|16.53|11.5|54.74 |GANQ|11.7|27.75|12.5|60.50 GANQ consistently outperforms the baselines. For GPTQ, consistent with the PPL results on 3B-Instruct, GSM8K accuracy is near zero. On LongBench, GPTQ failed with a "skip_special_tokens" error, despite identical settings across all methods and use of the official toolkit [5]. We plan to incorporate these results into the revision. **Response to Weakness 2** In Table 5, we compare the efficiency of dequantization-based and LUT-based methods. Following [3], we use GPTQ as a representative baseline for dequantization-based methods, as similar methods like RTN and OmniQuant can generally share the same inference kernel and offer comparable efficiency. We will clarify this in the revision. We acknowledge that the GPTQ efficiency reported in Table 5 is lower than that in the original GPTQ paper. This discrepancy is mainly due to differences in the inference kernel, hardware, and model size, with the kernel likely being the key factor. As noted in Section 4.1, we followed [3] and used the GPTQ-for-LLaMA [4] inference kernel for GPTQ-quantized models. The reported results reflect actual observations. However, since [4] is implemented in Triton, and our experiments were conducted on an NVIDIA RTX 4090, potential incompatibilities or limited hardware support may explain the slower inference in our setup. As for the metric, as noted in Section 4.1, we followed [1,3], using a batch size of 1 to generate 1024-token sequences and reporting total CUDA time. To further validate this, we performed an additional test using GPTQ's official CUDA kernel implementation [1] (available only for 3-bit quantization and OPT models). The results (total CUDA time in seconds) are as follows: ||FP16|GPTQ(3bit)|GANQ(3bit) |-|-|-|-| |OPT-6.7B|16.784|6.543|6.519 The results are consistent.
We will incorporate these findings and clarifications into the revision to resolve potential confusion clearly. Finally, we would like to emphasize that our main contribution is the proposed LUT-based non-uniform quantization method, which improves quantization accuracy. For inference efficiency, it can similarly benefit from advances in LUT-based kernel implementations, as dequantization-based methods do with their kernels. **Response to Weakness 3** As detailed in Section 4.1 (Lines 313–318, right column) and reiterated in Response to Weakness 2, we follow the evaluation setup used in prior works [1,3] to measure inference latency. Specifically, we report the total CUDA time required for the model to generate 1024 tokens, using a batch size of 1 and no initial input tokens. We will make these settings more explicit in the revised paper to improve clarity and reproducibility. [1] Frantar, Elias, et al. "Gptq: Accurate post-training quantization for generative pre-trained transformers." [2] Sun, Mingjie, et al. "A simple and effective pruning approach for large language models." [3] Kim, Sehoon, et al. "Squeezellm: Dense-and-sparse quantization." [4] https://github.com/qwopqwop200/GPTQ-for-LLaMa [5] https://github.com/THUDM/LongBench --- Rebuttal Comment 1.1: Comment: - Weakness 2: The authors' focus on the 3-bit latency measurement in their rebuttal misses a key point. The primary comparison for efficiency in Table 5 is based on 4-bit quantization, and I would expect to see evaluation results for that case instead. The authors' response seems to sidestep the central issue. In the original GPTQ implementation (or the Marlin library, TinyChat library, TensorRT-LLM), GPTQ (or any other 4-bit weight-only g128 quantization method e.g., AWQ) achieves nearly 4x higher throughput. In contrast, using the numbers from Table 5, GANQ only achieves about a 2x speedup, which is comparable to the speedup seen with 8-bit quantization. 
This highlights a significant discrepancy in efficiency that needs further explanation. - Weakness 3: In response to Reviewer QPHs, I am not requesting results based on larger batch sizes; rather, I am pointing out that the evaluation setups themselves are missing from the paper (as they only mention 1024 generation tokens). These details are crucial for understanding the context of the measurements. Additionally, both context length and generation length play a significant role in determining the computational characteristics of LLM inference, as they directly impact the proportion of attention operations, especially in long-context tasks and reasoning models that rely on chain-of-thought. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful follow-up. Your insights, together with Reviewer QPHs', greatly help us clarify the key factors behind LLM inference speedup from model compression. **Response to Weakness 2** We attribute the speedup differences mainly to kernel implementation, hardware variations (e.g., FP16 cross-GPU overhead), the model, and the generation length, affecting memory- vs. compute-bound behavior (as noted by you and Reviewer QPHs). Table 5 accurately reflects performance in our setup, with **kernel choice** being the primary factor for the 4-bit speedup gap. Thus, we previously verified this using an alternative GPTQ kernel (only 3-bit support). Below, we clarify this further and replicate AWQ's 4-bit kernel performance in our setting. **1. Kernel Implementation Impact** Following [1], we initially used [2] as the GPTQ kernel, which showed a lower speedup due to suboptimal GPU efficiency in our experimental environment. To validate this, we performed experiments using GPTQ’s official CUDA kernel (currently only for OPT models in 3-bit).
Here are results (batch size=1, seq_len=1024): ||FP16|GPTQ-3bit|GANQ-3bit |-|-|-|-| |OPT-6.7B|16.784|6.543|6.519 GPTQ shows about 2.5$\times$ speedup, which is much improved compared with [2] but still lower than the 3.25$\times$ or 4.53$\times$ reported in the GPTQ paper. We attribute the remaining gap to differences in hardware (e.g., number of GPUs), model size, and generation length, as detailed in the next section. **2. Hardware, Model Size, Generation Length Impact** The GPTQ paper's higher speedups (3.25$\times$ or 4.53$\times$) were obtained under conditions vastly different from ours: |Bit|Speedup|Model|Length|GPU for FP16|GPU for 3-bit| |-|-|-|-|-|-| |3|3.25|OPT-175B|128|8 A6000|2 A6000 |3|4.53|OPT-175B|128|5 A100|1 A100 The critical difference here is that their FP16 baseline required multiple GPUs (incurring significant cross-GPU communication overhead), making their setup inherently memory-bound and thus more advantageous to quantization. Our experiments were performed entirely on a single RTX 4090, with smaller model size (OPT-6.7B), longer generation lengths (1024), and no cross-GPU overhead. This naturally reduces the relative benefit from quantization. We explicitly demonstrate this impact through additional benchmarking for OPT-6.7B across different generation lengths (single RTX 4090, batch size=1, CUDA time in seconds): |Length|FP16|GPTQ(3-bit)|GANQ(3-bit) |:-:|-|-|-| |64|0.986|0.353|0.329 |256|4.000|1.464|1.383 |1024|16.790|6.553|6.528 These results clearly show that longer generation sequences reduce the observed speedup (e.g., from ~2.79 at length 64 down to ~2.56 at length 1024 for GPTQ), further explaining why our reported speedup for GPTQ in 3-bit is lower.
Below are our results on LLaMA-7B (batch size = 1, sequence length = 200): ||CUDA Time (s)|Speedup ($\uparrow$)|Peak Memory (GB, $\downarrow$) |-|-|-|-| |FP16|3.26|1.00|12.66 |AWQ|1.18|2.76|3.87 |GANQ|1.50|2.17|3.76 AWQ’s measured speedup (~2.76$\times$) aligns closely with the original AWQ paper’s results for 7B models. Again, this highlights the pivotal role of **kernel optimization** for quantization efficiency, further clarifying the discrepancy. We observed gains moving from GPTQ-for-LLaMA [2] to the GPTQ CUDA kernel, and further to AWQ’s CUDA kernel, despite similar quantization schemes. The kernel from [1] we used performs well but may still leave room for improvement. We emphasize that our primary contribution is in improving LUT-based quantization accuracy rather than kernel-level optimization. Nonetheless, our method remains compatible with future LUT-based kernel improvements (e.g., 2–4$\times$ in [3], up to 6.93$\times$ in [4]), which can directly amplify GANQ’s performance. **We believe our work, alongside advances in kernel engineering, can jointly drive meaningful progress in the field.** **Response to Weakness 3** We fully agree on the importance of clearly stating experimental setups, including hardware, inference kernels, batch size, and generation lengths. We'll explicitly clarify these details as well as the above analysis in our revision. Finally, we plan to revise Table 5 in the paper to explicitly highlight GANQ’s compatibility with existing LUT-based kernels and emphasize that our focus remains on more accurate quantization rather than kernel optimization. Thank you again for your time and thoughtful insights! [1] Kim, Sehoon, et al. "Squeezellm: Dense-and-sparse quantization." [2] https://github.com/qwopqwop200/GPTQ-for-LLaMa [3] Guo, Han, et al. "Fast matrix multiplications for lookup table-quantized llms." [4] Mo, Zhiwen, et al. "Lut tensor core: Lookup table enables efficient low-bit llm inference acceleration."
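For readers unfamiliar with the LUT-based mixed-precision GEMM discussed throughout this thread, here is a NumPy illustration of the storage and compute pattern (purely illustrative: the per-row quantile centroids stand in for a real quantizer, and this is not GANQ's algorithm or kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, bits = 4, 16, 3
w = rng.standard_normal((rows, cols)).astype(np.float32)

# Per-row LUT of 2**bits centroid values (naive quantiles, illustration only);
# each weight is then stored as a 3-bit index into its row's table.
levels = 2 ** bits
luts = np.quantile(w, np.linspace(0, 1, levels), axis=1).T  # (rows, levels)
indices = np.abs(w[:, :, None] - luts[:, None, :]).argmin(axis=2)  # (rows, cols)

def lut_matmul(indices, luts, x):
    """y = W_hat @ x, dequantizing W on the fly via table lookup."""
    w_hat = luts[np.arange(luts.shape[0])[:, None], indices]
    return w_hat @ x

x = rng.standard_normal(cols)
y_ref, y_lut = w @ x, lut_matmul(indices, luts, x)
print(np.abs(y_ref - y_lut).max())  # small reconstruction error
```

The efficiency argument in the thread is that only the indices and the tiny per-row tables travel from memory during decoding, while the real kernels (unlike this sketch) fuse the lookup into the matmul instead of materializing `w_hat`.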
Summary: This paper proposes a look-up-table based non-uniform quantization algorithm for LLMs. The algorithm is based on mixed-integer quadratic programming, with a mathematical proof that the authors deduce. The results show that the proposed method has better accuracy than state-of-the-art methods on 4-bit and 3-bit quantization. The authors also provide a CUDA implementation that shows more than 2x speedup over the fp16 baseline on an NVIDIA RTX 4090 GPU. Claims And Evidence: - The claims of obtaining higher accuracy compared to other methods have been supported by strong results Methods And Evaluation Criteria: - Paper evaluated different model families (OPT and Llama), different generations (Llama 1, Llama 2, and Llama 3), and different sizes (8B, 70B, etc.) - I would suggest adding: - other more recent families (e.g., Mistral or DeepSeek) - other sizes of recent models (e.g., Llama 3.2 1B, Llama 3 70B). Though I understand that the authors may not be able to fit large models if they only have access to a RTX 4090 GPU - Compared with a good number of approaches that use or don't use outlier mitigation techniques Theoretical Claims: - I don't claim that I have fully understood the mathematical proof. Experimental Designs Or Analyses: - I believe accuracy comparisons are thorough and done properly - However, I am concerned about the context length of perplexity, as different papers have evaluated perplexity with different context lengths. So one needs to double-check that context length is consistent in the comparisons. - Though the CUDA implementation of the proposed method has decent speedup, I am concerned that the measured GPTQ CUDA speedup was around 0.3x (i.e., 3 times slower than the baseline), while Table 6 of the GPTQ paper shows speedups of up to 4.53x, albeit on a different GPU and a different LLM architecture.
If there is an issue in reproducing similar speedups claimed by the GPTQ paper, I suggest not including those measurements in the table. Supplementary Material: I skimmed through it. Relation To Broader Scientific Literature: The paper compared with the latest state-of-the-art quantization algorithms (GPTQ, AWQ, OmniQuant, SqueezeLLM) Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Usage of mixed-integer quadratic programming is probably a novelty in the quantization field. Though, I might be wrong. - The proposed algorithm is based on a rigorous mathematical proof - The proposed algorithm was optimized for parallelized compute platforms and implemented on GPU - "GANQ consistently outperforms baseline methods such as RTN, GPTQ, and OmniQuant across all configurations" - The proposed algorithm doesn't require a lot of memory, unlike approaches like SqueezeLLM and OmniQuant, and hence can quantize larger models on smaller GPUs. Other Comments Or Suggestions: - Equation 5: Please provide a citation to a paper or textbook that explains how the closed-form solution could be obtained - Line 286, Column 2: "by reporting perplexity on language generation tasks": my understanding is that generation tasks don't measure perplexity. I would prefer to re-word it to "perplexity on language datasets" - Please specify the context length of the perplexity evaluation Questions For Authors: - Is the algorithm similar to GPTQ's algorithm, in the sense that both decide to quantize each element in a row sequentially? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and strong support. We appreciate your recognition of our contributions and provide detailed responses below. **Comment 1:** *The context length of perplexity.* Thanks for pointing this out. We use a sequence length of 2048 for all models and methods, following prior work, and will clarify this in the revision. **Suggestion 1:** *More recent models and model families/sizes:* Thank you for the suggestion. To further validate GANQ, we conducted additional experiments on the latest LLaMA-3.2 models. Below are the WikiText PPL results (sequence length 2048, same setting as in our paper). ||1B|3B|1B-Instruct|3B-Instruct |-|:-:|:-:|:-:|:-: |FP16|9.76|7.81|13.16|11.05 |(4-bit / 3-bit) |RTN|18.08 / 2.6e3|10.53 / 4.8e2|22.91 / 7.0e3|15.58 / 5.7e2 |GPTQ|24.07 / 1.7e2|6.0e3 / 6.3e3|18.40 / 1.1e2|4.5e3 / 3.3e3 |OmniQuant|12.90 / 4.3e2|8.87 / 14.82|16.31 / 34.89|12.33 / 19.55 |GANQ|**10.78** / **15.91**|**8.35** / **10.85**|**14.36** / **21.32**|**11.99** / **15.84** The additional results confirm that GANQ consistently achieves superior and more stable quantization performance. Our systematic optimization framework for layer-wise non-uniform quantization allows GANQ to adapt effectively to varying weight distributions, unlike baseline methods, which demonstrate greater sensitivity and instability, especially at 3-bit. We will add these findings in the revision. Regarding the suggestion to evaluate more model families (e.g., DeepSeek) and larger scales (e.g., LLaMA-3 70B), we appreciate your understanding that such experiments are limited by our hardware (RTX 4090 GPU). That said, since GANQ operates row-wise on linear layers and is designed to adapt to diverse weight distributions, we expect it to generalize well to other architectures and larger models. We plan to extend our evaluations to broader model families and larger scales in future work.
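As context for the sequence-length discussion: perplexity is the exponentiated mean negative log-likelihood over fixed-length token windows (2048 tokens in the rebuttal's setting), so the window length directly affects the reported number. A minimal, framework-agnostic sketch:

```python
import math

def perplexity(token_logprobs):
    """Perplexity of one token window given per-token natural-log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Sanity check: a uniform model over a V-token vocabulary has PPL == V,
# regardless of window length. For a real LM, however, the window length
# (e.g., 512 vs. 2048) changes how much context each prediction sees,
# which is why comparisons must hold it fixed.
vocab = 50_000
window = [math.log(1.0 / vocab)] * 2048
print(perplexity(window))  # ≈ 50000 (the vocabulary size)
```

In practice the per-token log-probabilities come from the model's output distribution over a held-out corpus such as WikiText, chunked into non-overlapping windows of the chosen length.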
**Comment 2:** *GPTQ CUDA speedup measurements* We acknowledge that the GPTQ efficiency reported in Table 5 is lower than that in the original GPTQ paper. This discrepancy is mainly due to differences in the inference kernel, hardware, and model size, with the kernel likely being the key factor. As noted in Section 4.1, we followed prior work [1] and used the GPTQ-for-LLaMA [2] inference kernel for GPTQ-quantized models. The reported results reflect actual observations. However, since [2] is implemented in Triton, and our experiments were conducted on an NVIDIA RTX 4090, potential incompatibilities or limited hardware support may explain the slower inference in our setup. To further validate this, we performed an additional test using GPTQ's official CUDA kernel implementation [3] (available only for 3-bit quantization and OPT models). The results (total CUDA time in seconds) are as follows:

||FP16|GPTQ (3bit)|GANQ (3bit)|
|-|-|-|-|
|OPT-6.7B|16.784|6.543|6.519|

The results are consistent. We will incorporate these findings and clarifications into the revision to clearly resolve any potential confusion.

**Suggestion 2:** *The derivation of the closed-form solution of Equation 5* We will add the detailed derivation of Equation 5 in the appendix of the revision. Let

$$ f(\mathbf{T}_i)=\|\mathbf{W}_i\mathbf{X} - \mathbf{T}_i\mathbf{S}_i^{k+1}\mathbf{X}\|^2. $$

By first-order optimality, setting

$$ \nabla f(\mathbf{T}_i)=-2(\mathbf{W}_i\mathbf{X}- \mathbf{T}_i\mathbf{S}_i^{k+1}\mathbf{X})\mathbf{X}^\top(\mathbf{S}_i^{k+1})^\top=0, $$

we have

$$ \mathbf{T}_i^{k+1}=\mathbf{W}_i \mathbf{XX}^\top (\mathbf{S}_i^{k+1})^\top (\mathbf{S}_i^{k+1}\mathbf{XX}^\top (\mathbf{S}_i^{k+1})^\top)^\dagger, $$

where $(\cdot)^\dagger$ denotes the Moore-Penrose inverse.

**Corrections** We agree that "perplexity on language datasets" is a clearer phrase and will adopt this wording in the revision.
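As a numerical sanity check (not from the paper; the toy dimensions and one-hot selection matrix below are our own illustration), the closed-form T-subproblem solution can be verified by confirming the first-order optimality condition:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 8, 32, 4          # weights per row, calibration tokens, table entries
W = rng.normal(size=(1, n))            # one weight row W_i
X = rng.normal(size=(n, p))            # calibration activations
# S_i: one-hot selection matrix (k x n) mapping each weight to a table entry;
# the fixed assignment ensures every table entry is used at least once
assign = np.arange(n) % k
S = np.zeros((k, n))
S[assign, np.arange(n)] = 1.0

# Closed-form T-subproblem: T = W XX^T S^T (S XX^T S^T)^+
XXt = X @ X.T
T = W @ XXt @ S.T @ np.linalg.pinv(S @ XXt @ S.T)

# First-order optimality: (W X - T S X) X^T S^T should vanish at the optimum
grad = (W @ X - T @ S @ X) @ X.T @ S.T
assert np.allclose(grad, 0.0, atol=1e-6)
```

With `p > n` the Gram matrix $XX^\top$ is positive definite almost surely, so the pseudo-inverse coincides with the ordinary inverse and the gradient vanishes to machine precision.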
**Response to Question for Authors** Although both GPTQ and our proposed method (GANQ) quantize model weights in a row-wise manner, the strategies differ significantly. GPTQ uses a greedy, sequential element-wise quantization based on the Optimal Brain Surgeon framework [4], quantizing one element at a time and updating subsequent weights to minimize reconstruction error. In contrast, our method models LUT-based non-uniform quantization as a mixed-integer quadratic program. Rather than sequentially quantizing elements, we employ an alternating optimization for an entire row with a closed-form solution for the T-subproblem and an efficient back-substitution algorithm for the S-subproblem, enabling more systematic and effective quantization than GPTQ. [1] Kim, Sehoon, et al. "Squeezellm: Dense-and-sparse quantization." [2] https://github.com/qwopqwop200/GPTQ-for-LLaMa [3] Frantar, Elias, et al. "Gptq: Accurate post-training quantization for generative pre-trained transformers." [4] Hassibi, Babak, and David Stork. "Second order derivatives for network pruning: Optimal brain surgeon." --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed rebuttal. I have read all the reviews and their corresponding rebuttals and I would like to keep my score. I believe authors have comprehensively responded to the requests by the reviewers. Regarding the request by Reviewer V6cW to include speedups on larger batch sizes and longer sequence lengths: my understanding is that weight-only quantization (that is the scope of this paper) optimizes memory bound processes, which is the case of LLM autoregressive decoding for batch size 1 and moderate sequence length. For larger batch sizes or very large sequence lengths, autoregressive decoding becomes more of a compute bound process and weight-only quantization may not speed it up (in some cases it may even slow it down). 
Hence, most papers on weight-only quantization don't evaluate speedups for batch size > 1. (On the other hand, approaches on weight+activation quantization can speed up such compute-bound processes.) --- Reply to Comment 1.1.1: Comment: Thank you for your kind words and for taking the time to carefully read our rebuttal and the reviewers’ responses. We greatly appreciate your feedback and constructive comments. Your clarification regarding the distinction between weight-only and weight+activation quantization is very helpful. We will highlight this distinction clearly in the revision. Besides, we believe extending our method to weight+activation quantization is a valuable direction for future research. Thank you once again for your support!
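The memory-bound argument above can be made concrete with back-of-the-envelope roofline arithmetic (an illustrative sketch with a hypothetical 4096x4096 layer at batch size 1; the numbers are not from the paper):

```python
# At batch size 1, decoding a token through an m x n linear layer is a GEMV:
# every FP16 weight is read once but used for only one multiply-accumulate.
m, n = 4096, 4096
flops = 2 * m * n                 # one multiply + one add per weight
bytes_fp16 = 2 * m * n            # 2 bytes per FP16 weight dominates traffic
intensity = flops / bytes_fp16    # 1.0 FLOP/byte -> far below typical GPU
                                  # compute/bandwidth ratios, so memory bound

# Weight-only 4-bit quantization cuts weight traffic ~4x, which is where the
# decoding speedup comes from; larger batches reuse each weight across the
# batch, raising intensity until the kernel becomes compute bound instead.
bytes_int4 = m * n // 2           # 0.5 bytes per 4-bit weight
traffic_reduction = bytes_fp16 / bytes_int4
print(intensity, traffic_reduction)   # 1.0 4.0
```

This is why the reviewer's observation holds: once the batch grows, arithmetic intensity rises and the 4x traffic reduction no longer translates into a 4x speedup.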
Summary: The paper presents GANQ, a GPU-Adaptive Non-Uniform Quantization technique specifically tailored for efficient inference of Large Language Models (LLMs). GANQ introduces a principled optimization model based on Mixed-Integer Quadratic Programming (MIQP) to achieve layer-wise quantization using a Lookup Table (LUT) approach. It leverages GPU acceleration to efficiently handle the computational complexity. Extensive evaluations demonstrate GANQ's significant reduction in perplexity relative to state-of-the-art methods across various models and datasets, achieving a notable inference speedup (up to 2.57X on an NVIDIA RTX 4090). Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: Novel approach to use MIQP to find the best layer-wise quantization strategy Essential References Not Discussed: no Other Strengths And Weaknesses: strengths 1. GANQ clearly formulates non-uniform quantization as an MIQP problem, allowing systematic optimization rather than heuristic approaches. 2. Demonstrates superior perplexity improvements across both 3-bit and 4-bit quantization, outperforming established quantization methods. 3. Exploits GPU parallelism by decomposing the original optimization into highly parallelizable subproblems. weakness: 1. The method relies heavily on Cholesky decomposition, potentially limiting its applicability if XX^T matrices are ill-conditioned. 2. Limited details on how the MIQP is solved on GPU; there are mature toolkits for such optimization problems, e.g., Gurobi or CPLEX, so more discussion is needed. 3. Table 2 does not include the performance of AWQ and SqueezeLLM; it would be better to have more comprehensive experiments. Other Comments Or Suggestions: ## update after rebuttal: I appreciate the responses from the author. I am happy to increase my score.
Questions For Authors: How sensitive is GANQ’s quantization accuracy and optimization efficiency to the conditioning of the Cholesky decomposition matrix XX^T? Have you explored alternative numerical methods for stability? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the insightful comments and recognition of GANQ's contributions. Below, we address the concerns raised. ## Concern 1: Reliability of Cholesky Decomposition and Sensitivity to Preconditioning of $XX^\top$ **Response to Weakness 1** Yes, our method relies on Cholesky decomposition to efficiently solve the S-subproblem. When $XX^\top$ is ill-conditioned, we apply preconditioning to ensure the decomposition remains feasible, maintaining the effectiveness of our framework. This is a standard technique in numerical linear algebra. **Response to Questions for authors** Yes, we implement an adaptive preconditioning method for $XX^\top$ and examine its impact on quantization accuracy and efficiency below. Remark 3.1 introduces preconditioning $XX^\top$ via $XX^\top+\lambda I$, with $\lambda>0$. In practice, we adopt an adaptive method that enforces diagonal dominance, ensuring positive definiteness without manual tuning $\lambda$. A symmetric matrix $A$ is positive definite if it is diagonally dominant with positive diagonal entries, i.e., $|a_{ii}| \geq \sum_{j \neq i} |a_{ij}|$ for all $i$. Let $\Sigma = XX^\top$, with $\Sigma_{ii} \geq 0$. For each row $i$, we compute the offset vector $\delta$ as follows: $$\delta_i = \max\left(\sum_{j=1}^n|\Sigma_{i,j}|-2\Sigma_{i,i},\ 10^{-8}\right), $$where $\Sigma_{i,j}$ denotes the element at the $ i$-th row and $ j$-th column of $ \Sigma $. Then, perform $$L=\mathrm{Cholesky}(\Sigma+\mathrm{Diag}(\delta)), $$where $\mathrm{Diag}(\delta)$ constructs a diagonal matrix from the vector $\delta$, and $L$ is the lower-triangular Cholesky factor. To assess sensitivity to preconditioning (fixed $\lambda$ and adaptive) before Cholesky decomposition, we run 4-bit quantization experiments on OPT-125M and report PPL on WikiText. 
||$\lambda=0.5$|$\lambda=1.0$|$\lambda=10.0$|$\lambda=40.0$|$\lambda=100$|Diagonally dominant|
|-|-|-|-|-|-|-|
|PPL|29.14|29.04|28.98|29.05|29.09|28.58|

For a clear comparison of quantization accuracy, we also show the baseline methods again below.

||Full (FP16)|RTN|GPTQ|OmniQuant|
|-|-|-|-|-|
|PPL|27.66|37.11|31.08|30.98|

The results show that quantization accuracy is largely **insensitive** to the choice of preconditioning. The adaptive method achieves the best PPL (28.58), with fixed $\lambda$ yielding similar results. All results outperform the baselines, confirming robustness. In terms of efficiency, our preconditioning uses simple operations (e.g., summation, diagonal adjustments) with no noticeable impact on overall efficiency. **We initially considered this an implementation detail or engineering trick and plan to include the discussion in the Appendix of the revised paper.**

## Concern 2: Details on GPU-Based MIQP Solver Implementation

**Response to Weakness 2**

- **Details on how the MIQP is solved on GPU** Section 3.2 details how we solve the MIQP model using an alternating direction framework and leverage GPUs for efficient computation. As shown in Equation (5), the T-subproblem has a closed-form solution with matrix-vector multiplications, which are highly efficient on GPUs. As shown in Equations (6)–(20), Figure 2, and the corresponding text (Lines 272–274, right column; Lines 295–300, left column), we illustrate how the S-subproblem is solved. Exploiting row-wise independence, we stack $W_i$ and $T_i$ vectors and organize $S_i$ matrices into a tensor, enabling parallel computation across rows on GPUs. In summary, we group independent vectors and matrices into larger matrices and tensors, enabling operations like multiplication and summation to be efficiently accelerated on GPUs.
- **The adoption of Gurobi or CPLEX** While Gurobi and CPLEX are powerful commercial solvers, they are general-purpose tools designed for broad classes of optimization models, including MIQP.
In contrast, our framework exploits the specific structure of the model under discussion, yielding greater efficiency than generic solvers that overlook such problem-specific properties. We attempted to use Gurobi to solve Equation (2) during rebuttal, a single-row subproblem with OPT-125M dimensions, but it produced no output within three minutes. In contrast, our GPU-accelerated framework quantizes the entire model in the same time. ## Concern 3: Comparison with AWQ and SqueezeLLM **Response to Weakness 3** We compared our method with AWQ and SqueezeLLM in Table 4, as both incorporate mechanisms for handling outliers. AWQ utilizes a quantization block size of 128 (a default setting in the AWQ paper), which helps mitigate the impact of outliers. A key contribution of SqueezeLLM is isolating outliers into a separate sparse matrix while retaining 10 FP16 rows. Accordingly, we categorize these methods as outlier-aware quantization approaches and include their results in Table 4 for a fair comparison. As shown in Table 4, our method, equipped with a similar outlier-handling mechanism, achieves better performance compared to both.
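The adaptive diagonal-dominance preconditioning described in the response to Concern 1 can be sketched in a few lines of NumPy (a toy illustration of the $\Sigma+\mathrm{Diag}(\delta)$ update; the shapes and the `safe_cholesky` name are our own assumptions):

```python
import numpy as np

def safe_cholesky(X):
    """Factor Sigma = X X^T after the adaptive diagonal offset described
    above, avoiding manual tuning of a fixed lambda."""
    Sigma = X @ X.T
    row_abs = np.abs(Sigma).sum(axis=1)             # sum_j |Sigma_ij| per row
    delta = np.maximum(row_abs - 2.0 * np.diag(Sigma), 1e-8)
    L = np.linalg.cholesky(Sigma + np.diag(delta))  # lower-triangular factor
    return L, delta

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 128))     # p > n, so Sigma is well-posed here
L, delta = safe_cholesky(X)
Sigma = X @ X.T
assert np.allclose(L @ L.T, Sigma + np.diag(delta))
```

Each row's offset is cheap to compute (one absolute row sum and a diagonal read), consistent with the rebuttal's claim that the preconditioning adds no noticeable overhead.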
Summary: The paper proposes GANQ, a post-training non-uniform quantization method optimized for hardware-efficient mpGEMM for LLMs. GANQ is LUT-based weight-only quantization capable of handling outliers. The experimental results demonstrate that the proposed GANQ outperforms baselines and achieves up to 2.57 times speedup. Claims And Evidence: 1. The paper proposes GANQ (GPU-Adaptive Non-Uniform Quantization), a post-training non-uniform quantization method. 2. GANQ uses GPU-adaptive optimization, improving efficiency and achieving several times speedup compared to baselines. 3. GANQ is a non-uniform quantization method capable of handling outliers, improving performance. 4. Experimental results confirm GANQ’s efficiency and performance. Methods And Evaluation Criteria: The proposed method is evaluated on perplexity, CUDA time, peak memory, and quantization cost, providing a reasonable and comprehensive assessment of GANQ. Theoretical Claims: The theoretical proof of GANQ is well-constructed, clear and easy to follow. 1. $m, n, p$ in line 147 and 148 (right column) are not introduced. Experimental Designs Or Analyses: The experimental designs and analyses are clear and comprehensive. However, some suggestions could improve the paper. 1. The settings of GANQ\* (with outlier handling) are not explained, such as the ratio of outliers and quantized weights. 2. Some baselines are missing from the profiling comparison (Table 5) without explanation. 3. GANQ\* should be compared in profiling and quantization cost. 4. The ratio of outliers and quantized weights could be discussed in more detail, as different ratios affect performance and cost. Supplementary Material: The appendix has been carefully reviewed. It provides the algorithm for outlier extraction and supplemental results on perplexity. Relation To Broader Scientific Literature: The paper proposes a post-training non-uniform quantization method that improves efficiency and performance compared to previous works. 
The experimental results align with prior findings, showing that quantization can degrade perplexity while reducing memory demands. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: All strengths and weaknesses have been mentioned above. Other Comments Or Suggestions: Line 134 (right column): a repetition of the word “measures“ Questions For Authors: In Table 5 (CUDA time), the full model only performs matrix multiplication, while GANQ involves table lookup, weight replacement, and then matrix multiplication. What explains GANQ being faster than the full model? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the review and helpful suggestions. We appreciate your positive feedback and address your concerns below. **Comment:** *$m,n,p$ in line 147 and 148 are not introduced.* **Response:** $m,n,p$ denote the dimensions in a linear layer. In LLMs, $m$ is the output dimension, $n$ the input dimension, and $p$ the total number of tokens processed (batch size $\times$ sequence length). These values vary by models and layers. For example, in LLaMA-7B: ||m|n |-|-|-| |$W_{k,q,v,o}$|4096|4096 |$W_{gate,up}$|11008|4096 |$W_{down}$|4096|11008 We use 128 samples with a sequence length of 2048, so $p=128\times 2048$. This highlights the large scale of the problem and the efficiency of our method in solving it. **Suggestion 1:** *The settings of GANQ$^\*$ (with outlier handling) are not explained ...* **Response:** We describe GANQ$^\*$'s outlier-handling in Section 3.3 and detail it in Algorithm 2 (Appendix A), which separates outliers based on a ratio $r$, with the rest for quantization. As noted in Lines 365-367 (right), we typically set $r=0.5\\%$ for a fair comparison. We will further clarify this in the revision. **Suggestion 2:** *Some baselines are missing from the profiling comparison ...* **Response:** In Table 5, we compare the efficiency of dequantization-based and LUT-based methods. Following [1], we use GPTQ as a representative baseline for dequantization-based method, as similar methods like RTN and OmniQuant can generally share the same inference kernel and offer comparable efficiency. We will clarify this in the revision. Besides, as in [1], we use GPTQ-for-LLaMa [2] as the inference kernel for GPTQ, which currently supports 4-bit acceleration only. Thus, Table 5 includes only 4-bit GPTQ results. While briefly mentioned in Section 4.3, we will clarify it further in the revision and plan to add results with GPTQ kernels that support 3-bit acceleration. 
**Suggestion 3:** *GANQ$^\*$ should be compared in profiling ...* **Response:** We show the results for GANQ$^\*$ below: ||GANQ$^\*$|CUDA Time (s)|Speedup|Peak Memory (GB) |-|-|-|-|-| |OPT-6.7B|4-bit|10.39|1.61|5.13 ||3-bit|10.73|1.56|4.39 |LLaMA-7B|4-bit|9.82|1.82|4.16 ||3-bit|8.85|2.02|3.32 While GANQ$^\*$ offers better quantization quality via outlier handling, it incurs slightly higher inference time due to separate sparse matrix operations. Thus, the choice between GANQ and GANQ$^\*$ should depend on the desired trade-off between quality and efficiency. We note that 3-bit quantization for OPT-6.7B in GANQ$^\*$ leads to longer inference times than 4-bit, likely due to: (1) memory bandwidth reduction being the main speedup factor (as in our response to *Questions for Authors* below); (2) sparse outlier operations becoming a bottleneck in this case; and (3) 3-bit values misaligning with byte boundaries (INT8), causing entries to span bytes and adding indexing overhead. We will clearly present and discuss these findings in the revision. **Suggestion 4:** *The ratio of outliers and quantized weights could be discussed ...* **Response:** As in Tables 2 and 3, GANQ is already effective without outlier handling, outperforming baselines and close to FP16 models. As noted in Section 3.3, adding outlier handling can further improve quantization quality but increases inference time and memory, highlighting a trade-off. To illustrate this, we compare both the PPL and efficiency metrics in one table. For example, on LLaMA-7B: ||PPL on WikiText|CUDA Time (s)|Speedup|Peak Memory (GB) |-|-|-|-|-| |FP16|5.68|17.86|1.00|13.06 |4-bit|5.83|8.46|2.11|4.14 |4-bit+0.5%|5.76|9.82|1.82|4.16 These results demonstrate a clear trade-off between quantization quality and efficiency. We will include this discussion in the appendix to highlight this balance. 
**Response to Questions For Authors** GANQ's speedup mainly arises from reduced memory bandwidth usage, which aligns with observations from other model compression methods (Figure 3 in [3]). By storing weights as 4-bit or 3-bit indices, GANQ greatly lowers memory traffic between global memory and CUDA cores compared to FP16. While GANQ introduces some overhead from table lookups and weight reconstruction, these costs are minimal. The lookup tables are small (e.g., 16 entries, 32 bytes per row for 4-bit quantization), and thus reside in fast constant or shared memory, adding negligible latency. Weight reconstruction is efficiently fused with matrix multiplication in CUDA kernels, avoiding extra global memory accesses. In summary, GANQ's modest overhead is far outweighed by memory savings, resulting in faster overall inference, especially on bandwidth-constrained GPUs. **Response to Other Suggestions** Thanks for catching the typo; we will fix it in the revision. [1] Kim, Sehoon, et al. "Squeezellm: Dense-and-sparse quantization." [2] https://github.com/qwopqwop200/GPTQ-for-LLaMa [3] Lin, Ji, et al. "Awq: Activation-aware weight quantization for on-device llm compression and acceleration."
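The lookup-based dequantization described above can be illustrated with a toy NumPy sketch (our own simplification: indices are stored unpacked, and the lookup is not fused with the multiply as in the real CUDA kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 4, 8, 16                    # output dim, input dim, 2^4 table entries
T = rng.normal(size=(m, k)).astype(np.float32)          # one 16-entry table per row
idx = rng.integers(0, k, size=(m, n)).astype(np.uint8)  # 4-bit weight indices
x = rng.normal(size=n).astype(np.float32)

# "Dequantize" by table lookup, then multiply; only idx (0.5 byte/weight once
# packed) and the tiny tables T need to travel from global memory.
W_hat = np.take_along_axis(T, idx.astype(np.int64), axis=1)   # shape (m, n)
y = W_hat @ x

# Reference: explicit per-element lookup gives the same result
y_ref = np.array([sum(T[i, idx[i, j]] * x[j] for j in range(n)) for i in range(m)])
assert np.allclose(y, y_ref, atol=1e-5)
```

The per-row table is the layout the rebuttal describes: 16 FP16 entries (32 bytes) per row for 4-bit quantization, small enough for shared or constant memory.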
Summary: This paper proposes a GPU-adaptive non-uniform quantization framework for LLMs. By formulating quantization as a mixed-integer optimization problem, the authors aim to achieve efficient low-bit inference on hardware that lacks native mixed-precision matrix multiplication support. Their approach relies on lookup tables (LUTs) rather than repeated dequantization. They introduce an iterative algorithm to solve for the best codebooks per layer in a way that (i) addresses outliers, and (ii) facilitates rapid table lookups. The paper reports strong perplexity results on several LLM architectures (OPT, LLaMA, etc.), along with speedups when compared to FP16 baselines. ## update after rebuttal Thanks to the authors for the explanations provided in response to my questions, which largely address my initial concerns. However, the paper still requires significant revisions to offer a more comprehensive discussion and comparison with other LUT-based methods. It also needs more rigorous experiments, concrete evidence of real-world efficiency, and details on efficient implementation. Claims And Evidence: The main claims are that (a) non-uniform LUT-based quantization, solved through a GPU-friendly mixed-integer optimization procedure, can significantly improve perplexity at lower bitwidths versus uniform quantization baselines; (b) the proposed method is broadly compatible with outlier-handling strategies; and (c) it yields better throughput and memory savings. These are overall supported by the experimental results. Methods And Evaluation Criteria: The core method is well-motivated and the criteria and benchmarks are widely used. Theoretical Claims: The primary theoretical discussion is the layer-wise formulation as a mixed-integer quadratic program and its subsequent decomposition. There do not appear to be any new theorems with formal proofs requiring correctness checks. Experimental Designs Or Analyses: The experimental design is appropriate.
However, the paper lacks real end-to-end performance numbers (throughput/latency in tokens/second or ms/token) directly compared to baseline quantization methods. Currently, memory usage numbers are mentioned selectively; a more explicit breakdown across methods and model sizes would clarify the trade-off. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: This paper focuses on LLM quantization, which is related to works like GPTQ, OmniQuant, AWQ, etc. Essential References Not Discussed: - Fast Matrix Multiplications for Lookup Table-Quantized LLMs [1]: The authors should discuss how their proposed LUT-based approach differs from or improves upon the matrix multiplication kernels and quantization schemes in that work. - LUT Tensor Core: Lookup Table Enables Efficient Low-Bit LLM Inference Acceleration [2]: Though this work is cited, an in-depth discussion is needed since that work also reports accuracy vs. speed trade-offs with LUT-based quantization. Drawing explicit comparisons would be valuable, especially for verifying whether the GPU-adaptive optimization here significantly outperforms simpler LUT-coded designs. [1] https://arxiv.org/abs/2407.10960 [2] https://arxiv.org/abs/2408.06003 Other Strengths And Weaknesses: I summarize all strengths and weaknesses here: Strengths: - Well-motivated approach in bridging the hardware-software gap for efficient low-bit inference. - The per-layer, parallelizable formulation is elegant and presumably implementable with moderate engineering effort. - Strong empirical results on perplexity across multiple models. Weaknesses: - Limited demonstration of real-time inference improvements. In practice, adopters would want to see the latency/throughput gains across model sizes, batch sizes, and sequence lengths. - Comparisons with more LUT-specific or hardware-accelerated methods are not deeply explored. - Code availability is not explicitly offered.
Releasing the code would help ensure reproducibility and allow the community to adopt the method more easily. Other Comments Or Suggestions: In tables, OminiQuant -> OmniQuant Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the thorough and insightful feedback. We would like to address your concerns point by point below. ## Clarification of References ([1, 2]) and Response to Weakness 2 Thank you for highlighting the relevance of references "Fast Matrix Multiplications for Lookup Table-Quantized LLMs [1]" and "LUT Tensor Core: Lookup Table Enables Efficient Low-Bit LLM Inference Acceleration [2]". As their titles suggest, these works primarily contribute optimized LUT-based matrix multiplication kernels and tensor cores. In contrast, our primary contribution is a novel non-uniform quantization scheme tailored for LUT-based inference. Our method produces quantized representations that can leverage and complement the kernels or tensor cores proposed in [1,2]. Indeed, enabling such compatibility was part of our motivation, which we have explicitly mentioned in the Introduction (Lines 93-95, left column), where [2] has been cited. Due to different objectives, [1] and [2] focus on end-to-end inference speedup using custom kernels, while our work emphasizes quantization accuracy and compatibility with LUT-based kernels. For inference speed, we currently use the kernel from [3], as noted in Section 4.1. In the future, we plan to adopt newer LUT-based inference kernels like [1,2] to take advantage of their advancements. For quantization accuracy, to the best of our knowledge, [2] does not propose a new quantization scheme. Instead, it validates the LUT tensor core by precomputing lookup tables with operator fusion. For [1], it utilizes a learned NormalFloat quantization scheme. As noted in their Section 4.2: "*Based on our earlier experiments, we selected a group size of 64, which strikes a good balance between quality and speed.*" This means that every 64 elements share a lookup table. While this improves quantization quality, it increases memory overhead. 
For example, for LLaMA-3-8B's $W_{q}$ ($4096\times 4096$), a group size of 64 results in 64 ($4096/64=64$) lookup tables per row (each with 16 FP16 values for 4-bit quantization). In contrast, our method is initially designed to support one lookup table per row (i.e., channel-wise), which is much more memory-efficient. Below are the theoretical model compression ratios (0.5 bytes per INT4 value, 2 bytes per FP16 value):

|[1]|Ours|
|-|-|
|$\frac{0.5\times 4096^2+2\times 64\times 16 \times 4096}{2 \times 4096^2}=50\%$|$\frac{0.5\times 4096^2+2\times 1\times 16 \times 4096}{2 \times 4096^2}=25.39\%$|

Besides, as noted in Section 3.3, our method supports outlier-handling techniques by extracting outliers into a sparse matrix, balancing compression and quality. For LLaMA-3-8B, our method already outperforms [1] in 3-bit quantization with just 0.5% outlier separation. In the 4-bit setting, increasing the outlier ratio can further improve GANQ's quality; for example, with a 5% ratio, GANQ achieves a PPL of 6.25, which is comparable to [1] using a group size of 64.

|WikiText PPL|3-bit|
|-|-|
|[1] with group size 64|7.5|
|GANQ + 0.5%|7.46|

These results highlight a key trade-off between quality and memory, with our method offering a much more compact representation while maintaining strong quality. We will include these comparisons in the revision to highlight our method's unique benefits and compatibility with existing LUT-based kernels.

## Response to Weakness 1

Thank you for raising the concern about real-time inference improvements. To clarify, we included inference benchmarks in Table 5. Following prior works [3,4], we used batch size 1 to generate 1024 tokens and reported total CUDA time (in seconds) using the LUT kernel from [3], as detailed in Section 4.1. The results show clear overall inference speedup and peak memory reduction for OPT-6.7B and LLaMA-7B, highlighting the practical benefits of our method over uniform quantization and FP16 baselines.
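The two compression ratios quoted above can be reproduced with a few lines of arithmetic (a sketch using the 4096x4096 projection dimensions, 0.5 bytes per INT4 weight and 2 bytes per FP16 value):

```python
d = 4096                               # rows = cols of W_q in LLaMA-3-8B
fp16_bytes = 2 * d * d                 # dense FP16 baseline

# [1]: group size 64 -> 4096/64 = 64 tables per row, 16 FP16 entries each
grouped = 0.5 * d * d + 2 * 64 * 16 * d
# Per-row (channel-wise) scheme: a single 16-entry table per row
per_row = 0.5 * d * d + 2 * 1 * 16 * d

print(round(grouped / fp16_bytes * 100, 2))   # 50.0
print(round(per_row / fp16_bytes * 100, 2))   # 25.39
```

The table overhead of the grouped scheme exactly equals its packed-weight footprint here, which is why it lands at 50% of FP16 while the per-row layout stays near the ideal 25%.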
We acknowledge the importance of testing across different batch sizes and sequence lengths and will expand our experiments accordingly. However, we emphasize that efficiency is largely determined by the underlying LUT-based kernel implementation. As noted in our response regarding LUT kernels [1,2], our main contribution is a novel LUT-based non-uniform quantization scheme. We plan to integrate optimized LUT kernels in future work to further improve efficiency. We appreciate the suggestion and will highlight the practical benefits more clearly. We believe that both our work and LUT kernel innovations can jointly make valuable contributions to the community. [3] Kim, Sehoon, et al. "Squeezellm: Dense-and-sparse quantization." [4] Frantar, Elias, et al. "Gptq: Accurate post-training quantization for generative pre-trained transformers." ## Response to Weakness 3 We fully agree that open-sourcing will enhance reproducibility and impact. We will release the complete code upon acceptance. ## Typo Thank you for pointing out the typo, we will correct it in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and for sharing the additional results. However, I would like to see some comprehensive latency and throughput results under different batch sizes and sequence lengths. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. Below, we provide comprehensive latency speedup results for GANQ (with the kernel from [1]) relative to the FP16 model across various batch sizes and generation lengths on a single NVIDIA RTX 4090 GPU. Note that we slightly modified the benchmarking code to support batch sizes $>$ 1, resulting in minor differences from previous results. 
LLaMA-7b (4-bit): |Length / Batch Size|1|2|4| |:-:|-|-|-| |64|2.14|1.71|1.27| |128|2.12|1.70|1.26| |256|2.10|1.69|1.26| |512|2.06|1.66|1.26 |1024|2.02|1.63|1.25 LLaMA-7b (3-bit): |Length / Batch Size|1|2|4| |:-:|-|-|-| |64|2.45|1.95|1.39 |128|2.40|1.91|1.38 |256|2.39|1.90|1.36 |512|2.33|1.85|1.35 |1024|2.27|1.78|1.33 As shown, speedup decreases with larger batch sizes or longer sequence lengths. This occurs because model compression primarily benefits memory efficiency, while increased batch sizes and sequence lengths shift the bottleneck toward computation, reducing memory-related gains. Reviewers QPHs and V6cW also highlighted this common phenomenon in weight quantization methods; we will explicitly include this analysis in the revised manuscript. We would also like to emphasize again that our primary contribution lies in **the accuracy improvement of LUT-based quantization** rather than kernel-level optimizations. Although our current experiments utilize the kernel from [1], our approach is compatible with future, potentially more optimized LUT-based kernels. Furthermore, we plan to revise Table 5 in the paper to explicitly highlight GANQ’s compatibility with existing LUT-based kernels and emphasize that our focus remains on more accurate quantization rather than kernel optimization. [1] Kim, Sehoon, et al. "Squeezellm: Dense-and-sparse quantization."
Matryoshka Quantization
Accept (poster)
Summary: The paper presents a method for multi-scale quantization of large language models across multiple precisions (int8, int4, int2). By utilizing the nested structure of integer data types, the proposed technique allows different precision levels to be nested within one another. The resulting quantized model can then be served at different precisions based on deployment requirements. Claims And Evidence: The paper correctly claims comparable accuracy of the proposed method and existing int4 and int8 quantization schemes, and improved int2 performance, although the experiments were restricted to the C4 dataset. Methods And Evaluation Criteria: Use of the C4 dataset is meaningful but limiting; including more comprehensive results on e.g. the Pile or at least WikiText-2 is strongly recommended. Theoretical Claims: This is an empirical paper with no major theoretical results. Experimental Designs Or Analyses: The experiments are appropriately constructed and conducted. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: The proposed methods belongs to the category of Learning-Based Quantization Methods and can work with other such (existing) techniques, including QAT and OmniQuant. Essential References Not Discussed: The proposed MatQuant outperforms existing methods only at the extreme int2 quantization levels. It is therefore critically important to compare and contrast MatQuant against other schemes specifically targeting such regimes. The work that comes to mind is Egiazarian et al, "Extreme Compression of Large Language Models via Additive Quantization", ICML '24, which reports strong performance in 2-bit (vector) quantization settings. Other Strengths And Weaknesses: On the one hand, when considering quantization levels higher than int2, the proposed method does not perform as strongly as the existing (quantization level-specific) techniques. 
On the other hand, at int2 MatQuant does work better than the considered competing (scalar quantization) alternatives, but the performance is much weaker than that of higher-bitwidth schemes. To assert the utility of MatQuant, the authors should compare and contrast it with SOTA low-bitwidth quantization schemes (e.g., AQLM in the above-mentioned reference). Other Comments Or Suggestions: None. Questions For Authors: A clarifying question: It is stated that upon quantization by MatQuant, the weights shift to higher values. Does the choice of quantization bins agree with the information-theoretically optimal approach to scalar quantization utilized by, e.g., the NormalFloat quantization method? Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough feedback. Below are our responses to the comments/questions:

**Comparison with SOTA int2 methods:** We would like to clarify that the main goal of the work is to propose an adaptive quantization method that generates a single model that can do well at any precision and provide accurate mixed-precision models (Mix'n'Match). The int2 result demonstrates that the technique can potentially help train a significantly more accurate low bit-width model, but it is not the main goal, so the baselines are also set accordingly. For Mix'n'Match, we compare against the baseline of learning quantized models independently and then stitching layers together to produce a model with the same per-layer quantization. The table given below shows that MatQuant-based Mix'n'Match can be 21% more accurate than a Mix'n'Match between separately trained baselines. AQLM is a complementary technique to MatQuant, so AQLM+MatQuant might provide a more accurate 2-bit model than AQLM. However, as mentioned above, that is orthogonal to the goal of the work and we leave it for future investigation. The Matryoshka-style nesting is possible across multiple axes, like the number of code books for each group and the number of elements in each code book (i.e., 2^bits).

Table: Mix'n'Match on MatQuant + OmniQuant for Gemma-2 9B. Mix'n'Match with MatQuant sub-models does substantially better than a Mix'n'Match between baseline models trained explicitly for 2 and 4 bits.

| Config. | Method | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | Winogrande | Average |
| :--- | :--- | ----: | ----: | ----: | --------: | ----: | ---------: | ------: |
| 222222444444444444444444444444444444222222 = 3.43 avg bits | Mix'n'Match using Baseline 2, 4 bit | 30.03 | 48.02 | 62.26 | 44.47 | 67.30 | 59.75 | 51.97 |
| 222222444444444444444444444444444444222222 = 3.43 avg bits | Mix'n'Match using MatQuant 2, 4 bit | 52.56 | 79.04 | 78.99 | 75.66 | 80.20 | 69.38 | **72.64** |
| 222444444444444444444444444444444444444222 = 3.71 avg bits | Mix'n'Match using Baseline 2, 4 bit | 31.48 | 50.34 | 62.14 | 43.97 | 67.19 | 60.30 | 52.57 |
| 222444444444444444444444444444444444444222 = 3.71 avg bits | Mix'n'Match using MatQuant 2, 4 bit | 56.83 | 81.06 | 80.73 | 76.34 | 81.07 | 67.88 | **73.98** |
| int4 | Baseline | 58.79 | 78.37 | 83.55 | 76.71 | 81.45 | 67.09 | 74.33 |
| | MatQuant | 57.25 | 77.36 | 84.86 | 75.52 | 81.50 | 66.77 | 73.88 |
| int2 | Baseline | 39.16 | 63.43 | 72.11 | 52.24 | 72.63 | 61.88 | 60.24 |
| | MatQuant | 48.72 | 72.18 | 79.20 | 68.11 | 76.17 | 66.77 | 68.52 |

**Relation to NF Quantization Data Type:** MatQuant does not explicitly try to push the weight distribution towards an information-theoretically optimal quantization scheme. The fact that we slice bits in MatQuant to obtain sub-models incentivizes gradient descent to carefully optimize the MSBs. Since the 2-bit sub-model corresponds to the top-2 MSBs, MatQuant optimizes the top-2 MSBs to maximize the int2 sub-model's quality and thus ends up pushing the peak of the weight distribution towards the right, and in the process does a more uniform allocation to the higher buckets. This is evident in Figure 1(c), where we can see that the peaks for a model trained with MatQuant are lower than those for the baseline, and the weight distribution in the case of MatQuant is a bit more uniform. 
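The bit-slicing mechanism discussed above can be illustrated with a minimal NumPy sketch. This is our own illustration of the nested-integer idea, not the authors' implementation; the function name is hypothetical, and the paper's exact dequantization of sliced codes may differ.

```python
import numpy as np

def slice_msbs(q8: np.ndarray, bits: int) -> np.ndarray:
    """Keep only the top `bits` most significant bits of unsigned 8-bit
    quantized weights, yielding the nested `bits`-bit sub-model's codes."""
    assert 1 <= bits <= 8
    return (q8.astype(np.uint8) >> (8 - bits)).astype(np.uint8)

# The nesting property: the int2 codes are the top-2 bits of the int4 codes,
# which are themselves the top-4 bits of the int8 codes.
q8 = np.array([0, 37, 128, 255], dtype=np.uint8)
q4 = slice_msbs(q8, 4)   # [0, 2, 8, 15]
q2 = slice_msbs(q8, 2)   # [0, 0, 2, 3]
assert np.array_equal(q2, q4 >> 2)
```

Because the 2-bit codes are literally a prefix of the 8-bit codes, a loss on the int2 sub-model pressures gradient descent to place more information in the MSBs, which is the mechanism the response describes.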
**Datasets other than C4:** Popular quantization papers such as GPTQ, QuIP, and SpinQuant utilize C4. To be consistent with popular literature, we use C4 for our experiments. We hope that the rebuttal clarifies questions raised by the reviewer. We would be very happy to discuss any further questions about the work, and would really appreciate an appropriate increase in score if reviewers’ concerns are adequately addressed.
Summary: This paper introduces Matryoshka Quantization, a novel multi-scale quantization technique that enables single-model multi-bitwidth operation. The low-precision models extracted by MatQuant show significant improvement compared with standard quantization. ## update after rebuttal The authors address most of my concerns, and I keep my original rating to lean toward accepting the paper. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The method makes sense for the problem. Theoretical Claims: No theory result in this paper. Experimental Designs Or Analyses: Lack of inference runtime benchmarks, which are critical for deployment. Also, does MatQuant need more GPU memory and time during training? It would be better to provide comparison results. Supplementary Material: No code is available. Relation To Broader Scientific Literature: Unlike existing works that need to optimize for each precision, MatQuant can train and maintain a single quantized model but serve it with the precision demanded by the deployment. Essential References Not Discussed: Though QAT and OmniQuant are effective tools for quantization, there are works, such as the one below, that show better performance than those. Considering the results of MatQuant + novel QAT methods could strengthen the paper's contribution. Bondarenko Y, Del Chiaro R, Nagel M. Low-rank quantization-aware training for LLMs. Also, as mentioned in the previous section, it's unclear whether MatQuant introduces more GPU memory usage or training time. There are several works that focus on the efficiency of QAT. Chen M, Shao W, Xu P, Wang J, Gao P, Zhang K, Luo P. EfficientQAT: Efficient quantization-aware training for large language models. Other Strengths And Weaknesses: There is one more weakness: the main results are on LMs < 10B. I have a scalability concern about how MatQuant performs on larger models. Other Comments Or Suggestions: No extra comments. 
Questions For Authors: 1. What is the runtime overhead of MatQuant vs. standard QAT methods? 2. How does MatQuant perform on very large-scale models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review. Below are our responses to the comments/questions:

**Training memory/compute requirements:** For both QAT and OmniQuant, MatQuant can be up to 30% cheaper than training three separate baselines (one for 2 bits, one for 4 bits, and one for 8 bits). By recomputing some of the intermediate tensors generated during the forward pass when they are needed for gradient computation, MatQuant can run within the same memory requirements as a single baseline run. So, from a training-cost point of view, MatQuant uses up to 30% fewer GPU hours and is more effective than running separate trainings for different bit-widths.

**Training curves:** Our experiments indicate that MatQuant requires fewer samples/iterations to converge to the same training perplexity. Specifically:
1. 2-bit QAT on Gemma-2 9B: MatQuant and S.P. MatQuant achieve the same training perplexity with 20M tokens as the baseline with 100M tokens.
2. 2-bit OmniQuant on Gemma-2 9B: MatQuant's layerwise reconstruction error at 8M tokens and S.P. MatQuant's at 5.8M tokens match that of the baseline at 20M tokens.

**Inference cost:** A MatQuant-trained 2/4/8-bit model has the same inference cost as the baseline (QAT or OmniQuant) 2/4/8-bit models, i.e., inference budgets for homogeneous precisions are exactly the same as those of the baseline, since we do not change the base quantization algorithm. With Mix'n'Match, we can further trade off quality for latency, catering to several serving environments, since our models lie on the Pareto-optimal memory-vs-quality curve.

**Plugging MatQuant into the latest methods:** We thank the reviewer for pointing us to the latest QAT-style techniques. Most SOTA QAT methods build on top of the baseline QAT setup by adding auxiliary losses, distillation, or some form of pretraining. So we adopted a general baseline QAT setup in our experiments to demonstrate the core advantage of MatQuant. 
MatQuant is complementary to the above-mentioned methods of auxiliary losses etc. Furthermore, as we have shown, MatQuant works with a wide variety of quantization techniques (QAT, OmniQuant). So we believe MatQuant can then be combined with the latest QAT-style methods.

**Scaling MatQuant to larger models:** We performed preliminary experiments on Gemma-2 27B with QAT. As shown in the table below, we find that our observations from the 2B and 9B scales hold for the 27B model as well. That is, MatQuant's quantized models are quality-neutral (or better by up to 5% for 2-bit) for all the bit-widths that we trained for, as well as for the interpolated bit-widths 3 and 6. We will add these results to the next version of the manuscript.

Table: MatQuant + QAT on Gemma-2 27B performs on par with the baseline for int4 and int8 and significantly outperforms it for int2. Also, the int3 and int6 models obtained for free via interpolation perform comparably to the explicitly trained baselines.

| DType | Method | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | Winogrande | Avg. | log pplx. |
|-------|-------------|-------|-------|-------|-----------|-------|------------|---------|-----------|
| bf16 | | 60.49 | 78.96 | 83.18 | 82.24 | 84.60 | 75.69 | 77.53 | 2.199 |
| int8 | Baseline | 60.49 | 79.12 | 80.34 | 83.15 | 84.60 | 76.01 | 77.28 | 2.169 |
| | MatQuant | 60.41 | 79.88 | 78.41 | 82.54 | 85.04 | 75.77 | 77.01 | 2.141 |
| int4 | Sliced int8 | 59.30 | 77.90 | 83.94 | 81.69 | 83.68 | 73.88 | 76.73 | 2.232 |
| | Baseline | 59.39 | 79.38 | 83.79 | 82.45 | 84.44 | 75.30 | 77.46 | 2.250 |
| | MatQuant | 59.73 | 78.96 | 76.06 | 81.68 | 84.49 | 74.82 | 75.96 | 2.201 |
| int2 | Sliced int8 | 25.85 | 27.99 | 55.72 | 25.07 | 51.25 | 49.41 | 39.21 | 15.601 |
| | Baseline | 32.25 | 55.56 | 67.58 | 55.50 | 69.59 | 59.04 | 56.59 | 3.061 |
| | MatQuant | 44.8 | 71.25 | 70.31 | 67.61 | 76.33 | 64.64 | **65.82** | **2.674** |
| int6 | Sliced int8 | 60.15 | 79.12 | 81.31 | 83.13 | 84.60 | 76.64 | 77.49 | 2.111 |
| | Baseline | 60.41 | 79.84 | 80.18 | 82.77 | 84.60 | 76.48 | 77.38 | 2.170 |
| | MatQuant | 59.98 | 79.21 | 75.57 | 82.4 | 84.49 | 75.45 | 76.18 | 2.146 |
| int3 | Sliced int8 | 52.05 | 72.69 | 63.73 | 73.42 | 79.98 | 69.46 | 68.55 | 2.797 |
| | Baseline | 57.17 | 79.38 | 73.61 | 78.91 | 83.24 | 72.85 | 74.19 | 2.373 |
| | MatQuant | 54.95 | 76.18 | 63.18 | 77.36 | 82.37 | 72.30 | 71.06 | 2.494 |

We hope that the rebuttal clarified your questions about the paper and hopefully leads to an even more positive evaluation of the work. We are happy to further discuss if you have any questions about the paper.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response to the concerns. I have no other concerns and think this work brings a clear contribution to the related community. So I keep the positive rating.
Summary: The paper introduces Matryoshka Quantization (MatQuant), a novel multi-scale quantization technique designed to jointly learn representations at various bit-widths in a single training run and to improve low-bit-precision quantization for LLMs. The method is evaluated on Gemma-2 and Mistral models, and the results demonstrate better perplexity and downstream performance compared to the standard QAT baseline. The paper also demonstrates some variations of MatQuant, including Single Precision and quantizing both FFN and Attention. **Update after rebuttal**: My latest reply reflects my final update. Claims And Evidence: [Claims are supported] * MatQuant maintains a single quantized model but serves it with different precision demands. * Int2 precision models of MatQuant achieve a bigger improvement. * MatQuant can interpolate int3 and int6 representations without explicitly training on them. Methods And Evaluation Criteria: Yes. The models (Gemma, Mistral), evaluation tasks (C4, ARC, BoolQ, HellaSwag, ...), and baselines (QAT, OmniQuant) make sense. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. * The main results on int2 are validated. * There are experiments to ablate the hyperparameter $\lambda$. * The Single Precision experiment shows better performance. [Weakness] * The paper doesn't show a figure of performance vs. training samples (or training time) for MatQuant. MatQuant might require more (or maybe fewer) samples or time to converge. * Isn't Single Precision MatQuant the same as standard QAT, because it sets $\lambda$ for 4 and 8 bits to 0? Why does it still show a performance improvement? * Maybe it is just me, but I don't understand the config and datatype of Table 4. Why are there two 8s in "[8, 4, 2, 8] -> [4;2]", and what does 4;2 mean? Supplementary Material: No Supplementary Material. Relation To Broader Scientific Literature: 1. The paper is related to Matryoshka-style training. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review. Below are our responses to the comments/questions:

**Training memory/compute requirements:** For both QAT and OmniQuant, MatQuant can be up to 30% cheaper than training three separate baselines (one for 2 bits, one for 4 bits, and one for 8 bits). By recomputing some of the intermediate tensors generated during the forward pass when they are needed for gradient computation, MatQuant can run within the same memory requirements as a single baseline run. So, from a training-cost point of view, MatQuant uses up to 30% fewer GPU hours and is more effective than running separate trainings for different bit-widths.

**Training curves:** Our experiments indicate that MatQuant requires fewer samples/iterations to converge to the same training perplexity. Specifically:
1. 2-bit QAT on Gemma-2 9B: MatQuant and S.P. MatQuant achieve the same training perplexity with 20M tokens as the baseline with 100M tokens.
2. 2-bit OmniQuant on Gemma-2 9B: MatQuant's layerwise reconstruction error at 8M tokens and S.P. MatQuant's at 5.8M tokens match that of the baseline at 20M tokens.

We will add the training curves to the next version of the manuscript.

**Single Precision (S.P.) MatQuant vs. Baseline:**
1. S.P. MatQuant first quantizes the model to 8 bits and then slices the 2 MSBs, whereas the baseline directly quantizes the model to 2 bits.
2. The scaling factor and the zero point in S.P. MatQuant are derived from the 8-bit quantized model. Here, the model has 6 additional bits to better optimize the scaling factor and the zero point, since the quantized weight corresponds to only the first two bits, and the model can use these additional 6 bits to improve overall quality. We hypothesize this additional entropy (overparameterization) gives S.P. MatQuant an edge over the baseline. 
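The two pipelines contrasted in this response can be sketched in a few lines. This is our own toy illustration: the min-max affine quantizer and all names are assumptions for demonstration, not the paper's exact procedure; the point is only that the sliced 2-bit model inherits the scale and zero point fitted on the 8-bit grid.

```python
import numpy as np

def affine_quantize(w, bits):
    """Toy min-max affine quantization to unsigned `bits`-bit codes
    (an assumed quantizer, not necessarily the one used in the paper)."""
    qmax = 2 ** bits - 1
    zero = w.min()
    scale = (w.max() - zero) / qmax
    q = np.clip(np.round((w - zero) / scale), 0, qmax)
    return q.astype(np.uint8), scale, zero

def dequantize(q, scale, zero):
    return q.astype(np.float64) * scale + zero

w = np.linspace(-1.0, 1.0, 9)

# Baseline: quantize directly to 2 bits with a 2-bit scale/zero point.
q2, s2, z2 = affine_quantize(w, 2)
baseline = dequantize(q2, s2, z2)

# S.P.-style: quantize to 8 bits, keep the top-2 MSBs, and reuse the
# 8-bit scale/zero point (each 2-bit bucket spans 64 int8 levels).
q8, s8, z8 = affine_quantize(w, 8)
sliced = dequantize((q8 >> 6) << 6, s8, z8)
```

In the sliced pipeline, training the scale and zero point against the full 8-bit grid leaves extra freedom that the direct 2-bit quantizer does not have, which is the "additional entropy" argument above.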
**Table 4: [8, 4, 2, 8 -> 4;2]:** [8, 4, 2, 8 -> 4;2] means the standard cross entropy (or layer-wise reconstruction) is applied to the 8-bit, and the 4 and 2-bit sub models. In addition to this, the 4 and the 2-bit sub models are distilled from the 8-bit model’s logits (or layer output). The notation is present in the caption of Table 4, but we will further clean it up to make it easier to parse. We hope that the rebuttal clarified your questions about the paper and hopefully leads to an even more positive evaluation of the work. We are happy to further discuss if you have any questions about the paper. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my questions. I kept my original rating to lean toward accepting the paper.
Learning Representations of Instruments for Partial Identification of Treatment Effects
Accept (poster)
Summary: The paper presents a method for learning bounds on the CATE (conditional average treatment effect) in the setting of observed covariates X, unobserved confounding U, a binary treatment A, a scalar outcome Y, and an instrument Z which may be high-dimensional. The proposed method extends: Schweisthal, J., Frauen, D., van der Schaar, M., and Feuerriegel, S. Meta-learners for partially-identified treatment effects across multiple environments. In ICML, 2024. which deals with the case of discrete instruments, and provides the core result of Lemma 2. The contribution of the paper is a neural-net approach to mapping the high-dimensional continuous variable Z to a set of k partitions. Consequent bounds on the CATE are then found straightforwardly (Theorem 1). Claims And Evidence: This paper presents a reasonable approach to a setting with real-world applications, and offers some methodological improvements over the prior work by Schweisthal et al. that it extends. However, the contribution might be considered incremental. Methods And Evaluation Criteria: A number of strategies are employed to train the instrument-discretization network. These combine: 1) minimizing the bound width, 2) ensuring no bin has too small a mass, 3) enforcing that the distribution of Z given its bin is heterogeneous across bins. These approaches are all reasonable, so I have no questions or objections. The evaluation is conducted on three synthetic benchmarks, to ensure a known ground truth. Since no existing method addresses this setting, the main points of comparison are a naive k-means discretization on the one hand and the known ground-truth CATE on the other. Again, this choice is reasonable, and I have no questions about it. Theoretical Claims: Asymptotic normality of the conditional mean and propensity score estimates is shown by combining the CLT and the delta method (Theorem 2). I did not read the proofs in detail, but standard techniques are used. 
Experimental Designs Or Analyses: see above Supplementary Material: only briefly - see discussion of theory. Relation To Broader Scientific Literature: see above Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you a lot for your positive feedback and your comments! We are happy to see that our claims, methods, theory, and experiments are well received and do not raise any additional concerns. Here, we would kindly like to elaborate on why our paper is **not** an incremental contribution of the work of Schweisthal et al. but instead a standalone work with multiple novelties in a different setting. (i) **Applicability to arbitrary IVs and target bound width minimization**: We show that the existing bounds for discrete instruments from Lemma 1 (which Schweisthal et al. used and which are also just an adaptation of the Manski bounds) can be applied to other instrument types (continuous, high-dimensional) by using _arbitrary partitioning functions_, enabling to transfer and generalize the bounds to new unconsidered settings such as Mendelian randomization (MR) or indirect experiments with complex nudges. **This is a novel theoretical finding**. While this may seem straightforward at first sight, to the best of our knowledge, we are not aware of any prior work considering that connection, i.e., _even our naive baseline leveraging $k$-means clustering has not been considered before_. This is orthogonal to the work of Schweisthal et al. who derived model-agnostic learners in the **discrete** IV setting, while we focus on **complex and continuous** IVs. Further, this finding allows us to develop the **new objective of directly targeting bound width minimization** during representation learning to learn optimal partitions (Eq. (8)). Based on this, **we make two major theoretical contributions regarding optimized training**: (ii) **Stability by avoiding alternating learning**: A straightforward implementation minimizing the bounds following Eq. (8) would require alternating learning. 
The reason is that, after every update step of $\phi(z)$, the quantities $\mu_\phi^a(x, \ell)$ and $\pi_\phi(x, \ell)$ are not valid for the updated $\phi$ anymore and would need to be retrained to ensure valid bounds. This is computationally highly expensive and results in unstable training and convergence problems. However, our method circumvents these issues: by using our novel Theorem 1, we show that, while training $\phi(z)$, the quantities $\mu_\phi^a(x, \ell)$ and $\pi_\phi(x, \ell)$ can be directly calculated. (see also our subsection “Implications of Theorem 1” on page 4). For that, we can simply evaluate the nuisance functions, which only need to be trained once in the first stage. Therefore, we avoid any need for alternating learning, resulting in more efficient and stable training. Here, also note that Theorem 1 and its proof in Appendix A1 target effective _estimation_ of our target quantities, and **thus is orthogonal to the works about discrete instruments** which aim for the _derivation_ of bounds. (iii) **Improved finite sample robustness**: Even using our stable training procedure from above, optimizing for Eq. (4) only yields valid bounds _in expectation on the population level_. However, if the discrete representation learning results in highly imbalanced marginal probabilities during training (i.e., $\mathbb{P}(\phi(Z)=\ell)$ is small for some $\ell$), this can result in high estimation variance of the nuisance estimates and thus unreliable bound estimates. **We show this more formally in our Theorem 2 where we provide theoretical guarantees for the asymptotic behavior**. In contrast, we avoid these problems: by using our theoretically motivated custom loss from Eq. 19 with the respective regularization from Eq. 17, _we enforce lower estimation variance during training and thus more reliable bound estimates_. In sum, we only leverage the formulation of the closed-form bounds of Schweisthal et al. 
from the discrete IV setting (which is **not** their main contribution, but merely an extension of Manski bounds to the CATE) as a simple starting point for our method. Thus, _our major contributions are independent of the contributions of Schweisthal et al._, which are model-agnostic meta-learners in a different setting (discrete vs. complex continuous IVs). **Action**: We further improved the comparison to previous work in our paper to clarify the novelty of our method. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I think the explanations above add clarity to the paper, and would be helpful to incorporate in a final version. I will maintain the current score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer xPaP, Thank you for your response and your positive evaluation of our work! As promised, we will incorporate our explanations and the more direct comparison with prior work from above into our paper to improve the clarity, in particular with regard to the novelty of our method. Best regards, The Authors
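The naive $k$-means baseline mentioned in this thread can be sketched in a few lines. This is our own simplified illustration, not the paper's method: a deterministic farthest-point-initialized k-means over the instrument, followed by empirical per-partition treatment probabilities. The paper's nuisance functions additionally condition on covariates $X$, and the partition-level quantities would then be plugged into the closed-form bounds.

```python
import numpy as np

def kmeans_partition(Z, k, iters=50):
    """Naive k-means discretization of a (possibly high-dimensional)
    instrument Z into k partitions, with deterministic farthest-point init."""
    Z = np.asarray(Z, dtype=float)
    centers = [Z[0]]
    for _ in range(1, k):
        d = np.min([((Z - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(Z[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((Z[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

def partition_propensities(labels, A, k):
    """Empirical P(A=1 | phi(Z)=l) for each partition l (a simplification:
    the paper's propensity estimates condition on covariates X as well)."""
    return np.array([A[labels == l].mean() if np.any(labels == l) else np.nan
                     for l in range(k)])
```

On two well-separated instrument clusters with treatment perfectly aligned to cluster membership, the recovered per-partition propensities are 0 and 1, illustrating the heterogeneity across partitions that the learned representation is meant to maximize.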
Summary: This paper proposes a new method for partial identification of the conditional average treatment effect (CATE) when working with continuous or high-dimensional instruments. By mapping complex instruments into a learned discrete representation, the authors apply Manski-style bounds while mitigating the instability that arises in adversarial or iterative training. Their two-stage procedure first estimates nuisance functions, then partitions the instrument space in a way that reduces variance in finite-sample settings. The key theoretical result is that these learned partitions yield valid, reasonably tight bounds for the CATE under the usual IV assumptions, but with fewer structural constraints than point-identification approaches. The proposed method is empirically evaluated on synthetic datasets. ## update after rebuttal The authors' rebuttal addressed most of my concerns. However, I still view the work as somewhat incremental and, as such, I will maintain my borderline positive score (**3: Weak accept**). Claims And Evidence: The paper makes three primary contributions: (1) a discrete representation approach that yields valid bounds on the CATE for complex IVs; (2) a two-step estimation procedure claimed to be more stable than adversarial approaches; (3) theoretical guarantees that their learned bounds are valid under standard IV assumptions. Contributions (1) and (3) are well-supported by evidence: detailed proofs and empirical results confirm that the proposed method produces narrower bounds compared to naive discretizations, and simulations demonstrate good alignment with the true CATE or oracle bounds. However, contribution (2) is not substantiated, as the authors neither provide direct comparative experiments nor theoretical justification for this claim. Thus, while the paper effectively motivates the stability advantage conceptually, it lacks sufficient evidence to show that this holds in theory/applications. 
Methods And Evaluation Criteria: The authors propose a two-stage approach: first, they train neural networks for nuisance functions (propensity and outcome regressions), then they learn discrete partitions of the instrument space to minimize bound width while balancing finite-sample variance. They evaluate performance primarily by coverage and average bound width, with additional metrics like MSD to gauge robustness under different partition sizes. These benchmarks are sensible for partial-identification methods in synthetic and semi-synthetic scenarios, though it would be valuable to see how the approach performs on a real-world dataset—even if we would only rely on qualitative assessments for validation. Theoretical Claims: The theoretical claims are supported by detailed proofs, for which I commend the authors. I carefully reviewed Theorem 1 and Lemma 1 and they appeared correct to me. I skimmed the proof of Theorem 2, and it appeared correct as well, although I did not verify every detail thoroughly. Experimental Designs Or Analyses: The authors evaluate their method on simulated data, including scenarios with high-dimensional instruments (e.g., SNP-like variables) and known ground-truth oracles. I checked the design’s overall soundness—splits, metrics, and comparisons to baseline methods—and found nothing amiss. The coverage metrics and bound widths are computed in a consistent way for partial identification, and the analyses appear robust with no evident methodological flaws. Supplementary Material: I reviewed Appendices A-D. Relation To Broader Scientific Literature: The approach addresses an existing gap in the literature where prior methods either required strong structural assumptions for point identification with IVs or relied on adversarial techniques to build valid bounds. The core idea—partitioning the instrument space and applying Manski-type bounds to the CATE function—is straightforward and heavily builds upon existing literature. 
Although not highly novel, the theoretical results and empirical validations are convincing, making this work a strong candidate for inclusion in the conference. Essential References Not Discussed: It could be beneficial to reference additional recent works on partial identification beyond Padh et al. (2023) and Levis et al. (2023), if they exist—for instance, studies that attempt to bound treatment effects with minimal assumptions in observational data, or new theoretical advances that might provide alternative bounding approaches. But otherwise, I think the related-literature section is comprehensive and includes the relevant references. Other Strengths And Weaknesses: [Summary of the thoughts from the above sections] Strengths * The paper presents a straightforward method for applying Manski bounds to complex IV settings, supported by clear theoretical justifications. * Empirical evaluations demonstrate effective coverage and tighter bounds in challenging, high-dimensional scenarios. * The proposed method is modular, allowing the integration of different models for the first-stage nuisance estimation. Weaknesses * Although the authors claim improved stability over adversarial methods, no direct comparative experiments or formal proofs substantiate this claim. * The benefits over naive clustering methods can be modest in simpler scenarios, with improvements becoming more pronounced in complex cases. * While the benchmarks presented are appropriate for partial-identification methods in synthetic and semi-synthetic contexts, it would be valuable to evaluate the method's performance on real-world datasets, even if such assessments were qualitative. Other Comments Or Suggestions: * I believe there is a typo in the definition of $\mathcal{L}_b$ in Eq. (16), since the loss seems to decrease when the upper bound is lower than the lower bound, which is not valid given that the objective is to minimize this loss. 
* Additional intuition for the auxiliary loss $\mathcal{L}_{aux}$ would be helpful, especially why cross-entropy is the best penalty to induce diverse partitioning. Questions For Authors: [Summary of the thoughts from the above sections] * Loss Mechanics: How does your loss function handle scenarios where the upper bound might dip below the lower bound, given that $\mathcal{L}_b$ is minimized directly? * Stability Claim: You mention more stability than adversarial methods—could you expand on whether you attempted a direct comparison or if you can point to any theoretical rationale beyond avoiding min-max loops? * Interpretation of Partitions: In a domain like genetics, do you see real-world interpretability in these learned clusters (e.g., discrete genotypes), or is it purely an internal mechanism? * Auto-Tuning: Choosing the number of partitions $k$ seems crucial for balancing bound tightness and sample size per partition. Have you considered automatic selection methods (e.g., a validation-based approach)? Overall, I consider this a strong partial-identification paper, and I support acceptance—pending clarifications on stability and the auxiliary loss. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive and very actionable review! We took your comments to heart and improved our paper as follows. # Response to Claims and Evidence - **Stability of our method compared to adversarial approaches**: This is a very interesting point! As _widely investigated_ in the literature, adversarial (typically min/max) optimization suffers from _instability due to joint optimization of competing objectives_, which often leads to **slow convergence, gradient issues, and sensitivity to hyperparameters**. In contrast, our two-step approach decouples nuisance function estimation and partitioning to provide a more controlled, stable learning process. Even in simple **empirical** cases, when we estimated the bounds in an adversarial manner by retraining $\mu_\phi$ and $\pi_\phi$ after updating $\phi$ (instead of using our Eq. 3 and 4), we could _not even achieve convergence_. This highlights the superior stability of our method. **Action**: For the final version, we will report results with more exhaustive tuning of the adversarial baseline to provide a fair comparison. # Response to Methods and Evaluation Criteria - **Real-world data**: For benchmarking, we use synthetic DGPs that are **closely tailored to the real-world settings in Mendelian randomization (MR)**, such as polygenic risk scores and SNPs, and provide results for different levels of complexity. **We added new experiments with real-world data from a chemotherapy study containing genetic variants**. This allows us to estimate the effect of exposure (smoking) on cancer progression (outcome). **We provide the data description, results, and a short interpretation here:** https://anonymous.4open.science/r/IVRep4PartId-714C/rebuttal/rebuttal_experiments.pdf. In sum, we observe that our method provides expected results while showing similar Width and **clearly more robust estimation** compared to the naive baseline, which confirms the effectiveness of our method. 
**Action:** We will include the real-world experiments in our paper.

# Response to Weaknesses

- Regarding the points “improved stability against adversarial methods” and “real-world experiments”, we kindly refer to our response from above.
- **Limited benefits in simpler scenarios**: We agree that the benefits in terms of improved tightness are limited in the simpler setting. However, even though our method is designed for more complex settings, we still achieve similar performance in tightness (Width), while also providing more robust estimation (MSD) here. This shows that our method is suitable in both simpler and more complex settings.

# Comments and suggestions

- **Loss formulation for $L_b$ in Eq. (16)**: While it might seem counterintuitive, our formulation in Eq. (16) is correct. By our **Theorem 1**, we ensure that in population, the upper bound is always _greater than the lower bound_, and thus, the loss **cannot become negative**. In finite samples, large estimation errors in our bound estimates could change this in theory. However, by using our **regularization loss** in Eq. (17), we directly avoid such unreliable estimations. Also, when running our experiments, we _never_ observed a negative loss, **underscoring its applicability**.
- **Intuition for $L_{aux}$**: Here, as a motivation for cross-entropy, we followed the heuristic that if the activations before the discretization can be separated more easily, they should be more diverse. We also did some runs with the empirical Wasserstein distance loss to directly enforce more distributional distance in Z given the partitions. However, the cross-entropy loss performed more stably. Further, the auxiliary loss only reduces the _estimation variance_, but does not affect the average _width or coverage_ (**see also our experiments in Appendix K**), thus turning out to be **useful but not crucial** for our method.
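For intuition, the interplay of these loss terms can be sketched as follows. This is a minimal illustration only: the function names, the exact form of each penalty, and the weightings are simplifications chosen for exposition, not our actual Eqs. 14-17.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sketch_loss(b_upper, b_lower, logits, lam_reg=1.0, lam_bal=1.0, lam_aux=0.1):
    # Bound-width term: by Theorem 1, the upper bound exceeds the lower
    # bound in population, so this term cannot become negative there.
    width = np.mean(b_upper - b_lower)

    # Regularization against unreliable finite-sample estimates: penalize
    # (rare) crossings where the estimated lower bound exceeds the upper.
    crossing = np.mean(np.maximum(b_lower - b_upper, 0.0))

    # Balanced-partition penalty: KL divergence between the average
    # partition probabilities and the uniform distribution over k partitions.
    p = softmax(logits).mean(axis=0)
    balance = np.sum(p * np.log(p * len(p)))

    # Auxiliary separability term: a cross-entropy-style penalty that is
    # small when pre-discretization activations are easy to separate.
    conf = softmax(logits).max(axis=-1)
    aux = -np.mean(np.log(conf))

    return width + lam_reg * crossing + lam_bal * balance + lam_aux * aux
```

Here, the width term mirrors the bound-width minimization, the crossing penalty guards against finite-sample bound crossings, and the balance and separability terms stand in for the regularization and auxiliary losses, respectively.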
# Response to questions

- For loss mechanics and stability, we kindly refer to our answers above.
- **Interpretation of Partitions**: In principle, the clustering is only an internal mechanism to optimize our learning objective. However, under certain assumptions (see also **Appendix F.1**), maximizing for similar propensity scores within clusters and high heterogeneity between the clusters will lead to the closest bounds. Thus, one could implicitly identify genetic variations with similar influence on the exposure within one cluster.
- **Auto-Tuning**: Indeed, a major benefit of our method compared to the baseline is its _robustness regarding $k$_, since hyperparameter selection is hard in applied causal inference due to a _lack of access to the ground truth CATE_. However, we nevertheless provide practical guidelines in **Appendix F.2** for the optimal selection of $k$. Therein, we propose two approaches: (1) an expert-informed approach and (2) a data-driven approach, which can be used seamlessly in practice.

---

Rebuttal Comment 1.1: Comment: Dear authors, Your rebuttal has addressed most of my concerns. However, I still view the work as somewhat incremental and, as such, I will maintain my score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer m2Ds, Thank you for your response and your positive evaluation, we are happy that we could address most of your concerns! For a more detailed comparison of our contribution to prior work, and especially, for a detailed explanation of why our novelty is orthogonal to the work of Schweisthal et al. (2024), we kindly refer to our **rebuttal to reviewer xPaP**. Best regards, The Authors

Schweisthal J, Frauen D, Van Der Schaar M, Feuerriegel S. “Meta-learners for partially-identified treatment effects across multiple environments.” ICML 2024.
Summary: The authors tackle the problem of partial identification of treatment effects with high-dimensional instrument variables that have a complex relationship with the treatment. They do this by learning a discrete representation of the instrument variable and deriving a learning objective that does not require retraining the nuisance functions at each iteration step. A loss is then introduced that includes extra regulariser terms that ensure good performance in the finite-sample setting. They compare their method against a "naive" baseline that discretises the instrument (via kmeans) and applies existing bounds for discrete instruments. The performance of their estimator improves over this baseline.

## update after rebuttal

I will keep my positive score.

Claims And Evidence: The claims of the paper are:

- Allow for partial identification with continuous, high-dimensional instrument variables that may have a complex relationship with the treatment.
- Introduce an algorithm that avoids adversarial learning.
- Demonstrate effectiveness both theoretically and empirically.

The claims are validated through the design of their algorithm as well as the good performance on simple and more complex tasks. It's a bit unclear how the theory shows that the algorithm is "effective"; this contribution could be worded better.

Methods And Evaluation Criteria: The paper compares on benchmarks with a simple propensity and a more complex propensity to show that their method can gain performance in the more complex setting without losing performance in the simpler one. They introduce multiple metrics that tackle different properties of the bounds. However, the MSE* and Width* metrics are only shown for Dataset 3.

Theoretical Claims: The theoretical claims relate to the closed-form computation of the nuisance functions, and the finite-sample properties of the proposed algorithm. They seem correct.
Experimental Designs Or Analyses: The experimental design makes sense for the problem at hand, increasing the complexity to show that the proposed changes are necessary.

Supplementary Material: The algorithm details and the data generation, along with some of the proofs.

Relation To Broader Scientific Literature: There is ample discussion of the difference in the settings compared to previous work. The fact that adversarial learning is not required is also a contribution.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses: Strengths

- The paper tackles a well-formed problem that is realistic and shows that their solution works.

Weaknesses:

- Some of the presentation could be improved.

Other Comments Or Suggestions:

- Figure 4: The writing in this figure is too small. The figure needs to be a lot larger. I would suggest even moving it to the appendix.
- Figure 3: This is not clear: is the training of the representation and updating of nuisance parameters not repeated multiple times, as is shown in Appendix H? The figure makes it seem like the representation is trained, then the nuisance functions updated, which then results in the bounds.
- Motivation behind the metrics could be made clearer. What is a higher or lower value of a metric saying about the algorithm/performance under a decision?

Questions For Authors: There does not seem to be much of an improvement on MSE* in dataset 3, which is the purported use case. Although you claim that the MSE* performance is better over the naive baseline, it seems to me that they are the same (within error). Am I missing something here? Why would the performance be so similar? Again, the width is the same as well within error, and the coverage of the naive baseline is not much lower. How should I interpret these results? Why do we care about the MSD score? In practice I will choose a single k and I want the performance for that k.
If there is a procedure for choosing the best k, I would want to see the performance for that best k. For example, one method could be ok for all k, whereas another could be great, but only for a single k. A clearer discussion of the metrics and the results would really help the narrative.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you a lot for your positive review and your helpful feedback! We took your comments to heart and improved our paper as follows.

# Response to Claims and Evidence

**Wording of contribution**: Thank you for this remark! We use the term “effective” to reference the good performance of our method when optimizing for our different objectives: (1) learning tight bounds with low average width, (2) providing valid bounds with full coverage, and (3) demonstrating robust estimation without requiring alternating learning or retraining of nuisance functions while providing stable performance with low MSD between different values of $k$. However, we agree that this terminology is a bit loose. **Action:** We will spell out more clearly which property of our method we refer to, and we will thereby spell out our contributions more clearly to avoid confusion.

# Response to Methods and Evaluation Criteria

**MSE\* and Width\***: We agree that MSE* and Width* would also be interesting metrics for Datasets 1 and 2. However, to calculate these metrics, we need to approximate the oracle bounds. Since we use continuous IVs for Datasets 1 and 2 to show diverse settings, ground-truth bounds cannot be estimated by closed-form solutions. For the high-dimensional Dataset 3 (and Dataset 4 in the additional experiments in **Appendix E**), we model the IVs to be dependent on a lower-dimensional discrete space such that we can approximate the oracle bounds. However, we agree that our paper would benefit from explaining this more thoroughly. **Action:** We will include an additional paragraph where we discuss our considered metrics in more detail.

# Response to Weaknesses and Other Comments Or Suggestions

- **Improved Presentation:** Thank you for your careful checks! **Action:** We will use the additional page of the camera-ready version to improve our presentation significantly, including Figures 3 and 4.
- **Figure 3**: As you correctly pointed out and as presented in Appendix H, our current Figure 3 represents one training step. **Action:** To improve clarity, we will add our loss function to the final figure and distinguish between (i) the training loop to update the parameters, and (ii) the final forward pass to estimate the final bounds.
- **Motivation behind metrics**: As mentioned above, we will provide an additional discussion behind our choice of metrics. Here, we briefly summarize the most important aspects: The **Coverage** (# CATE within estimated bounds / n) and **Coverage\*** (# ground-truth bounds within estimated bounds / n) should ideally be 1, i.e., as high as possible, to allow for reliable decision-making. The **Width** (sum of upper minus lower bound / n) and **Width\*** (Width for runs with Coverage* of at least 95%) express certainty about the CATE, and should, in principle, be low. However, note that through estimation variance, low values of Width can become _falsely overconfident_ (tighter than the oracle bounds), which is why we introduce Width* for Datasets 3 and 4. The **MSD** shows the variation for different values of $k$. Low values indicate robust behavior. This metric is especially important in real-world applications where we do not have access to the oracle CATE or oracle bounds, and thus cannot calculate the Coverage, Coverage*, or Width*. This makes it hard to select a $k$ which guarantees validity but also a certain tightness of the bounds. In **Appendix F**, we provide an extended discussion about the role of $k$ and practical guidelines for selection.

# Response to Questions

- **Results Dataset 3**: On average, **we show improvements of about 9% for MSE\* and 2% for Width\***. For the latter, even though we filter for runs with coverage above 95%, there can still be up to 5% of observations with overconfident bound estimations with low width for the naive baseline, biasing the gap to appear smaller.
Further, the standard deviations are inflated by summarizing the runs over different $k$. Within $k$, these effects appear stronger, as shown in Figure 5. Here, we can also see that for low values of $k$, coverage is optimal for both methods, while coverage only starts to decline rapidly for the naive baseline at the higher, more rarely evaluated values of $k$.
- **MSD score and role of $k$**: For the MSD score and the reporting of different $k$, we kindly refer to our answer from above. In **Appendix F**, we provide an extended discussion about $k$ and provide practical guidelines for selection that are suitable depending on the specific problem setting: (1) an expert-informed approach and (2) a data-driven approach. In our experiments, we provide the results for the respective values of $k$ in Table 3 and Figure 5, such that we can check the performance under different selection strategies.

**Action: To address all points, we will add an extended discussion of the metrics and results to our paper.**
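For concreteness, the metric definitions discussed above can be sketched in a few lines. Variable names are illustrative; the `msd` function reflects one simplified reading of the MSD as the variation of the average width across runs with different $k$.

```python
import numpy as np

def coverage(cate, lower, upper):
    # Fraction of units whose (oracle) CATE lies within the estimated bounds
    return np.mean((lower <= cate) & (cate <= upper))

def avg_width(lower, upper):
    # Average bound width: sum of (upper - lower) over units, divided by n
    return np.mean(upper - lower)

def msd(avg_widths_over_k):
    # Variation of the average width across runs with different k;
    # low values indicate robust behavior w.r.t. the choice of k
    w = np.asarray(avg_widths_over_k)
    return np.mean((w - w.mean()) ** 2)
```

Coverage* is analogous, checking instead that the ground-truth bounds fall within the estimated ones.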
Summary: The paper proposes a method for partial identification, that is, bounding, of treatment effects in the instrumental variable (IV) setting. Specifically, the paper studies the scenario where the instruments are continuous and potentially high-dimensional. It proposes an approach for partial identification through a mapping of instruments to a discrete representation space, and learning the discrete representation by minimizing the width of the bounds. Theoretical results are provided claiming the validity of the bounds. Experiments are performed to show the effectiveness of the proposed method against a naive baseline.

## update after rebuttal

Thanks for the rebuttal clarifying a few things. Overall, the novelty of the paper is acceptable although somewhat incremental. The experimental results show that the proposed method is less sensitive to $k$ but are not convincing in showing that the proposed method is actually better than the naive baseline when a best $k$ is selected. I've updated my score to Weak Reject.

Claims And Evidence: Overall, it feels that the paper exaggerates the novelty of the approach. It looks to me that the proposed method is a somewhat direct extension of the existing closed-form bounds with discrete instruments. Rather than simply discretizing continuous instruments and using the existing bounds (the naive baseline), the paper proposes to learn a discrete representation of the continuous instruments that optimizes bound width. The existing bounding results (Lemma 2) should be presented in the main paper. In addition, I'm not sure about the claims regarding the "tightness" and "validity" of the bounds obtained by the proposed method throughout the paper. I think these claims are rather misleading.

- I don't understand the objective given in (1). To my understanding, we want valid and "tight" bounds $b^-(x)$, $b^+(x)$ for each given $x$. Why would one want to minimize the "expected" bound width?
In what sense are you claiming the "tightness" of the bounds given by (1)? Why do you call this "informative"? To my understanding, "informative" has a different meaning.
- Overall, I'm not sure about claiming tightness and validity under discretization and finite samples.

Methods And Evaluation Criteria: Overall, the proposed method and evaluation criteria, including test datasets, are reasonable and make sense for the problem at hand.

Theoretical Claims: I did not check the details of the proofs for theoretical claims. They look reasonable. Theorem 1 is a somewhat direct extension of existing results. Lemma 1 is standard.

Experimental Designs Or Analyses: The overall experimental designs and analyses are reasonable. However, I have the following concerns:

- Can existing methods in IV settings with continuous treatments be tailored for binary treatment and therefore used as baselines (even if they are not directly tailored for binary treatment)?
- In Tables 1 and 2, some of the improvements in Width and MSE are small. Are the improvements actually statistically significant?
- In Table 3, it looks like the Naive baseline improves significantly with increasing $k$ values. What are the results for $k\geq 3$?
- I'm not sure it makes sense to show the results that are averaged over different $k$ values (what $k$ values are the results averaged over?). It makes more sense to me to select the best $k$ value. Have you tried choosing the best $k$ instead?

Supplementary Material: I did not carefully review the supplementary material; I just skimmed through some parts.

Relation To Broader Scientific Literature: To my understanding, the work directly extends the existing closed-form bounds with discrete instruments given in Lemma 2.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: none

Other Comments Or Suggestions:

- I believe assuming the causal structure in Fig. 1 implies the identifiability Assumptions 2 and 3.
- $s_1$, $s_2$ in (7) and (8) are not defined.
- I couldn't follow what the paragraph "Implications of Theorem 1" on page 4 is talking about.
- Eq. (24), (25) in Line 237 should be (3) and (4).

Questions For Authors:

1. I don't understand the objective given in (1). To my understanding, we want valid and "tight" bounds $b^-(x)$, $b^+(x)$ for each given $x$. Why would one want to minimize the "expected" bound width? In what sense are you claiming the "tightness" of the bounds given by (1)? Why do you call this "informative"? To my understanding, "informative" has a different meaning.
2. Can existing methods in IV settings with continuous treatments be tailored for binary treatment and therefore used as baselines (even if they are not directly tailored for binary treatment)?
3. I'm not sure it makes sense to show the results that are averaged over different $k$ values. It makes more sense to me to select the best $k$ value. Have you tried choosing the best $k$ instead?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review and the opportunity to clarify multiple points of our paper! We took your comments to heart and improved our paper as follows.

# Response to concerns around claims and evidence, theoretical claims, and relation to broader scientific literature

- **Novelty of our paper and implications of Theorem 1**: We would like to emphasize that, while the extension of existing closed-form solutions to complex instruments is novel (i.e., even the naive baseline has not been considered yet), this is only _a small part_ of our contribution. Importantly, instead of **only deriving** valid bounds, we focus on a method for **improved estimation**. Based on this, we introduce the new objective of directly minimizing bound width during representation learning (Eq. 8). Importantly, in Theorem 1, we show that this objective can be optimized _without alternating learning_: we can directly evaluate the necessary quantities from pre-trained nuisance functions by leveraging Eq. 3 and Eq. 4 (otherwise, we would need to retrain $\mu_\phi$ and $\pi_\phi$ after every update step of $\phi$ in an alternating way). Thus, with our formulation, **we avoid instability and high computational cost** (see also “Implications of Theorem 1” in our paper). Lastly, to ensure robustness in finite samples, we regularize for balanced partition probabilities (Eq. 14), _reducing estimation variance as shown in our Theorem 2._
- **Clarifications on our objective in Eq. (1), and claims about “tightness” and “validity”**: The validity of our bounds in population follows straightforwardly from Theorem 1. In finite samples, as for every method, full validity (i.e., guaranteed 100% coverage) can never be proven. However, by using our regularization loss, we mitigate the risk of overconfident estimation, which is unlike existing methods that do not account for estimation uncertainty.
In addition, we refer to tightness by minimizing the expected bound width as much as possible given some representation $\phi$. Importantly, this usually does not lead to **sharp bounds** (i.e., the “tightest possible ones” for all $x$, which would only minimize the population tightness in Eq. 12), since this would not allow us to reduce the estimation variance. This is also our motivation for our objective in Eq. (1): _We aim to reduce the average bound width to get robust estimates of “informative” bounds for the largest part of the population_. Here, “informative” refers to “being useful for the underlying decision-making problem”. E.g., for classical CATE estimation, informative means that the lower bound is above or the upper bound is below some decision threshold (often 0). **Action:** While we are consistent in our terminology with previous work on partial identification, we will add a discussion about the different terms to improve the clarity of our work.

# Response to experimental designs and analyses

- **Adaptation of existing methods**: As these methods often include additional assumptions over the continuous interventional distribution in the form of additional constraints, the adaptation is not straightforward. However, in **Appendix E**, we provide results for additional adapted baselines, such as extensions of methods that were designed for point identification.
- **Significance of results**: As standard in machine learning, we avoid making inferences about statistical significance since this would require careful adjustment for dependencies and multiple testing, but instead report mean and standard deviation over multiple seeds.
- **Role of $k$ in Table 3**: Dataset 1 is defined in a simple way such that a naive partitioning with $k=3$ can already closely approximate optimal bounds. Interestingly, with $k=4$ the partitioning gets worse, yielding only a coverage of 0.49 for the naive baseline, while still yielding 1.00 for our method.
_This shows the robustness of our method_, and may even indicate that the narrow bound width of 0.83 for the baseline for $k=3$ may be due to _falsely overconfident_ bound estimates.
- **Evaluation and selection of $k$**: We average over the $k$-values considered in our sensitivity analysis as reported in Table 3 and Figure 5. The intuition here is that, in real-world applications, one does not have access to the ground-truth CATE or oracle bounds, and thus coverage cannot be checked to select the best $k$. When only considering width as the criterion, this would lead to always choosing the tightest bounds, which are likely to not yield full coverage, especially for the baseline. In **Appendix F**, we provide an extended discussion over the role of $k$ and give guidelines on how to select $k$ in practice.

# Response to other comments or suggestions

- Thanks for your careful remarks! We will adjust our paper accordingly. Regarding the “Implications of Theorem 1”, we kindly refer to our answers above.

# Response to Questions

- Thank you for your questions! We addressed all of them in our answers above.

---

Rebuttal Comment 1.1: Comment: Thanks for your rebuttal.

- The novelty of the paper is incremental but acceptable.
- To my understanding, the goal of partial identification is to find valid and sharp/tight bounds $b^-(x)$, $b^+(x)$ for each given $x$. So, the ideal objective should not be Eq. (1), which is already an approximation or proxy of the partial identification problem. What space are you optimizing in (1)? On the other hand, it makes sense to learn a discrete representation that minimizes the expected bound width. However, claiming learning "tight" bounds as a contribution throughout the paper is misleading when "tightness" here actually means ``minimizing the expected bound width as much as possible given some representation $\phi$'', a measure specific to the problem the paper defined.
- Comparison with the naive baseline and the role of $k$: The experimental results indeed show that the proposed method is less sensitive to $k$ than the naive baseline, that is, more robust. But, at the end of the day, one uses the algorithm to output a bound based on some $k$. It doesn't make sense to compare the performances of the two algorithms based on the average over different $k$ values. Other than being less sensitive to $k$, it's hard to draw the conclusion that the proposed method is actually better than the naive baseline based on the experimental results shown. In the complex Dataset 3, Width* and MSE* are actually comparable.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your response and for giving us the chance to elaborate on the remaining points of concern! We are confident we can address all of them with minor changes in our paper.

- We are happy that we were able to address some of your concerns regarding our novelty. For a more detailed comparison of our contribution to prior work, we kindly refer to our **rebuttal to reviewer xPaP**.
- We understand that our objective in Eq. (1) differs from the typical formulation in some other partial identification papers, but it targets the same objective. Thus, we would like to show the connection to the more traditional formulation. First, we can formulate the space we are optimizing over as the set of full distributions (over the joint data, including the unobserved $U$) that are compatible with the observed data distribution:

$$ \mathcal{M} = \{ \mathbb{P}^*(z, a, x, u, y) \mid \mathbb{P}(z, a, x, y) = \int \mathbb{P}^*(z, a, x, u, y) du \} $$

**Option A: classical formulation**: Here, recent literature often formulates the goal of partial identification such that

$$ b^+(x) = \sup_{\mathbb{P}^\ast \in \mathcal{M}} \tau_{\mathbb{P}^\ast}(x) $$

(and equivalently for $b^-(x)$) gives optimal _sharp_ bounds.
In our paper, we make use of another formulation. **Option B (ours)**: To get _valid bounds_, we define the set

$$ \mathcal{V}_{+} = \{ b : \mathcal{X} \to \mathbb{R} \mid \tau_{\mathbb{P}^\ast}(x) \leq b(x) \text{ for all } \mathbb{P}^\ast \in \mathcal{M},\ x \in \mathcal{X} \} $$

(and equivalently for $\mathcal{V}_{-}$). Then, we can minimize the bound width via

$$ b_*^-,\ b_*^+ \in \arg\min_{b^- \in \mathcal{V}_{-},\ b^+ \in \mathcal{V}_{+}} \mathbb{E}_X [b^+(X) - b^-(X)] $$

This is **equivalent** to Option A and also gives _sharp_ bounds (i.e., if the width for every $x$ is minimal, then the expected width will be minimal, too). Only later, when we restrict the bounds to functions that can be expressed in terms of the discretization $\phi$, do we not necessarily obtain _sharp_ bounds, but instead focus on robust estimation. We use Option B because we have a good notion of what valid bounds are from Theorem 1. Here, we can directly incorporate the objective above into our loss function as the _bound width minimization_ term. Further, while our usage of the term “tight” is motivated by the formulation above (minimizing the expected width), we fully agree that this contains ambiguity and can lead to confusion about the distinction from the term “sharp”. **Action:** We will add both formulations and the motivation for using Option B from above to Sec. 3 to improve the clarity of our paper. Further, we will update our usage of the term “tight” and instead directly refer to “reducing the expected bound width” to resolve ambiguities.

- In our experiments, we report the results averaged over multiple $k$ because, as usual in causal inference, hyperparameter tuning is more challenging without access to the ground truth CATE, and thus there are different strategies to select $k$ (see also **Appendix F.2**).
Thus, taking the average over the reasonably selected $k$ can be seen as reporting the summarized performance over different strategies that would have resulted in selecting the different values of $k$ (e.g., by the expert-informed approach). However, we agree that this presentation is not optimal, and even leads to _less clear_ performance gains than when considering $k$ separately (see Figure 5; for more details, we kindly refer to our **rebuttal to reviewer XTJ8, “Response to Questions”**. Further, in our updated Figure 5 (https://anonymous.4open.science/r/IVRep4PartId-714C/rebuttal/sensitivity_k_combined.pdf), we now also report the MSE* over the different $k$ runs, showing clearer improvements than the averaged results). Thus, **we now report the results for the dataset using our data-driven approach** for selecting $k$, which (without access to the ground truth CATE) results in a selection of $k=15$.

| **Metric** | Naïve | **Ours** | **Rel. Improve** |
|---|---|---|---|
| Coverage* [↑] | 0.600 ± 0.547 | **0.937 ± 0.057** | **56.17%** |
| Width* [↓] | 1.818 ± 0.076 | **1.788 ± 0.012** | **1.65%** |
| MSE* [↓] | 0.094 ± 0.030 | **0.085 ± 0.009** | **9.57%** |

We observe that, while we have some smaller gains in Width* and MSE*, the Coverage* of the naive baseline is low while ours remains high, **showing the key benefit of our method for reliable decision-making**.

**Action:** We will update our results as shown above and move the current tables averaging over $k$ to the Appendix to improve the presentation of our paper and better highlight the strength of our method.
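As a side note on the naive baseline in the comparison above: conceptually, it discretizes $Z$ into $k$ clusters (e.g., via k-means) and then applies closed-form bounds for discrete instruments. Below is a minimal illustration of this idea, using Manski-style natural bounds intersected across clusters as a simplified stand-in for the closed form of our Lemma 2 (binary treatment $A$, outcome $Y$ bounded in $[y_{min}, y_{max}]$):

```python
import numpy as np

def naive_iv_bounds(cluster, a, y, y_min=0.0, y_max=1.0):
    # Manski-style natural bounds on the ATE computed per instrument
    # cluster, then intersected across clusters: under the exclusion
    # restriction, every cluster's bounds must contain the same ATE.
    lowers, uppers = [], []
    for c in np.unique(cluster):
        m = cluster == c
        p1 = np.mean(a[m])                 # P(A=1 | cluster c)
        ey1 = np.mean(y[m] * a[m])         # E[Y * 1{A=1} | cluster c]
        ey0 = np.mean(y[m] * (1 - a[m]))   # E[Y * 1{A=0} | cluster c]
        l1, u1 = ey1 + y_min * (1 - p1), ey1 + y_max * (1 - p1)
        l0, u0 = ey0 + y_min * p1, ey0 + y_max * p1
        lowers.append(l1 - u0)             # lower bound on E[Y(1)] - E[Y(0)]
        uppers.append(u1 - l0)             # upper bound on E[Y(1)] - E[Y(0)]
    return max(lowers), min(uppers)
```

Our method replaces the unsupervised clustering step with the learned representation $\phi$, which is trained to shrink the resulting bound width.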
Summary: To partially identify and estimate the bounds of the conditional average treatment effect (CATE) with potentially high-dimensional instruments, the authors propose a two-step approach. This method first learns discrete representations of the complex instrumental variables $Z$, then derives closed-form bounds based on these representations, and finally maps the bounds back to the original problem. The authors present two theorems for computing these bounds: one from a population perspective and the other from a finite-sample perspective. Additionally, they introduce a neural approach for learning CATE bounds when dealing with complex instruments. Experimental results demonstrate that the proposed method achieves superior validity and robustness.

Claims And Evidence: Yes. I did not find any problematic claims.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-aligned with the problem at hand. The method effectively captures the key characteristics of the CATE with complex $Z$. The evaluation metrics such as coverage, width, and MSD effectively illustrate the performance of the proposed methods. However, the benchmark dataset used in the study is a simulated dataset, though it would be better to also perform experiments on a real-world dataset.

Theoretical Claims: I checked Theorem 2 and found no issues.

Experimental Designs Or Analyses: The authors simulated three datasets to conduct MR simulations. The evaluation metrics include coverage, width, and MSD for Datasets 1 and 2, while for Dataset 3, they introduced three analogous metrics. Additionally, the authors provided implementation details and conducted a sensitivity analysis on different $k$. However, I believe the comparison is limited to only a naive baseline, lacking comparisons with further alternative methods. Furthermore, the experimental settings could be adjusted to settings where existing methods are applicable, facilitating a fairer comparison.
Supplementary Material: There is no supplementary material.

Relation To Broader Scientific Literature: This paper introduces a two-stage approach for partial identification of treatment effects using high-dimensional instruments, leveraging discrete representations to obtain closed-form bounds. Previous works on machine learning for CATE estimation (e.g., Singh et al., 2019) focused on settings where the treatment effect can be identified and used ML methods with favorable properties (e.g., Kennedy et al., 2019). In partial identification with binary treatment, Robins and Manski proposed closed-form ATE bounds for bounded $Y$, and in the binary case, Balke and Pearl derived tighter bounds. For discrete variables, some researchers have provided broad overviews, but the issue is that these methods fail to effectively leverage continuous and high-dimensional instrumental variables to learn tighter bounds. In contrast, the method proposed by the authors expands the applicable scope and achieves tighter bounds.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:

- The paper is well-written and clearly structured, making it easy to follow the key arguments and contributions. The figures and illustrations are well-designed and effectively convey the key insights of the paper.
- The contribution in this paper appears to be somewhat limited. The core contribution primarily involves redefining the forms of $\mu$ and $\pi$ from Lemma 2 under a discrete representation $\phi$, directly leading to Theorem 1.
- In Table 3, the naïve method for dataset 1 performs significantly better in terms of width. Can the authors provide an explanation for this observation?
- From the sensitivity analysis, it seems that $k$ is important for performance, since performance differs with different $k$. How to set $k$ in practice?

Other Comments Or Suggestions:

1. The sentence 'There are many reasons, including costs …' reads awkwardly and contains a grammatical issue.
2. Some notations in Theorem 2, such as $q$ and $\theta$, only appear in the proof and are not found in the main text, which may confuse readers. 3. The abbreviation "wrt." should be explained the first time it appears. 4. If possible, it would be helpful to place Figure 4 near Figure 5. Questions For Authors: - The experimental section lacks real-world datasets and relies solely on simulated experiments, which do not demonstrate the performance of the method in real-world settings. - Additionally, the comparison with other methods is insufficient. Since the authors claim that other methods are not suitable for their setting, and their method can handle more complex instruments $Z$, why not apply their method in a simpler setting and compare it with other methods? This would make the argument more convincing. - There are many invalid IVs in MR. In this case, does the proposed method still work well? Or how to deal with these invalid IVs? Code Of Conduct: Affirmed. Overall Recommendation: 3
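As background for the closed-form bounds this review refers to (stated from general knowledge, not taken from the paper): with a bounded outcome $Y \in [0,1]$ and binary treatment $T$, Manski's no-assumption bounds on the potential-outcome means are

```latex
\mathbb{E}[Y(1)] \in \Big[\, \mathbb{E}[Y \mid T{=}1]\,P(T{=}1),\;
                             \mathbb{E}[Y \mid T{=}1]\,P(T{=}1) + P(T{=}0) \,\Big],
\qquad
\mathbb{E}[Y(0)] \in \Big[\, \mathbb{E}[Y \mid T{=}0]\,P(T{=}0),\;
                             \mathbb{E}[Y \mid T{=}0]\,P(T{=}0) + P(T{=}1) \,\Big],
```

so the implied interval for the ATE $\mathbb{E}[Y(1)] - \mathbb{E}[Y(0)]$ always has width $P(T{=}0) + P(T{=}1) = 1$. Instrument-based bounds such as Balke and Pearl's can be strictly tighter, which is what motivates exploiting rich instruments as the paper under review does.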
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and the overall positive evaluation of our manuscript! We will take all your comments to heart and improve our manuscript accordingly. Below, we provide answers to all your questions. # Response to Methods and Evaluation Criteria - **Benchmark datasets**: We agree that providing insights on real-world data is an interesting extension for our paper. However, it is standard in the causal inference literature to rely on _synthetic data-generating_ processes for benchmarking, since the CATE (and for our setup, also optimal bounds on the CATE) can **never be observed in real-world data**. Thus, we provide benchmarking using synthetic DGPs that are **closely tailored to the real-world settings in Mendelian randomization (MR)**, such as polygenic risk scores and SNPs, and provide results for different levels of complexity. Further, by using our DGP for datasets 3 (and 4 in the Appendix), we can approximate _oracle bounds_, allowing us to check for validity also with respect to optimal bounds and not only compared to the CATE, **which is an extension over previous work** in similar settings. **Action**: To further strengthen our paper, we provide **additional insights on real-world data** in MR (https://anonymous.4open.science/r/IVRep4PartId-714C/rebuttal/rebuttal_experiments.pdf). **Overall, our findings are consistent with our synthetic setup and indicate robust performance of our method.** # Response to Experimental Designs or Analyses - **Additional baselines and experimental setups**: Thank you! Importantly, we focus on the highly relevant setting with _complex instruments_ and _binary treatments_ as it has been neglected in prior work.
On the other hand, this implies that our method is _not tailored or applicable_ to setups considered by existing papers (i.e., our method cannot be naively adapted to continuous treatments; and, for simple discrete IVs, no tailored representation learning is necessary since the closed-form bounds can be estimated directly). Thus, we focus on our considered datasets. However, we agree that additional baselines besides the naive one are interesting. Thus, in **Appendix E**, we provide results for additional adapted baselines to show that our tailored method is clearly superior and necessary. # Response to Strengths and Weaknesses - **Contribution**: We agree that the reformulation of $\mu$ and $\pi$ is one of our main contributions as this allows us to avoid alternating learning. However, we would kindly highlight our other major contributions: (i) **Novel setting**: We are the **first** to directly focus on **partial identification with complex instruments such as in MR**, (ii) **Our learning algorithm**: We _not only_ avoid alternating learning and leverage discrete representation learning but also encourage reduced estimation variance via our _tailored loss function_, as shown in Theorem 2. - **Dataset 1, Table 3 performance**: As the DGP for Dataset 1 is simple, a naive clustering with $k=3$ could already lead to close-to-optimal bounds. However, importantly, for Datasets 1 and 2, _lower width does not necessarily indicate better performance_. With only finite data for estimation, the bounds could also be **falsely overconfident**, even if they achieve full coverage of the true CATE, as the naive baseline does not try to reduce estimation variance. Since, in these DGPs, we use _continuous IVs_, we cannot approximate the oracle bounds (as for Datasets 3 and 4). Instead, the main finding here is that **our method leads to robust estimates for different $k$, while the naive baseline highly depends on the selection of $k$**.
- **How to set $k$ in practice**: As mentioned, a major benefit of our method compared to the baseline is its _robustness regarding $k$_. However, since hyperparameter selection is hard in applied causal inference due to a _lack of access to the ground-truth CATE_, we provide practical guidelines in **Appendix F2**. Therein, we propose two approaches: (1) an expert-informed approach and (2) a data-driven approach, which can be used seamlessly in practice. # Response to other Comments Thank you for your careful reading! All of your points are very helpful and we will include them in the camera-ready version of our paper. # Response to Questions - Thank you for your questions! For the first two questions, we kindly refer to our responses above. - **Invalid IVs**: As in typical MR and IV settings, we need to ensure that the exclusion and independence assumptions hold. However, unlike usual methods, we do **not** require the relevance assumption: we can – in principle – also use SNPs that are not associated with the exposure/treatment. Since these irrelevant IVs do not lead to tighter bounds, our method will simply not use their information. Our datasets 3 and 4 indeed contain 75% irrelevant IVs, which demonstrates the **strong performance of our method in this relevant scenario**.
Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration
Accept (poster)
Summary: In this paper, the authors propose SUPE, which combines the idea of pretraining offline skills with online exploration via a hierarchical policy. SUPE first extracts low-level skills using a variational autoencoder, then pseudo-labels unlabeled trajectories with optimistic rewards and high-level action labels. Using this reward signal, SUPE trains a high-level policy that composes pretrained skills for efficient exploration. In experiments, SUPE consistently outperforms previous methods across 42 long-horizon and sparse-reward tasks. ## Update after rebuttal The authors addressed my questions and I updated my score accordingly. Claims And Evidence: The overall pipeline of using offline action primitives to train online HRL and applying UCB-style rewards is clear and convincing. However, as noted below, there are similar ideas in the offline RL version, and the assumption that you have access to both (a) offline data and (b) an online environment can be a strong one. The authors argue that "~ these methods cannot be directly used in our setting as they require offline data to have reward labels", but it remains unclear how common the special case of offline data being unlabeled while online interactions are rewarded is in the real world. Methods And Evaluation Criteria: The evaluation criteria for the proposed method seem appropriate, and experiments were conducted across a sufficient number of environments. Although the number of ablation studies is not very large, the two most important factors (the RND coefficient and skill horizon length) were well examined. Theoretical Claims: . Experimental Designs Or Analyses: The experimental design and analysis seem valid. Supplementary Material: Checked Appendix Relation To Broader Scientific Literature: As the authors have claimed, there are not many clear methods that first learn offline action primitives and then utilize them in online settings.
However, similar ideas do exist in the offline domain, and it might be better to provide a direct comparison with those approaches. (For more details, please refer to the section "Other Strengths and Weaknesses.") Essential References Not Discussed: . Other Strengths And Weaknesses: The proposed method can be considered an extension of OPAL (an online version), combined with a UCB approach. In other words, from a technical perspective, it seems equivalent to applying a UCB-style reward to OPAL and training it in an online setting. Other Comments Or Suggestions: . Questions For Authors: Why do the HILP-based methods in the kitchen environment of Figure 2 achieve a non-zero return right from the start? What are the pros and cons of SUPE (HILP) and the vanilla version, and what insights can you offer on which one should be chosen depending on the environment? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your detailed review and insightful comments. We especially appreciate the comments on the motivation. Regarding your concern about the novelty of our method, we believe there might be a misunderstanding of our method – what you describe is actually one of our baselines, "Trajectory Skills". In our response, we explain in detail that we are not just a naive extension of OPAL combined with UCB (that is the baseline "Trajectory Skills"), and the differentiating step in our method makes a large difference in performance over this naive extension, as demonstrated in our experiments. **"The proposed method can be considered as an extension of OPAL (but online version), combined with a UCB approach. In other words, from a technical perspective, it seems equivalent to applying a UCB-style reward to OPAL and training it in an online setting."** We believe there is a misunderstanding. Our method is not equivalent to an extension of OPAL combined with a UCB approach. Rather, our "Trajectory Skills" baseline is equivalent to an online version of OPAL combined with a UCB approach from a technical perspective. Our "Trajectory Skills" baseline pretrains skills the same way as OPAL and learns a high-level online RL agent with a UCB approach to encourage exploration. The only, yet very important, difference that distinguishes SUPE from this baseline is the pseudo-relabeling of the offline data that allows it to be used as additional off-policy data during online learning. Without the relabeling, we would not be able to leverage the offline data as additional off-policy data for online learning. We showed in our paper that using the offline data this way online results in a consistent/large performance boost over the "Trajectory Skills" baseline (Figures 2 and 3).
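A minimal sketch of the relabeling step described above, with hypothetical `encode_skill` and `novelty_bonus` interfaces standing in for the trained skill posterior and the RND-style optimism term (an illustration of the idea, not the authors' implementation):

```python
import numpy as np

def relabel_offline_trajectory(states, actions, encode_skill, novelty_bonus, H=4):
    """Convert a low-level offline trajectory (s_0, a_0, s_1, a_1, ...) into
    high-level transitions (s_t, z_t, r_t, s_{t+H}) that the high-level agent
    can replay as additional off-policy data during online learning.
    `encode_skill` and `novelty_bonus` are hypothetical stand-ins."""
    transitions = []
    for t in range(0, len(actions) - H + 1, H):
        z = encode_skill(states[t:t + H], actions[t:t + H])  # pseudo-label the segment with a skill
        r = novelty_bonus(states[t + H])  # optimistic reward: the true task reward is unknown offline
        transitions.append((states[t], z, r, states[t + H]))
    return transitions

# toy usage: 9 states and 8 actions with H=4 yield two high-level transitions
states = [np.full(2, float(i)) for i in range(9)]
actions = [np.zeros(1)] * 8
out = relabel_offline_trajectory(states, actions,
                                 encode_skill=lambda s, a: np.zeros(4),
                                 novelty_bonus=lambda s: 1.0)
# len(out) == 2
```

The point of the rebuttal is visible in the output type: the relabeled tuples have exactly the shape the high-level agent trains on, so the offline data can be mixed into its replay buffer.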
**"It remains unclear how common in the real world the special case of offline data being un-labeled but online interactions being rewarded is."** In robotics, having access to task-specific data can be expensive, as it often requires human demonstration (e.g., through teleoperation or a carefully scripted policy). It is often much easier to have access to a diverse dataset that is not directly related to the downstream task, use unsupervised RL algorithms to pretrain on this diverse dataset, and then fine-tune on a downstream task. This paradigm has also gained popularity in RL in recent years [1, 2, 3, 4, 5]. In particular, O2O [1], FB [5] and ExPLORe [4] all study the setting where the offline data is unlabeled but the reward information is available in the online phase. O2O and FB assume the reward function is available right from the beginning of the online phase, whereas ExPLORe and our paper assume the reward signal can only be obtained from online interactions. While the relevance of this problem setting merits discussion (which we will add to our paper), we believe that the number of prior works on this in just the last few years suggests that the topic is of interest to the ML community. [1] "Unsupervised-to-Online Reinforcement Learning" [2] "Fast Imitation via Behavior Foundation Models" [3] "Reinforcement learning from passive data via latent intentions" [4] "Accelerating exploration with unlabeled prior data" [5] "Learning one representation to optimize all rewards" **“Why do the HILP-based methods achieve a non-zero return right from the start?”** The data distribution for D4RL Kitchen is high quality, and BC methods can achieve some non-zero return [1]. Thus, since the HILP skills are effective at mimicking dataset behavior, sampling HILP skills can potentially achieve nonzero return prior to online exploration. [1] D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
**"What are the pros and cons of SUPE (HILP) and the vanilla version, and what insights can you offer on which one should be chosen depending on the environment?"** From our current empirical observations, SUPE appears to be more stable during offline learning, and excels at long-horizon locomotion tasks (i.e., $\texttt{visual-antmaze}$, $\texttt{antmaze}$, $\texttt{humanoidmaze}$). SUPE (HILP) is competitive for some manipulation tasks ($\texttt{antsoccer}$, $\texttt{scene}$), and achieves higher initial performance in some environments. We would like to thank you again for your constructive feedback and detailed reviews. Please let us know if you have any other concerns or questions. **If we have successfully addressed all your concerns, could you kindly raise your rating?** --- Rebuttal Comment 1.1: Comment: Thank you for the authors' rebuttal. I agree that this work has novelty compared to "OPAL combined with a UCB approach", and pseudo-relabeling is the key idea that makes this work perform better than OPAL combined with a UCB approach. My concerns are resolved, and let me update my assessment. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score, and again for your detailed review and insightful comments. We are really glad that your concerns have been addressed!
Summary: The paper introduces hierarchical RL to train a high-level policy that utilizes the unlabeled offline trajectories, and uses reward estimation, high-level action (skill) labels, and the extra offline data during online learning to guide policy learning. It claims to achieve efficient exploration in sparse-reward tasks by pseudo-labeling each trajectory segment with an upper-confidence-bound (UCB) reward. Claims And Evidence: The claim that the algorithm achieves efficient exploration and faster online learning is supported by experiments in state-based and pixel-based reinforcement learning (RL) environments, demonstrating higher normalized returns compared to baselines. This claim is backed by clear and convincing empirical evidence, including detailed performance tables and learning curves. Methods And Evaluation Criteria: Based on my understanding, the proposed method consists of two steps. First, a high-level policy is pretrained on unlabeled trajectory data using a VAE-based approach to learn diverse skills. Second, during online RL, a skill-conditioned policy collects data by interacting with the environment, combining offline data from the pretraining step with online data for training. The evaluation uses goal-reaching tasks with normalized returns, as in D4RL, to assess performance. The inclusion of offline trajectories is reasonable for improving exploration efficiency. However, it is unclear why skill extraction is critical for online RL, as the algorithm uses offline data twice—first for skill extraction and second to augment training data—without justifying the marginal benefit of pseudo-labeling skills. Can you explain why skill extraction via pseudo-labeling provides a significant advantage over directly using offline data for training? A strong justification or a fair ablation study would strengthen the rationale, as the title claims. Theoretical Claims: The paper does not present formal theoretical claims or proofs.
It includes several equations defining the loss objective used to optimize the proposed model’s performance. As such, there are no proofs to verify. Experimental Designs Or Analyses: Although the paper claims there are no directly comparable existing methods, it compares several online reinforcement learning (RL) methods, such as EXPLORE, DBC+JSRL, and a skill discovery approach, HILP. This section of the experiment appears robust, supported by statistical significance tests. Supplementary Material: I checked the supplementary material and found it consistent with the paper’s experimental claims. Relation To Broader Scientific Literature: The topic of this paper aligns closely with unsupervised reinforcement learning, which improves exploration ability and downstream task performance via an additional pretraining phase; some papers for reference are listed below. I believe such methods can also be adopted in the online RL setting. [1] URLB: Unsupervised Reinforcement Learning Benchmark Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel [2] CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel [3] Constrained Ensemble Exploration for Unsupervised Skill Discovery Chenjia Bai, Rushuai Yang, Qiaosheng Zhang, Kang Xu, Yi Chen, Ting Xiao, Xuelong Li [4] METRA: Scalable Unsupervised RL with Metric-Aware Abstraction Seohong Park, Oleh Rybkin, Sergey Levine [5] Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i-Nieto, Jordi Torres [6] Behavior Contrastive Learning for Unsupervised Skill Discovery Rushuai Yang, Chenjia Bai, Hongyi Guo, Siyuan Li, Bin Zhao, Zhen Wang, Peng Liu, Xuelong Li Essential References Not Discussed: See the Relation To Broader Scientific Literature section. Other Strengths And Weaknesses: Strengths: 1.
clear writing and easy to follow. Weakness: 1. see discussion in the Methods And Evaluation Criteria section. Other Comments Or Suggestions: The qualitative results and reported learning curves are clear and well-designed. However, it is unclear how the skill-conditioned policy guides goal-reaching tasks during evaluation. I suggest adding visualizations, such as those in Figure 4 of BeCL [Yang et al., 2023], to illustrate how the policy operates conditioned on different latent skills. A response with such visuals could enhance my confidence in the skill policy’s effectiveness. [1] Behavior Contrastive Learning for Unsupervised Skill Discovery Rushuai Yang, Chenjia Bai, Hongyi Guo, Siyuan Li, Bin Zhao, Zhen Wang, Peng Liu, Xuelong Li Questions For Authors: 1. Table 1 shows that the KL coefficient and GRU layers vary across environments rather than being consistent. Can you explain the rationale for these differences? A clear justification would strengthen the method’s design choices. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your detailed review and insightful comments. We especially appreciate the additional references you point out and the clarifying question on pseudo-labeling. For your question on how policy operates conditioned on different latent skills, we provide an additional detailed analysis that visualizes the skill latent and show how leveraging skills effectively reduces the exploration horizon, resulting in learning speedup online. **"I suggest adding visualizations, such as those in Figure 4 of BeCL [Yang et al., 2023], to illustrate how the policy operates conditioned on different latent skills"** We conducted additional analysis of the skill latent to demonstrate how skill discovery helps online exploration in one of our hardest domains, $\texttt{humanoidmaze}$. In particular, we randomly sample 16 latent vectors from our skill latent and roll-out our pre-trained low-level skill policy (corresponding to each of the sampled latent vectors) for an entire episode and visualize the x-y position throughout each of the 16 trajectories. For comparison, we also plotted the agent’s trajectories if actions were completely random. As shown in the figures [here](https://anonymous.4open.science/r/supe-rebuttal-8279/latent-viz/README.md), our unsupervised offline pretrained skills were able to navigate quite far from the initial positions even before any online learning. Such structured navigating behaviors allow the high-level policy to effectively operate at a reduced exploration horizon. Instead of training a low-level agent to predict $H$ correct actions in a row, the high-level policy only needs to predict 1 correct action to achieve a similar effect. As the exploration horizon is effectively reduced by a factor of $H$, we are able to train the high-level policy in SUPE to explore more efficiently and consequently solve the task much more quickly. 
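The horizon-reduction argument above can be sketched as a rollout loop in which one high-level decision expands into $H$ primitive actions. The `env`, `high_policy`, and `low_skill` interfaces below are hypothetical; this is a sketch of the idea, not the authors' implementation:

```python
def hierarchical_rollout(env, high_policy, low_skill, H=4, max_steps=100):
    """Roll out a two-level agent: one high-level latent z per H env steps.
    From the high-level agent's view the episode is max_steps / H decisions
    long, i.e. the exploration horizon shrinks by a factor of H."""
    s = env.reset()
    total_reward, done, step = 0.0, False, 0
    while not done and step < max_steps:
        z = high_policy(s)            # one high-level decision ...
        for _ in range(H):            # ... expands into H primitive actions
            a = low_skill(s, z)
            s, r, done = env.step(a)
            total_reward += r
            step += 1
            if done:
                break
    return total_reward

class ToyChain:
    """Toy 1-D chain: reward 1 on reaching state 10, which ends the episode."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s += a
        done = self.s >= 10
        return self.s, float(done), done

ret = hierarchical_rollout(ToyChain(), high_policy=lambda s: 1,
                           low_skill=lambda s, z: z)
# ret == 1.0: the goal at state 10 is reached once
```

In the toy chain, the agent needs 10 primitive steps but only 3 high-level decisions, which is the factor-of-$H$ reduction the rebuttal describes.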
**"Can you explain why skill extraction via pseudo-labeling provides a significant advantage over directly using offline data for training?"** The pseudo-labeling enables us to use offline data as additional off-policy data for the online training of the high-level agent. The high-level agent operates at a longer time-scale – taking only one high-level action ($z$) per 4 environment steps. Pseudo-labeling skills allows us to convert the low-level trajectories in the offline data $(s_t, a_t, s_{t+1}, a_{t+1}, \cdots)$ into high-level trajectories $(s_t, z_t, s_{t+H}, z_{t+H}, \cdots)$. Without pseudo-labeling, we would not be able to directly use the offline data to update the high-level RL agent online, which means that we could no longer use the offline data twice. **"Table 1 shows that the KL coefficient and GRU layers vary across environments rather than being consistent. Can you explain the rationale for these differences?"** The OGBench ($\texttt{humanoidmaze}$, $\texttt{antsoccer}$, $\texttt{scene}$, $\texttt{cube-single}$, $\texttt{cube-double}$) tasks are generally more difficult, so we use larger networks than for the simpler D4RL tasks. We select our network sizes for these OGBench environments according to the reference policy size used in the original OGBench paper [1]. For the manipulation tasks $\texttt{cube-*}$ and $\texttt{scene}$, the data distribution is narrower, and we found that a higher KL coefficient helps account for the difference in dataset diversity. [1] Park, Seohong, et al. "OGBench: Benchmarking offline goal-conditioned RL." arXiv preprint arXiv:2410.20092 (2024). **Related work references** Thanks for the additional references on unsupervised RL. We will incorporate them in our discussion in the paper and update in the camera ready version. We would like to thank you again for your constructive feedback and detailed reviews especially on the additional related work references. 
Please let us know if you have any other concerns or questions. **If we have successfully addressed all your concerns, could you kindly raise your rating?** --- Rebuttal Comment 1.1: Comment: Thank you for the authors’ efforts and their detailed response. Regarding my original question—"Does skill extraction via pseudo-labeling provide a significant advantage over directly using offline data for training?"—I understand your clarification that extracting skills is necessary due to your algorithm’s design, which relies on an additional high-level policy to output latent features. However, I remain uncertain about whether it is truly necessary to extract skills and introduce an additional high-level policy (as opposed to using a single skill policy) for the online exploration problem. Related works, such as EDL [1] and RLPD [2], demonstrate that directly leveraging offline data can effectively enhance online exploration performance, and these approaches appear to work well. I believe that is a simpler way to learn from prior data. Moreover, some works like METRA [3] and CIC [4] still show promising exploration ability. Although it is orthogonal, they use just a low-level skill policy without hierarchical guidance, and the skill feature is drawn directly from a simple uniform distribution. It is still unclear whether a high-level policy for skill output is necessary. At least I have not observed obvious qualitative evidence of that. Based on these points, I will maintain my score. [1] Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills. Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i-Nieto, Jordi Torres [2] Efficient Online Reinforcement Learning with Offline Data. Philip J.
Ball, Laura Smith, Ilya Kostrikov, Sergey Levine [3] METRA: Scalable Unsupervised RL with Metric-Aware Abstraction Seohong Park, Oleh Rybkin, Sergey Levine [4] CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel --- Reply to Comment 1.1.1: Comment: Thanks for the prompt response and the follow-up concerns. Regarding your concern that a simpler method like RLPD could work well without using skills, we have evidence in our paper that this is not the case: we already have empirical results comparing our approach to a baseline that uses RLPD, and our method is significantly more sample efficient (consistently across 8 domains). Regarding your concern that an existing low-level skill policy could explore well and hierarchical guidance is not necessary, we conducted additional experiments showing that just using a low-level skill policy is not enough. Our experiment uses one of the skill methods that you pointed out, METRA, and showed that it is 100x slower at discovering goals online compared to our method on the $\texttt{antmaze}$ domain. > I remain uncertain about whether it is truly necessary to extract skills and introduce an additional high-level policy (as opposed to using a single skill policy) for the online exploration problem. Related works, such as EDL [1] and RLPD [2], demonstrate that directly leveraging offline data can effectively enhance online exploration performance, and these approaches appear to work well. I believe that is a simpler way to learn from prior data. Thanks for raising the concern. Our baseline ExPLORe actually uses RLPD to directly leverage offline data (but with the added reward model and exploration bonus so that it can be applied in our setting where the offline data is unlabeled).
In our experiments (Figure 2), we show that our method (SUPE) works consistently better than ExPLORe across all eight domains (by a large margin on the more challenging domains like $\texttt{humanoidmaze}$, $\texttt{visual-antmaze}$, $\texttt{scene}$, $\texttt{antsoccer}$). In particular, both skill-free methods (DBC+JSRL and ExPLORe) cannot achieve significant/competitive returns on any domain other than $\texttt{antmaze}$ and $\texttt{cube-single}$, and cannot solve the hardest environments ($\texttt{scene}$, $\texttt{cube-double}$, $\texttt{humanoidmaze}$) at all. > Moreover, some works like METRA [3] and CIC [4] still show promising exploration ability. Although it is orthogonal, they use just a low-level skill policy without hierarchical guidance, and the skill feature is drawn directly from a simple uniform distribution. We conducted an additional experiment which shows that just using the METRA objective for exploration is not enough for efficient exploration (100x worse than our method). For this experiment, we evaluated how quickly METRA is able to find the reward signal (reach the desired goal) online compared to our method. In the table below, we report the # of env steps that METRA took before the learned skills could reach the desired goal from the initial state (6 seeds). Our method took significantly fewer environment steps (roughly 100x fewer) compared to METRA. These results suggest that the $\texttt{antmaze}$ domain poses a significant exploration challenge, and naively using a diversity-encouraging objective (METRA) for online exploration can be extremely sample inefficient (100x worse than our method).
The table below reports the average number of environment steps each method takes to reach the goal for the first time (all numbers are in millions (x10^6) of steps):

| Layout and Goal Location | SUPE | METRA | METRA (goal reached rate) |
|---|---|---|---|
| antmaze-medium-top-left | 0.014+-0.0031 | 7.04 | 6/6 |
| antmaze-medium-top-right | 0.022+-0.0032 | 11.46 | 4/6 |
| antmaze-medium-bottom-right | 0.022+-0.0044 | inf | 0/6 |
| antmaze-medium-top-center | 0.018+-0.0017 | inf | 0/6 |
| antmaze-large-top-left | 0.021+-0.0028 | 5.8 | 6/6 |
| antmaze-large-top-right | 0.027+-0.0026 | 12.48 | 1/6 |
| antmaze-large-bottom-right | 0.021+-0.0018 | inf | 0/6 |
| antmaze-large-center | 0.039+-0.0062 | 14.72 | 3/6 |
| antmaze-ultra-top-left | 0.017+-0.0036 | 6.72 | 6/6 |
| antmaze-ultra-top-right | 0.037+-0.0055 | inf | 0/6 |
| antmaze-ultra-bottom-right | 0.034+-0.006 | inf | 0/6 |
| antmaze-ultra-top-center | 0.022+-0.0044 | 24.48 | 1/6 |

(For METRA, many runs never managed to find the goal before a fixed budget of environment steps ran out [24M for medium and large, 48M for ultra]. We only average over the seeds where the goal is reached. We also report the number of seeds where goals were successfully reached at least once before the budget ran out. For SUPE, all goals were reached at least once for all eight seeds.) Thank you again for your constructive feedback and prompt response to our rebuttal. Please let us know if you have any other concerns or questions. **If we have successfully addressed all your concerns, could you kindly raise your rating?**
Summary: The paper presents a new algorithm for combining offline data and online RL in which one can use the offline data to learn skills, perform online exploration in the state space and skill space jointly, and relabel the offline data with the exploration bonus. The paper performs comparisons with a few baselines on several tasks and demonstrates that the sample efficiency of the proposed algorithm is superior to the previous baselines. Claims And Evidence: The claim that the proposed algorithm performs better than previous baselines is backed by evidence on a wide range of tasks. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: While the experiments demonstrate evidence of the strong empirical performance of the algorithm, one might question the source of improvement compared to previous baselines. As the paper already compared with baselines which also perform active exploration with offline data, it seems like the key ingredient is the skill discovery. However, a more careful analysis of how learning the skills helps the exploration or learning is lacking. The paper would benefit a lot from such a zoomed-in analysis of the benefit and the role of skill discovery. Also, intuitively, the method seems to heavily depend on the coverage/diversity of the offline data. I believe the paper would also benefit from conducting an analysis on offline datasets with different coverage and discovering the failure modes of the algorithm. It would also be interesting to see the performance gap between the proposed algorithm and a variant where the offline data actually has reward labels. Supplementary Material: No. Relation To Broader Scientific Literature: I believe the paper provides a strong baseline on the offline-to-online with unlabeled offline data problem, which is beneficial to the community.
Essential References Not Discussed: I think the paper overall does a great job on the literature review on skill discovery and general offline-to-online RL, but I think it is missing the most relevant literature on o2o RL with unlabeled offline data (I am not sure each paper will provide a meaningful baseline for this paper, but it would be great to compare if any of them apply here), just to list a few: Sharma, Archit, Rehaan Ahmad, and Chelsea Finn. "A state-distribution matching approach to non-episodic reinforcement learning." arXiv preprint arXiv:2205.05212 (2022). Ma, Yecheng Jason, et al. "Vip: Towards universal visual reward and representation via value-implicit pre-training." arXiv preprint arXiv:2210.00030 (2022). Ghosh, Dibya, Chethan Anand Bhateja, and Sergey Levine. "Reinforcement learning from passive data via latent intentions." International Conference on Machine Learning. PMLR, 2023. Song, Yuda, Drew Bagnell, and Aarti Singh. "Hybrid Reinforcement Learning from Offline Observation Alone." International Conference on Machine Learning. PMLR, 2024. Other Strengths And Weaknesses: Overall I think the paper is not really providing any new technique, but the performance is strong enough to make some contribution. However, the experiments should not only include performance comparisons, but also conduct more detailed analysis. Other Comments Or Suggestions: see above Questions For Authors: One minor point is on the usage of the term UCB: I do not see a reference to UCB in the paper; is the bonus used in the paper just a general exploration bonus (but not really UCB)? Code Of Conduct: Affirmed. Overall Recommendation: 3
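On the reviewer's UCB question: bonuses of this kind are usually implemented as a novelty estimate added to the reward, so the pseudo-label acts as an optimistic (upper-confidence-style) value. A minimal Random Network Distillation-style sketch with tiny linear "networks" (illustrative only, assuming RND is the mechanism in question; not the paper's implementation):

```python
import numpy as np

class RNDBonus:
    """RND-style novelty bonus: the prediction error of a trained network
    against a fixed random target is large for rarely visited states and
    shrinks with repeated visits, yielding a UCB-like term r + beta * bonus."""
    def __init__(self, state_dim, feat_dim=8, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.target = rng.normal(size=(state_dim, feat_dim))  # fixed, random
        self.pred = np.zeros((state_dim, feat_dim))           # trained online
        self.lr = lr
    def bonus(self, s):
        err = s @ self.pred - s @ self.target
        return float(np.mean(err ** 2))
    def update(self, s):
        err = s @ self.pred - s @ self.target
        # one gradient step on the mean squared prediction error
        self.pred -= self.lr * 2.0 / err.size * np.outer(s, err)

rnd = RNDBonus(state_dim=4)
s = np.ones(4)
b_before = rnd.bonus(s)
for _ in range(200):
    rnd.update(s)
b_after = rnd.bonus(s)
# b_after < b_before: the bonus decays as the state is visited repeatedly
```

Whether such a bonus is "really UCB" is exactly the reviewer's point: it is an optimism heuristic rather than a statistically calibrated upper confidence bound.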
Rebuttal 1: Rebuttal: Thanks for your detailed review and insightful comments. We especially appreciate the additional references you point out and the clarifying question on UCB. For your concern about the dependence on the quality of the offline data, we had ablation studies in our appendix that demonstrate the robustness of our approach on offline datasets with different levels of quality. For your question on how skill discovery helps, we provide an additional detailed analysis that visualizes the skill latent space and shows how leveraging skills effectively reduces the exploration horizon, resulting in a learning speedup online. **Analysis on offline datasets with different coverage and failure modes** We had ablation studies on datasets with different levels of quality in Appendix J and showed that our approach is robust against different dataset corruptions. A notable failure mode is the $\texttt{explore}$ dataset, where the dataset contains completely random actions and the skill-pretraining methods we use in SUPE cannot learn meaningful behaviors. Aside from this failure case, SUPE is able to learn efficiently from offline datasets with limited coverage (e.g., missing transitions around the goal location, Figure 19, right), limited scale (e.g., 95% of trajectories removed, Figure 19, left), and suboptimal data (e.g., the $\texttt{stitch}$ dataset, Figure 18). We will refer to these ablation studies and discuss the failure modes of our approach in the main paper in the camera-ready version. **Detailed analysis on skill discovery** We conducted additional analysis of the skill latent space to demonstrate how skill discovery helps online exploration in one of our hardest domains, $\texttt{humanoidmaze}$. In particular, we randomly sample 16 latent vectors from our skill latent space, roll out the corresponding low-level skill policy for an entire episode, and visualize the x-y position throughout each of the 16 trajectories. 
For comparison, we also plotted the agent’s trajectories when actions were completely random. The graphs can be viewed [here](https://anonymous.4open.science/r/supe-rebuttal-8279/latent-viz/README.md). We can see that fixed skill latents are able to navigate quite far from the starting point. Such structured navigating behaviors allow the high-level policy to effectively operate at a reduced exploration horizon. Instead of training a low-level agent to predict $H$ correct actions in a row, the high-level policy only needs to predict 1 correct action to achieve a similar effect. As the exploration horizon is effectively reduced by a factor of $H$, we are able to train the high-level policy in SUPE to explore more efficiently and solve the task much faster. **Additional references** Thanks for the additional references. In our paper, we actually do have experiments using ICVF (Ghosh et al., 2023) on the high-dimensional image domain in Appendix E. ICVF is used complementarily to our proposed method and our baselines. In our experiments (Figure 6), we found that our method works well without ICVF, achieving much better performance than our baseline “Trajectory Skills” with or without the ICVF pre-trained representations. We also found that using ICVF representations further improves our method, but only marginally. Even with ICVF, many of our baselines still failed to achieve significant return (e.g., ExPLORe as shown in Figure 6, right). Conceptually, both VIP (Ma et al., 2022) and ICVF are very related but different from our work in two ways: 1) they focus on pre-training on observation-only offline data whereas we assume access to actions, 2) they focus on extracting good image representations from the offline pre-training, whereas we pretrain low-level skills and retain the offline data to be used as additional off-policy transitions for online learning. 
Both (Sharma et al., 2022) and (Song et al., 2024) are also relevant, and we will cite and discuss them (along with the two above) in the camera-ready version of our paper. **Question about UCB** In the RL literature, UCB is typically used on the optimal value function with a confidence interval (e.g., [1]). We adopt the same definition of the term UCB in our paper, using it to describe the upper-confidence bound $r_{\mathrm{UCB}}$ of a reward function: with high probability the reward value is at most $r_{\mathrm{UCB}}$. As stated in our method section, we followed prior work (Li et al., 2024) to instantiate this estimate of the bound using an RND network and a reward model. The reward model predicts the mean of the reward estimate and the RND provides an estimate of how wide the confidence interval is. [1] "Minimax regret bounds for reinforcement learning." We would like to thank you again for your constructive feedback and detailed reviews, especially on the related work references. Please let us know if you have any other concerns or questions. **If we have successfully addressed all your concerns, could you kindly raise your rating?** --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their rebuttal. My major concerns are addressed (I understand that due to the time limit the result is only on one domain, but for the final version it would be nice to have a more comprehensive version of the analysis). I will raise my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score, and again for your constructive feedback and detailed reviews. We are really glad that your concerns have been addressed. We will add a more comprehensive version of the analysis in our final version of the paper.
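[Editor's note] The RND-based instantiation of $r_{\mathrm{UCB}}$ discussed in the rebuttal above (a reward model predicting the mean, with RND prediction error as the width of the confidence interval) can be sketched roughly as follows. The linear "networks", their shapes, and the `beta` weight are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(4, 8))  # frozen random RND target network
W_pred = rng.normal(size=(4, 8))    # predictor, trained to imitate the target on seen states
w_reward = rng.normal(size=4)       # reward model (predicts the mean reward)

def ucb_reward(state, beta=1.0):
    """Optimistic reward label: predicted mean plus an RND novelty bonus."""
    mean = float(state @ w_reward)                                     # reward-model estimate
    novelty = float(np.sum((state @ W_pred - state @ W_target) ** 2))  # RND prediction error
    return mean + beta * novelty                                       # upper-confidence bound

s = rng.normal(size=4)
print(ucb_reward(s) >= ucb_reward(s, beta=0.0))  # True: the bonus is non-negative
```

States far from the predictor's training distribution incur a large RND error, so the relabeled reward is optimistic exactly where the agent has little data.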
Summary: The paper studies how to leverage skills pre-trained by unsupervised RL on unlabeled data to improve exploration while solving downstream tasks online. The proposed method (SUPE) first pre-trains skills via a trajectory VAE, then labels offline data with optimistic rewards, and finally trains a policy mapping states into skill vectors while interacting with the environment. Experimentally, SUPE is shown to outperform existing baselines on several challenging continuous control tasks. ## Update after rebuttal The rebuttal clarified all my doubts, and I think the new experiments will provide good additional value to the paper. I am thus voting for acceptance. Claims And Evidence: The main claim that SUPE enables more efficient exploration and faster downstream learning is supported by good empirical evidence on several challenging exploration problems. Maybe one issue is that SUPE is compared with very few baselines: only two are existing algorithms, while the others are variants of the proposed method. While I guess this can be explained by the fact that there exist very few methods for efficient exploration in this context, I still wonder if other existing techniques could be adapted to provide further evidence about the complexity of the considered problems and the efficiency of SUPE. For instance, could "diversity-encouraging" objectives of recent unsupervised skill discovery methods (like DIAYN or METRA) be used as a regularizer to incentivize exploration during the online learning phase? Methods And Evaluation Criteria: The proposed method makes sense. It is simple and intuitive. However, I think it has some limitations: 1. It seems quite incremental since it combines many existing techniques (although in a smart and non-trivial way) 2. The idea of re-using pre-training data during the online exploration phase, while effective as shown experimentally, does not seem very practical. 
Here, skills are pre-trained on relatively small datasets, but imagine scaling this up to the pre-training settings of, e.g., recent large foundation models (say, with hundreds of terabytes of data). We cannot expect that these data can be deployed and moved around together with the pre-trained skills to be reused for each new task we encounter. I think the approach would be much more convincing and practical if it found a way to "compress" the relevant knowledge for driving downstream exploration into a learned model, rather than keeping the full pre-training dataset 3. The way skills are pre-trained, essentially through behavioral cloning, together with the experimental results, makes me wonder whether this approach works only on very high-quality data (e.g., expert demonstrations), possibly data that also covers well the space that will have to be explored downstream. If this is the case, I think it is another significant limitation, especially because there exist skill pretraining methods (like HILP) that can make use of "high-coverage" data (like ExORL) which does not contain any useful behavior, but from which useful behaviors can still be extracted. The evaluation protocol makes sense. The considered tasks are sufficiently diverse and challenging to showcase exploration capabilities. Again, the only limitation may be about the pre-training data: if it is given by high-quality demonstrations that cover well the space to be explored at test time, then SUPE's performance would not be very surprising Theoretical Claims: None Experimental Designs Or Analyses: As far as I understand from the details given in the paper, the experimental protocol is sound and the results are convincing. 
Supplementary Material: No Relation To Broader Scientific Literature: They relate to the broader field of unsupervised pre-training followed by task-specific adaptation Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths - the paper studies a significant problem - the paper is well written - the proposed approach is simple and effective Weaknesses (see above for details) - the proposed approach is a bit incremental - need for high-quality pretraining data - need to retain the pretraining data for the adaptation phase Other Comments Or Suggestions: None Questions For Authors: 1. What's the difference between this setting and the recently proposed "Unsupervised-to-Online Reinforcement Learning" (Kim et al., 2024)? Only that the reward function here is not given but rather only observed through samples from the environment? 2. If this is the case, could you motivate in what real problems this is useful? For instance, in robotics most of the time the reward is chosen by the designer, so it is not unreasonable to assume that the function itself is given downstream instead of being "discovered" through interaction Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your detailed review and insightful comments. We especially appreciate your clarifying question on the setting of our work. Regarding your concern about not comparing to “diversity-encouraging” objectives, we conducted additional experiments with METRA and showed that METRA finds a reward signal online much more slowly than our method. In addition, our experiments already involved a comparison to a variant of our method that leverages “diversity-encouraging” objectives in the offline pre-training phase (SUPE (HILP)). For your concern about the dependence on offline data quality, our ablation studies in the appendix of our original submission demonstrated the robustness of our approach to offline datasets with different levels of quality, quantity, and coverage. **Experiments with “diversity-encouraging” methods** We conducted additional experiments to evaluate how quickly a diversity-encouraging objective (METRA) is able to find the reward signal (reach the desired goal) online. In the table below, we report the # of env steps that METRA took before the learned skills could reach the desired goal from the initial state (6 seeds). Our method took significantly fewer environment steps (on the order of 1/100x) compared to METRA. These results suggest that the $\texttt{antmaze}$ domain poses a significant exploration challenge, and naively using a diversity-encouraging objective (METRA) for online exploration can be extremely sample inefficient (100x worse than our method). The table can be viewed [here](https://anonymous.4open.science/r/supe-rebuttal-8279/README.md). These results may not be too surprising, since METRA does not leverage offline data. One may wonder what might happen if we adapt METRA to use offline data. This is actually quite similar to SUPE (HILP), a variant of our method which uses a “diversity-encouraging” objective for offline pretraining instead of a behavioral cloning objective. 
Similar to METRA, HILP learns a metric space of states that preserves temporal distance. SUPE (HILP) is otherwise identical to SUPE (e.g., it uses the offline data for both skill pretraining and high-level agent learning online), with the only difference being the offline skill pretraining objective. SUPE works better across six of the eight domains we experimented on, and is competitive with SUPE (HILP) on the other two. Our method is not tied to a specific choice of skill pre-training method or online exploration bonus. We used VAE behavioral cloning skill pretraining and an RND exploration bonus because they are simple and effective in the benchmark tasks considered. **Does SUPE only work on very high-quality data?** In Appendix J of our submission, we had some ablation studies on datasets with different levels of quality. In Figures 18 & 19, we show that our approach is reasonably robust against various dataset corruptions (e.g., insufficient coverage around the goal location, noisy experts with short trajectory lengths). In addition, our approach can be easily adapted to work in conjunction with different skill pre-training methods. On datasets that are diverse and “high-coverage”, we can leverage pre-trained skills that are more suitable for these properties (e.g., HILP) [1]. As mentioned in the previous section, one variant of our approach (SUPE (HILP)) leverages such skills. [1] "Foundation policies with hilbert representations." **“What's the difference between this setting and (Kim et al., 2024)? Could you motivate in what real problems this is useful?”** Your understanding is correct. In the O2O paper, the reward function is available at the start of the online phase. Our setting is harder since we don’t have access to the reward function and the reward signals must be obtained through environment interactions. 
In robotics, manually specifying a good reward function is challenging, and it often suffers from misspecification issues (when the optimal policy of the specified reward misaligns with the intended optimal policy). For example, if you would like to specify a reward function for a home robot to clean a house, there are a lot of things to consider: should the robot clean the kitchen? What if the robot bumps into a wall? How do we define whether the cleanliness is acceptable? A much more appealing interaction with the robot would be that the robot first does some exploratory cleaning behavior (informed by its prior experience, possibly from its prior cleaning experience in another house), then the user gives feedback (a sparse reward signal) on whether the cleaning is done properly and safely, and the robot improves upon the signal provided. We believe our work is an important first step towards making this a reality. Thank you again for your constructive feedback and detailed reviews. Please let us know if you have any other concerns or questions. **If we have successfully addressed all your concerns, could you kindly raise your rating?** --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response and for running additional experiments. My concerns have been nicely addressed. I am increasing my score. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score, and again for your detailed reviews and insightful comments. We are really glad that your concerns have been addressed!
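[Editor's note] The exploration-horizon argument from the rebuttals above (one high-level skill choice standing in for $H$ primitive actions) can be illustrated with a toy sketch. The one-dimensional chain, the binary action space, and all numbers below are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy chain task: the goal is reached only if H consecutive primitive
# actions are all "correct" (action 1). A skill executes H primitives at once.
H = 5
rng = np.random.default_rng(0)

def rollout_flat(actions):
    """Flat agent: must get every one of the H primitive actions right."""
    return all(a == 1 for a in actions)

def rollout_with_skill(skill):
    """Hierarchical agent: one skill choice unrolls into H primitives."""
    return rollout_flat([skill] * H)

# Chance of stumbling on the goal under uniform random exploration:
flat_hits = np.mean([rollout_flat(rng.integers(0, 2, H)) for _ in range(10_000)])
skill_hits = np.mean([rollout_with_skill(int(rng.integers(0, 2))) for _ in range(10_000)])
print(skill_hits > flat_hits)  # True: skills cut the effective horizon from H to 1
```

Random exploration reaches the goal with probability $2^{-H}$ for the flat agent but $1/2$ for the hierarchical one, which is the horizon-reduction intuition the rebuttal describes.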
When Data-Free Knowledge Distillation Meets Non-Transferable Teacher: Escaping Out-of-Distribution Trap is All You Need
Accept (poster)
Summary: This work investigates the vulnerability of data-free knowledge distillation (DFKD) when the teacher model is untrusted, particularly in non-transferable learning (NTL) scenarios where knowledge transfer is blocked. It finds that NTL teachers mislead DFKD by shifting the generator’s focus from useful in-distribution (ID) knowledge to misleading OOD knowledge. To address this, the authors propose Adversarial Trap Escaping (ATEsc), which distinguishes between ID-like (fragile) and OOD-like (robust) synthetic samples based on adversarial robustness. Fragile samples aid normal distillation, while robust ones help forget misleading OOD knowledge, improving DFKD's effectiveness against NTL teachers. Claims And Evidence: The claims made in the submission are partially supported by evidence, but certain aspects require further validation. Specifically, the use of fragile samples for ID knowledge transfer is not entirely convincing. The paper assumes that fragile samples (i.e., those with low adversarial robustness) are ID-like and thus beneficial for knowledge distillation. However, a model’s confidence on a sample does not necessarily correlate with its true distribution alignment—low-confidence predictions can be arbitrarily adjusted to fit almost any data. This raises concerns about whether the fragile samples truly retain ID knowledge or if they are simply artifact samples that do not generalize well to real ID data. Methods And Evaluation Criteria: The datasets, problem, and application in this paper make sense and are relevant. Theoretical Claims: Yes, the mathematical formulations in the paper are generally correct and align with the theoretical claims. Experimental Designs Or Analyses: I have checked all the experiments and designs, but there are significant issues. The authors lack an in-depth analysis of fragile samples, particularly why they are considered high-confidence ID data. 
In my view, the fragile sample space is infinitely large, making their selection potentially unreliable. Supplementary Material: I have reviewed all the files submitted by the authors. Relation To Broader Scientific Literature: The authors are the first to propose an attack on non-transferable learning (NTL) teachers, making this work a novel contribution to the field of data-free knowledge distillation (DFKD) and model security. Essential References Not Discussed: The authors have discussed most of the relevant prior work, covering key research on data-free knowledge distillation (DFKD), non-transferable learning (NTL), adversarial robustness, and model security. There do not appear to be major omissions in citing essential references. Other Strengths And Weaknesses: A key strength of the paper is its originality, as it is the first to explore attacking non-transferable learning (NTL) teachers in the context of data-free knowledge distillation (DFKD). This makes it a novel and significant contribution to the field of model security and adversarial robustness. However, the paper lacks a thorough analysis of the effectiveness of the proposed method. Specifically, the assumption that low-confidence (fragile) samples represent ID knowledge is not well-supported. A more detailed theoretical or empirical analysis of why fragile samples align with ID data would strengthen the claims. Additionally, the paper does not provide sufficient justification for why ATEsc can effectively prevent backdoor attacks. Other Comments Or Suggestions: I consider the manuscript quality needs improvement, and the experimental analysis should be more comprehensive. Questions For Authors: Q1. This paper is not clearly written. The authors should specify what knowledge is being extracted, which datasets can be used for evaluation, and whether the attacked model is in a white-box or black-box setting. Additionally, can the training data distribution of the model be inferred? Q2. 
The second question is about ID-OOD learning task conflicts. If the goal is to create conflicts between ID and OOD learning, would this lead to a significant accuracy loss for the NTL model, especially under the same label space? Has the paper analyzed datasets with different data distributions? Q3. Would adding noise affect the model's performance? Additionally, why is this method effective in defending against backdoor attacks? Q4. High-confidence data could be either OOD data or ID data. How do you distinguish between the two? Ethics Expertise Needed: ['Privacy and Security'] Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer gtda, Thanks for your valuable comments! We address the weaknesses below. Please kindly let us know if you have further concerns. >**Q1: Why do fragile samples align with ID data, and how do we distinguish ID and OOD data.** We are sorry for the confusion. The assumption that "fragile synthetic samples align with ID data" is grounded in the widely accepted consensus within DFKD that the BN loss is a necessary component. Due to the BN loss used to train the generator in SOTA DFKD methods (like DFQ, CMI, and NAYER), the synthetic distribution is constrained to have BN statistics similar to those of the teacher's training distribution. For more details on the BN loss, please find the answer to **Reviewer-saQK@Q1** (sorry for the limited space). As such, when distilling NTL teachers, the joint effects of the BN loss and the adversarial exploration loss cause the synthetic distribution to become similar to the mixture distribution of the real ID and OOD domains. Therefore, according to our findings on the difference in adversarial robustness between real ID and OOD domain samples against NTL teachers (refer to Section 4.1 in the main paper), we make the assumption that fragile synthetic samples are ID-like samples, and robust synthetic samples are OOD-like. Empirically, the DFKD performance improvement from the introduced CKD (Tab. 1-3 in the main paper), the visualization of original synthetic samples and ID-/OOD-like synthetic samples (Figs. 30-31 in the main paper and Figs. R6-R7 in https://anonymous.4open.science/r/icml408/figs.pdf), and the t-SNE feature visualizations (Figs. 28-29 in the main paper) can verify the effectiveness of our assumption. >**Q2: Our settings** We sincerely apologize for the confusion. According to the tasks NTL can handle, we consider DFKD for three types of NTL teachers: **closed-set NTL**, **open-set NTL**, and **NTL-based backdoor**. The difference between these tasks lies in the data used for the ID and OOD domains in training NTL teachers. 
Specifically, - **closed-set NTL**: In this setting, the ID and OOD domains have the same label space, but the two domains have a distribution shift. We choose the dataset-pairs of: SVHN→MNIST-M and CIFAR10→STL10; - **open-set NTL**: In this setting, the label spaces for the ID domain and OOD domain are disjointed. We perform experiments on SVHN→CIFAR10 and CIFAR10→Digits, where Digits is the combination of 5 digits datasets: MNIST, USPS, SVHN, MNIST-M, and SYN-D; - **NTL-based backdoor**: We additionally consider using NTL to conduct training controllable backdoor attacks. We use four triggers: BadNets(sq), BadNets(grid), Blended, Sig on CIFAR10 and see them as the OOD domain. The clean CIFAR10 is regarded as the ID domain. Our aim is to address the malign *OOD-Trap effect* introduced by NTL teachers in DFKD, and thus, regardless of NTL teacher's tasks, our goal is to transfer only the ID domain knowledge from NTL teacher to student. In addition, all NTL teachers are considered as **white-box** models during DFKD. >**Q3: Would creating conflicts between ID and OOD learning lead to a significant accuracy loss for the NTL model?** It's true that NTL teachers have a certain degree of accuracy loss on either ID domain and any unseen domains. Empirically, we compare the performance of SL and Close-/Open-set NTLs on seen and unseen domains, as well as corrupt seen domain (with additive gaussian noise): **Table C1:** Accuracy loss of NTL. 
NetArchs: ResNet-34→ResNet-18

| | SL (CIFAR10) | Close-set NTL (ID: CIFAR10, OOD: STL) | Open-set NTL (ID: CIFAR10, OOD: Digits) |
|-|-|-|-|
| CIFAR10 (test) | 92.9 | 90.8 | 91.2 |
| CIFAR10 (test) + gaussian (std=0.1) | 90.6 | 45.7 | 76.7 |
| CIFAR10 (test) + gaussian (std=0.2) | 80.7 | 10.6 | 28.5 |
| STL (test) | 68.8 | 10.4 | 17.3 |

>**Q4: Why is this method effective in defending against backdoor attacks?**

- **NTL-based backdoor.** In our experiments, we consider using NTL to conduct training-controllable backdoor attacks, where clean data (e.g., CIFAR10) is regarded as the ID domain and clean data with backdoor triggers (e.g., CIFAR10 with BadNets) is seen as the OOD domain. Through our empirical findings (Fig. 4 (b-c) and Tab. 3 in our main paper), NTL-based backdoored teachers can transfer backdoors to students. This is also because of the *OOD-trap effect*, as we mainly analyzed in Section 3 of our main paper.
- **Defending against NTL-based backdoor.** Our ATEsc can effectively defend against the backdoor transfer from teacher to student through the DFKD process. This is because in the NTL-based backdoor task, the OOD domain (e.g., CIFAR10+BadNets) and the ID domain (original CIFAR10) exhibit significant differences in their adversarial robustness against NTL teachers. As such, the proposed ATEsc can distinguish ID-like and OOD-like synthetic samples and let the student learn ID domain knowledge. Visualization results in Fig. 29 and Fig. 31 provide evidence for the effectiveness of distinguishing between ID and OOD synthetic data for NTL-based backdoor teachers.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response. However, I would like to further clarify the assumption made in both Q1 and Q4 regarding the alignment between adversarial robustness and the ID/OOD (or backdoor) categorization. I appreciate the discussion, but I will maintain my current evaluation scores. In your response, fragile synthetic samples are assumed to be ID-like, while robust ones are assumed to be OOD-like. 
This assumption appears to be rooted in the observed adversarial robustness differences between ID and OOD samples under NTL teachers. However, this premise becomes more uncertain in the backdoor setting, where backdoored samples may be confidently and robustly classified by the teacher model. Specifically: - Backdoored samples, although manipulated, can be confidently and robustly classified into the target class by the backdoored NTL teacher. - Therefore, they may behave similarly to ID samples in terms of confidence and possibly even adversarial fragility. - Consequently, it seems possible that some fragile samples could in fact be backdoored OOD samples, especially if they lie close to the decision boundary but are still confidently classified. Could you clarify why these fragile samples are unlikely to be backdoor samples, and why the ATEsc framework can reliably distinguish between ID and backdoor-induced OOD samples under this ambiguity? --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We would like to clarify more about *distinguishing clean/backdoor samples according to their adversarial robustness against NTL-based backdoor teachers*. >**1. Directly evidence for the *dissimilarly adversarial fragility and confidence* of backdoor/clean samples**. We first provide direct evidence regarding the significant difference of **adversarial robustness** and **confidence** between clean and backdoor data in https://anonymous.4open.science/r/icml408/figs.pdf (**Fig. R12-R15**). We argue that such a difference is caused by the training manner of NTL, which we will discuss in the following parts of this response. >**2. The NTL training loss leads to significantly different behavior for clean and backdoor samples.** **2.1 How we conduct NTL-based backdoored teacher training** We use NTL to conduct *training controllable backdoor attacks* [C1] for a teacher model. 
Formally, we have a clean ID domain $\mathcal{D}_{\text{id}}=\{(x_i,y_i)\}_{i=1}^{N_\text{id}}$, and we can get an OOD domain by adding a trigger $\delta$ to $\mathcal{D}_{\text{id}}$, i.e., $\mathcal{D}_{\text{ood}}=\{(x_i+\delta,y_\text{bd})\}_{i=1}^{N_\text{id}}$, where $y_\text{bd}$ is the target class. By minimizing $\mathcal{L}_{\text{NTL-cls}}$, we train an NTL-based backdoor teacher $f$:

$$\mathcal{L}_{\text{NTL-cls}}=\mathcal{L}_{\text{id}}-\underbrace{\min\big(1,\alpha\cdot\mathcal{L}_{\text{out}}\cdot\mathcal{L}_{\text{feat}}\big)}_{\mathcal{L}_{\text{NT}}}+\lambda_{\text{cls}}\cdot\mathcal{L}_{\text{cls}}$$

$$=\mathbb{E}_{(x,y)\sim \mathcal{D}_{\text{id}}}\Big\{D_{\text{KL}}(f(x), y)-\underbrace{\min\big(1,\alpha\cdot D_{\text{KL}}(f(x+\delta),f(x)) \cdot \text{MMD}(f_e(x),f_e(x+\delta))\big)}_{\mathcal{L}_{\text{NT}}}+\lambda_{\text{cls}}\cdot\mathcal{L}_{\text{CE}}(f(x+\delta),y_{\text{bd}})\Big\}$$

where $\alpha$ and $\lambda_\text{cls}$ are weights, and $f_e$ is the feature extractor in $f$. The **major difference** between NTL-based backdoor learning and conventional backdoor learning [C1,C2] is the additional term $\mathcal{L}_{\text{NT}}$. Under the combined effect of $\mathcal{L}_{\text{cls}}$ and $\mathcal{L}_{\text{NT}}$, *backdoor samples* will not only be classified into the target class $y_\text{bd}$ (as in conventional backdoor learning), but will also lie far from their clean versions in feature and output space. The existing defense ABD [C2], although successful in defending against conventional backdoored teachers, fails to prevent the transfer of the backdoor from an NTL-based backdoored teacher (Sec. C.5 in our main paper).

**2.2 A margin perspective on why the adversarial robustness difference holds for the NTL-based backdoor task**

NTL training causes significantly different **margins** (defined as the minimal distance from a sample point to the decision boundaries [C3]) for *clean samples* and *backdoored samples*. 
- *For clean samples*, learning the correct classification results in relatively **complex decision boundaries** between the clean data classes and **small margins** for clean data points.
- *For backdoor samples*:
  - minimizing $\mathcal{L}_{\text{cls}}$ forces all backdoor data to be predicted as a single class, which is simple, so **no boundaries** pass through the backdoor samples;
  - maximizing $\mathcal{L}_{\text{NT}}$ further pushes the backdoor cluster far away from the clean sample clusters, resulting in **very large margins** for backdoor samples (i.e., **no backdoor sample will be close to a decision boundary**).

As a larger margin corresponds to stronger robustness [C3], we can assume NTL teachers exhibit strong/fragile robustness against adversarial attacks on backdoored/clean data. Besides, **without $\mathcal{L}_{\text{NT}}$**, the teacher training degrades to *conventional backdoor training*, and the margins of backdoor samples are not constrained to become large enough. In such a situation, we agree that some fragile samples could be backdoored samples. *However, we emphasize that we do not target conventional backdoor situations. The proposed ATEsc is designed to defend against backdoor teachers trained by NTL.*

**2.3 Ablation studies on the influence of $\mathcal{L}_{\text{NT}}$.** The ablation studies on the influence of $\mathcal{L}_{\text{NT}}$ on the **adversarial robustness** and **confidence** differences are shown in https://anonymous.4open.science/r/icml408/figs.pdf (**Fig. R16-R19** and **Fig. R20-R23**, respectively). These results provide empirical support for our analysis.

---

[C1] Backdoorbench: A comprehensive benchmark of backdoor learning, NeurIPS'22\
[C2] Revisiting data-free knowledge distillation with poisoned teachers. ICML'23\
[C3] Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness. ICLR'23

---

Thank you for recognizing our original and significant contribution. 
We’d greatly appreciate a score reconsideration and the chance to share our work with the community.
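[Editor's note] The margin argument in the rebuttal above can be made concrete with a toy linear classifier. The weights, the threshold `eps`, and the sample points below are our own illustrative choices, not the paper's model:

```python
import numpy as np

# Toy binary linear classifier: sign(w.x + b).
w = np.array([1.0, -1.0])
b = 0.0

def margin(x):
    """Distance from x to the decision boundary of the linear model."""
    return abs(float(x @ w) + b) / np.linalg.norm(w)

def is_fragile(x, eps=0.3):
    """Fragile = a worst-case perturbation of norm eps flips the prediction,
    which for a linear model happens exactly when the margin is below eps."""
    return bool(margin(x) < eps)

near_boundary = np.array([0.1, 0.0])  # small margin: flips easily (fragile, "ID-like")
far_away = np.array([3.0, -3.0])      # large margin: survives the attack (robust, "OOD-like")
print(is_fragile(near_boundary), is_fragile(far_away))  # True False
```

Under the rebuttal's assumption, synthetic samples with small margins (fragile under an untargeted attack) are kept as ID-like, while large-margin (robust) samples are treated as OOD-like.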
Summary: This paper investigates what would happen if one tried to apply data-free knowledge distillation (DFKD) to non-transferable learning (NTL) teachers. The paper identifies the OOD trap effect: the generator shifts to generating OOD samples, causing the student model to learn only misleading OOD knowledge. To solve this problem, this paper proposes to first identify and filter out those OOD samples, as they are more robust to untargeted adversarial attacks. This simple strategy is called CKD and is fairly effective, as shown by the experiments. The authors further propose ATEsc, which includes a regularization term aiming to further suppress OOD knowledge transfer. Experiments demonstrate the effectiveness of the proposed method. Claims And Evidence: The proposed method heavily builds on the proposed OOD trap effect. Although empirical evidence demonstrates the OOD trap effect, the proposed reason behind such an effect is not adequately supported. * In order for the OOD trap effect to happen, the generator has to generate samples closer and closer to the OOD samples. This paper explains this by "the [generator] will be optimized toward synthesizing OOD-like samples to satisfy the maximization of the distribution discrepancy between [student] and [teacher]." However, following this logic, the generator could just generate imperceptible samples to maximize the discrepancy (like adversarial examples), and it is not clear why the generator has to output OOD-like samples to maximize the discrepancy. * In line 195 right column: "We assume a student S in DKFD currently only has ID domain knowledge." It seems the goal is to let the student learn ID domain knowledge, so why can we assume this in the first place? * The second explanation provided by this paper is the "task conflicts," i.e., the teacher's outputs on ID and OOD data are very different, and thus the student faces "task conflicts" when trying to distill the teacher's knowledge. 
However, if there is indeed a "task conflict," i.e., the ID and OOD data are close enough to be considered conflicting tasks, why do they have vastly different adversarial robustness? * The introduction of L_forget (12) seems ad hoc, and it is unclear why maximizing L_forget can push the student model to forget the teacher's misleading knowledge. Moreover, with this term (ATEsc), the method doesn't seem to be better than the vanilla CKD method. Methods And Evaluation Criteria: Minor question: although this paper is about data-free knowledge distillation, where a generator must be used, I'm curious to see how regular knowledge distillation would perform with NTL teachers. Theoretical Claims: N/A Experimental Designs Or Analyses: See previous discussions Supplementary Material: Supplementary material is skimmed through Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: this paper investigates an interesting problem, and the proposed OOD trap effect is very interesting. There is potential impact, especially if the phenomenon is well understood and well utilized. Other Weaknesses: This paper is not well written and has many grammar issues. For example, * (line 14 right) "due to the unavailable of..."; * (line 186 left) "We analysis on..."; * (line 189 right) "distribution D_s will occur distribution shift..." Other Comments Or Suggestions: N/A Questions For Authors: See previous discussions Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer saQK, Thanks for your valuable reviews! We address your concerns as follows. Please let us know if anything remains unclear. >**Q1: Why the generator has to output OOD-like samples** This is because DFKD methods commonly use the BN loss [B1] as a regularization for training the generator $G$. Specifically, - **When training NTL**, each batch contains a mixture of ID and OOD samples. As a result, the BN layers in pre-trained NTL teachers record statistical information for the mixture distribution of the ID and OOD domains (mean $\mu_l$ and variance $\sigma_l^2$ for layer $l$). - **When training $G$**, the BN loss is the divergence between the synthetic statistics $\mathcal{N}(\mu_l(x_{syn}), \sigma_l^2(x_{syn}))$ and the teacher's BN statistics $\mathcal{N}(\mu_l, \sigma_l^2)$. Minimizing the BN loss makes the synthetic samples follow statistics similar to those of the teacher's training samples. The joint effect of the BN loss and the adversarial exploration loss (Eq. 7 in the main paper) constrains the $G$ to: - (i) follow the statistics of the NTL teacher's training data (i.e., a mixture of the ID and OOD domains), and - (ii) maximize the discrepancy between the student $S$ and the NTL teacher. Thus, if the $S$ only has ID domain knowledge, the $G$ will be optimized to synthesize OOD-like samples. Without the BN loss, the $G$ does not need to generate OOD-like samples, and DFKD will not be influenced by the OOD trap from NTL teachers. Unfortunately, the ID performance will then suffer from the low quality of the synthetic samples. Corresponding empirical evidence is shown in Tab. B1. More results and synthetic samples w/ and w/o BN are in https://anonymous.4open.science/r/icml408/figs.pdf (Tab. R1 and Fig. R1-R5).
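As a rough illustration of the BN-statistics matching described above, here is a minimal plain-Python sketch (real DFKD implementations apply this over every BN layer of the teacher, often as an $\ell_2$ or KL penalty on tensors; the function and variable names below are illustrative, not the paper's code):

```python
def bn_matching_loss(feats, running_mean, running_var):
    """DeepInversion-style regularizer: penalize the gap between the batch
    statistics of synthetic activations and the (mean, var) recorded by one
    BN layer of the pre-trained teacher.
    feats: list of activation vectors, one per synthetic sample."""
    n, dim = len(feats), len(feats[0])
    loss = 0.0
    for j in range(dim):
        col = [f[j] for f in feats]
        mu = sum(col) / n
        var = sum((v - mu) ** 2 for v in col) / n
        loss += (mu - running_mean[j]) ** 2 + (var - running_var[j]) ** 2
    return loss
```

Because an NTL teacher's running statistics summarize a mixture of ID and OOD batches, driving this term toward zero pulls the generator toward that mixture distribution, which is the mechanism behind the OOD trap described above.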
**Table B1:** ID: CIFAR10, OOD: STL; ResNet-34→ResNet-18

| | IAcc↑ | OAcc↑ |
|-|-|-|
| NTL Teacher | 90.8±0.1 | 10.4±0.0 |
| CMI w/ bn | 37.7±3.5 (-53.1) | 11.0±0.4 (+0.6) |
| CMI w/o bn | 31.1±3.2 (-59.7) | 27.4±3.1 (+17.0) |
| NAYER w/ bn | 48.7±1.1 (-42.1) | 10.4±0.0 (+0.0) |
| NAYER w/o bn | 40.8±2.9 (-50.0) | 29.3±2.2 (+18.9) |

[B1] Dreaming to distill: Data-free knowledge transfer via deepinversion, CVPR'20

>**Q2: Why assume a student only has ID knowledge first** Sorry for the confusion. We aim to demonstrate that in adversarial exploration-based DFKD, even if a student $S$ learns only ID knowledge at some *initial or intermediate stage*, the $S$ will inevitably lead the $G$ to synthesize OOD-like samples, and the student will then be taught misleading OOD knowledge.

>**Q3: Why close ID and OOD data have vastly different adversarial robustness** Because the training manner of NTL causes significantly different **margins** (i.e., the minimal distance from a sample point to the decision boundaries [B2]) for samples even from two similar domains: - **For the ID domain**, the objective is to learn correct classification (e.g., 10-class classification for CIFAR10). This results in relatively complex decision boundaries between ID domain classes and small margins for ID data points. - **For the OOD domain**, the objective is to force all data to be predicted as a single class (Eq. 5), and to maximize the domain discrepancy on both logits and features (Eq. 4). Predicting all samples as one class is simple, and no boundaries will pass through the OOD clusters. Besides, maximizing the discrepancy on logits/features between domains further pushes the OOD cluster away from the ID domain clusters, resulting in very large margins for OOD data. As a larger margin corresponds to stronger robustness [B2], we assume NTL teachers exhibit strong/fragile robustness against adversarial attacks on OOD/ID data. Empirical studies verify our assumption (Fig. 19-24 in the main paper).
We provide more evidence in https://anonymous.4open.science/r/icml408/figs.pdf (Fig. R8-R11) to analyze the effects of NTL's objectives on robustness.

[B2] Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness, ICLR'23

>**Q4: The forget term** If the student $S$ and the NTL teacher have similar predictions on OOD-like samples, it suggests that the $S$ has learned misleading OOD knowledge. By maximizing $\mathcal{L}_{\text{forget}}$, the $S$'s outputs on OOD-like samples are encouraged to diverge from the teacher's outputs, thus mitigating the misleading OOD knowledge transfer. This term is effective in suppressing OOD knowledge transfer, as evidenced by the decreased OLAcc (Tab. 2) and the reduced ASR (Tab. 3). But it also degrades ID knowledge transfer. Such a trade-off depends on whether the priority is on preserving ID transfer or preventing OOD transfer.

>**Q5: Regular KD results** Thanks! We present the results of regular KD with ID data (KD-ID), OOD data (KD-OOD), and ID+OOD data (KD-All) in Tab. B2. More results are in https://anonymous.4open.science/r/icml408/figs.pdf (Tab. R2)

**Table B2:** ID: CIFAR10, OOD: STL; ResNet-34→ResNet-18

| | IAcc↑ | OAcc↑ |
|-|-|-|
| NTL Teacher | 90.8±0.1 | 10.4±0.0 |
| KD-ID | 80.4±0.2 (-10.4) | 58.1±1.0 (+47.7) |
| KD-OOD | 10.6±0.0 (-80.2) | 10.4±0.0 (+0.0) |
| KD-All | 76.6±0.4 (-14.2) | 10.5±0.0 (+0.1) |
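To make the Q4 discussion concrete, here is a minimal sketch of a forget-style objective, assuming a KL-divergence form between softmaxed teacher and student logits (the paper's Eq. 12 may use a different formulation; all names here are illustrative):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def forget_term(student_logits, teacher_logits):
    """Agreement between student and teacher on one OOD-like sample,
    measured as KL(teacher || student). Training *maximizes* this value
    so the student's outputs diverge from the teacher's on OOD-like data."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Since the objective is maximized, it presumably enters the student's loss with a negative sign; as the rebuttal notes, this trades some ID transfer for suppressed OOD transfer.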
Summary: This paper identifies the OOD trap effect from NTL teachers in DFKD, i.e., misleading knowledge from OOD data may mislead students' learning process. The authors propose a plug-and-play ATEsc method to ensure that students can benefit from the NTL teacher model. This article can be considered as filling a gap in the field of DFKD, and the logic is clear, vivid, and easy to understand, accompanied by a certain theoretical basis. Claims And Evidence: Most of the claims in this article have been validated. Methods And Evaluation Criteria: Toy experiments and validation experiments can support the textual conclusions. Theoretical Claims: Yes Experimental Designs Or Analyses: The experimental results are relatively complete, and the research is sufficient within the dataset scope mentioned by the authors. However, to my knowledge, many DFKD methods can be applied to larger datasets, such as CIFAR-100, Tiny ImageNet, or ImageNet. In addition, there are relatively few backbone combinations between teachers and students, and they are all homologous models, such as VGG or ResNet. This article introduces adversarial perturbations in training, which may introduce significant additional overhead, and a discussion of this cost is missing. In addition, there are other methods that address distribution shift; it would be even better to discuss the similarities and differences with these methods. Supplementary Material: I have thoroughly reviewed the supplementary materials, including the toy experiments, experimental settings, experimental supplements, and theoretical and empirical verification. Relation To Broader Scientific Literature: The concept of teachers in this article was inspired by NTL (Wang et al., 2022). Essential References Not Discussed: I know of some other work on distribution shift issues in DFKD tasks.
It would be even better to discuss the similarities and differences between this article and these methods (discussion only, without experiments). [1] Momentum adversarial distillation: Handling large distribution shifts in data-free knowledge distillation. NeurIPS 2022. [2] Preventing catastrophic forgetting and distribution mismatch in knowledge distillation via synthetic data. WACV 2022. [3] Robust and resource-efficient data-free knowledge distillation by generative pseudo replay. AAAI 2022. [4] De-confounded Data-free Knowledge Distillation for Handling Distribution Shifts. CVPR 2024. Other Strengths And Weaknesses: Strengths: * This article focuses on the impact of NTL teachers and data distribution differences on student learning in DFKD tasks. * The writing is good, with clear logic and sufficient small-scale experiments to clarify the motivation and effectiveness of the method. Weaknesses: See Essential References, Suggestions, and Questions. Other Comments Or Suggestions: * I suggest the authors compare and discuss several works on data distribution shift in DFKD tasks, which will help readers further clarify the task setting and value of this article. * What would happen if teachers and students used different model series, such as ViTs and CNNs, or different CNN series models? Questions For Authors: * The performance of the original versions of CMI and NAYER seems very unstable, e.g., in IAcc. What do you think is the main reason? As far as I know, the generation process of CMI relies on the statistical information of the BN layers, so it seems unsuitable for experiments across datasets. After combining with ATEsc, good results were achieved. Does distinguishing between ID and OOD data during training help solve the problem of mismatched statistical information? * Is the default number of attack steps 5? How much training cost will this introduce? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer iM7M, Thanks for your positive opinions. We address your concerns as follows. If anything remains unclear, please do not hesitate to contact us. >**Q1: Discussion of distribution shift in DFKD** Thanks for this valuable suggestion! [1-4] focus on different types of distribution shifts when distilling an SL teacher trained on the ID domain using DFKD. - [1-3] focus on the non-stationary distribution problem, where **shifts between the synthetic distributions at different training stages (i.e., epochs) cause catastrophic forgetting in the student models**. This issue can be mitigated by memory bank-based strategies, where old synthetic samples are stored and replayed in future epochs [1-2], or by using an additional VAE to model the previously observed synthetic samples [3]. - [4] addresses the **distribution shift between synthetic data and real training data**. They propose a novel perspective with causal inference to disentangle the student models from the impact of such shifts. Our work explores DFKD under NTL teachers. We find that NTL teachers (pretrained on an ID domain and an OOD domain) result in the OOD trap effect for DFKD. One key reason is the **ID-to-OOD synthetic distribution shift**, which is caused by the adversarial exploration of DFKD and NTL's conflicting learning targets for the ID and OOD domains. Such a shift will not occur when distilling an SL teacher trained on the ID domain using DFKD. >**Q2: Experiments on more backbones and larger datasets.** The results with different network architecture series for teachers and students are shown in Tab. A1-A4. In addition, the results on a larger dataset are shown in Tab. A5. From the presented results, the proposed ATEsc is consistently effective in mitigating the OOD trap effect caused by NTL teachers.
**Table A1:** Datasets: ID: CIFAR10, OOD: STL; NetArchs: ResNet-34→VGG-11

| | IAcc↑ | OAcc↑ |
|-|-|-|
| NTL Teacher | 90.8±0.1 | 10.4±0.0 |
| NAYER | 10.3±0.4 (-80.5) | 9.7±1.3 (-0.7) |
| +CKD | 79.1±0.1 (-11.7) | 40.0±1.1 (+29.6) |
| +ATEsc | 68.4±2.1 (-22.4) | 51.7±2.4 (+41.3) |

**Table A2:** Datasets: ID: CIFAR10, OOD: STL; NetArchs: VGG-13→ResNet-18

| | IAcc↑ | OAcc↑ |
|-|-|-|
| NTL Teacher | 92.7±0.1 | 10.6±0.1 |
| NAYER | 18.1±0.3 (-74.6) | 10.6±0.2 (+0.0) |
| +CKD | 30.4±7.2 (-62.3) | 20.8±5.8 (+10.2) |
| +ATEsc | 28.6±4.2 (-64.1) | 18.7±1.9 (+8.1) |

**Table A3:** Datasets: ID: CIFAR10, OOD: Digits; NetArchs: ResNet-34→VGG-11

| | IAcc↑ | OLAcc↓ |
|-|-|-|
| NTL Teacher | 91.1±0.1 | 100.0±0.0 |
| NAYER | 10.6±0.0 (-80.5) | 100.0±0.0 (-0.0) |
| +CKD | 76.0±5.9 (-15.1) | 5.1±0.4 (-94.9) |
| +ATEsc | 74.5±3.9 (-16.6) | 5.5±0.1 (-94.5) |

**Table A4:** Datasets: ID: CIFAR10, OOD: Digits; NetArchs: VGG-13→ResNet-18

| | IAcc↑ | OLAcc↓ |
|-|-|-|
| NTL Teacher | 92.9±0.0 | 99.5±0.3 |
| NAYER | 23.7±0.0 (-69.2) | 98.0±1.1 (-1.5) |
| +CKD | 36.2±19.9 (-56.7) | 7.8±6.8 (-91.7) |
| +ATEsc | 48.1±1.7 (-44.8) | 1.3±0.9 (-98.2) |

**Table A5:** Datasets: ID: CIFAR100, OOD: STL; NetArchs: ResNet-34→ResNet-18

| | IAcc↑ | OLAcc↓ |
|-|-|-|
| NTL Teacher | 68.5±0.6 | 100.0±0.0 |
| NAYER | 5.9±1.2 (-62.6) | 100.0±0.0 (-0.0) |
| +CKD | 48.4±0.9 (-20.1) | 14.6±12.5 (-85.4) |
| +ATEsc | 46.5±1.0 (-22.0) | 0.0±0.0 (-100.0) |

>**Q3: Stability of the original CMI/NAYER and DFKD+ATEsc** The performance of DFKD is sensitive to network initialization, even when distilling SL teachers. The sensitivity can be further increased when distilling NTL teachers. Empirically, our toy experiment in Appendix A offers initial evidence supporting this point (please compare Fig. 9-11). As for the BN layers, we mix ID data and OOD data in a batch when training NTL teachers, so the well-trained BN layers record the mixture of statistical information from the ID and OOD domains. All SOTA DFKD methods in our experiments rely on the BN loss to *train the generator $G$*.
Under the joint effect of the BN loss and the adversarial exploration loss, the synthetic distribution will undergo an ID-to-OOD shift when distilling NTL teachers. This further leads to ID-OOD learning task conflicts for students, hindering the student's performance on the ID domain. We argue that *distinguishing between ID-like and OOD-like synthetic data during the student's training* can directly solve the ID-OOD conflict issue for students. However, it may not help solve the problem of mismatched statistical information, as the student's training does not directly rely on the BN information from teachers. >**Q4: The cost of adversarial perturbation** We use a **one-step** untargeted PGD attack in our method, with the perturbation bound $\epsilon=24$. Using NAYER as an example, in each epoch we only synthesize 800 new samples after $G$'s training. We perform PGD on these samples and save them into the ID-like or OOD-like pool. Thus, the additional time is only the time for performing PGD on 800 × #epochs samples. As shown in Tab. A6, training NAYER for 200 epochs on one RTX 4090 adds only about 2 minutes.

**Table A6:** Datasets: ID: CIFAR10, OOD: STL; NetArchs: ResNet-34→ResNet-18

| | NAYER | +one-step PGD |
|-|-|-|
| Time | 1h55m | 1h57m |
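A toy sketch of the robustness-based filtering rule discussed in Q4: a synthetic sample is perturbed with a single untargeted gradient-sign step, and it is routed by whether the teacher's prediction survives. The threshold classifier, the $\epsilon$ value, and all names below are illustrative stand-ins, not the paper's code:

```python
def one_step_filter(x, label, loss_grad, predict, eps):
    """Route a synthetic sample into the OOD-like pool if the teacher's
    prediction is robust to a one-step sign attack, else the ID-like pool."""
    x_adv = [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, loss_grad)]
    return "ood_like" if predict(x_adv) == label else "id_like"

# Toy 1-D "teacher": class 1 iff the single feature is positive.
predict = lambda x: 1 if x[0] > 0 else 0
eps = 0.24  # illustrative bound (the paper uses epsilon = 24 on a 0-255 pixel scale)

# Small-margin sample: the one-step attack flips the prediction -> ID-like.
near = one_step_filter([0.1], 1, [-1.0], predict, eps)
# Large-margin sample: the prediction survives the attack -> OOD-like.
far = one_step_filter([2.0], 1, [-1.0], predict, eps)
```

This directly mirrors the margin argument from Q3: NTL pushes OOD clusters far from any decision boundary, so their predictions survive a bounded one-step perturbation, while ID samples near complex class boundaries flip.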
On Bitrates of Very Sparse Superposition Codes
Reject
Summary: This paper tackles the problem of the reliability of already trained (or hardcoded) sparse autoencoders for feature disentanglement. It does so from an information-theoretic perspective. The main result shows that one-step methods such as SAEs are suboptimal, and that iterative methods are much more efficient. I'm very unfamiliar with the literature, so I apologise for mistakes/misunderstandings, and do not give much weight to my review. Claims And Evidence: The claims seem to be supported by clear and convincing evidence. Methods And Evaluation Criteria: They seem to make sense. Theoretical Claims: I have checked the math only superficially, but it seems correct. Experimental Designs Or Analyses: They seem on point. Supplementary Material: I did not check the supplementary material in detail, as I am not familiar with the math. Relation To Broader Scientific Literature: Tackles a problem in a very popular area of research. Essential References Not Discussed: I am not familiar with the literature. Other Strengths And Weaknesses: The paper seems on point, as it tackles an important topic in the modern literature (SAEs), and the results seem solid. Other Comments Or Suggestions: Weaknesses:
The structure of the paper is unusual: why is most of the paper Section 3? I would divide it into background/methods/results. Questions For Authors: I'm aware that the model does not tackle learning and assumes that the dictionary has already been learned; however, I feel the implications of the results for the training of SAEs or other methods should be discussed. Would an interesting proposal be to train an SAE, use it to extract the dictionary D, and then apply the iterative method with D? Or would this problem be reflected in the training as well, so that we should rely on an alternative method to discover the dictionary too? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for taking the time to write your review. We took care to make this work reasonably accessible to readers without a background in compressive sensing. (See our response to Reviewer VkuK.) We're happy to see that, despite your unfamiliarity with the literature, our claims and evidence were convincing. Your question about how this work relates to training SAEs is natural. Indeed, in this work we suppose that the underlying dictionary is known and focus on the problem known as sparse reconstruction. On the other hand, an SAE also needs to learn the dictionary. The most obvious significance of our results is that the encoder layer of an SAE may not be able to reconstruct the sparse latent code with reasonable accuracy, whether or not the correct dictionary is used. In particular, it is likely that some dimensions of the latent code can _never_ be correctly inferred, and it would follow that their associated codewords can never be learned by the SAE. So, as you guessed, this suggests that we need to change our overall strategy for dictionary learning. However, we hope that our analysis, and the information-theoretic point of view in particular, can inform interpretability research beyond SAEs. We would include the main points of our response to reviewer PjJn in the conclusion of the camera-ready version. We're less sure whether the structure of our work should be adjusted. The sub-sections of Section 3 could potentially be re-numbered. However, our work doesn't follow the "methods/results" structure common in empirical papers, and the overall sequence should remain the same.
Summary: This paper investigates the efficiency of different methods for decoding "superposition codes" - specifically focusing on the sparse reconstruction problem that appears in sparse autoencoders used for neural network interpretability. The authors focus on a simplified model where the goal is to recover a sparse binary vector (representing a k-element subset of {1,...,N}) from a lower-dimensional linear projection. They compare the performance of "one-step estimates" (simple non-iterative methods used by sparse autoencoders) with iterative methods from compressive sensing. Key contributions: 1. It provides theoretical guarantees on the performance of one-step methods. 2. It empirically shows that the gap between one-step methods and iterative methods is significant. Claims And Evidence: Supported claims: - Inefficiency of one-step methods: The theoretical guarantees in Section 3.3 and empirical results in Figure 3 convincingly demonstrate that one-step methods require approximately 2.7 dimensions per bit. The authors provide formal proofs (Proposition 3 and Corollary 2) and extensive numerical experiments across different values of N and k. - Experiments in Figure 5 show that matching pursuit outperforms one-step methods, requiring only about 1.3 dimensions per bit. This comparison is well documented and tested across multiple problem sizes. Methods And Evaluation Criteria: Methods: - Theoretical proof of the bitrate consumption of one-step encoding methods vs. iterative methods - Visualization in Figure 2 - Empirical performance on synthetic data - Figures 3 and 5 Limitations: - Experiments are only conducted on synthetic data Theoretical Claims: The proof makes sense to me. Experimental Designs Or Analyses: The experiments make sense to me. Supplementary Material: N/A.
Relation To Broader Scientific Literature: - Understanding superposition codes and sparse autoencoders - Efficiency in interpreting language model activations Essential References Not Discussed: So far so good Other Strengths And Weaknesses: Generally, the paper is clear in its message and proofs. Weakness: - The authors did not discuss why efficiency is an issue in sparse autoencoders for LLM interpretation. Other Comments Or Suggestions: N/A Questions For Authors: - Can you conduct experiments on real LLM activations and visualize the results of the one-step method vs. the iterative method? - Why is efficiency an issue in interpreting LLM activations? Sparse autoencoders are usually easy to train. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for taking the time to write your review. Sparse autoencoders are indeed "efficient" in the sense that they are computationally inexpensive, at least relative to other dictionary learning methods. In this work, we explain that they are inefficient in an _information-theoretic_ sense. For example, we argue that, as the sparsity $k$ and latent dimension $N$ grow to infinity within a certain regime, an SAE encoder requires the dimension $d$ of a superposition code to grow faster than the entropy of the underlying sparse latent. We hope that this resolves the main weakness you perceived in this work! Your suggestion to conduct experiments using real LLM activations is natural. However, this work focuses on providing theory to inform future methods. In our interactions with researchers doing empirical work on SAEs, we realized that ideas from compressive sensing are frequently underappreciated. On the other hand, we were unable to find references characterizing the "inefficiency" of one-step methods in a regime meaningful to SAE applications. Before these insights can be used practically, we believe it's important to make the findings of the present work better known. We discuss the relevance of our work to interpretability in more depth in our response to Reviewer PjJn.
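To illustrate the information-theoretic framing: the entropy of the sparse latent in the paper's toy model (a uniformly random $k$-subset of $\{1,\dots,N\}$) is $\log_2 \binom{N}{k}$ bits, and "dimensions per bit" is just $d$ divided by that quantity. A quick sketch with illustrative numbers (the specific $N$ and $k$ below are our own choices, not taken from the paper):

```python
import math

def log2_binom(n, k):
    """log2 of C(n, k) via log-gamma, numerically stable for large n."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)) / math.log(2)

N, k = 2 ** 20, 16            # illustrative problem size
bits = log2_binom(N, k)       # entropy of the sparse latent, in bits
d_one_step = 2.7 * bits       # code dimension needed at ~2.7 dims/bit (one-step)
d_matching = 1.3 * bits       # code dimension needed at ~1.3 dims/bit (matching pursuit)
```

At these sizes the latent carries roughly 276 bits, so the ~2.7 dims/bit rate reported for one-step estimates implies a code dimension of several hundred, about twice what the reported matching-pursuit rate would need.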
Summary: This paper investigates the efficiency of different decoding methods for superposition codes, which are used in LLM interpretability via sparse autoencoders (SAEs). The authors compare "one-step estimates" (used by sparse autoencoders) with iterative decoding methods. There is lots of theory and some experiments. The experiments are on toy settings of sparse codes. Claims And Evidence: The paper makes clear claims about decoding efficiency that are adequately supported by both theoretical analysis and empirical results, e.g., the theoretical guarantees on one-step methods in Section 3.3 match the numerical experiments in Figure 3. However, since they only work with a toy setting, the generalizability of their results cannot be fully trusted. I know of empirical evidence from real LLMs that matching pursuit methods do not improve SAEs a lot (see "Essential References Not Discussed"), which makes me very concerned. Methods And Evaluation Criteria: The synthetic evaluation setup makes sense for studying theoretical limits of decoding efficiency. The evaluation metric (dimensions required per bit of entropy) is appropriate for the toy setting. While the toy scenario is simplified compared to real neural networks, it's sufficient for the specific claims about coding efficiency. Theoretical Claims: I think the claims are correct, but I am an empirical researcher, not a theoretical researcher, so I cannot be confident. Experimental Designs Or Analyses: I think the experiments are generally sound. They are pretty toy, however, and I am very skeptical about transfer to reality. Supplementary Material: I skimmed this.
Relation To Broader Scientific Literature: Interpretability has wide appeal, and I think if this research impacted the interp community it would have positive spillover effects. Essential References Not Discussed: This paper does not discuss how matching pursuit has already been tried in the SAE literature on several occasions: * https://arxiv.org/abs/2410.14670 specifically measures encoder error with matching pursuit. It has very little impact on SAE performance on net. * https://www.lesswrong.com/posts/C5KAZQib3bzzpeyrg/full-post-progress-update-1-from-the-gdm-mech-interp-team#Replacing_SAE_Encoders_with_Inference_Time_Optimisation * https://openreview.net/pdf?id=Pa1vr1Prww is another ITO application. To be clear, the last two references are a bit obscure, but I think references 1-2 are pretty Google-able. Other Strengths And Weaknesses: I compliment the clear pedagogy of the figures of this paper. Other Comments Or Suggestions: No typos noticed. Questions For Authors: Is there some reason I should not be skeptical that your claims matter in practice, given that matching pursuit has already been tried and does not appear to boost performance that much? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for raising the question of how our findings relate to real applications of SAEs in interpretability. In fact, we hope this work will help interpretability researchers become more aware of ideas from compressive sensing. However, after reading your review, we realize it may be useful to clarify the broader significance of these ideas. Let's begin by addressing your main concern. It's true that attempts to apply iterative reconstruction at inference time, as in Engels' work, have not had much success. However, the dictionaries used by these inference-time optimization methods are still learned using non-iterative ("one-step") encoders at training time. If some latent variable cannot be reliably inferred during training, its codeword will almost certainly not become an element of the dictionary, and no amount of optimization at inference time will be able to read the latent. So, the failure of inference-time optimization (ITO) does not rule out the possibility that SAE error could be modeled by an iterative encoder provided with the right dictionary. Indeed, when its bitrate exceeds the capabilities of a top-$k$ encoder, a superposition code from our own toy example will look like SAE dark matter in the language of Olah/Engels. In our own work, however, we are explicitly holding off on proposing an improvement over SAEs. Our message is _not_ that SAEs need to be modified to use specific compressive sensing algorithms, or that real experiments with residual vectors will resemble our own toy experiment. Our message is simply that, in general, reading latent variables from a superposition code through "one-step estimates" is very inefficient from a coding-theoretic point of view and that even surprisingly naive methods (like matching pursuit) can do much better. This is true for generic reasons and is a well-known phenomenon in the field of compressive sensing. 
However, before this work, we weren't able to find a comparative analysis of SAE-type encoders in a regime that makes sense to interpretability researchers. How can this message inform work on SAEs? As we see it, one huge challenge in mechanistic interpretability is that we don't know the true structure of the (hypothetical) interpretable latent representation. For example, it's possible that some "features" have special relationships, should be viewed as elements of a continuous space, etc. However, we can still talk about the entropy of the latent representations and consider the bitrate at which it is encoded by an activation vector. Even beyond the toy model of this paper, it is very natural that linear estimates for properties of the latent representation would suffer from significant crosstalk. As we explain in this work, this limits one-step methods to reading a small amount of information (for example, a fractional bit) from each dimension of the residual vector. (In practice, it's possible that only a limited selection of "large magnitude" properties will be recovered, while latent variables represented with smaller magnitude are impossible to read due to crosstalk.) The compressive sensing point of view, as introduced in this paper using a simple toy example, raises at least two routes for future inquiry: 1. First, the _existence_ of compressive sensing algorithms means that significantly more information can be stored in a residual vector through linear superposition. Although neural networks do not explicitly decode the latent representation (let alone run a method like OMP, ISTA, etc.), there is no reason in principle why they cannot learn to use this extra information, especially if the latent signal has some special structure. This raises the question of estimating the bitrates of neural representations in practice and determining whether they exceed the low bitrates that limit one-step decoders. 2.
The structure of existing compressive sensing algorithms gives us some clues on how this "extra information" can be decoded, should it exist. Roughly, the idea is to use our dictionary to model the noise term $Z$ of Equation 1 (line 216), which previously we treated as a Gaussian. In practice, this means letting our estimates for the different components of the latent vector communicate. Overall, the hypothesis that extra information can be stored in linear superposition is _not ruled out_ by the apparent failure of ITO, and we believe the point of view of compressive sensing can help us reason about future approaches in interpretability. If this resolved your skepticism regarding the failures of ITO, please consider reevaluating our work. If you found this explanation helpful, we would be happy to include some similar discussion in the conclusion of the camera-ready version. --- Rebuttal Comment 1.1: Comment: > the dictionaries used by these inference-time optimization methods are still learned using non-iterative ("one-step") encoders at training time I am not very compelled by this reasoning. Firstly, SAE dictionary sizes are generally many times greater than the base dimension. I would expect that if matching pursuit-style methods, such as the one used here: https://www.alignmentforum.org/posts/C5KAZQib3bzzpeyrg/full-post-progress-update-1-from-the-gdm-mech-interp-team#Replacing_SAE_Encoders_with_Inference_Time_Optimisation were a very promising approach in LLMs, then, given the large size of the dictionaries used there, this would lead to large performance improvements in reality. But there is only a very marginal improvement. Additionally, when you discuss details such as > How "efficient," in terms of bitrate, are the codes used by real neural networks... This is closely related to existing discussion here: https://transformer-circuits.pub/2023/may-update/index.html#dictionary-worries on whether compressed sensing is "too strong".
Simple one-layer MLPs as the architecture of choice for SAEs are not totally random; the choice is based on the observation that neural networks (particularly transformers) are highly linear (the linear representation hypothesis, https://arxiv.org/abs/2311.03658). You could argue that more powerful compressed sensing algorithms are underexplored by the LLM community. This is quite possible to me, which is why I am not strongly rejecting this paper. However, without any validation of the interpretability, naturalness (is a transformer really using this representation vs. my algorithm using it), and performance of this technique on any real neural network, I wish to maintain my score --- Reply to Comment 1.1.1: Comment: You are right that SAE dictionary sizes should be much larger than the base dimension. In our own experiments, we considered codeword dimensions of around $d=1024$ and dictionary sizes of up to $N = 2^{20} = 1048576,$ and also argued that similar results would hold in general when $N$ is _exponentially larger_ than $d.$ However, you also suggest that a better inference method should be able to provide a better model for activation vectors merely because the dictionary learned by an SAE is large: > I would expect that if matching pursuit-style methods [...] were a very promising approach in LLMs then due to the large size of the dictionaries used here then this would lead to large performance improvements in reality.
In the problem setting from our own paper, consider whether a superposition code $y = F x$ can be "decoded" in any sense by applying matching pursuit over a different, independent dictionary $F'.$ Even if $F'$ has an exponentially large number of codewords, Proposition 2 (Section 3.3) tells us that, with high probability, all of these vectors have relatively small cosine similarity with the codewords belonging to $y$ [1]. So, it is certainly not true that elements of $F'$ can be expected to play the role of true codewords. An iterative method like orthogonal matching pursuit can _trivially_ minimize the reconstruction loss of $y \approx F' x',$ but only at the expense of making $x'$ much less sparse than $x$ [2]. Overall, it's clear that sparse inference in this setting is simply not possible in any meaningful sense, and improving "performance" on this problem is neither necessary nor sufficient for an inference method to succeed in a setting where the true dictionary is known. It follows that the hypothesis that the "dark matter" of an SAE could be decoded by matching pursuit _is consistent_ with the failure of ITO (inference-time optimization), so long as the true dictionary atoms are not learned by the SAE in its training phase. This work also indicates why an SAE may be incapable of learning these atoms: specifically, we showed why sparse latents that a simple algorithm can infer may nevertheless have optimal linear estimates (matched filters) with vanishingly small signal-to-noise ratio. We agree that our question on bitrates in real neural networks is closely related to Henighan and Olah's question of whether compressive sensing is "too strong." However, we would highlight that the information-theoretic point of view is essentially novel; as far as we know, the "information efficiency" of SAEs was not studied prior to this work. Therefore, our work provides a new vantage point on this question. 
Overall, you argued our paper might not be relevant because the failure of ITO experiments proves that matching pursuit would not be capable of decoding activation vectors in practice, even if we knew the true dictionary. However: 1) Your reasoning seems to be at odds with fundamental ideas described in our paper, as detailed above. 2) As we explained in our original rebuttal, our work gives insight into the generic problem of sparse inference in the regime encountered by SAEs, and our conclusions are meaningful independent of any specific compressive sensing method. The value of our work is _not_ conditional, e.g., on matching pursuit resulting in performance improvements. 3) Your suggestion that stronger compressive sensing algorithms may perform better at ITO means that our findings _are_ relevant to practitioners since, in this case, it would be important to benchmark the performance of different algorithms on a well-understood problem. Again, we highlight that compressive sensing is classically studied in a "linear regime" where the undersampling ratio $N/d \approx \rho$ is bounded above by a moderate constant. Also note that we included results on $L_1$ coordinate descent in Appendix G, and will refer to these results in the main text of the camera-ready version. [1] For example: Proposition 2 implies that, for a dictionary of size $N = 2^{20},$ it's true with probability at least $p = 1/2$ that no element of $F'$ has absolute cosine similarity of more than $1/4$ with any fixed element of $F$ so long as $d > 2 \times (1/4)^{-2} \times (2 \ln(2^{20}) + \ln 2) > 909.$ In practice, such a large absolute cosine similarity turns out to be very unlikely even when $d = 512.$ [2] For example, we found empirically that when $N = 2^{20}$ and $d = 512,$ we need around $30$ steps of orthogonal matching pursuit on the dictionary $F'$ to model about 80% of the squared norm of a single codeword drawn from the unknown dictionary $F.$
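The experiment described in footnote [2] can be reproduced in miniature. Below is our own illustrative sketch (not the paper's code), with much smaller dimensions ($d=64$, $N=1024$ rather than $d=512$, $N=2^{20}$) so it runs in seconds: orthogonal matching pursuit with the true dictionary recovers a codeword in a single step, while OMP over an independent dictionary $F'$ explains only a small fraction of the squared norm per step.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 64, 1024  # much smaller than the paper's d = 512, N = 2**20, for speed

def unit_columns(a):
    # Normalize each column (dictionary atom) to unit norm.
    return a / np.linalg.norm(a, axis=0, keepdims=True)

F = unit_columns(rng.standard_normal((d, N)))        # "true" dictionary
F_other = unit_columns(rng.standard_normal((d, N)))  # independent dictionary F'
y = F[:, 0]                                          # one codeword from F

def omp_explained(D, y, steps):
    """Orthogonal matching pursuit: at each step, add the atom most correlated
    with the residual, refit coefficients by least squares, and record the
    fraction of ||y||^2 explained so far."""
    residual, support, fracs = y.copy(), [], []
    for _ in range(steps):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        atoms = D[:, support]
        coef, *_ = np.linalg.lstsq(atoms, y, rcond=None)
        residual = y - atoms @ coef
        fracs.append(1.0 - np.linalg.norm(residual) ** 2 / np.linalg.norm(y) ** 2)
    return fracs

frac_true = omp_explained(F, y, 1)[0]        # one step recovers y exactly
fracs_other = omp_explained(F_other, y, 10)  # each wrong atom explains little
```

The code only illustrates the qualitative point above: a large dictionary is no substitute for the correct one, and modeling a codeword with the wrong dictionary forces a much less sparse code.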
Summary: This paper investigates the efficiency of one-step estimates used in sparse autoencoders for recovering latent representations from neural network activations. The key contribution is an analysis of the bitrate required for reliable decoding. The study demonstrates that one-step estimates require significantly more dimensions per bit compared to iterative sparse recovery methods. This highlights a fundamental inefficiency in current autoencoder-based interpretability techniques and raises the question of whether neural networks encode information more efficiently than simple one-step decoders can extract. Claims And Evidence: The main claim of the paper, i.e., that one-step code estimates require more dimensions per bit than matching pursuit, seems to be reasonably demonstrated. Methods And Evaluation Criteria: While reasonable tools are used to investigate the questions, I’m surprised that, despite the relatively thorough discussion of compressive sensing and coding algorithms, there was no mention of coherence/RIP (which clearly, theoretically, explain why random kernels do optimally well) nor any discussion of proximal algorithms and unrolled architectures (which would be better iterative methods than OMP). Theoretical Claims: I did read all the theoretical sections, and while I did not spot any obvious errors, I did not examine them in depth. Experimental Designs Or Analyses: The experimental results seem convincing, but are limited both in breadth and, as mentioned in the methods section, in methodology. Supplementary Material: I only skimmed the supplementary material. Relation To Broader Scientific Literature: The work is very relevant to the scientific literature: sparse coding has a long history of drawing researchers’ attention, and it is particularly popular currently to explain large models. Essential References Not Discussed: As mentioned, I believe there is a serious lack of relevant references. 
While the authors discuss compressive sensing and some of Candès’s work, the most relevant reference (RIP, Candès, 2008) is missing. Moreover, there is no discussion of proximal algorithms, whose unrolled versions have been heavily studied (Gregor and LeCun, 2010) and would have been a more natural choice of iterative algorithm. Other Strengths And Weaknesses: Figure 1 is a nice visual and conveys an important concept cleanly. The discussion on line 222 of $\epsilon$-orthogonality feels redundant since there are multiple concepts (coherence, mutual coherence, RIP) to express this in a more principled way. Other Comments Or Suggestions: No further comments, everything was addressed in the previous sections. Questions For Authors: No further questions, everything was addressed in the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and for bringing up the connection with compressive sensing (CS). Our decision to not reference many tools from this world—like restricted nullspace properties, proximal gradient methods, etc.—may be surprising, but it was made deliberately. During this work, we did our best to inform ourselves in CS and were impressed by the variety of tools and perspectives. For example, besides the RIP, the restricted nullspace property also guarantees recovery of a sparse vector from the $L_1$ minimization problem but has been found to hold under weaker conditions [1]. On the side of numerical methods, many iterative algorithms (like ISTA and FISTA) can be motivated using proximal gradient descent, but Donoho, Maleki, and Montanari showed that belief propagation motivates a proximal-type iteration with an additional "correction," sometimes called the Onsager correction, that significantly decreases the required undersampling ratio compared to ISTA [2]. It's hard to do this field justice! We appreciate your suggestion to reference another work of Candès'—are you referring to [3]? However, we believe that the main content of this work wouldn't be improved by more tools from CS. This is for two reasons: 1. Our main message can be stated and defended without relying on tools like the RIP or reconstruction algorithms beyond simple matching pursuit. 2. As far as we know, the findings of this paper—in particular, the empirical performance of matching pursuit in Section 3.5—cannot be easily explained by existing theory of compressive sensing. To begin, let us address your concern with our use of $\epsilon$-orthogonality/incoherence in Section 3.3. You suggest that it would be better to rely e.g. on the RIP. In fact, this section does not deal with the standard framework of compressive sensing: its goal is only to understand the reliability of what we call _one-step estimates_. 
(Such an estimate is like the first iteration of an iterative method.) As we explain, what determines the success of a one-step estimate is the scale of the crosstalk between different code words. Our proof of Proposition 3 (the main result of the section) relies on a uniform bound over sums of crosstalks. In usual regimes of compressive sensing, where the undersampling and sparsity ratios are bounded below by positive constants, these sums become very large and it becomes easy to show that one-step estimates do not work. Therefore, the results of Section 3.3 and the "rules of thumb" shown in Figure 3 have little to do with compressive sensing and would not benefit from a discussion of the RIP or the RNP. You also suggested that it would be more natural to use a proximal gradient method. In fact, we previously considered using ISTA as our sparse reconstruction method. In Appendix G, we show some results obtained using coordinate gradient descent for the $L_1$ objective, which is a common alternative to ISTA that turned out to work better in our experiments. (We will mention this appendix in the main text of the camera-ready version.) ISTA performs slightly better than matching pursuit, but not enough to contribute to our conclusions substantially. Indeed, the role of Section 3.5 is only to show that _some_ simple iterative method can significantly outperform one-step methods in our chosen regime, and our observation that one of the simplest imaginable methods already performs well strengthens this conclusion. The problem of benchmarking different compressive sensing methods is out of scope. Finally, even if we had introduced more compressive sensing theory, we would not have been able to provide significantly more insight about the empirical results of Section 3.5. 
A lot of classical CS theory, like [4], focuses on the "linear regime" where $k/d \to \rho$ and $d/N \to \delta$ for some moderate constants $(\rho, \delta)$ bounded strictly between $0$ and $1,$ and applying bounds from this theory (like bounds derived from the RIP) give very weak guarantees in our setting. We searched the literature for more information on sublinear regimes where $\ln k / \ln N \to \epsilon < 1,$ including works like [5], but were ultimately not able to find a practical guarantee that explained Figure 5, and decided that further inquiry was out of the scope of this work. Overall, we hope that this work encourages some readers to learn more about CS and are happy to include some more pointers and references in Section 3.5. (Certainly, it was our own inspiration.) However, including more CS theory in the body of the work would not make our findings significantly easier to explain or to understand. Given the choice, we prioritized making this work as accessible as possible. [1] https://doi.org/10.1016/j.laa.2016.03.022 [2] https://doi.org/10.1073/pnas.0909892106 [3] https://doi.org/10.1016/j.crma.2008.03.014 [4] https://doi.org/10.1109/TIT.2005.858979 [5] https://proceedings.mlr.press/v99/reeves19a.html
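For readers unfamiliar with the ISTA iteration mentioned in this rebuttal, here is a minimal, self-contained sketch on a synthetic sparse-recovery problem. The dimensions, regularization strength, and step size are illustrative choices of ours, not taken from the paper or its appendix.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, k = 64, 128, 3  # measurements, dictionary size, sparsity

D = rng.standard_normal((d, N))
D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms

x_true = np.zeros(N)
x_true[[5, 40, 99]] = [1.0, 2.0, -1.5]  # arbitrary sparse ground truth
y = D @ x_true                          # noiseless measurements

def ista(D, y, lam=0.1, step=0.1, iters=2000):
    """ISTA: a gradient step on 0.5 * ||y - D x||^2 followed by soft
    thresholding, the proximal operator of lam * ||x||_1."""
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = x + step * (D.T @ (y - D @ x))                        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x

x_hat = ista(D, y)
# x_hat recovers the support of x_true, with coefficients slightly shrunk
# toward zero by the L1 penalty.
```

The step size must satisfy `step <= 1/||D||^2` for the gradient step to be stable; in this well-conditioned, very sparse regime ISTA recovers the true support, which is exactly the kind of simple iterative baseline discussed above.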
Divide and Conquer: Grounding LLMs as Efficient Decision-Making Agents via Offline Hierarchical Reinforcement Learning
Accept (poster)
Summary: This paper introduces GLIDER, an approach for fine-tuning LLMs to act as agents in interactive environments. GLIDER relies on a hierarchical architecture in which the LLM is used to both propose a plan (high-level policy) and execute it (low-level policy). Using Behavioral Cloning and offline RL, the authors demonstrate that GLIDER can learn to solve various tasks in ScienceWorld and AlfWorld, achieving strong performance. One important aspect of GLIDER is that the low-level policy is trained solely on subgoals proposed by the high-level policy: the latter not only generates these subgoals but also rewards the low-level policy based on its trajectory. The authors argue that this approach preserves the low-level policy’s strong generalization abilities, as the subgoals it is trained to solve are not directly tied to the environment’s tasks. The paper provides in-depth ablation studies on different components of GLIDER, highlighting the importance of offline RL and the hierarchical architecture. It also demonstrates that offline RL can successfully exploit suboptimal demonstrations. Finally, the authors show that GLIDER can efficiently learn new tasks using online RL to fine-tune the high-level policy. Transitioning from offline RL to online RL is achieved seamlessly using the chosen RL method: AWAC. The authors conclude by discussing GLIDER's broader potential, suggesting that its applications extend beyond interactive textual environments. ## update after rebuttal Most of my concerns below were due to missing explanations in the manuscript as well as misunderstandings about the generalization tests. The authors have addressed all of these concerns. I therefore maintain my recommendation to accept this paper. 
Claims And Evidence: The proposed method and empirical evidence clearly support the paper's claims by introducing an approach that leverages hierarchical training, seems pretty general, adapts to unseen tasks, and can be easily trained on new tasks in an online regime. Methods And Evaluation Criteria: GLIDER is a relatively straightforward method. The choice of AWAC appears to be well-suited, as it enables both offline and online RL. One concern I have relates to the $c$ parameter, which controls the length of low-level trajectories. I could not find detailed information on how this parameter is set, apart from a footnote stating that it depends on the subgoal. Since the subgoals are generated dynamically by the high-level policy, it is unclear how one can determine in advance the possible subgoals and the number of steps required to solve them. The paper suggests that $c$ is fixed, meaning that the high-level policy awaits $c$ steps before evaluating the low-level policy's trajectory. However, what happens if the subgoal is completed in fewer than $c$ steps? Additionally, the paper states that "the subtask completion can be easily accessible from the environment observation" to compute the intrinsic reward for the low-level policy. This assumption seems overly optimistic when moving to the online regime. First, since subgoals are generated by the high-level policy, they could be highly abstract and difficult to evaluate. Second, the feasibility of this approach heavily depends on the complexity of the environment and the LLM's capabilities. For instance, what would happen in Minecraft if the subgoal generated is "build a house"? Furthermore, what occurs if the low-level policy is unable to solve the proposed subgoals? How does the framework handle situations where the high-level policy produces an effective plan, but the low-level policy fails to execute it? Theoretical Claims: This paper contains no theoretical claims. 
Experimental Designs Or Analyses: ScienceWorld and AlfWorld are two well-established benchmarks for evaluating LLM-based embodied agents. GLIDER demonstrates strong performance, and the analyses using three different LLMs highlight its robustness. However, I believe the experiments would benefit from additional baselines, particularly those involving online RL methods. I also could not find sufficient details regarding the baselines. For example, what dataset is used by ETO, which is supposed to collect trajectories in an online regime? Additionally, in Section 4.4, what data did AWAC use? Was the dataset emptied before the experiment, or did it still include the demonstrations used in Section 4.2? Also, what exactly is the AC baseline in Section 4.4? A clearer explanation of these details would improve the study's transparency. Finally, I would like to mention that I really enjoyed reading the ablations. The results provide valuable insights into the impact of different components, particularly emphasizing the importance of the hierarchical framework and offline RL. Supplementary Material: I reviewed the appendices, which provide details on the experimental setup, especially on collecting offline data and the prompts used. Algorithm 1 was also useful to understand which weights were changed in the offline-to-online experiments. Relation To Broader Scientific Literature: The links with prior work are covered overall. The paragraph discussing online RL approaches for LLM agents in Section 2 could mention other prior work, such as GLAM (Carta et al., 2023) or POAD (Wen et al., 2024). The latter also fits methods using an utterance-level critic, similar to ArCHer (Zhou et al., 2024), which is mentioned in Section 3.4. Prior work using intrinsic rewards generated by an LLM could also be mentioned, e.g. ELLM (Du et al., 2023). All these references are not essential to understanding the key contributions, they are just suggestions. 
Essential References Not Discussed: I do not see any essential missing reference. Other Strengths And Weaknesses: Concerning weaknesses, I already mentioned all my concerns in previous parts. One additional strength of this paper is that it is well-written and easy to follow. Other Comments Or Suggestions: The authors argue that the high-level policy produces general subgoals, allowing the low-level policy to remain useful even for unseen tasks. However, I would have liked to see empirical evidence supporting this claim. For instance, did the authors analyze the subgoals in the demonstration dataset? What subgoals were generated in the offline-to-online experiments, and how different were they from those in the demonstrations? Additionally, the authors chose not to finetune the low-level policy in offline-to-online experiments, based on the assumption that subgoals from training tasks would be reused in new tasks. Have they tested finetuning the low-level policy? If so, how does it affect the results? I also found Figure 2 difficult to interpret due to the overwhelming amount of information and arrows. Even with the caption, it was unclear to me. For instance, in (a), it is unclear what is frozen and what is finetuned—the finetuning block seems to be associated with the low-level trajectory. Questions For Authors: 1) Eq. 3 indicates that a regularization on the outputs' length is also applied to the low-level policy. I do not understand this regularization term as the low-level policy is supposed to output the environment's actions. Could the authors provide further explanations on this? 2) The Appendix B mentions "Cross-task Generalization sampling". Were these data necessary? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate your constructive comments for improving our paper. We would be incredibly grateful for your continued support in reviewing our response and offering further encouragement. *** `Q1. How is the temporal abstraction parameter c determined, given that subgoals are dynamically generated?` A1. The low-level policy dynamically determines when to end a subtask by analyzing environment observations, rather than strictly waiting for c steps, as shown in L.214~215. This adaptive mechanism ensures efficient execution while maintaining the hierarchical control structure. *** `Q2. How does the framework evaluate subtask completion and handle failures when subgoals are abstract or when the low-level policy cannot execute the high-level plan effectively?` A2. The low-level policy executes subtasks, while the high-level policy learns finer-grained subtasks through feedback. In ScienceWorld, during early epochs, the high-level policy generates unachievable subtasks like "heat substance" when the stove is broken, later adapting to executable ones like "Navigate to foundry". Similarly, in Minecraft, it evolves from "build a house" to specific tasks like "gather wood" and "create foundations". *** `Q3. How does ETO collect trajectories in the online regime, and what dataset does it use?` A3. For the ETO baseline, we first perform SFT on the base model, then let it interact with the environment online to collect both failure trajectories and expert demonstrations for DPO training. We will provide more detailed baseline implementation details in the appendix. *** `Q4. What data did AWAC use in Section 4.4? Was the dataset emptied or did it retain the demonstrations from Section 4.2?` A4. Following AWAC, we kept the offline demonstration data instead of emptying the dataset, as it helps maintain base distribution performance. 
Unlike Section 4.2, Section 4.4 focused on evaluating adaptation to novel scenarios by selecting specific out-of-domain tasks and collecting additional data through online interaction. *** `Q5. What exactly is the AC baseline in Section 4.4? A clearer explanation of these details would improve the study's transparency.` A5. The AC (Actor-Critic) baseline differs from AWAC (Advantage-Weighted Actor-Critic) by using standard policy gradient instead of advantage weighting. We will provide more detailed implementation specifications in the appendix. *** `Q6. Section 2's literature review could be enriched with additional works (GLAM, POAD, and ELLM) on online RL approaches for LLM agents.` A6. Thanks for your helpful advice. We will adjust the related work section based on these suggestions. *** `Q7. Where is the empirical evidence showing how the generated subgoals differ between demonstrations and new tasks, and how do they support the claimed generalization ability?` A7. Our approach demonstrates cross-task generalization through qualitative and quantitative evidence. Appendix Figure 9 shows subgoal reuse between tasks, with both directly transferable components and analogous patterns. The effectiveness is validated on three out-of-domain tasks unseen during training, where GLIDER significantly outperforms baselines.

|Method|test-conductivity|find-animal|boil|
|---|---|---|---|
|AC|0.30 ± 0.10|0.45 ± 0.05|0.40 ± 0.15|
|AWAC|0.35 ± 0.15|0.60 ± 0.10|0.45 ± 0.10|
|GLIDER|**0.98 ± 0.02**|**0.99 ± 0.01**|**0.95 ± 0.05**|

*** `Q8. Why wasn't the low-level policy finetuned in offline-to-online experiments, and what would be the performance impact if it was?` A8. L.252~258 explain this design choice. Low-level skills use intrinsic reward functions instead of task-specific ones, allowing cross-task generalization and robustness to distribution shifts between offline training and online deployment. 
While joint finetuning could improve performance, adjusting only the high-level policy is sufficient given the task-agnostic nature of low-level skills. *** `Q9. Figure 2's method diagram is overcomplicated and hard to understand, with unclear connections and training flows.` A9. We will simplify Figure 2. During value head finetuning, the LLM backbone remains fixed, while policy training updates the LLM to learn behaviors. *** `Q10. What is the rationale behind including a length regularization term in Equation 3?` A10. The length regularization term constrains action sequences to match the inherently short nature of valid environment interactions. By dividing the SFT loss by sentence length n as $-E[\log\frac{\pi_{\theta}(a|s)}{n}]=-E[\log\pi_{\theta}(a|s)] + \log n$, it effectively prevents unnecessarily long action generation. *** `Q11. The Appendix B mentions "Cross-task Generalization sampling". Were these data necessary?` A11. Distribution shift in offline RL leads to policy failures on out-of-distribution states. Cross-task sampling simulates potential shifts by exposing the model to varied scenarios, helping handle unseen tasks. *** Best regards, Authors --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I will leave my score as it is, as my review was mostly asking for additional clarifications. --- Reply to Comment 1.1.1: Comment: Thank you for your continued support in reviewing our response. We are happy to see that we could address your concerns. We truly appreciate your time and effort in helping us improve our work!
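As an aside for readers, the identity invoked in A10 above is easy to check numerically. The log-probabilities and lengths below are made-up illustrative values, not numbers from the paper.

```python
import math

# A10's identity: -log(p / n) = -log p + log n, i.e. the length-regularized
# SFT loss equals the usual negative log-likelihood plus a log-length penalty.
loss = lambda logp, n: -logp + math.log(n)

for logp, n in [(-3.2, 4), (-3.2, 12), (-7.5, 4)]:
    p = math.exp(logp)
    assert abs(-math.log(p / n) - loss(logp, n)) < 1e-12

# For equal sequence probability, the longer generation incurs the larger loss,
# which is how the regularizer discourages unnecessarily long actions.
assert loss(-3.2, 12) > loss(-3.2, 4)
```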
Summary: The paper proposes GLIDER, an extension of hierarchical RL to LLM agents where two separate LLMs control the high- and low-level policies. The proposed training framework relies on a combination of supervised fine-tuning and implicit Q learning. The authors test on two domains, ScienceWorld and AlfWorld, obtaining promising results in both cases. The authors' experiments use relatively small LLMs (<10B), which is understandable given the need for in-device learning and speed. Overall, this research is promising, but some aspects require more work or clarification, and I recommend rejection in its current state. Claims And Evidence: The experiments and methodology partially support the claims, but some elements need additional work. For example: “we propose an offline hierarchical framework GLIDER with superior efficiency and broad applicability.” Superior parameter efficiency to what? Any sensible baseline involving LLMs would use PEFT. “Our method enables fast offline-to-online adaptation to non-stationary environments” Generalization, fine-tuning, and non-stationarity are related but different concepts. The authors could be more precise. I think the authors focus on generalization to new tasks. It does not appear as if their online setting has a continuously changing, non-stationary environment. “Comprehensive studies on ScienceWorld and ALFWorld show that our method consistently improves [performance] and generalization capacity, surpassing a [range] of baselines by a significant margin.” See my comments in the experiment section. Methods And Evaluation Criteria: Experiments are run on standard benchmarks: ScienceWorld and ALFWorld. The experiments are limited, but this is common in this literature, given the computational cost. Overall, the proposed evaluation for the methods is sensible. Theoretical Claims: No theoretical analysis provided. Experimental Designs Or Analyses: The experiment design seems sound. I have some questions: 1. 
Did the authors experiment with multiple seeds? 2. What dataset did you use for the SFT: the purely expert one or the mix with medium expertise too? If the latter is the case, could SFT eventually be better than your method if enough expert data is available? 3. Why did you train for only five epochs during SFT? Since you’re also using SFT as a baseline, did you attempt to optimize its hyperparameters? Supplementary Material: I read the material. Relation To Broader Scientific Literature: The paper is timely and relevant to the RL community, increasingly interested in integrating LLMs. At the same time, its contribution might be limited since there are no theoretical results; the extension to hierarchical agents seems straightforward. Further, it maintains a common limitation with hierarchical RL: that there needs to be domain knowledge (in this case, pre-labelled trajectories) providing options/high-level actions. Essential References Not Discussed: The paper could benefit from a more extensive literature discussion around LLMs in RL. Examples: * Carta, T., Romac, C., Wolf, T., Lamprier, S., Sigaud, O., and Oudeyer, P.-Y. Grounding large language models in interactive environments with online reinforcement learning. In ICML, 2023. * Du, Y., Watkins, O., Wang, Z., Colas, C., Darrell, T., Abbeel, P., Gupta, A., and Andreas, J. Guiding pretraining in reinforcement learning with large language models. In ICML, pp. 8657–8677, 2023. * Szot, A., Schwarzer, M., Agrawal, H., Mazoure, B., Metcalf, R., Talbott, W., Mackraz, N., Hjelm, R. D., and Toshev, A. T. Large language models as generalizable policies for embodied tasks. In ICLR, 2023. * Wang, Z., Cai, S., Chen, G., Liu, A., Ma, X., Liang, Y., and CraftJarvis, T. Describe, explain, plan and select: interactive planning with large language models enables open-world multi-task agents. In NeurIPS, 2023. * Wen, M., Wan, Z., Wang, J., Zhang, W., and Wen, Y. 
Reinforcing LLM agents via policy optimization with action decomposition. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Other Strengths And Weaknesses: A weakness is the lack of code despite this being a mostly experimental paper. Other Comments Or Suggestions: I don’t understand the notation in (1) and (2). Why is this a sum, or what is Sigma? How can you sum “brackets” and what does the semi-colon inside the bracket mean? What is d? Please explain your notation as I believe it is non-standard in the literature. Questions For Authors: 1. Can you demonstrate that the given method obtains the optimal policy? Can you show that Equation (7) is correct? The LLM policy actions are at the token level, whereas the expression is formulated at the MDP transition level. In a way, the LLM has a different MDP at the token level. There should be a way to attribute the full transition reward to each token so that Equation (7) holds. For instance, what would be the equivalent formulation for assigning a reward to each token? Discussion around this topic could be useful, or even lead to more in-depth mathematical analysis. 2. Unless I am confused, the definition of response length as a regularizer in Equation (3) seems problematic, given that SFT is applied at the token level. Did the authors intend to imply that they down-/up-weight certain dataset elements? Could this be reformulated as a weighted loss to emphasize certain dataset elements? Additionally, did you include an end-of-text token during SFT? In my experience, this kind of regularization hasn’t been necessary to produce shorter responses after SFT, but I understand every dataset can be different. 3. Why did you train for only five epochs during SFT? Since you’re also using SFT as a baseline, did you attempt to optimize its hyperparameters (e.g., train longer, add dropout)? 4. One of my concerns is that without SFT, the performance is comparable with the baselines NAT and ETO. 
Could you elaborate further on the role of SFT and why this is still a fair comparison? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Methods **`Q1. The optimal policy guarantee and the correctness of Eq. (7), given that actions are at the token level while rewards are formulated at the transition level.`** A1. First, our policy derivation inherits Advantage-Weighted Regression (AWR) [1-2], a popular algorithm that uses a convergent maximum-likelihood loss to convert RL into supervised learning subroutines. **Our method retains their theoretical properties,** and we will establish the theoretical counterpart in the appendix. Second, GLIDER correctly attributes the full transition reward to each token through Eq. (7). **Assigning the transition-level advantage $A$ to each token $w_i$ is equivalent to assigning the correct advantage to the whole action $u = w_{1:n}$, since $E[\exp(A(s,u))\cdot\sum_{i=1}^n\log\pi_{\theta}(w_i|s,w_{1:i-1})]=E[\exp(A(s,u))\cdot\log\pi_{\theta}(w_{1:n}|s)],$** using the relationship between total probability and conditional probabilities in autoregressive models. [1] Advantage-weighted regression: Simple and scalable off-policy RL, arXiv:1910.00177, 2019. [2] AWAC: Accelerating Online Reinforcement Learning with Offline Datasets, arXiv:2006.09359, 2020. *** **`Q2. Superior parameter efficiency to what?`** A2. **The superior parameter efficiency is mainly reflected in the design of the RL model and the hierarchical structure**. First, we share the same frozen LLM backbone for the actor and critic to greatly reduce model parameters, in contrast to many RL algorithms using independent actors and critics. Second, we let the high and low levels share the same model and differentiate them with a lightweight hierarchy prompt, as opposed to typical hierarchical methods with independent models at each level. Also, we adopt PEFT to further improve parameter efficiency, following common practice in LLMs. *** **`Q3. Clarification about generalization, fine-tuning, and non-stationarity.`** A3. 
In the offline-to-online stage, we focus on GLIDER’s fast adaptation ability to unseen tasks using a small number of fine-tuning steps, i.e., few-shot generalization. We will clarify this more precisely.

***

**`Q4. Hierarchical RL needs domain knowledge.`**

A4. **We design a generally applicable hierarchy to minimize reliance on domain knowledge.** The low-level policy is instructed by an intrinsic reward indicating sub-task completion, which is easily accessible from environment observations, alleviating the necessity for any manual or task-specific design.

***

## Experiments

**`Q5. A weakness is the lack of code.`**

A5. We have uploaded the code as **supplementary materials** for reproducibility when submitting the main paper. Further, we will release detailed tutorials and datasets to ensure full reproducibility.

***

**`Q6. Why training for only 5 epochs in SFT? Did you attempt to optimize its hyperparameters?`**

A6. We trained SFT for 5 epochs to avoid overfitting. Also, we optimized the hyperparameters and added dropout in the LoRA layers, as can be found in our submitted code.

|SFT epochs|2|3|4|5|6|7|
|---|---|---|---|---|---|---|
|Score|34.14|42.02|45.11|**50.17**|40.16|37.43|

***

**`Q7. The dataset for SFT, the role of SFT, the comparison between SFT and GLIDER on expert datasets.`**

A7. We use **expert datasets for SFT**, matching the nature of supervised learning. **The role of SFT is to construct a base agent to improve learning stability and sample efficiency for subsequent stages.** GLIDER adopts RL to steer the base agent towards user-specified tasks, excelling at both imitating demonstrations and adapting behaviors through trial and error. Experiments show **an 11.6%~19.6% performance improvement** over SFT.

||SFT|GLIDER|
|---|---|---|
|ScienceWorld (seen/unseen)|69.40/57.12|**77.43/68.34**|
|AlfWorld (seen/unseen)|62.24/65.21|**71.56/75.38**|

***

**`Q8. Did the authors experiment with multiple seeds?`**

A8. Yes.
Our method obtains close results across different seeds. The following table shows an example result of GLIDER with Llama-3-8B, scored on unseen tasks in ScienceWorld.

|Seed|0|1|2|3|
|---|---|---|---|---|
|Score|68.34|69.72|68.23|67.11|

***

**`Q9. Regularizer in Eq. (3), and using the end-of-text token in SFT.`**

A9. $n$ is the length of the sentence generated by policy $\pi_{\theta}$. We divide the original SFT loss by the sentence length, i.e., we use $-E[\frac{1}{n}\log\pi_{\theta}(a|s)]$ instead of $-E[\log\pi_{\theta}(a|s)]$. A smaller length $n$ induces a larger weight $\frac{1}{n}$ and hence a larger gradient of the loss, thus regularizing the model to output relatively shorter sentences. Also, we did use the end-of-text token in SFT, and our length regularizer is complementary to it.

***

## Summary Response

We appreciate your time in reviewing our manuscript and your valuable comments. We have made a number of changes and clarifications to address your concerns, and we are more than delighted to have further discussions to improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work.

---

Rebuttal Comment 1.1: Comment: Thank you for your responses. I have some remaining questions:

`Q1`. Ok.

`Q2.` I still think that claiming "superior" efficiency as a contribution can be misleading. First, sharing some layers/parameters between the actor and critic is common. This is even mentioned in the PPO paper for Atari games (Schulman et al. 2017). Second, PEFT is applied to virtually any modern LLM application.

`Q3.` Thank you. I agree that few-shot environment generalization clearly describes what the experiment is evaluating, while non-stationarity is misleading.

`Q4`. Ok.

`Q5` I apologize for missing the earlier code submission. Thank you.

`Q6` It's strange to see so much overfitting with only five training epochs if using dropout during SFT. Did you observe changes in performance for different dropout values?
Looking at the code, I see a dropout of 0.05, which should be enough. But from a very quick inspection of the code, I don't see model.train() or model.eval() being called. I wonder if dropout is implemented at all. I would expect less overfitting with higher dropout, potentially at the expense of less performance overall. But there should be a dropout value for which almost no overfitting takes place. Do you have an alternative explanation of why dropout is not working here?

`Q7` Ok.

`Q8`. Ok.

`Q9.` For $n$ to be a valid regularizer in Eq 2, it needs to be a differentiable function of $\theta$. Can you explain how this is the case? Otherwise, it is not a regularizer; it is something else.

### References

1. Schulman et al. 2017. Proximal Policy Optimization Algorithms.

---

Reply to Comment 1.1.1: Comment: Thank you for your continued support in reviewing our response. Please refer to the following responses to your further questions.

***

**`Q2. About parameter efficiency.`**

A2. The parameter efficiency of GLIDER has three aspects. **The most attractive aspect is reflected in our hierarchical structure**. We propose to share the models for the high- and low-level policies, and differentiate them using a hierarchy prompt that specifies the level of the current inputs. This design benefits from harnessing LLMs’ powerful capability to perform in-context learning, i.e., tackling a series of complex tasks by feeding lightweight prompts to a single foundation model. In contrast, traditional hierarchical RL methods [1-3] usually train independent models at each level, resulting in a multiplication of model parameters. Then, we share the backbone between the actor and critic, and use PEFT as two additional techniques to further improve parameter efficiency.
**All these designs work as a whole to provide an integrated parameter-efficient architecture that addresses the significant challenge of LLMs tackling long-horizon interactive tasks.**

Further, we compare GLIDER to three additional ablations to verify the effectiveness of our parameter-efficient architecture. GLIDER can: i) **save 50% of model parameters with a 1% performance loss compared to using separate models for the two levels**; ii) **save 50% of model parameters with a 3% performance loss compared to using separate models for the actor and critic**; and iii) **save 99% of trainable model parameters with a 3% performance loss compared to full parameter fine-tuning**.

| Method | Trainable parameters | Total parameters | Percentage | **Score** |
| --- | --- | --- | --- | --- |
| GLIDER | 0.18 GB | 30.22 GB | 0.58% | 68.34 |
| Decouple high and low policies | 0.36 GB | 60.44 GB | 0.58% | 69.05 |
| Decouple actor and critic | 0.23 GB | 60.19 GB | 0.38% | 70.47 |
| Full parameter fine-tuning | 30.04 GB | 30.22 GB | 99.40% | 70.50 |

[1] Data-efficient hierarchical reinforcement learning, NeurIPS 2018.

[2] Learning multi-level hierarchies with hindsight, ICLR 2019.

[3] Sub-policy adaptation for hierarchical reinforcement learning, ICLR 2020.

***

**`Q6. About overfitting and dropout in SFT.`**

A6. Thank you for your insightful comments on overfitting and dropout. We add a hyperparameter analysis experiment to observe the relationship between the dropout value and overfitting. The new result is completely consistent with your insight: **a higher dropout value does indeed lead to less overfitting**, or delays overfitting to larger training epochs. We will add more hyperparameter analysis of the SFT stage in the appendix. In GLIDER, we use SFT to construct a base agent, and **our focus is steering LLMs toward complex interactive tasks using the proposed offline hierarchical RL framework**.
| LoRA dropout | epoch 2 | epoch 3 | epoch 4 | epoch 5 | epoch 6 | epoch 7 | epoch 8 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.05 | 34.14 | 42.02 | 45.11 | **50.17** | 40.16 | 37.43 | / |
| 0.1 | 31.90 | 39.23 | 44.10 | 49.10 | **49.72** | 47.24 | / |
| 0.15 | 33.04 | 40.29 | 44.98 | 48.13 | 49.00 | **50.13** | 48.77 |

***

**`Q9. About the regularizer of sentence length in SFT.`**

A9. We regularize the model to output relatively shorter sentences by re-weighting, where a smaller length $n$ induces a larger weight for the original SFT loss. More precisely, **we achieve the effect of regularization by re-weighting the loss**, rather than by including an additional loss term that is differentiable w.r.t. the model parameters. We will clarify this more precisely. Further, we conduct an intuitive hyperparameter analysis experiment to observe the relationship between the length regularization ratio $\lambda$ in Eq. (3) and the average length of output sentences. The new result shows that the model tends to output shorter sentences with a larger $\lambda$, verifying that **our re-weighting successfully achieves the effect of regularization**.

| $\lambda$ | 0.5 | 1.0 | 1.5 |
| --- | --- | --- | --- |
| Sentence length | 23.0 | 22.2 | 16.8 |

***

We were wondering if our responses have resolved your concerns. If you have any additional questions or suggestions, we would be happy to have further discussions.

Best regards,

The Authors
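The length re-weighting described in A9 can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the `token_logprobs` interface, and the placement of $\lambda$ as an exponent on the sentence length are all assumptions made for the example.

```python
def length_weighted_sft_loss(token_logprobs, lam=1.0):
    """Length-regularized SFT loss for one sampled sentence.

    token_logprobs: per-token log pi_theta(w_i | s, w_{1:i-1}).
    lam: length-regularization ratio (hypothetical placement of the
         lambda from Eq. (3)); lam=0 recovers the plain sentence NLL.
    """
    nll = -sum(token_logprobs)  # -log pi_theta(a|s) for the whole sentence
    n = len(token_logprobs)     # sentence length
    # Re-weighting: shorter sentences get a larger weight 1 / n**lam,
    # hence a larger gradient, biasing the model toward short outputs.
    return nll / (n ** lam)
```

With equal total NLL, a two-token sentence incurs twice the loss of a four-token one at `lam=1.0`, which is exactly the "smaller length induces a larger gradient" effect argued above.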
Summary: This paper integrates hierarchical reinforcement learning with LLMs, leveraging a high-level planner to decompose tasks and a low-level executor to perform actions. Experimental results on ScienceWorld and ALFWorld demonstrate significant performance gains over existing baselines, with strong adaptability to unseen tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: The benchmarks used—ScienceWorld and ALFWorld—have already demonstrated strong performance from advanced LLMs, such as the GPT-series models, leveraging prompt-based methods. These tasks are highly structured. The authors employ small-scale LLMs and apply a multi-stage training approach, yielding improvements over the base models; however, this outcome is somewhat expected. To more effectively evaluate the method’s impact, the authors could introduce GPT-based prompting baselines to compare whether small-scale LLMs with hierarchical training can match or even outperform larger models. Furthermore, testing on more complex environments, such as WebShop, would provide stronger evidence of the proposed approach’s scalability. Theoretical Claims: The paper does not appear to provide formal mathematical proofs for theoretical claims, as it primarily focuses on empirical validation. Experimental Designs Or Analyses: While the benchmark selection may be somewhat limited in scope, the experimental design and baseline comparisons are thorough and well-executed. Supplementary Material: Yes. I reviewed the Appendix. Relation To Broader Scientific Literature: The hierarchical approach of first leveraging an LLM to generate high-level plans before executing them is not particularly novel. However, the paper’s parameter-efficient design is well-executed. The use of a shared LLM backbone for both high- and low-level policies, combined with lightweight fine-tuning techniques, enhances efficiency without significantly increasing computational costs. 
Essential References Not Discussed: - RL-GPT: Integrating Reinforcement Learning and Code-as-policy - Thought Cloning: Learning to Think while Acting by Imitating Human Thinking Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: 1. How does the proposed method compare with GPT models using reflection or ReAct-based methods in terms of performance? 2. Regarding the sub-task decomposition and intrinsic reward design, the paper primarily relies on environmental observations to determine whether a sub-task is completed. Is this design applicable when extended to other, possibly more complex, environments? 3. During the prompt design process, did you observe any effects of the quality of high-level sub-tasks generated on the execution performance of the low-level actions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

***

**`Q1. The authors could introduce GPT-based prompting baselines (Reflexion or ReAct) to compare whether small-scale LLMs with hierarchical training can match or even outperform larger models.`**

A1. Following your advice, we compare GLIDER using small-scale LLMs (Llama-3-8B) to ReAct using large-scale LLMs (GPT-4). Results in the following table demonstrate that our small-scale LLMs with hierarchical training can match or even outperform prompting baselines with much larger LLMs, with **a 5%~98% performance improvement**.

| Method | ScienceWorld (seen/unseen) | ALFWorld (seen/unseen) |
| --- | --- | --- |
| GLIDER (Llama-3-8B) | **77.43/68.34** | **71.56/75.38** |
| ReAct (GPT-4) | 67.32/65.09 | 44.29/38.05 |

***

**`Q2. The novelty of the hierarchical approach of leveraging an LLM to generate high-level plans.`**

A2. We innovatively combine the strengths of LLMs with the structural advantages of hierarchical RL to **address the significant challenge of tackling long-horizon interactive tasks**. While building upon existing research in LLMs and hierarchical RL, we propose a novel integration of these areas, design a parameter-efficient and generally applicable hierarchy, and enable fast offline-to-online adaptation.

***

**`Q3. The paper relies on environmental observations to indicate sub-task completion. Is this design applicable when extended to other, possibly more complex, environments?`**

A3. **This design alleviates the necessity for any manual or task-specific design, and is naturally applicable in more complex environments.** For example, in Minecraft, a broad task like "build a house" could be decomposed into simpler sub-tasks like "gather wood materials". The completion of that sub-task can be easily monitored by an attribute, "inventory changes", of the environment observation. Our design remains robust at any complexity level, as long as the environment provides clear completion signals.

***

**`Q4.
Did you observe any effects of the quality of high-level sub-tasks generated on the execution performance of the low-level actions?`** A4. The quality of generated sub-tasks can affect the execution performance of the low-level policy. Intuitively, if the generated sub-task is too hard (e.g., close to the original task), the low-level policy can fail to accomplish it. That is why we need the hierarchical decomposition to break down the original task into simpler sub-tasks. **By trial-and-error, the high-level policy adapts its behavior using environment-provided rewards, and is reinforced to generate increasingly fine-grained sub-tasks.** This “reinforcement” mechanism can be illustrated by a task evolution example: when the high-level policy generates a sub-task “heat substance” in the kitchen, the low-level policy attempts to use the stove but receives feedback that it's broken. After this failed attempt, the high-level policy adapts by generating a more specific sub-task “navigate to foundry to find blast furnace”. *** **`Q5. Testing on more complex environments, such as WebShop.`** A5. We will add a new benchmark, WebShop, to further validate GLIDER’s performance. Due to very limited time, we have only completed the offline dataset collection, and we are rushing to update them before the rebuttal deadline. *** **`Q6. More extensive literature like RL-GPT and Thought Cloning.`** A6. Thanks for your advice, and we will cite more relevant references. *** ## Summary Response We appreciate your time in reviewing our manuscript and your valuable comments. We have made a number of changes and justifications to address your concerns, and we are more than delighted to have further discussions to improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work. Best regards, Authors
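The observation-based completion signal described in A3 (the Minecraft "gather wood" example) might look like the following sketch. The observation/sub-task schema (`inventory`, `target_item`) is hypothetical, not the paper's actual interface.

```python
def subtask_completed(prev_obs, obs, subtask):
    """Check sub-task completion from raw observations: a sub-task like
    'gather wood' counts as done once the target item's inventory count
    increases. All field names here are illustrative assumptions."""
    item = subtask["target_item"]
    return obs["inventory"].get(item, 0) > prev_obs["inventory"].get(item, 0)


def intrinsic_reward(prev_obs, obs, subtask):
    # Sparse binary intrinsic reward instructing the low-level policy.
    return 1.0 if subtask_completed(prev_obs, obs, subtask) else 0.0
```

Because the check only compares consecutive observations, it requires no manual reward engineering per task, which is the point of A3's argument.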
Summary: The work focuses on long-term decision-making problems with LLMs. They propose GLIDER using concepts of hierarchical reinforcement learning, namely decomposing a complicated task into pieces of small tasks. The low-level controller can be goal-agnostic and finish its low-level task efficiently as directed by the high-level policy. The key selling points for the algorithm are:
- enhanced exploration in long-context tasks
- fast online adaptation to non-stationary environments
- strong transferability (the low-level controller is task-agnostic)
- parameter efficient (this comes from sharing weights between the value and actor networks, and also across the high- and low-level policies)
The authors demonstrate performance on ScienceWorld and ALFWorld datasets. Claims And Evidence: Yeah, most of the claims are supported by empirical evidence and ablation studies. I do not find empirical evidence on "fast online adaptation to non-stationary environments". Also, they claim parameter efficiency, which they achieve by sharing weights, but I think it also needs to be empirically validated by measuring the loss in performance due to sharing weights as compared to using different weights. Methods And Evaluation Criteria: Yeah, the method is tried on long-context tasks. However, it would be good to apply it to multiple datasets; currently it is evaluated on a limited number of datasets. Theoretical Claims: The paper is experimental (no theoretical claims). Experimental Designs Or Analyses: Yeah, the presented empirical evidence is strong for the hierarchical structure. Supplementary Material: yes Relation To Broader Scientific Literature: I am not aware of similar work in the context of LLMs. The paper connects existing ideas from offline RL, hierarchical RL, SFT, etc. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: - Proposed an overall framework with offline hierarchical RL for LLMs which is parameter-efficient, generalizable to an extent, and scalable.
- Presented strong evidence of hierarchical decomposition for long-context tasks

Weakness: The work combines existing ideas and demonstrates its use case in solving long-context tasks for LLMs. However, there is no striking novelty. Also, it's not clear to me if they really do need a value function here or if it can be bypassed. Also see claims and evidence above for other weaknesses. Other Comments Or Suggestions:
line 80: two LLM policies: which two? low and high level?
89: hierarchical token level not clear
remove 2nd bullet point of contribution or merge it, it's a feature of your algorithm (not a contribution)
3.1: add details about your state and action space
180: what is $\sigma_N$
213: around Eq. (3): Do $n_h$, $n_l$ belong to the expectation? I don't understand why it would affect the gradient of the loss.
241: What is explicit modelling of the behaviour policy?
line 247-259: You first train the low level and then fix it, and later train only the high level?
270: what is temporal abstraction knowledge?
305-311: Add more details about both tasks; "elementary science experiments" doesn't say much, some examples would be nice
406: what is online fine-tuning? I am wondering why it is framed as generalization. Ideally, for generalization you could fine-tune on all tasks except test-conductivity, find-animal, and boil, and then check the trained model's performance on these tasks?
Questions For Authors: See above ### Update after rebuttal Thanks to the authors for addressing my questions, especially for running the experiment quantifying the performance loss. Please include the promised changes in the final version. I will maintain my inclination towards acceptance. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

***

**`Q1. About online fine-tuning, generalization, empirical evidence on fast online adaptation.`**

A1. Online fine-tuning refers to adapting a pretrained policy to unseen tasks using several fine-tuning steps, i.e., few-shot generalization to new tasks. We train a policy using offline datasets on all tasks except test-conductivity, find-animal, and boil. Then, we fine-tune the trained policy by interacting with these tasks online. The evidence is in Sec. 4.4, Generalization Analysis via Online Fine-tuning (L394-L429), also shown in the following table. GLIDER obtains **a 65%-226% performance improvement** over baselines.

| Method | test-conductivity | find-animal | boil |
| --- | --- | --- | --- |
| AC | 0.30 ± 0.10 | 0.45 ± 0.05 | 0.40 ± 0.15 |
| AWAC | 0.35 ± 0.15 | 0.60 ± 0.10 | 0.45 ± 0.10 |
| GLIDER | **0.98 ± 0.02** | **0.99 ± 0.01** | **0.95 ± 0.05** |

***

**`Q2. Performance comparison between sharing weights and using different weights.`**

A2. We compare the full GLIDER to two more ablations: using different weights for the actor and critic, and using different weights for the high-level and low-level policies. The following table shows the performance on unseen tasks in ScienceWorld, with the Llama-3-8B backbone. **We save 50% of model parameters with a <3% performance loss.**

| Method | Total parameters | **Score** |
| --- | --- | --- |
| GLIDER | 30.22 GB | 68.34 |
| Decouple actor and critic | 60.19 GB | 70.47 |
| Decouple high and low policies | 60.44 GB | 69.05 |

***

**`Q3. The novelty of combining existing ideas to solve long-context tasks for LLMs.`**

A3. We innovatively combine the strengths of LLMs with the structural advantages of hierarchical RL to **address the significant challenge** of tackling long-horizon interactive tasks. We propose a novel integration of these areas, design a parameter-efficient and generally applicable hierarchy, and enable fast offline-to-online adaptation.

***

**`Q4.
It's not clear to me if they really do need a value function here or if it can be bypassed.`**

A4. Classical RL algorithms learn a value function to backpropagate expected optimal returns using dynamic programming updates. GLIDER learns a value function to estimate the action advantage, and regresses the advantage-weighted policy. The existence of a value function is a crucial property of RL, which enables the agent to adapt its behavior through trial and error rather than only imitating demonstrations. The ablation study (L354-L369) shows **a 13.9%~33.5% performance improvement** of pure ORL over SFT.

| Backbone | SFT | ORL |
| --- | --- | --- |
| Mistral-7B | 45.11 | **60.23** |
| Gemma-7B | 40.54 | **49.48** |
| Llama-3-8B | 50.17 | **57.12** |

***

**`Q5. L241: What is explicit modeling of the behavior policy?`**

A5. GLIDER maximizes an advantage-weighted likelihood function as $\max E[\exp(A(s,a))\cdot\log\pi(a|s)]$. We learn a value function to estimate the action advantage, and regress the advantage-weighted policy. The maximization of the likelihood $E[\log\pi(a|s)]$ corresponds to an explicit modeling of the behavior policy.

***

**`Q6. L213: How $n_h/n_l$ affects the gradient of loss.`**

A6. $n$ is the length of the sentence generated by policy $\pi_{\theta}$. We divide the original SFT loss by the sentence length, i.e., we use $-E[\frac{1}{n}\log\pi_{\theta}(a|s)]$ instead of $-E[\log\pi_{\theta}(a|s)]$. A smaller length $n$ induces a larger weight $\frac{1}{n}$ and hence a larger gradient of the loss, thus regularizing the model to output relatively shorter sentences.

***

**`Q7. L247-259: You first train low level and then fix it and later train only high level?`**

A7. **At the SFT and ORL stages, we simultaneously train the high- and low-level policies.** Then, at the offline-to-online stage, we fix the low-level skills and only fine-tune the high-level policy.

***

**`Q8. L270: what is temporal abstraction knowledge?`**

A8.
Temporal abstraction refers to **abstracting a sequence of temporal actions into a skill**, which has the potential to greatly speed up planning and learning on large problems.

***

**`Q9. L89: hierarchical token level not clear.`**

A9. “The hierarchical token-level actors and sentence-level critics” means that we have **a token-level actor and a sentence-level critic at both levels**.

***

**`Q10. It would be good to apply it on multiple datasets.`**

A10. We will add a new benchmark, WebShop. Due to very limited time, we have only completed the offline dataset collection, and we are rushing to add the results before the rebuttal deadline.

***

## Summary Response

We appreciate your time in reviewing our manuscript and your valuable comments. We have made a number of changes and justifications to address your concerns, and we are more than delighted to have further discussions to improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work.

Best regards,

Authors
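The advantage-weighted likelihood from A5, combined with the token-level attribution argued in the first rebuttal (the transition-level advantage weights every token of the action), can be sketched as below. This is an illustrative reconstruction in the standard AWR/AWAC style, not the paper's code; the function name and the temperature `beta` are assumptions.

```python
import math

def awr_token_loss(token_logprobs, advantage, beta=1.0):
    """AWR-style loss for one transition: minimize
    -exp(A(s,a)/beta) * log pi_theta(a|s), where log pi_theta(a|s) is
    the sum of per-token log-probs, so the single sentence-level
    advantage weights every token equally (beta is an assumed
    temperature, standard in AWR/AWAC)."""
    weight = math.exp(advantage / beta)   # treated as a constant (no gradient)
    return -weight * sum(token_logprobs)  # weighted NLL of the whole action
```

Transitions with positive advantage receive a weight greater than one, so their actions are imitated more strongly; with `advantage=0` the loss reduces to the plain sentence NLL.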
Summary: This paper introduces GLIDER, a hierarchical reinforcement learning framework designed to enhance the decision-making capabilities of large language models (LLMs) in long-horizon tasks. The authors propose a two-layer structure where a high-level policy decomposes complex tasks into sub-goals, which a low-level controller executes using reinforcement learning. This hierarchical decomposition allows for better exploration and improved long-term credit assignment, addressing the challenges faced by LLMs in sparse-reward scenarios. The method is designed to be parameter-efficient, leveraging shared LLM parameters between the high and low levels to reduce computational overhead. The framework is evaluated on ScienceWorld and ALFWorld benchmarks, demonstrating significant performance improvements over prompt-based methods (ReAct, Reflexion) and fine-tuning baselines (NAT, ETO). Claims And Evidence: In this paper, the authors provide thorough experimental comparisons and ablation studies on both ScienceWorld and ALFWorld. The results show that (1) hierarchical learning improves performance over strong baselines, (2) offline-to-online adaptation is faster and more stable, and (3) the approach is parameter-efficient across different model scales. Each of these claims has corresponding quantitative evidence (e.g., success rates, adaptation curves, ablation metrics). No obvious discrepancies are found. Methods And Evaluation Criteria: The authors picked well-known benchmarks (ScienceWorld, ALFWorld) that require text-based reasoning and long-horizon planning, which makes sense for testing a hierarchical RL approach. They also used standard offline RL and supervised fine-tuning methods, which makes it easier to judge how well their model handles efficiency, generalization, and adapting to new tasks. Theoretical Claims: No formal proofs or heavy theoretical claims are made. 
Experimental Designs Or Analyses: The training pipeline is well-documented and seems logically consistent. The authors show clear comparisons with multiple baselines and ablation studies. Supplementary Material: Not quite. The code is shared in the supplementary material, but due to the emergency review there was not sufficient time to read through or reproduce it. Relation To Broader Scientific Literature: Not obvious. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. It nicely combines hierarchical RL with LLMs in an original way, showing strong performance on complex text-based benchmarks. 2. The paper is pretty thorough, with lots of experiments and ablation studies that support their claims. 3. The idea of reusing learned low-level skills for fast online adaptation is practical and smart. Weaknesses 1. The training pipeline is a bit complicated (three stages), which might be tough for others to reproduce. However, I think this could be alleviated by the code shared by the authors. 2. The paper doesn’t go deeply into real-world deployments or more physically grounded tasks, so scope could be potentially limited. Other Comments Or Suggestions: No Questions For Authors: Would like to see the authors' thoughts on the scope of environments and the complexity of the training pipeline. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate your positive and constructive comments for improving our paper. We would be incredibly grateful for your continued support in reviewing our response and offering further encouragement.

***

`Q1. The training pipeline is a bit complicated (three stages), which might be tough for others to reproduce.`

A1. We have provided code with the submission to ensure the reproducibility of our main paper. Detailed tutorials and datasets will be released later. While GLIDER involves a three-stage pipeline, each stage serves a distinct purpose. As shown in the following table, which compares the performance of baseline, ORL-only, and complete GLIDER implementations on unseen ScienceWorld tasks, the offline RL stage alone already outperforms the baseline approaches. Adding SFT improves the results further by efficiently learning valid interactions, though at additional training cost. The optional online phase, enabled by AWAC's smooth offline-to-online transition, makes the overall process more robust and adaptable.

| Method | Mistral-7B | Gemma-7B | Llama-3-8B |
| --- | --- | --- | --- |
| NAT | 50.79 | 44.98 | 48.76 |
| ETO | 51.85 | 47.84 | 52.33 |
| **GLIDER (ORL)** | 60.23 | 49.48 | 57.12 |
| GLIDER (SFT+ORL) | 65.14 | 58.50 | 68.34 |

***

`Q2. The paper doesn’t go deeply into real-world deployments or more physically grounded tasks, so scope could be potentially limited.`

A2. This work demonstrates significant algorithmic value through a hierarchical approach that decomposes complex tasks into high-level planning and low-level execution skills. The effectiveness has been validated in sophisticated benchmarks featuring dynamic, sparse-reward environments and diverse task scenarios.
Meanwhile, these applications in environments with varied tasks and changing states demonstrate the robustness of this design. It shows potential for embodied domains, enabling more sophisticated skill composition in robotic manipulation, game strategy learning, and autonomous navigation, where multi-level control and temporal abstraction are essential for handling real-world complexity.

***

Best regards,

Authors
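The three-stage schedule defended in A1 can be summarized as a tiny skeleton. This is only a structural sketch under our own naming: the `*_step` callables stand in for the SFT, offline-RL, and online fine-tuning procedures, and none of these names come from the paper.

```python
def run_glider_pipeline(policy, sft_step, orl_step, online_step=None):
    """Hypothetical three-stage training schedule:
    1) SFT on expert data builds a base agent;
    2) offline RL steers the base agent with advantage-weighted updates;
    3) optional online fine-tuning adapts the high-level policy only.
    Each *_step is a callable mapping policy -> policy."""
    policy = sft_step(policy)     # stage 1: base agent
    policy = orl_step(policy)     # stage 2: offline RL
    if online_step is not None:   # stage 3 is optional, per the rebuttal
        policy = online_step(policy)
    return policy
```

Passing `online_step=None` reproduces the SFT+ORL ablation row from the table above; the stage functions are deliberately opaque so that only the ordering is asserted.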
Summary: This paper addresses the challenges of using Large Language Models (LLMs) for long-horizon decision-making tasks, specifically their difficulties with exploration and long-term credit assignment, especially in sparse-reward settings. To mitigate these challenges, the authors propose a framework called GLIDER (Grounding Language Models as Efficient Decision-Making Agents via Offline HiErarchical Reinforcement Learning). GLIDER introduces a parameter-efficient hierarchical structure to LLM policies. The framework employs a scheme where a low-level controller is supervised by abstract, step-by-step plans that are learned and instructed by a high-level policy. This hierarchical design aims to decompose complex problems into a sequence of chain-of-thought reasoning sub-tasks, thereby providing temporal abstraction to enhance exploration and learning in long-horizon tasks. Claims And Evidence: Claim: LLMs struggle with long-horizon decision-making tasks due to deficient exploration and long-term credit assignment, especially in sparse-reward scenarios. Evidence: The paper cites the challenges of exploration and credit assignment as motivation for their approach. The introduction sets up this problem as a key issue in applying LLMs to decision-making. The experiments likely demonstrate improved performance in long-horizon, sparse-reward tasks compared to baseline methods, further supporting this claim.   Claim: The GLIDER framework, which introduces a parameter-efficient hierarchical structure to LLM policies, mitigates these challenges. Evidence: The paper details the GLIDER framework, including the hierarchical structure with high-level and low-level policies. The experimental results, particularly comparisons with baseline methods, demonstrate the effectiveness of GLIDER in improving performance on long-horizon tasks. Ablation studies, if present, would further strengthen this claim by showing the contribution of the hierarchical structure.   
Claim: The hierarchical design decomposes complex problems into a series of coherent chain-of-thought reasoning sub-tasks, providing flexible temporal abstraction to enhance exploration and learning. Evidence: The paper explains how GLIDER learns and uses abstract, step-by-step plans to supervise the low-level controller. The results should show improved performance and exploration, suggesting that the decomposition into sub-tasks is beneficial. Visualizations or analyses of the learned sub-tasks and how they contribute to the overall solution would provide further support. Figure 9, for example, illustrates the hierarchical decomposition. Methods And Evaluation Criteria: Methods: Hierarchical Reinforcement Learning: The decomposition of the problem into high-level planning and low-level control is a logical approach for long-horizon tasks. It allows the high-level policy to focus on strategic planning while the low-level policy executes the plan, which aligns with the "divide-and-conquer" principle mentioned in the paper.   Language Models for Decision-Making: Utilizing LLMs to generate and interpret plans is novel and leverages the reasoning capabilities of LLMs. The integration of LLMs within the reinforcement learning framework seems appropriate for tasks that require complex reasoning and planning. Parameter-Efficient Hierarchy: The emphasis on parameter efficiency is important for scaling the method and making it practical. Evaluation Criteria: The evaluation criteria and benchmark datasets seem reasonable for assessing the effectiveness of GLIDER. Long-Horizon Decision-Making Tasks: Evaluating the method on tasks that require long-term planning and execution is essential for demonstrating its ability to handle the challenges it aims to address. The paper should define these tasks clearly and explain why they are appropriate benchmarks. 
Sparse-Reward Scenarios: Evaluating the method in sparse-reward settings is critical, as this is where exploration and credit assignment are most challenging. The evaluation should demonstrate that GLIDER outperforms other methods in these scenarios. Comparison with Baseline Methods: Comparing GLIDER with appropriate baseline methods, including other reinforcement learning algorithms and potentially LLM-based approaches, is crucial for demonstrating its superiority. The choice of baseline methods should be justified. Metrics: The paper should employ appropriate metrics to evaluate performance, such as success rate, return, or other task-specific metrics. These metrics should be clearly defined and relevant to the evaluation goals. Ablation Studies: If included, ablation studies can provide valuable insights into the contribution of different components of the GLIDER framework. This helps to understand the importance of the hierarchical structure, the planning mechanism, and other design choices. Theoretical Claims: The paper focuses primarily on an empirical approach, presenting a novel framework (GLIDER) and demonstrating its effectiveness through experiments. While the core contribution is algorithmic and empirical, there are some implicit theoretical claims. The paper does not contain formal mathematical proofs for theoretical claims. The emphasis is on the empirical validation of the GLIDER framework. This is not necessarily a weakness, as many papers in machine learning focus on empirical contributions. However, it's important to recognize that the theoretical underpinnings are largely based on established principles and demonstrated through experiments rather than formal proofs. Areas for Potential Improvement: While not strictly required, the paper could benefit from a more detailed discussion of the theoretical implications of using LLMs in hierarchical RL. 
This could involve relating the approach to existing theoretical results in hierarchical RL or discussing the limitations and potential failure modes of the approach from a theoretical perspective. Experimental Designs Or Analyses: The experimental design appears to be sound and generally supports the claims made in the paper. The authors have attempted to evaluate GLIDER in a comprehensive manner, considering various aspects of its performance. Supplementary Material: The detailed example provided in the document serves the purpose of supplementary material by offering a more in-depth view of a key aspect of the research. It enhances the reader's understanding and supports the claims made in the paper. Relation To Broader Scientific Literature: The paper innovatively combines the strengths of LLMs (reasoning, language understanding) with the structural advantages of hierarchical reinforcement learning to improve long-horizon decision-making. It builds upon existing research in LLMs, hierarchical RL, and planning, but it proposes a novel integration of these areas. Essential References Not Discussed: Discussing additional related works would provide a more comprehensive context for the paper's contributions and better position GLIDER within the broader scientific literature. Other Strengths And Weaknesses: Strengths: Originality: The paper presents a novel approach by effectively combining LLMs and hierarchical reinforcement learning for long-horizon decision-making. The way it leverages LLMs to generate and follow abstract plans within a hierarchical framework is a creative combination of existing ideas. Significance: The paper addresses a significant challenge in applying LLMs to complex, sequential tasks. Overcoming the limitations of LLMs in exploration and credit assignment has the potential to broaden the applicability of LLMs in various domains, including robotics, game playing, and automation. Clarity: The paper is generally well-written and easy to follow.
The authors clearly explain the GLIDER framework and its components. The figures and examples, such as Figure 9, aid in understanding the proposed approach. Weaknesses: Limited Theoretical Analysis: As mentioned earlier, the paper lacks strong theoretical underpinnings. While the empirical results are promising, a more in-depth theoretical analysis of the proposed approach would strengthen the paper. Scope of Evaluation: While the paper evaluates GLIDER on relevant tasks, expanding the evaluation to a broader range of complex, long-horizon tasks would further demonstrate its generalizability and robustness. Analysis of Limitations: The paper could benefit from a more thorough analysis of the limitations of GLIDER. Discussing potential failure cases, scenarios where GLIDER might struggle, and the computational costs associated with the approach would provide a more balanced perspective. Other Comments Or Suggestions: Typos and Grammar: A careful pass for typos and grammatical errors is recommended to improve the overall polish of the paper. Clarification of Terminology: While generally clear, some terminology could be further clarified. For example, explicitly defining what constitutes a "long-horizon" task in the context of the paper would be helpful. Questions For Authors: Could you provide a more detailed comparison of GLIDER with other LLM-based approaches to decision-making, if any exist? Specifically, what are the key differences and advantages of GLIDER compared to methods that directly use LLMs for planning without the hierarchical RL structure? In the experimental evaluation, could you provide a more detailed analysis of the parameter efficiency of GLIDER? Please include a comparison of the number of parameters used by GLIDER and the baseline methods. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate your constructive comments, which help improve our paper. We would be grateful if you could review our response and let us know whether any concerns remain.

***

`Q1. What are the advantages of GLIDER compared to direct LLM planning approaches?`

A1. We have conducted thorough ablation studies on the hierarchical structure. As shown in the table below, hierarchical models consistently outperform their non-hierarchical counterparts across all LLM backbones, demonstrating the necessity of our approach.

| Backbone | w/o Hier | **w/ Hier** |
| --- | --- | --- |
| Mistral-7B | 47.30 | 65.14 |
| Gemma-7B | 50.16 | 58.50 |
| Llama-3-8B | 53.94 | 68.34 |

***

`Q2. What is the parameter efficiency of GLIDER and how does it compare with baseline methods in terms of parameter count?`

A2. Based on our additional experimental evaluation on ScienceWorld using LLaMA3-8B as the base model, GLIDER demonstrates parameter efficiency in two key aspects: (1) with LoRA, it requires only 0.18 GB of trainable parameters while achieving superior performance over the baselines ETO and NAT; (2) GLIDER with LoRA (0.58\% of parameters) achieves comparable performance to full fine-tuning (99.40\% of parameters), showcasing effective parameter utilization through prompting-based hierarchical control.

| Method | Trainable parameters | Total parameters | Percentage | Performance |
| --- | --- | --- | --- | --- |
| **GLIDER (LoRA)** | 0.18 GB | 30.22 GB | 0.58% | 68.34 |
| **GLIDER (Full)** | 30.04 GB | 30.22 GB | 99.40% | 70.50 |
| ETO | 0.05 GB | 29.97 GB | 0.17% | 52.33 |
| NAT | 0.05 GB | 29.97 GB | 0.17% | 48.76 |

***

`Q3. Why are the benchmarks appropriate?`

A3. ScienceWorld and ALFWorld are ideal benchmarks for evaluating GLIDER, featuring dense and sparse reward structures, respectively.
The complex, non-deterministic nature of both environments requires the agent to continuously adjust its strategy rather than pre-computing a complete solution path, validating GLIDER's real-time planning abilities.

***

`Q4. How does GLIDER handle sparse-reward scenarios?`

A4. GLIDER addresses sparse-reward challenges through its hierarchical structure. High-level planning breaks down complex goals into manageable subgoals, while low-level execution ensures efficient exploration and clearer credit assignment for each subtask. This is validated by our strong performance compared to the baselines on ALFWorld, a sparse-reward benchmark.

| Method | ALFWorld (Seen) | ALFWorld (Unseen) |
| --- | --- | --- |
| NAT | 60.71 | 59.70 |
| ETO | 64.29 | 64.18 |
| **GLIDER** | **71.56** ($\uparrow$ 11.31%) | **75.38** ($\uparrow$ 17.45%) |

***

`Q5. Grammar and Clarification of Terminology`

A5. We appreciate the reviewer's suggestions and will thoroughly proofread the manuscript. Regarding "long-horizon" tasks, the term refers to tasks that cannot be fully planned at the beginning but require continuous planning throughout a long sequence of steps. For example, when "boiling water", if the sink breaks, the agent must replan to find alternative water sources.

***

Best regards, Authors
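The control scheme described in A4 and A5 (a high-level policy proposing subgoals, a low-level controller executing them, and replanning when a subgoal finishes or fails, as in the broken-sink example) can be sketched as a generic agent loop. Everything below is a hypothetical illustration of that scheme, not GLIDER's actual implementation; the environment API and the `info` flags are assumptions.

```python
# Generic hierarchical agent loop: high-level subgoal planning with
# low-level execution and replanning on subgoal completion or failure.
# Hypothetical sketch only; env API and info flags are assumptions.
def run_episode(env, high_policy, low_policy, max_steps=100):
    obs = env.reset()
    subgoal = high_policy(obs)              # e.g. "fill the pot with water"
    for _ in range(max_steps):
        action = low_policy(obs, subgoal)   # low-level step toward the subgoal
        obs, reward, done, info = env.step(action)
        if done:
            return reward                   # sparse reward at episode end
        if info.get("subgoal_done") or info.get("subgoal_failed"):
            subgoal = high_policy(obs)      # replan from the new state
    return 0.0
```

The key design point the rebuttal argues for is visible in the loop: credit assignment happens per subgoal rather than over the whole trajectory, and replanning is triggered by subgoal-level feedback rather than re-solving the full task.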
Random Policy Evaluation Uncovers Policies of Generative Flow Networks
Accept (poster)
Summary: This paper studies generative flow networks (GFlowNets), where one aims to learn a policy that generates states with probability proportional to their rewards, in contrast with the goal of reward maximization in typical reinforcement learning. This work tries to build the connection between the value function and the flow function in this framework. By transforming the reward, the authors show an equality between a scaled value function and the flow function. With this finding, they interpret GFlowNet from the perspective of value iteration. Moreover, they conduct experiments to demonstrate this connection. Claims And Evidence: Yes, the claims made in this paper are supported by the main theorems (Theorem 4.1 and Theorem 4.2). Methods And Evaluation Criteria: Not applicable to this paper. The focus of this paper is to establish the connection between two existing concepts (or methods), value function in value iteration and flow function in GFlowNet. Theoretical Claims: Yes, I checked the correctness of the main theorems in this work, i.e., Theorems 4.1 and 4.2. They seem correct to me. Experimental Designs Or Analyses: No, I didn't check the soundness of the experiments. Supplementary Material: Yes, I reviewed the supplementary materials, including Appendix A, which illustrates the connection between the value function and the flow function, and Appendix B, which provides the proof of one of the main theorems (Theorem 4.2). Relation To Broader Scientific Literature: The key contribution of this work is to build the connection between the value function and the flow function in GFlowNets. The most closely related result in previous work is Proposition 1 in [1]. This work provides a more straightforward interpretation of the relation between value function and flow function beyond that result. [1]. Bengio E, Jain M, Korablyov M, et al. Flow network based generative models for non-iterative diverse candidate generation[J].
Advances in Neural Information Processing Systems, 2021, 34: 27381-27394. Essential References Not Discussed: No, I didn't see any essential related works that are not discussed. Other Strengths And Weaknesses: Strengths: 1. To some extent, this work shows that policy evaluation can achieve results beyond the evaluation itself or serving policy improvement. Weaknesses: 1. The theory developed in this paper provides very limited new insights compared with existing results. For example, in Theorem 4.1 (the main theorem), they build the connection $V(s_t)=F(s_t)\prod_{i=0}^{t-1}A(s_i)$ by transforming rewards. However, with this flow function, one can derive the forward policy $\pi(s'|s)=\frac{V(s')}{\sum_{s''}V(s'')}$, where $s''$ ranges over all possible successor states of $s$. It is a known result already established in Proposition 1 in [1]. What is new here is just transforming the reward such that one can directly relate the flow function to the value function. This is not significant enough to serve as the main contribution of a paper; moreover, in GFlowNets people care more about the forward policy, and Theorem 4.1 does not bring anything new about the forward policy. 2. In Theorem 4.2, the constraint that any trajectories $\tau_1$ and $\tau_2$ that visit $s_t$ should satisfy $g(\tau_1, s_t)=g(\tau_2,s_t)$ is too strong, which makes the interpretation applicable only to very limited graphs. The authors should dig deeper into it. [1]. Bengio E, Jain M, Korablyov M, et al. Flow network based generative models for non-iterative diverse candidate generation[J]. Advances in Neural Information Processing Systems, 2021, 34: 27381-27394. Other Comments Or Suggestions: 1. In Appendix A, the authors include an illustration of the connection between value functions and flow functions, but give only a figure title; it would be better to at least elaborate on it. 2. The proof of Theorem 4.1 is missing from the appendix.
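The forward-policy formula in weakness 1 can be sanity-checked numerically. The sketch below uses a hypothetical balanced tree and rewards (not taken from the paper): it evaluates a fixed uniform policy, so $V(s)$ is the mean of its children's values with $V(x)=R(x)$ at terminals, and verifies that the child-normalized policy $\pi(s'|s)=V(s')/\sum_{s''}V(s'')$ samples terminal states proportionally to reward. The cancellation relies on equal branching at every depth; on unbalanced trees this is exactly where the reward scaling in Theorem 4.1 would be needed.

```python
# Toy balanced tree (hypothetical, illustrative only): root -> {s1, s2},
# each internal node has two terminal children with the rewards below.
tree = {"root": ["s1", "s2"], "s1": ["x1", "x2"], "s2": ["x3", "x4"]}
reward = {"x1": 1.0, "x2": 3.0, "x3": 2.0, "x4": 6.0}

def value(s):
    """Uniform-policy evaluation: mean of children's values, R(x) at terminals."""
    if s in reward:
        return reward[s]
    kids = tree[s]
    return sum(value(c) for c in kids) / len(kids)

def forward_policy(s):
    """pi(s'|s) = V(s') / sum over children s'' of V(s'')."""
    kids = tree[s]
    z = sum(value(c) for c in kids)
    return {c: value(c) / z for c in kids}

def terminal_prob(x, s="root"):
    """Probability of reaching terminal x when sampling with forward_policy."""
    if s == x:
        return 1.0
    if s in reward:
        return 0.0
    return sum(p * terminal_prob(x, c) for c, p in forward_policy(s).items())

# Sampling is proportional to reward on this balanced tree.
total_R = sum(reward.values())
for x, r in reward.items():
    assert abs(terminal_prob(x) - r / total_R) < 1e-9
```

Replacing one branching factor (e.g. giving `s2` a third child) breaks the final assertion, which illustrates why the constant-scaling-factor condition in Theorem 4.2 matters.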
Questions For Authors: My main concerns are listed in the weaknesses section; at the moment I have no further questions about my understanding of this paper. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks to the reviewer for the time and effort in providing feedback. We would like to clarify the novelty and significance of our contributions as follows:

>Q1: Concerns regarding insights: Theorem 4.1 didn't bring anything new about the forward policy.

While the reward transformation is a key component in our algorithm, our work’s novelty and impact extend far beyond this step. The focus of our work is not to connect the 'value functions and flow functions in the GFlowNet', but to address a critical gap in the literature by bridging **GFlowNets** and **standard (non-MaxEnt) RL**, an underexplored and overlooked connection that holds broad implications for both theory and practice. Unlike previous works that necessitate flow-matching or entropy regularization [1,2,3], we demonstrate that a GFlowNet forward policy with reward-matching properties can emerge naturally from standard RL components like policy evaluation. The core of GFlowNets research is not about defining forward policies in different formulations ($\pi(x) \propto R(x)$), but rather about developing effective methods to learn policies that sample proportionally to rewards. Proposition 1 in [1] explains why MaxEnt RL methods (Buesing et al., 2019; Haarnoja et al., 2017) are biased by the number of trajectories leading to each terminal state in non-injective environments (i.e., $\pi(x)=\frac{n(x)R(x)}{\sum_{x' \in X}n(x')R(x')}$). In contrast, our Theorems 4.1-4.2 provide distinctive contributions, serving as the foundation of RPE to demonstrate how a standard RL component (policy evaluation) can surprisingly achieve reward-matching properties without requiring specialized and sophisticated flow-matching objectives or entropy regularization. This fundamentally redefines how GFlowNets can be understood and implemented.
- **Novel theoretical contributions**: We uncover an unexpected bridge between standard (non-MaxEnt) RL and GFlowNets, showing that a basic component in RL, i.e., policy evaluation, can naturally achieve the same reward-matching property as in GFlowNets with minimal modifications. **Our work is not a trivial reformulation or mere reward transformation, but offers a fundamentally new insight into the capabilities of standard RL components**. Our analysis demonstrates that GFlowNets’ core mechanisms can be reinterpreted through the lens of random policy evaluation in standard RL. This reshapes our understanding of GFlowNets’ learning dynamics, resolving an open question in the field.
- **Algorithmic contributions**: Based on our main theoretical results in Theorems 4.1-4.2, we develop a simple yet effective RPE algorithm using only policy evaluation, which eliminates the need for complex flow-matching conditions [1,2,3]. Despite its simplicity, our extensive experiments show that RPE matches or exceeds the performance of well-established GFlowNet methods [1,2,3], validating both our theoretical insights and practical utility.
- **Expanding understanding**: Our work provides novel insights into how reward-matching properties can naturally emerge from policy evaluation of a random policy under an appropriate reward transformation. This expands an important understanding that had been overlooked and opens up promising directions for future research at the intersection of RL and GFlowNets.

In summary, our work makes substantial contributions by revealing a novel, simpler way to achieve reward-matching policies in GFlowNets through standard RL components under appropriate conditions.

[1] Bengio et al., Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation, NeurIPS 2021 [2] Bengio et al., GFlowNet Foundations, JMLR 2023. [3] Malkin et
al., Trajectory balance: Improved credit assignment in GFlowNets, NeurIPS 2022

>Q2: Concerns regarding the assumption in Theorem 4.2

Thanks for your question. Please refer to our response to Reviewer ghFJ's Q1 for a detailed discussion due to space limitations.

>Q3: Questions about Appendix A.

Thanks for your question. We will add more details to Appendix A in the revision to explain the connections between the flow/value functions in GFlowNets, MaxEnt RL, and our RPE approach.

>Q4: About the proof of Theorem 4.1

Thanks for your question. We clarified in Appendix B (lines 674-675) that Theorem 4.1 represents a special case of the more general Theorem 4.2. For the sake of conciseness and to avoid redundancy in the presentation, we chose to omit the explicit proof of Theorem 4.1 in the appendix (the proof can be readily derived by following the same logical framework presented in the proof of Theorem 4.2, with $B(s_{i+1}) = 1$ and exactly one path from $s_0$ to any state). To ensure completeness and facilitate better understanding, we will include the proof for this special case of Theorem 4.2 in the revision. We hope these clarifications effectively address your concerns and highlight our key contributions. We welcome any further questions or discussions.

---

Rebuttal Comment 1.1: Comment: Thank the authors for their efforts on the rebuttal. After reading the elaborations, my main concerns remain unresolved. There are usually two settings discussed for the GFlowNet problem: the bijective setting (tree MDP) and the non-injective setting (DAG). For the tree MDP, which is trivial, it is clearly stated in the existing work [1] that "Interestingly, in such a case one can express the pseudo-value of a state V(s) as the sum of all the rewards of the descendants of s." It already implies most results in this work, without explicitly stating that it is the value function of the random policy.
Most of the content of [1] is devoted to solving the non-injective setting, which is more challenging. However, in this work, it seems to me that the challenges faced by the perspective of value iteration in this setting are simply framed as an assumption, as I pointed out in my official review. I think the authors should dig deeper into this challenging setting instead of adding restrictions. Overall, I think the current work can be a good starting point in this direction rather than a work that has explored the problem thoroughly and is ready to be published. [1]. Bengio E, Jain M, Korablyov M, et al. Flow network based generative models for non-iterative diverse candidate generation[J]. Advances in Neural Information Processing Systems, 2021, 34: 27381-27394.

---

Reply to Comment 1.1.1: Comment: Dear Reviewers trFY, UFa1, ghFJ, and wvJJ, Thanks to Reviewer trFY for the time spent on the follow-up comments. While we appreciate the engagement, which provides us the opportunity to further clarify our work, **we identified several key points that we believe stem from misunderstandings or factual errors (especially after private correspondence with the authors of [1]).** We address these concerns point-by-point to ensure our insights, contributions, and methods are correctly understood.

**1. On Proposition 1 in [1] and its difference from RPE**

> This work is "a known result already established in Proposition 1 in [1]" -- implies most results in this work.

We respectfully clarify that this is a misunderstanding of both Proposition 1's role in [1] and our contributions.

- **Proposition 1 in [1] highlights biases introduced by MaxEnt RL in DAG structures ([2])**, particularly how value functions in MaxEnt RL (denoted by $\hat{V}$; MaxEnt RL modifies the RL objective with entropy regularization, so $\hat{V}$ differs from the traditional value function $V$ in the standard non-MaxEnt RL studied in our work) are biased by the number of paths to terminal states.
In addition, **in our case, the value is NOT a sum of children values (and NOT in log-scale -- see [2] for the full analysis), but the mean instead (along with other developments toward a practical algorithm with new insights)**. While both MaxEnt and non-MaxEnt RL involve values, their definitions are different. Thus, Proposition 1 in [1] is distinct from our work, which theoretically and empirically demonstrates how policy evaluation for a random policy (non-MaxEnt RL) can yield value functions that effectively derive reward-matching policies.
- More importantly, Proposition 1 is analytical in nature, and is not about providing a tractable solution for learning reward-matching policies. Instead, the **Flow Matching** algorithm introduced in [1] builds on this analysis to offer a tractable method for policy learning (while our RPE provides a new and different method). In contrast, our work offers a fundamentally different and tractable approach. We propose a **policy evaluation variant for a FIXED RANDOM policy**, grounded in our theoretical finding that the random policy's value function (with scaled rewards) naturally aligns with the flow function in GFlowNets. `This insight provides a new and simpler alternative to the FM objective in [1] and other more complex objectives, including DB and TB, achieving competitive performance on standard and widely-adopted benchmarks.` Please refer to our response to Q1 for Reviewer trFY for more details on our distinctive contributions (due to space limitations).

**2. Mischaracterization of our method as "Value Iteration"**

> ... by the perspective of value iteration in this setting ....
> ... they interpret GFlowNet from the perspective of value iteration
> ... the connection between ... value function in value iteration and flow function in GFlowNet.

These statements mischaracterize our method, which is based on **policy evaluation**, not value iteration.
Policy evaluation is a foundational component of policy iteration, but it does not involve policy improvement (a key aspect of value iteration) [3]. This distinction is critical since our work simplifies GFlowNet training by reframing it as policy evaluation, avoiding the complexity of more involved learning objectives while achieving competitive performance across benchmarks.

**3. Clarification on the role of the assumption**

> The challenges faced by the perspective of value iteration in this setting are simply framed as an assumption.

As thoroughly explained in our rebuttal, we emphasize that the assumption we make is satisfied across a broad range of domains within the GFlowNets literature (extensive tree and non-tree scenarios, `as elaborated in our response to Q1 for reviewer ghFJ`). Overall, we appreciate the Reviewer’s time and effort in reviewing our work. **While we acknowledge the value of critical feedback, we hope the above clarifications address the factual errors and misunderstandings raised.** **We believe our work provides a novel and meaningful contribution to GFlowNets research, which is also recognized by the other reviewers in terms of both theoretical insights and practical performance:** Our work establishes a "`novel connection`" (UFa1, wvJJ) between GFlowNets and a fundamental RL problem, supported by "`well-motivated`" (ghFJ) and "`sound theoretical claims`" (ghFJ), with "`promising`" (UFa1) and "`SOTA`" (wvJJ) empirical results achieved by our "`novel`" (wvJJ) RPE method.

**References** [1] Bengio, Emmanuel, et al. Flow network based generative models for non-iterative diverse candidate generation. NeurIPS 2021. [2] Buesing et al. Approximate inference in discrete distributions with Monte Carlo tree search and value functions. AISTATS 2020. [3] Sutton et al. Reinforcement learning: An introduction. Cambridge: MIT Press, 1998.
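The evaluation-versus-iteration distinction drawn in point 2 can be made concrete with a minimal tabular sketch (hypothetical DAG and rewards, purely illustrative, not the paper's RPE implementation). Both procedures run the same Bellman sweep and differ only in how children are combined: a mean for evaluating a fixed uniform policy, a max for value iteration's implicit policy improvement.

```python
# Hypothetical DAG: s0 -> {a, b}; a -> {t1, t2}; b -> {t2, t3} (t2 is shared).
children = {"s0": ["a", "b"], "a": ["t1", "t2"], "b": ["t2", "t3"]}
reward = {"t1": 1.0, "t2": 4.0, "t3": 2.0}

def backup(V, combine):
    # One synchronous Bellman sweep; terminals are pinned to their rewards.
    return {s: reward[s] if s in reward else combine([V[c] for c in children[s]])
            for s in V}

def solve(combine, sweeps=10):
    V = {s: 0.0 for s in list(children) + list(reward)}
    for _ in range(sweeps):
        V = backup(V, combine)
    return V

mean = lambda xs: sum(xs) / len(xs)
V_eval = solve(mean)  # policy EVALUATION of a fixed uniform policy (no improvement step)
V_iter = solve(max)   # value ITERATION: greedy max backup with implicit improvement

# Evaluation averages over sibling subtrees (the quantity related to flows);
# iteration collapses onto the single best child.
```

Here `V_eval["a"]` settles at 2.5 (the mean of its children's rewards) while `V_iter["a"]` settles at 4.0, which is the structural difference the authors point to: no max, hence no policy improvement, appears in policy evaluation.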
Summary: The authors present a relationship between generative flow networks and RL via policy evaluation. Specifically, the authors claim that the value function obtained from evaluating a uniform policy is closely related to the flow function in GFlowNets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes Supplementary Material: n/a Relation To Broader Scientific Literature: This paper broadly relates to solving sequential decision-making problems with reinforcement learning and/or generative flow networks. Here the authors specifically investigate benchmarks where the solution space is multimodal, i.e., there are many diverse solutions that achieve high reward, since generative flow networks are well equipped to capture the multiple modalities. Essential References Not Discussed: n/a Other Strengths And Weaknesses: **Strengths** - The authors’ insight that evaluating a random policy approximately matches the flow function leads to a simple, practical, and novel algorithm that bridges RL and GFlowNets. - The proposed RPE algorithm empirically validates the claim that random policy evaluation matches the flow function, with high-accuracy results on various benchmarks. - The algorithm achieves SOTA results on discovering diverse solutions while maintaining high accuracy when compared to both GFlowNets and max-entropy RL. **Weaknesses** - The max-entropy RL baselines, specifically DQN, are not state of the art. Why was the proposed method not compared against a more recent method like Soft Actor-Critic (SAC)? Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the useful feedback and positive assessment of our work, and for noting that our work builds a simple, practical, and novel algorithm that bridges RL and GFlowNets. We carefully address your concerns as follows:

>Q1: Questions about Max Entropy RL baselines

Thanks for your insightful question! Our choice of Soft DQN and Munchausen DQN as the max-entropy RL baselines was guided by prior research [1], considering the discrete action spaces in our experimental settings. While SAC was originally designed for continuous control problems and would require discretization in our context, Soft DQN is naturally suited for discrete spaces, with Munchausen DQN representing an enhanced variant with documented performance advantages. To directly address the reviewer's concern about SAC, we conducted additional experiments incorporating SAC as a baseline on the RNA1 generation task, following CleanRL's implementation. The results illustrated in Figure 2 in https://anonymous.4open.science/r/RPE-added-experiments-57A9 show that SAC achieves comparable performance to Soft DQN in terms of mode discovery but is less efficient in terms of accuracy, which is consistent with prior research [1] demonstrating that discrete SAC typically exhibits inferior performance compared to Soft DQN and Munchausen DQN in discrete action spaces for similar tasks. **Notably, our proposed RPE method achieves superior performance across all metrics against these established baselines.**

[1] Tiapkin et al., Generative Flow Networks as Entropy-Regularized RL, AISTATS 2024

---

Rebuttal Comment 1.1: Comment: I appreciate the authors providing an additional discrete-SAC baseline and I am satisfied with their response. Several other reviewers have brought up concerns about the correctness of the theorems, whether the assumptions made in the paper hold for the non-tree DAG case, etc., which are valid. Another reviewer had issues with the novelty of this method.
While I am not entirely familiar with the literature on GFlowNets, it seems this connection between the value function under uniform policy evaluation and GFlowNets in the non max-Ent RL setting has not been discovered or previously studied. If that's the case, then I believe the simplicity of this approach and the connection it reveals will be beneficial to both RL and GFlowNet communities and thus I will maintain my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer wvJJ, Thanks for your time and effort in the follow-up comment! We are pleased to know that our rebuttal has addressed your concerns, and we appreciate the supportive feedback and for recognizing the simplicity and potential impact of our work! We would like to take this opportunity to summarize the novelty and contributions of our work: - Theoretical connections and algorithm design: We investigate and generalize the theoretical connections between value functions (in standard non-MaxEnt RL) and flow functions in both tree-structured and non-tree-structured DAGs. While prior work has explored certain theoretical connections for values in MaxEnt RL and flows in GFNs and in restricted tree-structured cases [1,2], our work significantly extends these ideas to the more general and widely applicable non-tree DAG setting, which provides new theoretical insights to account for the structural complexity of DAGs. (Additionally, as we have thoroughly explained in the rebuttal (e.g., response to Q1 for reviewer ghFJ and further clarification for reviewer UFa1), we would like to remark that the assumption we make is satisfied in a broad range of domains within the GFlowNets literature, including both tree and DAG tasks.) Building on these theoretical insights, we propose a novel and practical algorithm, RPE, leveraging our established theoretical connection between value functions and flows, which has not been previously discovered or studied. 
RPE is built upon ideas from policy evaluation in RL and forward policy structure in GFlowNets to offer a novel and unified approach that is both theoretically grounded and empirically validated. - Empirical validation across benchmarks: Our method, RPE, achieves superior performance compared to existing GFlowNet algorithms and MaxEnt RL methods across diverse benchmarks (including both tree and DAG scenarios). This can be attributed to a fundamental algorithmic design advantage of RPE: it evaluates a fixed uniform policy $\pi$, while both standard GFlowNets and MaxEnt RL algorithms need to estimate flows/values for continuously evolving policies. This distinctive approach helps RPE eliminate key sources of non-stationarity that can contribute to training instability. In addition, RPE adopts a simplified parameterization that learns only the flow function $F_{\theta}$, from which the sampling policy can be directly derived, which can reduce the potential approximation error from function approximators [3]. While there have been a number of works studying the connection between GFlowNets and variational inference [4], MCMC [5], generative models [6], and MaxEnt RL (which modified RL objectives) [1], our work provides distinctive and novel contributions by bridging the gap between standard non-MaxEnt RL and GFlowNets in general tree-structured and a number of non-tree-structured DAG problems. We believe our work provides significant new insights and practical tools for both communities. Furthermore, our work is not limited to theoretical insights. By extending the framework to non-tree DAGs, designing the RPE algorithm, and demonstrating its effectiveness through extensive experiments, we believe our contributions address both theoretical and practical gaps in the field, offering a new algorithm that has not been studied in previous works. 
Overall, we sincerely thank you for your time and effort in reviewing our work, as well as for your valuable suggestions and feedback! [1] Tiapkin et al., GFlowNets as Entropy-Regularized RL, AISTATS 2024 [2] Bengio et al., GFlowNet Foundations, JMLR 2023. [3] Shen et al., Towards Understanding and Improving GFlowNet Training, ICML 2023 [4] Malkin et al., GFlowNets and variational inference, ICLR 2023 [5] Tristan et al., Generative flow networks: a Markov chain perspective, arXiv 2023 [6] Zhang et al., Unifying generative models with GFlowNets and beyond, ICML Workshop 2022
Summary: The paper presents a connection between GFlowNets and non-MaxEnt RL. It leverages insights in the special case with a uniform policy to establish the connection, which leads to the development of the RPE algorithm. Empirical results suggest that RPE achieves competitive performance with existing GFlowNet training and entropy-regularized RL baselines.

### Update after rebuttal

I thank the authors for their extensive efforts in the rebuttal. The additional clarifications on the applicability of scaling factors and the additional evidence on the empirical advantage of the proposed method addressed my concerns well. While I'm not familiar with the GFlowNet literature and only have an understanding of MaxEnt RL, revealing the connection between value functions and flow functions in non-DAG scenarios seems novel and serves as an important step to bridge the research efforts along these two topics. The assumptions in the framework seem lenient enough to cover a wide range of environments in the literature. For the reasons above, I kept my score positive.

Claims And Evidence: Claims are well-supported.

Methods And Evaluation Criteria:
* The approach is well-motivated, but it is unclear if RPE retains good performance in environments with non-uniform backward policies.

Theoretical Claims:
* Theoretical claims are sound.

Experimental Designs Or Analyses: Empirical validations involve multiple standard benchmark environments. However, it is not shown how the asymptotic performance of the proposed algorithm compares to baselines for RNA1-RNA4 (Fig. 5) beyond 1e4 training iterations.

Supplementary Material: I reviewed all parts of the supplementary material.

Relation To Broader Scientific Literature: Relation to the literature is well explained in the paper, which introduces a few works linking GFlowNets to MaxEnt RL and highlights this work's contribution of connecting GFlowNets with standard RL.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Weaknesses:
* In the non-tree DAG case, requiring the scaling factor to be constant across paths to any state is a strong assumption that holds in the benchmarks being considered but not necessarily beyond. This is discussed in Section 4.2 and does not decrease the value of the theoretical insights, but it limits the applicability of the proposed algorithm in practice.

Other Comments Or Suggestions: N/A

Questions For Authors:
* How does the algorithm compare with baselines in terms of training variance and stability?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your valuable feedback and for the positive assessment of our work! We carefully address your concerns as follows:

>Q1: Concerns about the path-invariance of the scaling factor $g$

Thanks for your question. We would like to emphasize that this condition is satisfied in a broad range of domains in the GFlowNets literature, including most standard benchmarks and important application areas: 1) all tree-structured problems naturally satisfy this condition, e.g., sequence/language generation [1,3], red-teaming [2], and prompt optimization [4]; 2) non-tree-structured DAGs where the number of parent and child states remains independent of the specific trajectory to the state, like the widely studied set generation tasks [4], and real-world applications including feature selection [5], recommender systems [6], experimental design optimization [1], portfolio construction, etc. The key requirement is that the task can be modeled as a DAG where the ratio of child states to parent states remains consistent regardless of the specific path taken to reach the state. These examples demonstrate that the condition applies to a wide range of practical generative tasks within the GFlowNet framework. However, as noted in Section 4.1.2, we acknowledge that RPE's simplicity imposes limitations in certain environments, such as the toy HyperGrid task with boundary conditions. Yet, this simplicity is precisely what makes our findings both surprising and significant. Our work provides the first rigorous characterization of when GFlowNets and policy evaluation align, bridging a critical gap in understanding the relationship between (non-MaxEnt) RL and GFlowNets. This fundamental insight unlocks new possibilities and advances both fields by leveraging their complementary strengths.
Despite being based on basic policy evaluation principles for a uniform policy, our method achieves performance comparable to many well-developed GFlowNet algorithms while maintaining remarkable simplicity across diverse environments, making it a valuable foundation for future exploration in GFlowNets and RL research.

>Q2: Questions about training variance and stability

Thanks for your question. As demonstrated in Figures 3, 4, 5, and 6 in the main text, RPE exhibits both consistently improving performance and remarkably stable learning throughout the training process. It is noteworthy that RPE has a fundamental algorithmic design difference (advantage): it evaluates a fixed uniform policy $\pi$, while both standard GFlowNets and MaxEnt RL algorithms need to estimate flows/values for continuously evolving policies. This distinctive approach helps RPE eliminate key sources of non-stationarity that can contribute to training instability. To ensure statistical reliability, all experimental results are averaged across multiple random seeds, following the evaluation standards of previous works [7]. For the RNA1 generation task, `RPE demonstrates the lowest standard deviation among all baseline methods` (please see Table 1 in https://anonymous.4open.science/r/RPE-added-experiments-57A9). The reduced variance of RPE can be attributed to the reduced complexity of its learning objective through fixed-policy evaluation and a simplified parameterization that learns only the flow function $F_{\theta}$, unlike previous state-of-the-art GFlowNet methods such as trajectory balance (TB) [7], which can incur high variance due to their reliance on Monte-Carlo estimation techniques.

>Q3: Performance of RPE beyond 1e4 training iterations
To further evaluate RPE's sustainability beyond the standard training regime, we extend our experimental analysis by training for 1e5 iterations (10$\times$) on the RNA1 task, and compare it against the most competitive algorithms from both the Soft RL and GFlowNets paradigms, including Munchausen-DQN and TB on this task. The results presented in Fig. 1 in https://anonymous.4open.science/r/RPE-added-experiments-57A9 demonstrate that RPE maintains its performance advantage throughout the extended training period, consistently outperforming these top-performing baselines and validating its effectiveness as a stable and high-performing approach.

**References**

[1] Jain et al., GFlowNets for AI-Driven Scientific Discovery, Digital Discovery 2023
[2] Lee et al., Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning, ICLR 2025
[3] Hu et al., Amortizing Intractable Inference in Large Language Models, ICLR 2024
[4] Yun et al., Learning to Sample Effective and Diverse Prompts for Text-to-Image Generation, CVPR 2025
[4] Pan et al., Better Training of GFlowNets with Local Credit and Incomplete Trajectories, ICML 2023
[5] Ren et al., ID-RDRL: A Deep Reinforcement Learning-Based Feature Selection Intrusion Detection Model, Scientific Reports 2022
[6] Liu et al., Generative Flow Network for Listwise Recommendation, KDD 2023
[7] Malkin et al., Trajectory Balance: Improved Credit Assignment in GFlowNets, NeurIPS 2022
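The tree-case correspondence at the heart of this rebuttal — that flow functions coincide with uniform-policy value functions once terminal rewards are scaled by $g(\tau, x)$ — can be checked on a toy example. The sketch below is only an illustration of that claim under the stated setup (the tree, rewards, and the choice of $g$ as the product of branching factors along the path are invented for the example; it is not the authors' implementation):

```python
# Toy check: with transformed rewards R'(x) = R(x) * g(tau, x), where g
# multiplies the branching factors |A(s)| along the unique path to x,
# uniform-policy evaluation reproduces the GFlowNet flow function on a tree.
tree = {            # node -> children (leaves are absent from the dict)
    "s0": ["a", "b"],
    "a": ["x1", "x2", "x3"],
    "b": ["x4"],
}
R = {"x1": 2.0, "x2": 1.0, "x3": 0.5, "x4": 4.0}   # invented terminal rewards

def flow(s):
    """Flow matching on a tree: F(s) = sum of child flows, F(leaf) = R(leaf)."""
    if s not in tree:
        return R[s]
    return sum(flow(c) for c in tree[s])

def value(s, g=1.0):
    """Uniform-policy evaluation with scaled terminal reward R(x) * g(tau, x)."""
    if s not in tree:
        return R[s] * g
    kids = tree[s]
    return sum(value(c, g * len(kids)) for c in kids) / len(kids)

assert abs(flow("s0") - value("s0")) < 1e-9   # both equal 7.5 on this tree
```

Because each 1/|A(s)| factor of the uniform policy is cancelled by a matching factor inside $g$, the equality holds at every node, not just the root.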
Summary: This paper explores a connection between GFlowNets and policy evaluation in the specific setting of undiscounted reinforcement learning with only terminal rewards. The central idea revolves around the flow constraint in GFlowNets, which shows that the flow out of a state must equal the total in-flow from its successor states. The authors draw a parallel between this constraint and the objective of policy evaluation, where the value of a state is the average of the values of its descendant states. The key difference identified is a normalization factor (the number of descendants) in policy evaluation. The paper proposes scaling the flow function in GFlowNets to account for this normalization, suggesting a potential way to perform policy evaluation using GFlowNets, which is shown to work for tree-structured state spaces.

## Update after rebuttal

After discussing with the authors, I will maintain my original assessment, given the limitations of the method and its similarity to existing works.

Claims And Evidence: The paper claims a connection between GFlowNets and policy evaluation and proposes a method to bridge this gap, particularly for undiscounted RL with only terminal rewards. The theoretical result for the tree case (Theorem 4.1) seems plausible based on the explanation. However, I am not sure whether it is correct for the general DAG case (Theorem 4.2).

Methods And Evaluation Criteria: The proposed method involves scaling the flow function F(s) in GFlowNets based on the number of "branches" or the cumulative inverse of the number of descendants to align with the policy evaluation objective. The evaluation seems to primarily focus on the theoretical derivations, with limited mention of empirical evaluation or specific evaluation criteria. I am not sure about the motivation behind this specific approach and its advantages, or its relation to existing policy evaluation methods like Soft Policy Iteration.
Theoretical Claims: I'm unsure about the validity of Theorem 4.2, which extends the connection to the general DAG case. The definition of g(τ,s_t) is clear. However, the subsequent definition of R′(x)=R(x)g(τ,x) is ambiguous. In a DAG, multiple trajectories τ can lead to the same terminal state x. This raises a concern, as the left side of the equation (R′(x)) should ideally be independent of the specific trajectory taken, while the right side (R(x)g(τ,x)) explicitly depends on τ through g(τ,x). This suggests a potential flaw in the formulation for the general DAG case. The correctness of the proof for Theorem 4.2 is therefore in question.

Experimental Designs Or Analyses: The experiments are conducted on predicting DNA or RNA sequences and molecule generation. I wonder why a comparison on a standard RF setting is not presented.

Supplementary Material: I checked the proof of Theorem 4.2, but I'm not sure whether it is correct.

Relation To Broader Scientific Literature: This work is related to Soft Policy Iteration (control as inference), where the policy is indeed often treated as uniform. The paper should discuss how this approach compares to and differs from existing methods in the context of policy evaluation and control.

Essential References Not Discussed: However, a more comprehensive understanding of the GFlowNet and policy evaluation literature might reveal missing relevant works.

Other Strengths And Weaknesses:

Strengths: The paper attempts to establish a novel connection between GFlowNets and a fundamental problem in reinforcement learning, policy evaluation. The result for the tree case (Theorem 4.1) seems promising.

Weaknesses: The main weakness lies in the ambiguity and potential flaw in the extension to the general DAG case (Theorem 4.2), particularly concerning the τ-dependence in the definition of R′(x). The lack of clarity in definitions and assumptions, such as the condition in lines 629-630, also detracts from the paper's strength.
Furthermore, the motivation for this specific approach and its relation to existing RL methods are not clearly articulated. The paper also lacks details on experimental validation.

Other Comments Or Suggestions:
- The authors should rigorously define all variables in the equations to improve clarity.
- The ambiguity surrounding the definition of R′(x)=R(x)g(τ,x) in the context of DAGs with multiple trajectories reaching the same terminal state needs to be addressed. How is the τ-dependence resolved?
- For the condition mentioned in lines 629-630, "If any trajectories τ_1 and τ_2 that visit s_t satisfy g(τ_1,s_t)=g(τ_2,s_t)," it needs to be clearly stated whether this is an assumption and, if so, how it can be generally satisfied in a DAG. In line 068 (2nd column), the backward probability P_B(s∣s′) seems to have a typo and should likely be P_B(s′∣s).
- The paper would benefit from a clearer explanation of the motivation behind using this specific approach for policy evaluation. What are the potential advantages or insights it offers compared to existing methods?
- A discussion on how this work relates to Soft Policy Iteration and the "control as inference" framework would help contextualize the contribution.

Questions For Authors:
1. In the definition of R′(x)=R(x)g(τ,x) for the general DAG case (Theorem 4.2), how is the τ-dependence handled, given that multiple trajectories can reach the same terminal state x? Please clarify why R′(x) on the left-hand side would be τ-independent.
2. Regarding the condition stated in lines 629-630: "If any trajectories τ_1 and τ_2 that visit s_t satisfy g(τ_1,s_t)=g(τ_2,s_t)," is this an assumption that needs to hold for your theoretical results to be valid? If so, under what conditions on the DAG structure and the transition probabilities would this assumption be satisfied in general?
3. In line 068 (2nd column), for the backward probability, is P_B(s∣s′)=F(s→s′)/F(s′) a typo?
Should it be P_B(s′∣s)=F(s→s′)/F(s) or P_B(s′∣s)=F(s→s′)/F(s′)?
4. What is the specific motivation for using this approach to connect GFlowNets and policy evaluation? What advantages does it offer over existing policy evaluation techniques, particularly in the context of undiscounted RL with only terminal rewards?
5. How does this work relate to the literature on Soft Policy Iteration and the "control as inference" paradigm, where policies are often treated as uniform distributions? What are the key differences and similarities?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thanks for your valuable suggestions and feedback! We carefully address your concerns as follows:

>Q1: $\tau$-dependence of the transformed rewards $R(x)g(\tau,x)$

Thanks for your question! This concern is addressed by the assumption in Theorem 4.2: "For any trajectories $\tau_1$ and $\tau_2$ that visit $s_t$, they must satisfy the condition $g(\tau_1, s_t) = g(\tau_2, s_t)$." Under this assumption, the transformation from $R(x)$ to $R'(x)=R(x)g(\tau,x)$ is well-defined, since the transformed reward function $R'(x)$ remains consistent regardless of the specific trajectory taken to reach the terminal state $x$ (and this property naturally holds in a number of standard GFlowNet benchmarks -- see Q2). We will also update the notation to eliminate any ambiguity.

>Q2: Questions about the condition g(τ_1,s_t)=g(τ_2,s_t)

Thanks for your question. Indeed, this condition is a necessary assumption for the validity of Theorem 4.2. We remark that this assumption can be satisfied in a number of standard GFlowNet problems. Please refer to our response to Reviewer ghFJ's Q1 for a detailed discussion.

>Q3: Definition of P_B

Thanks for the question. This is not a typo, and $P_B$ is defined as $P_B(s|s')=\frac{F(s\to s')}{F(s')}$ following the GFlowNets literature (Eq. (17) in [1]), representing the probability of having come from state $s$ given that we are currently at state $s'$ (defined in terms of the flow).

>Q4: Concerns regarding motivations and contributions

The motivation for connecting GFlowNets and policy evaluation stems from a critical gap in the current understanding of GFlowNets' relationship with RL frameworks. While previous research has extensively explored connections between GFlowNets and various ML methods, their link to RL has largely been limited to MaxEnt RL [2,3] via modified objectives with entropy regularization.
However, the connection between GFlowNets and standard (non-MaxEnt) RL remains largely unexplored, despite the shared foundation of sequential decision-making. This gap limits cross-disciplinary insights (e.g., leveraging RL's efficiency for GFlowNets or enriching RL with GFlowNets' diversity). Our work fills this crucial gap and makes the following key contributions: (1) Surprisingly, we find that the flow functions in GFlowNets naturally correspond to value functions in uniform policy evaluation. This insight leads to our key finding that reward-matching effects, previously achieved through complex GFlowNet algorithms [1,4], **can be accomplished through simple policy evaluation.** (2) Based on this theoretical foundation, our proposed RPE method achieves competitive performance with reduced complexity. (3) Our work challenges the prevailing belief that GFlowNets are intrinsically tied to MaxEnt RL and provides a fundamentally new perspective on GFlowNet learning dynamics. In addition, traditional policy evaluation is merely a component of policy improvement; RPE reframes it to generate diverse, high-quality candidates via uniform policy evaluation with transformed rewards, expanding its utility to domains requiring both quality and diversity.

>Q5: Relations to Soft Policy Iteration (SPI) and the "control as inference" framework

Thanks for your insightful question. Our work shares conceptual connections with SPI, while establishing key distinctions that highlight its novelty: (1) SPI necessitates iterative cycles of policy evaluation and policy improvement to derive the desired policy, while RPE evaluates only the uniform policy, transforming its value function into flow functions to achieve reward matching.
(2) SPI and the 'control as inference' framework both explicitly incorporate entropy regularization terms to ensure policy stochasticity, while RPE achieves superior reward-matching performance through standard policy evaluation utilizing a summarization operator, eliminating the need for explicit regularization. Our experimental results demonstrate RPE's superior performance compared to both SoftDQN and MunchausenDQN (which are advanced soft RL algorithms for discrete environments) in terms of the standard GFlowNets evaluation protocol. We will add detailed discussions of these connections in the revision.

>Q6: Comparison on a standard RF setting

Since the goal of our work is to validate that a simple RPE algorithm can achieve competitive reward-matching performance compared to existing complex GFlowNets, we evaluate RPE and extensive baselines using well-established benchmarks in the GFlowNets literature [8,9], e.g., TFBind/RNA/molecule generation, which require a critical balance between diversity and quality.

[1] Bengio et al., GFlowNet Foundations, JMLR 2023
[2] Tiapkin et al., Generative Flow Networks as Entropy-Regularized RL, AISTATS 2024
[3] Deleu et al., Discrete Probabilistic Inference as Control in Multi-Path Environments, UAI 2024
[4] Bengio et al., Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation, NeurIPS 2021

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. Firstly, there is indeed a typo in the original paper. On page 2, in the second column, line 14, the definition of P_B(s|s′) is given as F(s → s′)/F(s), whereas the correct definition should be P_B(s|s′) = F(s → s′)/F(s′). I apologize, as I also made a similar error in my initial review, which may have caused the misunderstanding. Secondly, regarding Theorem 4.2, as we discussed, the current assumption might be quite restrictive, potentially limiting the theorem's applicability to a broader range of scenarios.
It would be better if this could be shown to hold under more general conditions (beyond the tree structure). Otherwise, I will likely maintain my current assessment.

---

Reply to Comment 1.1.1:

Comment: Dear reviewer UFa1, Thanks for your time and effort in the follow-up comment, and for pointing out the typo, which we will correct in the revision. We also appreciate the opportunity to further clarify the generality of the assumption introduced in Theorem 4.2. We apologize that, due to space limitations, we could not elaborate on the generality of the assumption in the initial rebuttal and had to `defer our response to Reviewer ghFJ's Q1`. In this response, we aim to provide a detailed clarification of the reasoning behind this assumption, its applicability within the GFlowNets framework, and its implications for practical applications. Theorems 4.1-4.2 are formulated within the context of GFlowNets, which address **constructive tasks** by sampling compositional objects through a sequence of constructive actions. These tasks are modeled as **Directed Acyclic Graphs (DAGs)**, where agents do not revisit the same state within a single episode, ensuring the graph remains acyclic. This structural property of DAGs inherently supports the assumption in Theorem 4.2, **which can be satisfied in a broad range of domains within the GFlowNets literature, beyond just tree-structured problems** (e.g., sequence/language generation [1,2], red-teaming [3], and prompt optimization [4]):

- **Non-tree-structured DAGs:** `Many standard GFlowNets benchmarks satisfy this assumption` due to the constructive nature of these DAG tasks, where states are compositional objects built via sequences of actions. Specifically, in most practical scenarios, the number of actions at a state $s_i$ ($|A(s_i)|$) and the in-degree of its child states ($|B(s_{i+1})|$) are inherently linked by the symmetry of compositional generation.
For example, in tasks where objects are built via k-step additive processes (e.g., sets or sequences generation [1,5]), the number of valid actions at step $i$ (e.g., adding one of $n-i$ remaining elements) scales inversely with the in-degree of the resulting state (e.g., $i+1$ ways to remove an element to return to its parent). This relationship ensures the ratio $\frac{|A(s_i)|}{|B(s_{i+1})|}=\frac{n-i}{i+1}$, a trajectory-independent constant determined solely by the combinatorial rules of the task. Even in non-uniform DAGs (e.g., molecular graphs with variable branching like the prepend/append MDP studied in our paper), consistency arises when transitions satisfy local balance: if every path to $s_{i+1}$ requires a fixed number of parent states proportional to the action space at $s_i$, the ratio remains stable. Therefore, our assumption can naturally hold in these DAG tasks, since the ratios remain consistent regardless of the trajectory $\tau$. We remark that many practical examples with DAG structure also satisfy this assumption, including widely-studied set generation tasks [5], feature selection [6], recommender systems [7], and experimental design optimization. **Moreover, our empirical results validate RPE's superior performance on commonly studied DAG tasks like TFBind/RNA/Molecule sequence generation.** Thus, we argue that this assumption is not a restriction for GFlowNets problems, **since it broadly applies across extensive tree and non-tree (DAG) scenarios**. However, as we acknowledge in Section 4.1.2, there are certain corner cases, such as the toy HyperGrid task with boundary conditions (where task symmetry fails for states at edges), where this assumption may not hold. We view this limitation as a natural consequence of the simplicity of our method. **Yet, it is precisely this simplicity that makes our findings both surprising and impactful**, as our work provides the first rigorous characterization of when GFNs and policy evaluation align. 
Despite its simplicity, our method achieves performance comparable to existing GFlowNet algorithms across diverse benchmarks (both tree and DAG), offering a practical and theoretically grounded approach that bridges the gap between standard (non-MaxEnt) RL and GFlowNets. We believe this makes our work a valuable foundation for further exploration in both fields. We hope this response addresses the concern and demonstrates the generality and significance of our findings, and we will further expand the discussion of this assumption in the revision.

[1] Jain et al., GFlowNets for AI-Driven Scientific Discovery, Digital Discovery 2023
[2] Hu et al., Amortizing Intractable Inference in Large Language Models, ICLR 2024
[3] Lee et al., Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning, ICLR 2025
[4] Yun et al., Learning to Sample Effective and Diverse Prompts for Text-to-Image Generation, CVPR 2025
[5] Pan et al., Better Training of GFlowNets with Local Credit and Incomplete Trajectories, ICML 2023
[6] Ren et al., ID-RDRL: A Deep Reinforcement Learning-Based Feature Selection Intrusion Detection Model, Scientific Reports 2022
[7] Liu et al., Generative Flow Network for Listwise Recommendation, KDD 2023
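The trajectory-independence of the ratio $\frac{|A(s_i)|}{|B(s_{i+1})|} = \frac{n-i}{i+1}$ claimed in this reply for $k$-step additive set generation can be checked by brute force. The snippet below is an illustrative check only (the DAG construction is our own paraphrase of the set-generation task, not the authors' code): it enumerates every state of the subset DAG for a small $n$ and verifies that the in-degree of a child depends only on its depth, never on the path taken.

```python
from itertools import combinations

# n-element set generation: a state is the subset built so far; an action adds
# one remaining element. At depth i there are |A(s_i)| = n - i actions, and a
# depth-(i + 1) state has i + 1 parents (ways to remove one element),
# regardless of which trajectory produced it.
n = 5
for i in range(n):
    for state in combinations(range(n), i):          # every state at depth i
        n_actions = n - i                            # |A(s_i)|
        for e in set(range(n)) - set(state):
            child = set(state) | {e}
            # |B(s_{i+1})|: enumerate the child's parents explicitly.
            in_degree = sum(1 for _ in combinations(sorted(child), len(child) - 1))
            assert in_degree == i + 1                # path-independent in-degree
            assert n_actions / in_degree == (n - i) / (i + 1)
```

By contrast, a grid state near a boundary (the HyperGrid corner case acknowledged above) would fail this check, since its in-degree depends on position rather than depth alone.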
One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy
Accept (poster)
Summary: This paper proposes DDAD (Distributional-Discrepancy-based Adversarial Defense), a novel two-pronged adversarial defense method that leverages statistical adversarial data detection (SADD) to improve robustness against adversarial attacks. The paper's contributions include:
1. Demonstrating that minimizing distributional discrepancy (via Maximum Mean Discrepancy, MMD) can reduce adversarial risk.
2. Introducing a two-pronged defense mechanism that combines: 1) detection, which uses an optimized MMD (MMD-OPT) to distinguish clean examples (CEs) from adversarial examples (AEs); and 2) denoising, which applies a denoiser to transform detected AEs before classification instead of discarding them.
3. Extensive experiments on CIFAR-10 and ImageNet-1K, showing improved clean and robust accuracy over state-of-the-art (SOTA) adversarial defenses against adaptive white-box attacks.

Claims And Evidence: yes

Methods And Evaluation Criteria:

Pros:
1. The combination of MMD-based detection and adversarial denoising is novel and addresses the limitations of traditional adversarial detection methods. Unlike previous SADD-based methods that discard detected adversarial samples, DDAD recovers useful information via denoising.
2. Establishes a formal connection between adversarial risk and distributional discrepancy. Proves that minimizing MMD can reduce the expected loss on adversarial examples, strengthening the theoretical foundation of SADD-based defenses.
3. Evaluates multiple model architectures (ResNet, WideResNet, Swin Transformer). Tests against strong adaptive white-box attacks (PGD+EOT, AutoAttack). Demonstrates generalization to transfer attacks and robustness improvements on ImageNet-1K.
4. Unlike generative-based adversarial purification methods, DDAD does not rely on explicit probability density estimation, which is often unreliable for large datasets. Balances the robustness-accuracy trade-off better than existing denoiser-based approaches.

Cons:
1.
DDAD relies on batch-wise processing for adversarial detection, requiring a minimum batch size for stable performance. This limitation makes real-time, single-sample inference challenging, which reduces practicality for real-world applications.
2. The training phase requires optimizing MMD-OPT and training a separate denoiser, which introduces additional computational overhead. The paper does not compare training time or efficiency trade-offs with SOTA defenses.
3. While DDAD is evaluated against PGD+EOT and AutoAttack, stronger adaptive adversaries (e.g., BPDA, multi-step AutoAttack, or gradient-free query-based methods) could be tested. Adaptive attacks that target the MMD-OPT feature space should be explored.
4. The method is evaluated primarily on CIFAR-10 and ImageNet-1K, but its applicability to real-world security-sensitive domains (e.g., medical imaging, autonomous driving) is not discussed. How well DDAD generalizes to non-vision domains (e.g., NLP, speech models) remains an open question.

Theoretical Claims: yes, it is ok

Experimental Designs Or Analyses: see Methods And Evaluation Criteria

Supplementary Material: yes, all

Relation To Broader Scientific Literature: see Methods And Evaluation Criteria

Essential References Not Discussed: see Methods And Evaluation Criteria

Other Strengths And Weaknesses: see Methods And Evaluation Criteria

Other Comments Or Suggestions: see Methods And Evaluation Criteria

Questions For Authors: see Methods And Evaluation Criteria

Ethical Review Concerns: no need

Code Of Conduct: Affirmed.

Overall Recommendation: 3
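The distributional-discrepancy statistic at the core of the detection prong can be sketched generically. The snippet below is an illustrative Gaussian-kernel estimate of squared MMD between two feature batches — a plain (biased, V-statistic) estimator with an arbitrarily chosen bandwidth, not the paper's optimized MMD-OPT — showing why a batch drawn from a shifted distribution yields a larger discrepancy than a second clean batch:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma):
    # Biased (V-statistic) estimate of squared MMD between batches X and Y.
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(100, 8))    # stand-in for clean features
shifted = rng.normal(1.0, 1.0, size=(100, 8))  # stand-in for an adversarially shifted batch
same = rng.normal(0.0, 1.0, size=(100, 8))     # a second clean batch

# A large discrepancy flags the shifted batch; clean-vs-clean stays near zero.
assert mmd2(clean, shifted, sigma=4.0) > mmd2(clean, same, sigma=4.0)
```

This batch-level nature of the statistic is exactly why the single-sample inference concern in Cons 1 arises: a one-element batch gives an uninformative estimate.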
Rebuttal 1:

Rebuttal:

## 1. Batch-wise Processing

In our humble opinion, the practicality of a method should be evaluated in the context of specific scenarios and application requirements, which means there is no absolutely 'practical' or 'impractical' method. The key message we want to deliver here is: **batch-wise evaluation is not impractical, but it has some costs**:

- **Proposed solution:** For user inference, single samples provided by the user can be dynamically stored in a queue. Once the queue accumulates enough samples to form a batch, our method can then process the batch collectively using the proposed approach.
- **Costs of this solution:** The main cost is the waiting time to accumulate enough samples (e.g., 50). However, in high-throughput scenarios (e.g., Google's terminals), this delay is minimal (often <2 seconds). For applications with stricter latency requirements, the batch size can be dynamically adjusted based on the incoming data rate to minimize waiting time. For instance, if the system detects a lower data arrival rate, it can process smaller batches to ensure timely responses.
- **Comparison with SOTA AP methods:** Diffusion-based AP methods can handle single-sample inputs but suffer from slow inference speeds (e.g., DiffPure [1] takes ~4 seconds per CIFAR-10 image on an A100 GPU). In contrast, our method averages only ~0.003 seconds per image. Assuming there are 1000 images, DiffPure would take 4000 seconds to complete the inference, while our method takes only 3 seconds. Therefore, if the waiting time to form a batch is less than 3997 seconds, our method is more time-efficient than DiffPure. Thus, diffusion-based AP methods can hardly be applied to a system where data arrives quickly. Instead, our method can handle it, demonstrating that batch-wise evaluation is not impractical.

Overall, it is a trade-off problem: using our method for user inference can achieve high robustness, but the cost is waiting for batch processing.
Based on the performance improvements our method obtains over the baseline methods and the fact that current SOTA AP methods are generally slow at inference, we believe the cost is feasible and acceptable.

[1] Diffusion Models for Adversarial Purification, ICML 2022.

## 2. Training and Inference Efficiency

We provide comparisons of training time with 3 representative AT-based methods [1][2][3] in Table 1. Notably, the current SOTA AT method [4] requires generating 50M synthetic images, making it extremely time-consuming. The point we want to highlight is: even compared to simpler AT methods, our method demonstrates significantly better efficiency. We also provide comparisons of inference time with 2 SOTA diffusion-based AP methods [5][6] in Table 2.

Table 1: Training time (hours:minutes:seconds) of different methods on CIFAR-10 with 2 x NVIDIA A100. The target model is RN-18.

|Method|Training Time|
|-|-|
|[1]|00:55:54|
|[2]|01:27:28|
|[3]|01:04:19|
|DDAD|00:28:17|

Table 2: Inference time per image (seconds) of different methods on CIFAR-10 with 1 x NVIDIA A100. The target model is WRN-28-10.

|Method|Inference Time per Image|
|-|-|
|[5]|3.934|
|[6]|14.902|
|DDAD|0.003|

[1] Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018.
[2] Theoretically Principled Trade-off between Robustness and Accuracy, ICML 2019.
[3] Improving Adversarial Robustness Requires Revisiting Misclassified Examples, ICLR 2020.
[4] Better Diffusion Models Further Improve Adversarial Training, ICML 2023.
[5] Diffusion Models for Adversarial Purification, ICML 2022.
[6] Robust Evaluation of Diffusion-Based Adversarial Purification, ICCV 2023.

## 3. Adaptive Attack

According to [1], PGD+EOT is considered the **strongest** attack against diffusion-based AP methods, while AutoAttack is the **strongest** against AT-based methods.
Our proposed adaptive attacks (both PGD+EOT and AutoAttack) additionally target DDAD's detection mechanism (i.e., MMD-OPT), and adaptive PGD+EOT proves most effective in breaking DDAD. Moreover, Table 3 in our paper demonstrates DDAD achieves the **best** average performance against **adaptive BPDA+EOT attack** across various baselines. Finally, please kindly check the results against adaptive AutoAttack in **Section 2 & 3 of Reviewer 3bTb's responses**. [1] Robust Evaluation of Diffusion-based Adversarial Purification, ICCV 2023. ## 4. Applicability of DDAD Thank you for your concern! As specified in our problem setting, we primarily focus on robust classification, which is the standard setting adopted by most existing defense methods. In our humble opinion, even though classification is relatively straightforward, achieving fully robust model predictions remains challenging, indicating that robustness would likely be even harder to attain in more complex tasks. However, we agree that generalizing DDAD to other tasks/domains is an interesting direction and we leave it as future work.
Summary: This work first validates the effectiveness of the SADD-based approach through theoretical analysis and mathematical proofs. To address the limitations of traditional SADD-based methods in utilizing AEs, the authors innovatively propose the DDAD method. By introducing an additional denoiser training module, the proposed method effectively eliminates adversarial noise interference, thereby significantly improving data utilization in data-constrained scenarios. Experimental results demonstrate that the DDAD method achieves SOTA performance on two benchmark datasets, providing a novel solution for enhancing model robustness in few-shot learning scenarios. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I have thoroughly verified and validated all mathematical derivations and formula proofs presented in this work. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. Supplementary Material: Yes, I have conducted a comprehensive review of all supplementary materials. Relation To Broader Scientific Literature: This work builds upon the existing SADD-based framework while addressing its limitations through the innovative integration of a denoiser module. Experimental results demonstrate significant performance gains, validating the effectiveness of this architectural improvement. Essential References Not Discussed: No. Other Strengths And Weaknesses: The strengths of this work have been previously highlighted. Below are some questions and concerns regarding this study: 1. In Remark 1 of the 'Problem Setting' section, there appears to be a potential typographical issue in the final sentence: '..., the ground-truth ial domains are equal in our problem setting'. Could the authors please verify and clarify this statement? 2. 
In the 'Evaluation Settings' subsection of the 'Experiments' section, the authors state: "Notably, we find that our method achieves the worst case robust accuracy on adaptive white-box PGD+EOT attack, Therefore, we report the robust accuracy of our method on adaptive white-box PGD+EOT attack for Table 1 and 2." Could the authors please clarify: a) The authors mention using both PGD+EOT and AutoAttack for evaluation at the beginning of this section, but only report PGD+EOT results in Tables 1 and 2. Could you explain why AutoAttack results were omitted? 3. Regarding the experimental design in the 'Evaluation Settings' subsection, the authors employ different attack methods to evaluate AT, AP, and the proposed method. Could the authors please: a) Justify the fairness of this experimental setup? b) Clarify the specific meaning of 'Robust Accuracy' in Tables 1 and 2, given that different attacks were used for different methods? A more consistent evaluation framework using uniform attack methods across all compared techniques would strengthen the comparative analysis. 4. While the authors describe computational resources in Section E.8, could the authors please provide a more detailed comparison of the training efficiency with traditional methods? 5. In the ablation studies, Table 7 shows the "Sensitivity of DDAD to the threshold" where the robust accuracy drops to nearly zero when the threshold value increases from 0.01 to 0.03. Could the authors please: a) Explain this dramatic performance degradation? b) Provide insights into the underlying mechanism causing this sensitivity? Other Comments Or Suggestions: See ‘Other Strengths And Weaknesses’ Questions For Authors: See ‘Other Strengths And Weaknesses’ Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## 1. Rendering Errors of LaTeX Thank you for pointing out this unexpected rendering issue caused by LaTex! The entire sentence in line 112 is: *'...the ground-truth labelling functions for the clean and adversarial domains are equal in our problem setting.'* This statement is supported based on Assumptions 1&2. We will fix this issue in the updated version of our paper! ## 2 & 3. Clarification of Experimental Settings Sorry for the confusion! Indeed, we did not omit AutoAttack results intentionally. Instead, we follow the principle of evaluating each method under its **worst-case scenario** (i.e., the strongest attack against that method). Therefore, 'Robust Accuracy' in Tables 1 and 2 refers to: - For AT-based methods, we measure the robust accuracy of AutoAttack, since it is empirically the strongest attack for AT-based methods, as demonstrated by [1]. - For AP-based methods, we measure the robust accuracy of PGD+EOT attack, since it is empirically the strongest attack for AP-based methods, as shown in [2]. - For DDAD, we measure the robust accuracy of adaptive PGD+EOT attack, which additionally targets both the denoiser and the detector modules (see Algorithm 3). We empirically observed that adaptive white-box PGD+EOT provides the worst-case scenario for DDAD compared to adaptive white-box AutoAttack. Thus, each defense method is evaluated **under its own strongest attack setting**, ensuring a **relatively fair** comparison since it mitigates evaluation bias (e.g., AT-based methods perform well on PGD+EOT because they have seen PGD examples during training). Similarly, if we only consider attacking the denoiser and the classifier (i.e., the white-box setting for AP-based methods), DDAD can achieve around 77% robustness on PGD+EOT and 81% on AutoAttack, which is **clearly not fair** for both AT and AP methods. We provide the full results in Table 1 below. 
Table 1: Clean and robust accuracy (%) against adaptive white-box PGD+EOT ($\ell_\infty, \epsilon = 8/255$) and adaptive white-box AutoAttack ($\ell_\infty, \epsilon = 8/255$) on CIFAR-10. * means this method is trained with extra data. We show the most successful defense in **bold**.

|Type|Method|Clean|PGD+EOT|AutoAttack|
|---|---|---|---|---|
|AT|Gowal et al. (2021)|87.51|66.01|63.38|
|AT|Gowal et al. (2020)*|88.54|65.10|62.76|
|AT|Pang et al. (2022a)|88.62|64.95|61.04|
|AP|Yoon et al. (2021)|85.66|33.48|59.53|
|AP|Nie et al. (2022)|90.07|46.84|63.60|
|AP|Lee & Kim (2023)|90.16|55.82|70.47|
|Ours|DDAD|**94.16**|**67.53**|**72.21**|

[1] RobustBench: a standardized adversarial robustness benchmark, NeurIPS D&B 2021.
[2] Robust Evaluation of Diffusion-Based Adversarial Purification, ICCV 2023.

## 4. Training and Inference Efficiency

We provide comparisons of training time with 3 representative AT-based methods [1][2][3] in Table 2. Notably, the current SOTA AT method [4] requires generating 50M synthetic images, making it extremely time-consuming. The point we want to highlight is: even compared to simpler AT methods, our method demonstrates significantly better efficiency. We also provide comparisons of inference time with 2 SOTA diffusion-based AP methods [5][6] in Table 3.

Table 2: Training time (hours:minutes:seconds) of different methods on CIFAR-10 with 2 x NVIDIA A100. The target model is RN-18.

|Method|Training Time|
|-|-|
|[1]|00:55:54|
|[2]|01:27:28|
|[3]|01:04:19|
|DDAD|00:28:17|

Table 3: Inference time per image (seconds) of different methods on CIFAR-10 with 1 x NVIDIA A100. The target model is WRN-28-10.

|Method|Inference Time per Image|
|-|-|
|[5]|3.934|
|[6]|14.902|
|DDAD|0.003|

[1] Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018.
[2] Theoretically Principled Trade-off between Robustness and Accuracy, ICML 2019.
[3] Improving Adversarial Robustness Requires Revisiting Misclassified Examples, ICLR 2020.
[4] Better Diffusion Models Further Improve Adversarial Training, ICML 2023.
[5] Diffusion Models for Adversarial Purification, ICML 2022.
[6] Robust Evaluation of Diffusion-Based Adversarial Purification, ICCV 2023.

## 5. Sensitivity of DDAD to the Threshold Values

The performance degradation occurs in ImageNet-1K, and this sensitivity can be attributed to 2 major reasons: (1) the evaluation of ImageNet-1K uses AEs with lower perturbation budgets compared to CIFAR-10, which makes it reasonable to use a smaller threshold value for ImageNet-1K. (2) ImageNet-1K is a large-scale dataset, so it may require more samples to optimize MMD-OPT. It is possible that the current MMD-OPT for ImageNet-1K has not been fully optimized, leading to such sensitivity (for now, we only use 1000 training samples to train MMD-OPT for ImageNet-1K). Luckily, we still have a range of threshold values that can produce stable and competitive results on ImageNet-1K. In the future, it would be interesting to see whether MMD-OPT will be more stable to threshold values on ImageNet-1K if we increase the training samples.
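To make the threshold's role concrete, here is a minimal Python sketch of a DDAD-style detect-then-purify step. It is a stand-in under stated assumptions, not the paper's code: a fixed-bandwidth RBF kernel replaces the learned kernel of MMD-OPT, and the names (`mmd_rbf`, `ddad_style_inference`, `tau`) are hypothetical. It illustrates why an oversized threshold hurts robust accuracy: once `tau` exceeds the MMD of adversarial batches, they reach the classifier unpurified.

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Biased MMD^2 estimate between two sample batches, using a
    fixed RBF kernel (a crude stand-in for the optimized kernel)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def ddad_style_inference(batch, clean_ref, denoise, classify, tau=0.01):
    """If the batch looks distributionally shifted from clean reference data
    (MMD above threshold tau), purify it first; otherwise classify as-is."""
    if mmd_rbf(batch, clean_ref) > tau:
        batch = denoise(batch)
    return classify(batch)
```

With lower perturbation budgets (as on ImageNet-1K), the measured discrepancy shrinks, so a smaller `tau` is needed, matching the sensitivity described above.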
Summary: This paper proposes a two stages adversarial defence method based on distribution discrepancy between clean samples and adversarial samples. Firstly, the authors train a model called MMD-OPT by maximise the MMD between distribution of clean data and adversarial data. Then, the MMD-OPT can act as a guidance to train a denoiser to denoise the adversarial sample to the clean one. During the inference stage, the MMD-OPT can act as a detector to detect adversarial samples, and the trained denoiser can purify the perturbation inside the adversarial samples. With the purified data and clean data, the classifier will obtained higher accuracy on this mixed dataset. Experiments are conducted on CIFAR10 and ImageNet-1K demonstrate the effectiveness of the proposed method. Claims And Evidence: The authors made several claims in the paper. However, I do think several claims lack clear evidence. For example, the paper states that minimizing distributional discrepancy can reduce the expected loss on adversarial examples (AEs). This is quite intuitive, however, there is no evidence to support this claim, either from theoretical aspect or the empirical aspect. Methods And Evaluation Criteria: The proposed method is a two-stage method, whose idea is to regard the test data includes both clean samples and adversarial samples. The method firstly identifies the adversarial samples within the dataset, then purifies these adversarial samples. For the detector of adversarial samples, the authors propose to this detector by maximising its output of MMD distance between clean samples and adversarial samples. The detector then acts as a guidance for the denoiser network of adversarial samples to make the denoised data has smaller MMD distance with the clean data. This whole pipeline is clear motivated and makes sense. Theoretical Claims: I do not check the theorem in the paper very carefully. For me, the contribution mainly lies in the proposed pipeline. 
The theoretical part is just an auxiliary to improve the soundness of the motivation from a theoretical aspect. Experimental Designs Or Analyses: The experimental part follows common practice for adversarial defence problems in the field, which is to evaluate the proposed method on the small-scale CIFAR10 dataset and the large-scale ImageNet-1K dataset. However, I think the compared methods are somewhat out-of-date since most of them were published at conferences at least two years ago. More recent methods need to be added for comparison. For the ablation study about the ratio of AEs within the mixed dataset, I am wondering why the curves of the other compared methods are not plotted in Figure 2. Supplementary Material: I have reviewed the supplementary material. Most of the supplementary material consists of supplementary experiments on the proposed method. Relation To Broader Scientific Literature: This paper may have potential to benefit researchers in the field of adversarial defence and adversarial purification. Essential References Not Discussed: I do not have more references to suggest. Other Strengths And Weaknesses: Weaknesses: * The authors highlight the effectiveness of using distributional discrepancy to indicate clean samples and adversarial samples. However, there is no clear evidence to support this point. Additionally, since the authors choose to use MMD to measure the discrepancies, there is a need to discuss or compare with other distributional discrepancies. * In this paper, the authors assume that we have access to a set of labeled clean data and adversarial data. Given these labeled data, the authors are able to train the MMD-OPT to act as a guidance to train the denoiser in the next step. However, this assumption is not satisfied in the plain adversarial defence setting. * The experimental part lacks an important ablation study, namely on the effectiveness of the two terms in Eq. (8) respectively. As indicated in line 272, $\alpha$ is set to 0.01, which is a quite small value. How much does the cross-entropy constraint affect the performance of the trained denoiser? Other Comments Or Suggestions: I do not have further comments. Please refer to the weaknesses and questions parts. Questions For Authors: A question came up to me after I read the paper. An adversarial sample can be viewed as adding some invisible perturbation $\epsilon$ to a clean image so that the perturbed image can cheat the classifier with no clear change from the visual perspective. While the perturbation in adversarial images is quite minor, the distance between the distributions of AEs and CEs might not be too large. However, as suggested by the authors, it seems that after projecting AEs and CEs into the Hilbert space, the distance between these two projected distributions becomes larger. Can this phenomenon also be observed in other latent spaces? Moreover, what if we add perturbation in the Hilbert space of CEs to form the adversarial samples? Can MMD still be used to indicate CEs and AEs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

## 1. Evidence of 'Minimizing distributional discrepancy can reduce the expected loss on AEs.'

In our paper, we derive a theoretical bound to support our claim, i.e., $R(h, f_\mathcal{A}, \mathcal{D_A}) \leq R(h, f_\mathcal{C}, \mathcal{D_C}) + d_1(\mathcal{D_C}, \mathcal{D_A})$. In previous literature [1] [2], the upper bound of the risk on the target domain is **always** bounded by one extra constant, e.g., $R(h, f_\mathcal{A}, \mathcal{D_A}) \leq R(h, f_\mathcal{C}, \mathcal{D_C}) + d_1(\mathcal{D_C}, \mathcal{D_A}) + C$. If $C$ is large, minimizing $d_1(\mathcal{D_C}, \mathcal{D_A})$ can hardly reduce $R(h, f_\mathcal{A}, \mathcal{D_A})$. In contrast, we derive an upper bound **without any extra constant C**, which means minimizing $d_1(\mathcal{D_C}, \mathcal{D_A})$ can more effectively reduce $R(h, f_\mathcal{A}, \mathcal{D_A})$. This is a **major contribution** of our work. We will clarify this more in the updated version of our paper!

[1] Domain adaptation: Learning bounds and algorithms.
[2] A theory of learning from different domains.

## 2. Recent SOTA Baseline Comparison

We compare several **most recent SOTA defense methods** on RobustBench with DDAD. We will include the results in the updated version of our paper.

Table 1: Clean and robust accuracy (%) of recent SOTA methods on CIFAR-10. We show the most successful defense in **bold**. [3]* uses an ensemble of networks (ResNet-152 + WRN-70-16) and 50M synthetic images.

|Method|Clean|AutoAttack|Avg|
|-|-|-|-|
|WRN-28-10||||
|[1]|92.16|67.73|79.95|
|[2]|92.44|67.31|79.88|
|DDAD|**94.16**|**72.21**|83.19|
|WRN-70-16||||
|[2]|93.25|70.69|81.97|
|[3]*|**95.19**|69.71|82.45|
|DDAD|93.91|**72.58**|**83.25**|

[1] Decoupled Kullback-Leibler Divergence Loss, NeurIPS 2024
[2] Better Diffusion Models Further Improve Adversarial Training, ICML 2023
[3] MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers, TMLR 2024

## 3. Plot Other Methods in Figure 2

Thank you for your suggestion! The main reason is that including all baseline methods would make Figure 2 look **very messy**. Therefore, we use a table (please see Table 5 in our paper) in Appendix E.3 to clearly compare our method with baseline methods under different proportions of AEs in a batch.

## 4. Discuss Other Distributional Discrepancy Metrics

We discuss two representative statistical alternatives to MMD here: **Wasserstein distance** and **energy distance**.

- Compared to Wasserstein distance, MMD has two major advantages:
  - The estimator of MMD is **unbiased**, while the estimator of Wasserstein distance is **biased**. Therefore the MMD estimator is **more accurate** than the Wasserstein estimator, especially when the dimension of the data is large.
  - Wasserstein distance requires solving a transportation problem, which is **computationally more expensive than MMD**, meaning that using Wasserstein distance will be slow for large datasets.
- Compared to energy distance, **MMD offers greater flexibility** due to its use of kernel functions, which allow it to capture intricate differences between distributions and adapt to various tasks by selecting appropriate kernels. It is particularly well-suited for high-dimensional data, as its reliance on embeddings in a reproducing kernel Hilbert space **mitigates issues like the "curse of dimensionality" that can affect energy distance.** Moreover, MMD is sensitive to higher-order statistics of distributions, enabling it to **capture subtle discrepancies beyond mean and variance**.

We will add a short section to discuss these statistical measurements in the updated version of our paper!

## 5. Assumptions for Training Setting

To train the MMD-OPT, we only require access to the **clean training data**. AEs used for MMD-OPT are generated based on the clean training data.
This assumption is reasonable and commonly used in adversarial training-based methods and other defenses requiring AEs during training.

## 6. Ablation study of $\alpha$ in Eq.(8)

Increasing $\alpha$ in Eq.(8) means focusing less on the effect of MMD-OPT, and thus the robust accuracy will decrease, as supported by our theoretical analysis. We will include the results in the updated version of our paper!

Table 2: Our clean and robust accuracy (%) under different $\alpha$ against adaptive white-box PGD+EOT $\ell_\infty (\epsilon = 8/255)$ on CIFAR-10.

|$\alpha$|Clean|PGD+EOT|
|---|---|---|
|0.01|94.16|67.53|
|0.05|94.16|50.70|
|0.1|94.16|45.35|

## 7. AEs in the Latent Space

Thank you for your insightful question! The philosophy is grounded in the findings of [1]: Regardless of the type of latent space (Hilbert space or otherwise), if adding perturbations into the latent representation of CEs does not increase the MMD value significantly, it indicates minimal distributional discrepancy between AEs and CEs. Consequently, these AEs can hardly deceive classifiers.

[1] Maximum Mean Discrepancy Test is Aware of Adversarial Attacks, ICML 2021

---

Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal. My concerns have been resolved. Therefore, I am raising my score to 4.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 3Zsd, We are glad to hear that your concerns have been addressed! Many thanks for your reply and increasing your score to 4: Accept! We want to thank you again for providing this valuable feedback to us. Your support would definitely play a crucial role for our paper. Best regards, Authors of Submission14032
Summary: The paper introduces Distributional-Discrepancy-based Adversarial Defense (DDAD), a two-pronged approach that leverages Maximum Mean Discrepancy (MMD) to detect adversarial examples (AEs) and a denoiser to restore them. The paper provides a theoretical justification linking distributional discrepancy minimization to a reduction in expected adversarial loss. Then the MMD-OPT is used for training the detector and denoiser. Extensive experiments on CIFAR-10 and ImageNet-1K demonstrate that DDAD achieves higher clean and robust accuracy than state-of-the-art (SOTA) adversarial defense methods. Claims And Evidence: **CLAIM 1**: The paper proposed a novel DDAD method. * EVIDENCE 1: The paper introduces DDAD, which integrates Maximum Mean Discrepancy (MMD) for adversarial detection and a denoiser for adversarial purification. The method is novel in that it does not discard detected adversarial examples but instead attempts to restore them using a denoiser. The authors provide a clear methodology, including the training and inference process, as well as a theoretical justification for minimizing distributional discrepancy to reduce adversarial risk (Sections 3 and 4). * CONCERN 1: While the combination of MMD-based detection with a denoiser is a meaningful extension, the novelty might be incremental given existing adversarial purification techniques. The authors compare their method to adversarial training (AT) and adversarial purification (AP), but they do not extensively benchmark against prior hybrid methods. Although the authors claim that the previous two-pronged method, MagNet (Meng& Chen, 2017), is outdated, it is not fair to exclude them. Without the comparison with MagNet (Meng & Chen, 2017), the advance in the landscape of two-pronged methods is not clear. **CLAIM 2**: DDAD can improve clean and robust accuracy by a notable margin against well-designed adaptive white-box attacks. 
* EVIDENCE 2: The paper provides experimental results in Tables 1 and 2, showing that DDAD outperforms state-of-the-art adversarial training and purification methods in both clean and robust accuracy under adaptive white-box attacks (PGD+EOT and AutoAttack). The results are consistent across different architectures and datasets (CIFAR-10 and ImageNet-1K). Additionally, the ablation studies (Section 5.4) indicate that both MMD-OPT and the denoiser contribute to the improved performance. * CONCERN 2: The paper claims to evaluate against adaptive white-box attacks, but it primarily uses existing attack frameworks (PGD+EOT and AutoAttack). The authors implement an “adaptive” attack specific to DDAD (Algorithm 3), but it is unclear if this attack is truly optimized to break the defense. A stronger evaluation would involve gradient obfuscation checks and testing stronger adaptive attacks, such as AutoAttack variants specifically tuned against detection-based methods. Without this, the robustness improvements could be overestimated. **CLAIM 3**: DDAD can generalize well against unseen transfer attacks (see Section 5.3). * EVIDENCE 3: The paper presents results in Table 4, showing that DDAD-trained models maintain high robust accuracy against transfer attacks from different model architectures (e.g., WideResNet-28-10 -> ResNet-50, Swin Transformer). The experiments use PGD+EOT l_inf and C&W l2 attacks with various perturbation budgets, demonstrating DDAD’s transferability. * CONCERN 3: The paper does not compare DDAD’s transfer robustness against other methods—while the absolute numbers are reported, it is unclear if DDAD generalizes better than baseline adversarial purification or training methods in the same setting. A fairer evaluation would compare relative transfer robustness across different defenses. Methods And Evaluation Criteria: **Method**: MMD-based detection aligns well with the problem of distinguishing adversarial vs. clean distributions. 
Unlike traditional SADD-based methods that discard adversarial examples, DDAD attempts to recover adversarial examples, which could preserve useful information. The theoretical justification (Section 3) provides a formal grounding for the method by linking distributional discrepancy minimization to reducing adversarial risk. **Evaluation**: The authors evaluate DDAD on CIFAR-10 and ImageNet-1K, two well-established image classification benchmarks in adversarial robustness research, using standard metrics (clean and adversarial accuracy). The comparison includes state-of-the-art adversarial training (AT) and adversarial purification (AP) methods. Theoretical Claims: No. Experimental Designs Or Analyses: The benchmark used CIFAR-10 and ImageNet-1K which are widely adopted in the area. The method is compared to multiple methods but the two-pronged method is ignored, which is my major concern. Supplementary Material: No. Relation To Broader Scientific Literature: The paper builds on prior work in adversarial detection, purification, and theoretical robustness analysis by proposing DDAD, a hybrid approach that detects and denoises adversarial examples instead of discarding them. It extends statistical adversarial detection methods (e.g., MMD-based detection) by introducing MMD-OPT, an optimized discrepancy measure for more effective adversarial differentiation. Unlike prior adversarial purification methods that rely on generative models (e.g., GANs, diffusion models), DDAD uses distributional discrepancy minimization for adversarial recovery, improving computational efficiency. The paper also provides theoretical insights, linking distributional discrepancy reduction to adversarial risk minimization, refining previous bounds in domain adaptation and adversarial training. These advancements position DDAD as a novel, theoretically grounded, and empirically validated defense mechanism. Essential References Not Discussed: Not found. 
Other Strengths And Weaknesses: Strength: * The authors gave an in-depth understanding of the success of the proposed DDAD method. Weakness * (minor) The contributions of the paper are not easy to capture from the introduction. What is the novelty of the theoretical analysis? Existing theoretical insights are not revisited leaving the novelty unclear. I tried to summarize my understanding but the points are not clear from the paper. Other Comments Or Suggestions: In the abstract, the high-level concept of MMD-OPT is not clearly introduced, causing some confusion. Is it a loss function? Questions For Authors: * Are there other major methodological and theoretical contributions in the paper except the proposed DDAD method? What is the major novelty in the DDAD compared to previous methods? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

## 1. Comparison to MagNet

We acknowledge that MagNet [1] is a very good work. The main reasons we did not include MagNet as a baseline are: (1) MagNet is outdated since it was published 8 years ago, and (2) MagNet cannot defend against adaptive attacks, placing it significantly behind current SOTA methods. Please kindly check Table 1.

Table 1: Clean and robust accuracy (%) against adaptive white-box AutoAttack and PGD+EOT attacks on CIFAR-10. We show the most successful defense in **bold**.

|Method|Clean|AutoAttack|PGD+EOT|
|-|-|-|-|
|WRN-28-10||||
|MagNet|84.53|0.00|0.00|
|DDAD|**94.16**|**72.21**|**67.53**|

[1] MagNet: A Two-pronged Defense against Adversarial Examples.

## 2. Novelty of Theoretical Analysis

The theoretical bound we derive is **tighter** than previous work [1] [2]. Previously, the upper bound of the risk on the target domain is **always** bounded by one extra constant, e.g., $R(h, f_\mathcal{A}, \mathcal{D_A}) \leq R(h, f_\mathcal{C}, \mathcal{D_C}) + d_1(\mathcal{D_C}, \mathcal{D_A}) + C$. If $C$ is large, minimizing $d_1(\mathcal{D_C}, \mathcal{D_A})$ can hardly reduce $R(h, f_\mathcal{A}, \mathcal{D_A})$. In contrast, we derive an upper bound **without any extra constant C**, which means minimizing $d_1(\mathcal{D_C}, \mathcal{D_A})$ can more effectively reduce $R(h, f_\mathcal{A}, \mathcal{D_A})$. This is a **major novelty** of our theoretical analysis.

[1] Domain adaptation: Learning bounds and algorithms.
[2] A theory of learning from different domains.

## 3. Novelty of DDAD

In our humble opinion, our proposed method is significantly different from existing SOTA AP methods. We summarize the novelty of DDAD from several key perspectives:

- **Philosophically**, DDAD focuses on minimizing distributional discrepancies, which fundamentally differs from existing AP methods relying primarily on density estimation.
- **Theoretically**, we derive a tighter and more informative theoretical bound to support the design of DDAD, which is a major contribution of our work. - In terms of **training efficiency**, precise density estimation typically requires powerful models (e.g., diffusion models) that are computationally intensive and time-consuming to train. In contrast, learning distributional discrepancies is inherently simpler and more feasible, making DDAD both effective and efficient. - In terms of **inference efficiency**, diffusion-based AP methods suffer from slow inference speeds due to repeated calls to the forward process of diffusion models. Our method, however, achieves significant performance improvements without compromising inference speed. ## 4. Adaptive Attack According to [1], PGD+EOT is considered the **strongest** attack against diffusion-based AP methods, and AutoAttack is **strongest** against AT-based methods. Our proposed adaptive attacks (both PGD+EOT and AutoAttack) additionally target DDAD's detection mechanism (i.e., MMD-OPT), and adaptive PGD+EOT proves **most effective** in breaking DDAD. For **gradient obfuscation checks**, DDAD achieves the **best** average performance against adaptive BPDA+EOT attack across various baselines (**see Table 3 in our paper**). Also, please kindly check the results against adaptive AutoAttack in **Section 2 & 3 of Reviewer 3bTb's responses**. [1] Robust Evaluation of Diffusion-based Adversarial Purification. ## 5. Transferability Across Different Defenses - Thank you for your concern! We would like to clarify that the purpose of conducting transferability experiments is not to claim SOTA performance, but rather to ensure our method maintains good robustness against unseen attacks, as it relies on AEs to train MMD-OPT and the denoiser. Table 4 demonstrates our method's strong transferability across different attacks, architectures, and perturbation budgets. 
- Due to the time-consuming nature of these baseline experiments, we will upload the corresponding results once completed. In the meantime, we provide some intuitions here: AT-based methods are expected to exhibit weaker transferability, as they encounter specific AEs during training [1], while AP-based methods are likely to demonstrate better transferability since they are independent of AE types [2].

[1] Geometry-aware Instance-reweighted Adversarial Training.
[2] Diffusion Models for Adversarial Purification.

## 6. Clarification of MMD-OPT

MMD-OPT builds upon MMD, which has a learnable kernel $k_w$. We can obtain an optimized $k_w$ (we denote it as $k^*_w$) by maximizing Eq.(5) in our paper. Then, MMD-OPT is the MMD estimator with the optimized kernel $k^*_w$. During training, MMD-OPT serves as an objective to update the denoiser's parameters. During inference, MMD-OPT serves as a metric to measure the distributional discrepancy between two batches. We will clarify it in the updated version of our paper!

---

Thank you for your time! If our response has resolved your major concerns, we would appreciate a higher score :)
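The two roles of MMD-OPT described in the clarification above (kernel optimization at training time, discrepancy measurement at inference time) can be sketched crudely in Python. This is not the paper's MMD-OPT: maximizing Eq.(5) over a learnable kernel $k_w$ is replaced here by a grid search over an RBF bandwidth, the estimator is the simple biased one, and all names are illustrative assumptions.

```python
import numpy as np

def mmd_sq(x, y, gamma):
    """Biased MMD^2 estimate with an RBF kernel of bandwidth parameter gamma."""
    k = lambda a, b: np.exp(-gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def fit_mmd_opt(clean, adv, gammas=(0.01, 0.1, 1.0, 10.0)):
    """'Train' the discrepancy measure: pick the kernel parameter that
    maximizes MMD between clean and adversarial training batches
    (a crude stand-in for optimizing the learnable kernel k_w)."""
    best = max(gammas, key=lambda g: mmd_sq(clean, adv, g))
    # Return an estimator with the selected kernel, usable as a training
    # objective for the denoiser or as an inference-time metric.
    return lambda x, y: mmd_sq(x, y, best)
```

The returned callable plays both roles: minimized during denoiser training, and compared against a threshold during inference.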
ICLShield: Exploring and Mitigating In-Context Learning Backdoor Attacks
Accept (poster)
Summary: This paper focuses on the backdoor threat of LLMs from the perspective of in-context learning (ICL). By theoretically analyzing this threat as a two-fold learning mechanism, this paper further proposes an effective defense method against such attacks. ## update after rebuttal Thanks for your rebuttal. My concerns are addressed. Claims And Evidence: Overall correct. See the discussions below. Methods And Evaluation Criteria: As justified by the theoretical analysis, the proposed defense ICLShield is plausible against ICL backdoor attacks. Theoretical Claims: The overall theoretical formulation is clear, but some details are missing. For example, what is the optimization goal in eq (3)? Are $y_i$ and $y_j$ different or the same? Experimental Designs Or Analyses: The evaluation is comprehensive enough, covering multiple attacks and models, and includes many in-depth analyses. Supplementary Material: Yes Relation To Broader Scientific Literature: This paper shows good potential for a deeper understanding of LLM safety. Essential References Not Discussed: The theoretical analysis through latent concepts is similar to a theoretical analysis of ICL-based jailbreaking attacks (through harmful/safe distribution decoupling) [1], and the connection can be further discussed. [1] Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations https://arxiv.org/pdf/2310.06387 Other Strengths And Weaknesses: Code is provided. Other Comments Or Suggestions: The running title *Submission and Formatting Instructions for ICML 2025* was not changed. Questions For Authors: See above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** Are $\mathbf{y}_i$ and $\mathbf{y}_j$ different or the same? **A1:** Thank you for pointing out our error. In Eq.2, using the clean label $\mathbf{y}_j$ for the poisoned input is incorrect; this should be replaced with the attack target $y_t$. We will address this issue in the revision. For ICL backdoor attacks, the attack target can be the clean label of poisoned inputs; for example, in SST-2, where the attack target is negative, the attacker only adds triggers when the label is negative; or it can be an incorrect answer, such as for targeted refusal, where the attack target is a refusal response like “I'm sorry I can't answer that.” **Q2:** What is the optimization goal in eq (3)? **A2:** Thank you for pointing out that the optimization goal in Eq.3 might cause confusion. In this equation, $\mathbf{x}$ denotes a clean test instance while $\hat{\mathbf{x}}$ is its poisoned version. $\mathbf{y}_{gt}$ denotes the ground-truth output of this test instance while $\mathbf{y}_t$ denotes the attack target. The backdoor attack objective in Eq.3 is to find the poisoned demonstrations that maximize the probability of predicting the ground-truth output when the input does not contain the trigger, while maximizing the probability of predicting the backdoor target when the input contains the trigger.
However, their analysis does not consider how the content or properties of individual examples influence attack or defense effectiveness. As a result, both attack and defense examples in [1] are selected randomly. In contrast, our theoretical analysis goes one step further. We not only confirm that increasing clean examples reduces attack success but also identify which clean examples are most effective by analyzing the upper bound of the attack success rate. Specifically, we introduce three key factors, i.e. number, similarity to the trigger, and confidence, that guide a more targeted and efficient defense strategy. We appreciate your suggestion and will include a discussion of [1] in the related work section of our camera-ready version. **Q4:** Regarding title issue. **A4:** Thank you very much for pointing out the error. We will make corrections in the camera-ready version.
Summary: This paper addresses the vulnerability of in-context learning (ICL) in large language models (LLMs) to backdoor attacks, where adversaries manipulate model behavior by poisoning ICL demonstrations. The authors propose the dual-learning hypothesis, positing that LLMs simultaneously learn task-relevant and backdoor latent concepts from poisoned demonstrations. They derive an upper bound for backdoor attack success, governed by the concept preference ratio (task vs. backdoor posterior probabilities). Based on this, they introduce ICLShield, a defense mechanism that dynamically adjusts the ratio by selecting clean demonstrations via confidence and similarity metrics. Experiments across 11 LLMs and diverse tasks demonstrate state-of-the-art defense performance (+26.02% improvement over baselines), with adaptability to closed-source models like GPT-4. Claims And Evidence: Claims are supported by clear evidence. - LLMs simultaneously learn both task and backdoor concepts, which influence the output probability. - Well illustrated in 4.1 with three supporting references. - The vulnerability of the ICL backdoor effect is dominated by the concept preference ratio. - Well-formulated and derived in 4.2 and 4.3. Methods And Evaluation Criteria: **Method**: - The dual-learning hypothesis is intuitive and well-formulated. - However, the assumption of the known poisoned demonstrations is too strong for the defender. In section 5.2, the proposed defense requires the selection of clean and poisoned demonstrations. If we know the poisoned demonstrations in ICL and can control the numbers of clean/poisoned examples, why not directly eliminate the harmful ones? **Evaluation**: - The evaluation metrics on CA and ASR are considered sufficient for evaluating defense performance. Theoretical Claims: The theoretical claims are well-derived and convincing in section 4. Experimental Designs Or Analyses: Pros: - The experiments across various tasks, defenses, datasets, and models are convincing. 
Supplementary Material: The experimental support and proofs in the supplementary are strongly convincing. Relation To Broader Scientific Literature: The relation to the literature is well illustrated in the Related Work section and Theoretical Analysis section. Essential References Not Discussed: None. Other Strengths And Weaknesses: Pros: - The paper is well written and easy to read. - The discussed ICL backdoor problem is new, and the method is the first defense work for it. Cons: - See the *Methods And Evaluation Criteria* above. Other Comments Or Suggestions: There may be a typo in Figure 2: *More trigger similar example* on the upper right, but it seems it should be *less*. Questions For Authors: It can be considered a good paper as a whole, with clear illustrations/proofs/results visualization and adequate experiments. However, my main concern lies in the basic setting and practical scenario of the defense. I may raise the score if this concern can be addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1:** However, the assumption of the known poisoned demonstrations is too strong for the defender. In section 5.2, the proposed defense requires the selection of clean and poisoned demonstrations. If we know the poisoned demonstrations in ICL and can control the numbers of clean/poisoned examples, why not directly eliminate the harmful ones? **A1:** Thank you for pointing this out. We would like to clarify that our method does not assume prior knowledge of which examples are poisoned. We apologize if the wording in Section 5.2 caused confusion. In our setting, a "poisoned demonstration" refers to a prompt that may contain both clean and poisoned examples. However, our defense does not require identifying or removing the poisoned ones. Instead, we enhance the demonstration by selecting and adding clean examples that have high semantic similarity to the poisoned demonstration and high clean-label confidence conditioned on it. This selection process is guided by our theoretical analysis and aims to reduce the overall influence of potential poisoned examples. We do not control or manipulate the number of poisoned examples. The only requirement is access to a clean dataset. Considering that there are many available clean datasets, it is easy for us to obtain clean examples. We will revise the relevant wording in the camera-ready version. **Q2:** Concern about practical scenarios. **A2:** Thanks for pointing this out. We highlight two real-world scenarios where poisoned demonstrations in ICL may occur: 1) Agent systems. As discussed in [a], many autonomous agents construct ICL prompts dynamically to call LLM APIs. Attackers can tamper with internal memory or retrieved examples, injecting poisoned demonstrations without users noticing. 2) Shared prompt templates. As discussed in [b], prompt templates may be shared and reused across users to reduce the cost of designing prompts for the same task. 
Malicious contributors can embed poisoned examples into these templates to affect the results of LLMs. These cases show that users may not have full control over ICL content, and thus a test-time defense like ICLShield is necessary to ensure safe and correct outputs. We will clarify it in revision. [a] Liu et al. “Compromising LLM Driven Embodied Agents with Contextual Backdoor Attacks,” in IEEE Transactions on Information Forensics and Security, 2025. [b] Wang et al. "Wordflow: Social Prompt Engineering for Large Language Models," in ACL, 2024. **Q3:** There may be a typo in Figure 2, More trigger similar example on the upper-right, but it seems to be less. **A3:** Thank you for your careful reading of our paper and helping us reduce typos. Actually, our expression here is correct. The down arrow in the figure refers to reducing $P_M(\mathbf{y}_{gt} \mid \mathbf{x}, \mathbf{\theta}_2)$. As analyzed in Section 5.1, in order to reduce the probability of the ground-truth output under the attack latent, the clean example needs to contain content that is similar to the trigger. Therefore, we need more trigger similar examples. We realize that the reference in the figure may cause ambiguity, and we will modify it in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for your clear illustration. My concerns are addressed, and I will raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer b7Nb, We really appreciate your prompt feedback and the amount of time you have spent reviewing our paper! We sincerely thank you for your valuable comments and suggestions. The paper's quality has been greatly enhanced by your tremendous efforts. Appreciated! Best regards, Authors of submission 5403
Summary: The authors first use theoretical analysis to model the ICL backdoor attack success bound. Based on the formulation in the theory, the authors claim that more clean demonstrations with larger similarity to the trigger and higher confidence can diminish the attack success rate. Based on this observation, the paper proposes a novel defense method against in-context learning backdoor attacks, named ICLShield. Comprehensive experiments have demonstrated the state-of-the-art defense performance. Claims And Evidence: Yes, the claims made in the submission can be supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. Theoretical Claims: Yes, all the proofs in the appendix. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses. Supplementary Material: Yes, all the appendixes. Relation To Broader Scientific Literature: This paper mainly provides a theoretical analysis for the factors that may affect the in-context learning backdoor attack success rate. It provides a defense solution for the previous LLM threats of in-context backdoor attacks. Essential References Not Discussed: An essential reference is not included: [1] Mo, Wenjie, Jiashu Xu, Qin Liu, Jiongxiao Wang, Jun Yan, Chaowei Xiao, and Muhao Chen. Test-time backdoor mitigation for black-box large language models with defensive demonstrations. arXiv preprint arXiv:2311.09763 (2023). Though the reference utilizes in-context learning to mitigate training-time backdoor attacks, it applies similar methods with clean demonstrations and similarity-based selection. It even considers including an extra reasoning process. Normally, training-time backdoor attacks are stronger than in-context learning backdoors. I believe some defenses for training-time backdoors can be extended to in-context learning backdoor attacks. 
Other Strengths And Weaknesses: Strengths: Good theoretical analysis to motivate the defense method. Though some of the defense strategies seem trivial, the authors still model the attack bound and use equations to provide insights into it. Weakness: 1. Not enough experiments to demonstrate the effectiveness of the two selection methods. 2. Some fundamental motivations for in-context learning backdoor attacks and the corresponding defense are unclear. (Please see Questions for details.) Other Comments Or Suggestions: No other comments. Questions For Authors: 1. Could you please provide more experiments to demonstrate the effectiveness of similarity and confidence selection? Previous work [1] has already demonstrated that more clean demonstrations help mitigate backdoor attacks. I would suggest adding randomly selected clean demonstrations with the same shot number as one baseline in the main experiments. 2. Besides, if we treat in-context learning as similar to the fine-tuning process, adding more clean demonstrations is just like reducing the ratio of poisoned examples in training-time backdoor attacks. Thus, I think even randomly adding clean demonstrations could already significantly reduce the ASR. Please explain. 3. Lack of motivation for practical scenarios of in-context backdoor attacks and defenses. Normally, in-context learning is used to improve inference performance. Thus, the demonstrations should be totally controlled by users. What's the motivation for users to include poisoned demonstrations during in-context learning? 4. What's the advantage of your method compared with directly detecting or checking the poisoned demonstrations and removing them for in-context learning? Though ICLShield can reduce ASR, it still cannot totally remove the impacts of poisoned demonstrations. For detection-based methods, the poisoned demonstrations can be totally removed. 
[1] Mo, Wenjie, Jiashu Xu, Qin Liu, Jiongxiao Wang, Jun Yan, Chaowei Xiao, and Muhao Chen. Test-time backdoor mitigation for black-box large language models with defensive demonstrations. arXiv preprint arXiv:2311.09763 (2023). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Due to the space limitation, Table R1 and Table R2 are provided in https://anonymous.4open.science/r/ICML-Rebuttal-745C/. **Q1:** Comparison with [1]. **A1:** We have included the method proposed in [1]—random clean sample insertion, similarity-based retrieval, and self-reasoning—as baselines in our experiments on GPT-Neo-1.3B for both classification and generative tasks. **Experimental results** in Table R1 show that our method consistently outperforms [1] across all settings. Under the same shot number, our method reduces ASR by 16.12%, 20.75%, and 29.58% compared to [1]’s three strategies, demonstrating superior defensive performance. We believe [1]’s **limited effectiveness stems from its design motivation**. It assumes poisoned LLMs can be corrected through reasoning elicitation, but in ICL backdoor settings where demonstrations are already compromised, added clean examples can also be hijacked by poisoned behavior. For instance, even with self-reasoning, the model may learn to suppress reasoning and directly output the target label, especially in SST-2 and Targeted Refusal. While both methods mitigate backdoors by adding clean examples, **our motivation and methodology differ**. [1] is empirically driven, while our approach is theory-driven. We formally analyze ICL backdoor attacks and derive an upper bound on ASR, identifying three key factors: (1) number of clean examples, (2) similarity to the trigger, and (3) confidence in the correct label. Guided by this theory, our selection method achieves better performance and offers deeper insight into the mechanism of ICL backdoors. **Q2:** Effectiveness of our selection methods. **A2:** We compare our proposed similarity selection, confidence selection, and ICLShield with the random selection and similar sample selection in [1] on GPT-NEO-1.3B for classification and generative tasks. 
The experimental results in Table R1 show that using similarity selection alone reduces ASR by 8.04% and 12.49%, and only using confidence selection reduces ASR by 13.35% and 17.80%, compared to random and similar sample selection. ICLShield, which integrates both selection methods, provides superior defense, lowering ASR by 16.12% and 20.75% relative to the methods in [1]. These experimental results emphasize the effectiveness of our selection methods. **Q3:** Regarding random selection. **A3:** Our additional experiment, labeled 'repeat,' involved repeatedly adding the same clean example. As shown in Table R1, although this reduced the poisoning rate, it failed to improve trigger similarity or answering confidence, resulting in minimal defensive effect, particularly for SST-2 and Targeted Refusal, where ASR dropped only 1.99% and 2.95%. This suggests that the effectiveness of random selection arises not from dilution of poisoned data, but from the incidental increase in trigger similarity and confidence. However, such gains are limited. In contrast, our selection method explicitly optimizes both factors, enabling stronger defense. **Q4:** Regarding practical scenarios. **A4:** We highlight two practical scenarios, agent systems and shared prompt templates, where users lack full control over ICL content and need a test-time defense method like ICLShield for safe outputs. A more detailed introduction to these practical scenarios is provided in the **A2** response for **Reviewer b7Nb**. We will also clarify it in the revision. **Q5:** Comparison on backdoor detection. **A5:** We compare ICLShield with five representative backdoor detection methods across three categories: ONION[a] and AttDef[b] (abnormality-based), BDDR[c] and MDP[d] (masking-based), and PKAD[e] (model-mismatch-based), under a label-consistent attack on GPT-Neo-1.3B (SST-2). 
As shown in Table R2, ICLShield consistently achieves the largest ASR reductions (52.70%, 58.48%, 62.11%, 45.94%, and 56.22% compared to ONION, AttDef, BDDR, MDP, and PKAD) across all baselines. This is because prior methods rely on restrictive assumptions that do not hold in our setting: ONION assumes non-natural triggers, masking-based methods require noticeable output shifts, and PKAD relies on distributional discrepancies between clean and poisoned samples; none of these assumptions hold under natural, clean-label triggers. In contrast, ICLShield does not rely on model outputs or trigger characteristics, enabling robust defense across attack types, including challenging clean-label attacks where labels remain unchanged and triggers are linguistically natural. [a] ONION: A Simple and Effective Defense Against Textual Backdoor Attacks. EMNLP, 2021. [b] Defending pre-trained language models as few-shot learners against backdoor attacks. NeurIPS, 2024. [c] BDDR: An effective defense against textual backdoor attacks. Computers & Security, 2021. [d] Defending against Insertion-based Textual Backdoor Attacks via Attribution. ACL, 2023. [e] PKAD: Pretrained Knowledge is All You Need to Detect and Mitigate Textual Backdoor Attacks. EMNLP, 2024.
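The two selection criteria recurring throughout this thread (similarity to the possibly poisoned prompt, and the model's clean-label confidence) can be sketched as a simple scoring rule. The embeddings, confidences, and the equal weighting `alpha` below are hypothetical stand-ins for illustration, not ICLShield's actual implementation.

```python
import numpy as np

def select_clean_examples(clean_embs, clean_confs, prompt_emb, k=3, alpha=0.5):
    """Rank candidate clean demonstrations by a weighted sum of
    (1) cosine similarity to the (possibly poisoned) prompt and
    (2) the model's confidence in each candidate's clean label,
    and return the indices of the top-k candidates."""
    sims = clean_embs @ prompt_emb / (
        np.linalg.norm(clean_embs, axis=1) * np.linalg.norm(prompt_emb) + 1e-9)
    scores = alpha * sims + (1.0 - alpha) * clean_confs
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
embs = rng.normal(size=(10, 16))   # hypothetical embeddings of clean candidates
confs = rng.uniform(size=10)       # hypothetical clean-label confidences
prompt = rng.normal(size=16)       # embedding of the incoming ICL prompt
chosen = select_clean_examples(embs, confs, prompt, k=3)
```

The selected clean examples would then be appended to the ICL prompt, shifting the concept preference ratio toward the task concept as the theoretical analysis suggests.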
Generalized Category Discovery via Reciprocal Learning and Class-Wise Distribution Regularization
Accept (poster)
Summary: This paper aims to solve Generalized Category Discovery (GCD). This task seeks to discover both known and novel categories from unlabeled data, leveraging another labeled dataset with only known categories. Different from previous works that mainly aim to boost model performance on novel categories, this work mainly aims to improve model performance on known categories. To do so, the authors introduce an auxiliary branch with a trainable token to distill known categories for the main branch. Furthermore, the authors introduce Class-wise Distribution Regularization (CDR) to mitigate the learning bias toward known categories. Experiments on widely used benchmarks validate the effectiveness of the proposed method. Claims And Evidence: 1. The claim that previous methods struggle with identifying known categories is validated through experiments in Fig. 3. 2. The proposed method aims to boost model performance on known categories; however, the performance gain for known categories is not obvious on some benchmarks (e.g., CIFAR-10 and CIFAR-100). The authors should explain the reason for this phenomenon. Methods And Evaluation Criteria: 1. For the method, adding an extra branch to provide additional self-supervised signals for known categories is intuitive. However, the authors should explain how to guarantee the quality of pseudo labels from the additional branch. So experiments about the accuracy of the pseudo labels from the additional branch should be conducted. 2. For the evaluation, the used benchmarks and evaluation criteria are representative. 3. What's the difference between the proposed 'base class accuracy' and the 'base' in the result tables? Theoretical Claims: Theorem 3.1 is from a previous work and Theorem 3.2 is provided with a detailed proof. Experimental Designs Or Analyses: The experimental designs and analyses can validate the effectiveness of the proposed method. 
Supplementary Material: Yes, it mainly includes proofs and additional experimental results. Relation To Broader Scientific Literature: This paper investigates GCD from a different perspective by investigating model performance on known categories. Essential References Not Discussed: Missing reference: [1] Flipped classroom: Aligning teacher attention with student in generalized category discovery. NeurIPS 2024 [2] Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. NeurIPS 2024 Other Strengths And Weaknesses: The proposed model is novel and the experiments validate the effectiveness of the proposed method. Other Comments Or Suggestions: 1. I suggest the authors move Fig. 3 forward; it is currently far from the claim in Line 062. Questions For Authors: 1. Experiments now assume half of the samples from known categories are labeled; what about model performance with fewer labeled samples (e.g., 25% of samples from known categories are labeled)? And what about model performance with more novel categories (e.g., 75% of categories are assumed to be novel)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** Experiments about the accuracy of the pseudo labels from the additional branch should be conducted. **A1:** Thank you for this valuable suggestion. We have conducted a comprehensive analysis of pseudo-label accuracy across all datasets, comparing our auxiliary branch (AUX), main branch (CLS), and the SimGCD baseline: | | CIFAR10 | CIFAR100 | ImageNet100 | CUB200 | SCars | Aircraft | Herbarium19 | | --- | --- | --- | --- | --- | --- | --- | --- | | SimGCD | 98.4 | 83.6 | 95.4 | 80.5 | 80.7 | 72.8 | 68.6 | | RLCD (CLS) | 98.4 | 86.3 | 95.6 | 86.9 | 90.2 | 75.5 | 76.3 | | RLCD (AUX) | **98.6** | **87.1** | **96.2** | **88.3** | **91.4** | **75.9** | **76.4** | These results demonstrate that our auxiliary branch consistently produces **more accurate pseudo-labels** across all datasets, with particularly significant improvements on fine-grained datasets (CUB200: +7.8%, SCars: +10.7%). We will incorporate the analysis into our revision. **Q2:** What's the difference between the proposed 'base class accuracy' and the 'base' in the result tables. **A2:** Thank you for the insightful comment. The "oracle base accuracy" (OB) metric refers to pseudo-label accuracy of base class prediction. OB only considers the base class logits and excludes novel class prediction, providing a direct assessment of the model's discrimination of base classes. The "Base" column in our results tables reports the standard clustering accuracy on base classes after applying the Hungarian algorithm for label assignment. **Q3:** The performance gain for known categories is not obvious on some benchmarks (e.g., CIFAR-10 and CIFAR-100). **A3:** Thank you for this insightful observation. The varying performance gains across datasets can be attributed to several factors: - **Dataset Complexity of CIFAR-10**: CIFAR-10 is a simple classification task with near-optimal existing results (>97%), leading to a ceiling effect. 
Its 10 coarse-grained classes also yield high prediction confidence, limiting the impact of our CDR component. - **Methodological Differences of CIFAR-100**: While our approach falls slightly behind CMS on CIFAR-100 base classes, CMS employs a fundamentally different paradigm (pretraining and clustering) without an all-class parametric classifier. This design choice naturally favors base class performance but limits novel class generalization, where our method demonstrates superior results. We would like to highlight that RLCD is designed to achieve balanced performance across both base and novel classes rather than optimizing for base classes alone. The above discussion will be included in our revision. **Q4:** What about model performance with fewer labeled samples (e.g., 25% of samples from known categories are labeled)? And what about model performance with more novel categories (e.g., 75% of categories are assumed to be novel)? **A4:** Thank you for this excellent suggestion. We have conducted extensive experiments varying both the proportion of labeled samples and the ratio of novel categories on the CUB200 dataset: **Varying Labeled Sample Proportion:** | | CUB (50% labeled) | | | CUB (25% labeled) | | | CUB (10% labeled) | | | | --- | ---: | --- | :--- | ---: | --- | :--- | ---: | --- | :--- | | | All | Base | Novel | All | Base | Novel | All | Base | Novel | | SimGCD | 60.3 | 65.6 | 57.7 | 51.0 | 52.8 | 49.7 | 34.6 | 31.8 | 37.3 | | RLCD | **70.0** | **79.1** | **65.4** | **63.8** | **67.5** | **61.1** | **46.7** | **45.3** | **48.0** | | Improvement | +9.7 | +13.5 | +7.7 | +12.8 | +14.7 | +11.4 | +12.1 | +13.5 | +10.7 | **Varying Novel Category Proportion:** | | CUB (50% novel) | | | CUB (60% novel) | | | CUB (75% novel) | | | | --- | ---: | --- | :--- | ---: | --- | :--- | ---: | --- | :--- | | | All | Base | Novel | All | Base | Novel | All | Base | Novel | | SimGCD | 60.3 | 65.6 | 57.7 | 56.2 | 65.1 | 53.3 | 51.8 | 68.3 | 49.1 | | RLCD | **70.0** | 
**79.1** | **65.4** | **62.5** | **75.6** | **58.1** | **56.8** | **78.1** | **53.2** | | Improvement | +9.7 | +13.5 | +7.7 | +6.3 | +10.5 | +4.8 | +5.0 | +9.8 | +4.1 | These results reveal several important insights: - **Label Efficiency**: Reducing labeled data degrades performance, yet our RLCD maintains substantial performance advantages even with severely limited labeled data (10% labeled), demonstrating its superiority in low-resource scenarios. - **Scalability to Novel Classes**: As the proportion of novel classes increases, our method still outperforms SimGCD across all metrics, though the margin narrows for novel classes. This is expected as the clustering problem becomes more challenging with more novel classes. The strong performance on base classes is sustained primarily due to the limited class number. **Q5:** Missing literatures: FlipClass (NeurIPS 2024), Happy (NeurIPS 2024). **A5:** Thank you for bringing these works to our attention. We will incorporate the suggested references in our revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for your responses, I have increased my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your recognition of our work and the improved rating.
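For readers unfamiliar with the "Base" metric described in A2 above (clustering accuracy after Hungarian matching between predicted clusters and ground-truth classes), it can be sketched generically as follows. This uses SciPy's `linear_sum_assignment` on toy labels; it is a standard illustration, not the authors' evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    # Build a contingency matrix counting (cluster, class) co-occurrences,
    # find the cluster-to-class mapping that maximizes agreement via the
    # Hungarian algorithm, then score the matched assignment.
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    rows, cols = linear_sum_assignment(-cost)  # negate to maximize matches
    return cost[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])  # clusters permuted relative to labels
acc = clustering_accuracy(y_true, y_pred)
```

Because the predicted clusters are only a permutation of the true labels here, the matched accuracy is 1.0, which is exactly why Hungarian matching is needed before comparing cluster IDs to class labels.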
Summary: This manuscript addresses the Generalized Category Discovery (GCD) task, which aims to classify unlabeled data containing both existing (base) and novel (unknown) classes. Existing parametric methods often compromise the discriminability of known classes in order to identify novel classes. To remedy this shortcoming, the authors propose a single-stage Reciprocal Learning Framework (RLF) and a Class-wise Distribution Regularization (CDR) approach. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: yes Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths**: ​ 1. The manuscript is well-written and logically coherent. ​ 2. The proposed reciprocal learning framework is simple and effective, introducing constraints on old/known classes in a way that enhances performance. **Weaknesses**: ​ 1. It lacks a comparison with the latest state-of-the-art method, FlipClass [1]. ​ 2. Because CDR penalizes distribution-level discrepancies between different views, the model tends to produce more balanced class predictions rather than heavily favoring any single class. If one class truly occupies a very small portion of the dataset, the model may naturally lean toward majority classes across several batches. Consequently, CDR might forcibly boost predictions for minority classes, leading to a certain degree of over-correction. In other words, it does not incorporate prior knowledge of strongly skewed class distributions; it merely aims to ensure sufficient agreement and sharpness in class-level predictions from two different views. In the presence of severely long-tailed or extremely imbalanced data, this can introduce errors. The authors do not analyze this scenario in detail. Other Comments Or Suggestions: None Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1.** It lacks a comparison with the latest state-of-the-art method, FlipClass [1]. **A1.** Thank you for highlighting this important work. We observed that FlipClass employs asymmetric augmentation and mixup techniques for performance enhancement from their supplementary materials. For a fair comparison, we incorporated similar advanced augmentation strategies into our approach, resulting in further improvements across datasets: | | CIFAR10 | CIFAR100 | ImageNet100 | CUB200 | SCars | Aircraft | Herbarium-19 | | --- | --- | --- | --- | --- | --- | --- | --- | | FlipClass | **98.5** | 85.2 | 86.7 | 71.3 | 63.1 | 59.3 | 46.3 | | RLCD | 98.2 | **85.6** | **87.2** | **72.5** | **65.7** | **62.2** | **47.5** | The results demonstrate that our method outperforms FlipClass on 6 out of 7 GCD datasets, with particularly significant improvements on fine-grained datasets (SCars: +2.6%, Aircraft: +2.9%). **Q2.** Because CDR penalizes distribution-level discrepancies between different views, the model tends to produce more balanced class predictions rather than heavily favoring any single class. If one class truly occupies a very small portion of the dataset, the model may naturally lean toward majority classes across several batches. Consequently, CDR might forcibly boost predictions for minority classes, leading to a certain degree of over-correction. In other words, it does not incorporate prior knowledge of strongly skewed class distributions; it merely aims to ensure sufficient agreement and sharpness in class-level predictions from two different views. In the presence of severely long-tailed or extremely imbalanced data, this can introduce errors. The authors do not analyze this scenario in detail. 
**A2.** We appreciate this thoughtful analysis of CDR and are pleased to address these concerns with both theoretical insights and empirical results: - **Implicit adaptive weighting in CDR:** In long-tailed distributions, majority classes typically yield higher-confidence predictions [2], leading to lower assigned probabilities for minority classes. According to class-wise distribution definition, $m_k=\frac{1}{\sum_{i=1}^N p_i^{(k)}}\left(\sum_{i=1}^N p_i^{(k)} p_i\right)$, this results in **reduced weighting** of high-confidence majority class samples when computing minority class distributions. Consequently, CDR naturally adjusts sample contributions, establishing an adaptive weighting mechanism that helps **mitigate over-correction**. - **Balanced learning via CDR and CE**: Our RLCD combines cross-entropy (CE) loss with CDR. While standard CE training in long-tailed settings often biases predictions toward majority classes [2], CDR counteracts this by promoting balanced learning across all classes, **mitigating excessive skew**. This is quantitatively demonstrated in Fig. 11, where our method reduces the Root Mean Square Error (RMSE) by 1.48 and 3.98 for base and novel classes, respectively, compared to the baseline. - **Strong performance in the long-tailed setting**: Our RLCD achieves state-of-the-art results on the Herbarium-19 dataset, which features a naturally long-tailed distribution. As reported in Table 1, RLCD surpasses prior methods by **1.3% overall** and **2.5% on base classes**, demonstrating robustness in such scenarios. Furthermore, we would like to emphasize that RLCD leverages both the Reciprocal Learning Framework (RLF) and CDR. As shown in Tables 1 and 2, the experimental results demonstrate its versatility across **generic**, **fine-grained**, and **long-tailed** scenarios, consistently delivering superior performance. [1] Lin H, An W, Wang J, et al. 
Flipped classroom: Aligning teacher attention with student in generalized category discovery. NeurIPS, 2024.\ [2] Menon A K, Jayasumana S, Rawat A S, et al. Long-tail learning via logit adjustment. ICLR, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed reply, which addressed all my concerns. Thus, I have increased my score. --- Reply to Comment 1.1.1: Comment: Thank you for your kind review and improved rating. We are glad our responses addressed your concerns.
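As a supplement to A2 above, the class-wise distribution definition can be sanity-checked numerically. The sketch below is illustrative (NumPy, with random softmax outputs standing in for model predictions) and verifies that each $m_k$ is itself a probability distribution formed by $p_i^{(k)}$-weighted averaging:

```python
import numpy as np

def class_wise_distributions(P):
    """m_k = (sum_i p_i^(k) * p_i) / (sum_i p_i^(k)) for each class k.

    P: (N, K) array of per-sample class probabilities p_i.
    Returns: (K, K) array whose k-th row is the class-wise distribution m_k.
    """
    weights = P / P.sum(axis=0, keepdims=True)  # normalized p_i^(k) per class
    return weights.T @ P                        # row k: weighted average of the p_i

# random softmax outputs stand in for model predictions
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 3))
P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
M = class_wise_distributions(P)
assert np.allclose(M.sum(axis=1), 1.0)  # each m_k is a valid distribution
```

Because the weights $p_i^{(k)}$ are normalized per class, high-confidence majority-class samples contribute little to a minority class's $m_k$, which is the adaptive-weighting effect described in A2.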
Summary: The paper proposes a novel approach for Generalized Category Discovery by introducing a Reciprocal Learning Framework and Class-wise Distribution Regularization. RLF enhances base class discrimination through an auxiliary branch that distills reliable base-class predictions to the main branch, while CDR mitigates learning bias toward base classes by enforcing consistency in class-wise probability distributions. Experiments on GCD benchmarks demonstrate state-of-the-art performance, with significant improvements in both base and novel class accuracy. Claims And Evidence: RLF improves base discrimination via cross-branch distillation. While ablation studies show gains, the mechanism of reliable soft labels from the auxiliary branch is not validated, and there is no analysis of pseudo-label accuracy or error rates. Methods And Evaluation Criteria: The auxiliary branch design of RLF is intuitive but under-explored and freezing the shared transformer block may limit flexibility. Benchmarks are standard, but the ACC metric alone is insufficient. Clustering metrics (NMI, ARI) and robustness to class imbalance are not reported. The assumption of known total class count $ |Y_u| $ is impractical; results with estimated $ K $ (Table 4) show significant performance drops but are not discussed thoroughly. Theoretical Claims: Theorem 3.2’s proof (Appendix A.1) correctly uses Cauchy-Schwarz but assumes $ m_k$ and $ m'_k $ are exactly aligned, which is unrealistic in practice. The link between CDR and "boosting novel performance" (Sec. 3.3) is not theoretically justified. Experimental Designs Or Analyses: Critical parameters ($ \alpha, \beta$) are set to 0.5 without sensitivity analysis. Fig. 6 shows performance variation but lacks justification for the chosen values. The impact of removing InfoNCE (Sec. 3.1) is not isolated. Table 3 conflates multiple components, making it unclear which contributes most. 
Supplementary Material: The appendix includes proofs, dataset splits, and parameter counts but omits implementation details. Visualization (Fig. 10) is qualitative and lacks quantitative support. Relation To Broader Scientific Literature: The work builds on parametric GCD (SimGCD, LegoGCD) and SSL but does not discuss connections to *open-world semi-supervised learning* or *multi-task distillation*. The auxiliary branch resembles multi-exit networks, but prior art is unmentioned. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths**: Practical design with negligible inference overhead. Improved base-class performance validated across datasets. **Weaknesses**: RLF is an incremental extension of multi-branch SSL, and CDR resembles distribution alignment in domain adaptation. While performance gains are clear, the method does not address the core challenges of GCD (unknown class counts, domain shift). Other Comments Or Suggestions: Fig. 1’s caption is unclear; "base logits" and "all logits" need explicit definitions. Questions For Authors: 1. How does RLF perform *without* the shared transformer block? Does freezing the block limit novel-class adaptation? 2. Table 4 shows performance drops with estimated $K $. Can CDR be adapted to handle noisy $K $? 3. Can Theorem 3.2 be extended to non-ideal cases (when $ m_k \neq m'_k $)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Q1.** There is no analysis of pseudo-label accuracy. **A1.** Please refer to our response to Reviewer UDum-Q1. **Q2.** Fig. 6 shows the performance variation of parameters α and β but lacks justification for the chosen values. **A2.** Parameters α and β control the strength of distillation and regularization, respectively. Since the supervised loss weight λ=0.35 in the baseline (SimGCD), we explore values in the range [0,1] to match a comparable scale. **Q3.** The impact of removing InfoNCE (Sec. 3.1) is not isolated. **A3.** We have conducted additional experiments to isolate the impact of InfoNCE: | | CIFAR100 | | | CUB200 | | | | --- | ---: | --- | :--- | ---: | --- | :--- | | | All | Base | Novel | All | Base | Novel | | RLCD + InfoNCE | 82.8 | 83.4 | 81.6 | 68.9 | 76.7 | 65.0 | | RLCD | **83.4** | **84.2** | **81.9** | **70.0** | **79.1** | **65.4** | Removing InfoNCE consistently improves performance across all metrics. This is because InfoNCE tends to push apart same-class features, which impairs feature discrimination. --- **Q4.** Table 3 conflates multiple components, making it unclear which contributes most. **A4.** Table 3 presents the contribution of each component. The important facts are that RLF benefits base class performance, while CDR significantly improves novel class performance. **Q5.** RLF is an incremental extension of multi-branch SSL, and CDR resembles distribution alignment in domain adaptation. **A5.** We would like to highlight several key distinctions: - **Lightweight Architectural Design:** Unlike traditional multi-branch SSL methods (e.g., MoCo, BYOL) that require separate networks with significant memory and computational overhead, our approach inserts a **single auxiliary token** in the last transformer block. - **Cross-View Consistency vs. Cross-Domain Alignment:** CDR is inherently different from domain adaptation methods. 
CDR operates on **different views** of the same samples to **increase prediction confidence** by enforcing cross-view consistency. This contrasts with domain adaptation, which aims to align distributions across distinct domains. --- **Q6.** While performance gains are clear, the method does not address the core challenges of GCD (unknown class counts, domain shift). **A6.** Our work primarily focuses on improving standard GCD performance, following recent approaches like SimGCD (ICCV 2023) and LegoGCD (CVPR 2024). Regarding the challenges: - **Unknown Class Counts**: From our response to Reviewer 54Bx-Q2, our method can refine the estimation through a multi-stage process. - **Domain Shift**: While domain shift is a challenge in GCD, methods like HiLo (ICLR2025) address it by leveraging domain adaptation techniques. We plan to extend our approach to domain-shift scenarios in future work. **Q7.** Fig. 1's caption is unclear; "base logits" and "all logits" need explicit definitions. **A7.** We will clarify the caption in our revision. "Base logits" refers to the output probabilities for base classes only, while "all logits" include the output probabilities for both base and novel classes. --- **Q8.** How does RLF perform without the shared transformer block? Does freezing the block limit novel-class adaptation? **A8.** We conducted additional experiments comparing three configurations: | | CIFAR100 | | | CUB200 | | | | --- | ---: | --- | :--- | ---: | --- | :--- | | | All | Base | Novel | All | Base | Novel | | Shared block | **83.4** | **84.2** | 81.9 | **70.0** | **79.1** | 65.4 | | Separate block | 83.1 | 83.7 | **82.2** | 69.5 | 76.6 | **66.0** | | Frozen block | 80.5 | 80.7 | 80.2 | 63.3 | 71.3 | 59.4 | Using a separate block leads to a **slight performance drop**, as sharing the transformer block allows for learning more robust, task-generalizable parameters. 
Freezing the block, however, causes a **substantial decline** across all metrics, underscoring the importance of fine-tuning for effective adaptation to both base and novel classes. This aligns with **common practice** in GCD works, where fine-tuning the final block is critical to address domain shifts in downstream datasets. **Q9.** Table 4 shows performance drops with estimated K. Can CDR be adapted to handle noisy K? **A9.** Our method inherently handles noisy class number estimates through its multi-stage refinement process, as detailed in our response to Reviewer 54Bx-Q2. **Q10.** Can Theorem 3.2 be extended to non-ideal cases (when $m_k \neq m'_k$)? **A10.** We would like to clarify that $m_k \neq m'_k$ is **indeed the common scenario**, as these distributions come from different views of batch samples. Theorem 3.2 characterizes the **optimization objective** rather than the **initial condition**. It aims to maximize distribution similarity between the two views while pushing each class-wise distribution toward one-hot. --- We sincerely appreciate the reviewer’s effort in evaluating our paper and hope that our responses adequately address the concerns.
Summary: This paper studies the task of Generalized Category Discovery (GCD). It builds upon parametric-based GCD methods, and proposes a Reciprocal Learning Framework (RLF) that introduces an auxiliary branch devoted to base classification. Within the framework, the main branch filters the pseudo-base samples to the auxiliary branch while the auxiliary branch provides more reliable soft labels for the main branch, leading to a virtuous cycle. The paper further incorporates Class-wise Distribution Regularization (CDR) to mitigate the learning bias towards base classes. Experiments validate the superiority of the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: There is no theoretical analysis in this paper. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper proposes to employ both a main and an auxiliary branch, which ensures base-class learning. This idea is novel to the community of GCD. Essential References Not Discussed: Please include more recent papers [R1,R2,R3] in GCD to reflect the potential trends of this field. References: [R1]. Active Generalized Category Discovery. CVPR 2024. [R2]. Federated Generalized Category Discovery. CVPR 2024. [R3]. Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. Other Strengths And Weaknesses: Strengths: 1. This paper is well-motivated and easy to follow. 2. The proposed auxiliary learning with additional learnable tokens ensures the learning of the basic classes, which fundamentally improves the overall performance. 3. The performance gain is remarkable compared with prior art. Weaknesses: 1. Could the authors explain how the auxiliary branch helps the main branch intuitively, as well as the effect of Class-wise Distribution Regularization? 2. Could the method estimate the number of new classes instead of borrowing off-the-shelf results? 3. 
Please include more recent papers [R1,R2,R3] in GCD to reflect the potential trends of this field. References: [R1]. Active Generalized Category Discovery. CVPR 2024. [R2]. Federated Generalized Category Discovery. CVPR 2024. [R3]. Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. Other Comments Or Suggestions: Please consider making the discussion of related work more comprehensive, as in Weakness 3. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1.** Could the authors explain how the auxiliary branch helps the main branch intuitively, as well as the effect of Class-wise Distribution Regularization? **A1.** Thanks for your valuable feedback. The auxiliary branch supports the main branch in two key ways: - **Improve Base Performance:** The auxiliary branch provides more reliable pseudo labels to guide the main branch in maintaining high base class accuracy. - **Enhance Feature Learning:** Since both branches share the final feature extraction block but are assigned complementary tasks (base-only vs. all-class classification), the model is encouraged to learn more robust parameters and develop better feature representations. Class-wise Distribution Regularization (CDR) serves two key purposes: - **Mitigate Prediction Bias:** Through class-wise regularization, CDR treats each class equally. Consequently, CDR reduces prediction bias toward base classes, enabling more novel class samples to be correctly classified as novel. - **Boost Novel Performance:** CDR implicitly enhances prediction confidence, allowing the novel classifier to better match novel samples and significantly improving novel class accuracy. --- **Q2.** Could the method estimate the number of new classes instead of borrowing off-the-shelf results? **A2.** Thank you for this constructive comment. The category number can be estimated using machine learning algorithms such as semi-supervised k-means and agglomerative clustering. Beyond directly adopting off-the-shelf estimates, we introduce a multi-stage refinement process that leverages our improved feature representations to **enhance estimation** precision: - **Initial Estimation:** We begin with an off-the-shelf method, such as semi-supervised k-means, to establish a baseline estimate. - **Refinement:** After training, we re-estimate the number of classes using the enhanced feature representations from our model. 
- **Retraining:** The refined estimation is then used to retrain the model for improved performance. We validate this approach on ImageNet-100, CUB200, and SCars, with results shown below: | | ImageNet-100 (K: 100) | | | CUB200 (K: 200) | | | SCars (K: 196) | | | | --- | ---: | --- | :--- | ---: | --- | :--- | ---: | --- | :--- | | | All | Base | Novel | All | Base | Novel | All | Base | Novel | | Est. K (1st) 109/231/230 | 84.4 | 93.2 | 80.0 | 68.4 | 77.1 | 64.1 | 58.6 | 76.4 | 50.8 | | Est. K (2nd) 106/218/212 | **85.3** | **93.1** | **81.4** | **69.2** | **76.3** | **65.7** | **59.9** | **77.6** | **51.3** | The table shows that the second estimation **more closely** approximates the ground-truth K, consistently enhancing performance across datasets. The improved estimation stems from our model’s **superior feature representation**, which also manifests in clearer class separation in the t-SNE visualizations (see Appendix A.8). Notably, clustering-based methods like GCD and CMS cannot leverage this multi-stage refinement, as their training excludes parametric classifiers. **Q3.** Please include more recent papers [R1, R2, R3] in GCD to reflect the potential trends of this field. **A3.** We appreciate this suggestion and will incorporate these recent papers in our revision to provide a more comprehensive view of the field's development.
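The multi-stage estimation described in A2 can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: silhouette-based model selection is a simplified, hypothetical stand-in for the semi-supervised k-means estimator mentioned above, and `train_fn` is a placeholder for retraining the GCD model:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_k(features, k_range):
    """Estimate the number of classes K by clustering at each candidate K
    and keeping the K with the best silhouette score (a simplified stand-in
    for semi-supervised k-means estimation)."""
    best_k, best_score = None, -1.0
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

def multi_stage_estimate(features, k_range, train_fn):
    """Stage 1: estimate K on initial features. Stage 2: retrain with that K
    (train_fn is a placeholder for GCD training) and re-estimate on the
    improved features."""
    k0 = estimate_k(features, k_range)
    refined = train_fn(features, k0)
    return estimate_k(refined, k_range)

# toy demo: 4 well-separated Gaussian blobs in 8-D
rng = np.random.default_rng(0)
centers = rng.normal(scale=10.0, size=(4, 8))
X = np.vstack([c + rng.normal(size=(50, 8)) for c in centers])
k_hat = multi_stage_estimate(X, range(2, 8), lambda feats, k: feats)
```

With well-separated clusters the estimator recovers the true K; in the rebuttal's setting, the second stage benefits from the retrained model's sharper feature space.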
Inductive Gradient Adjustment for Spectral Bias in Implicit Neural Representations
Accept (poster)
Summary: This paper introduces a practical Inductive Gradient Adjustment (IGA) method to address spectral bias in Implicit Neural Representations (INR) by using inductive generalization of the eNTK-based gradient transformation matrix. The effectiveness of IGA is evaluated across a wide range of INR tasks, and a theoretical justification for its impact on spectral bias is also provided. Claims And Evidence: The claims are well-articulated and supported by compelling evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are logical and well-founded. Theoretical Claims: I have reviewed the proof in this paper, and it appears to be correct. Experimental Designs Or Analyses: I have reviewed the experimental design and analyses, and most aspects are well-presented. However, there is limited discussion regarding training time. The use of the eNTK-based gradient transformation matrix to address spectral bias in INR involves computing the eNTK, which is computationally expensive, as noted in Section 3.2. It is important to address this issue and consider potential solutions, such as INT [1] (“Nonparametric teaching of implicit neural representations”). Supplementary Material: Not applicable. Relation To Broader Scientific Literature: This paper aims to enhance the accuracy of Implicit Neural Representations (INR), which could be advantageous for fields related to INR. Essential References Not Discussed: The references are comprehensive. Other Strengths And Weaknesses: **Strengths**: - The paper is well-structured and easy to follow. - The evaluation of IGA is thorough across a wide range of INR tasks. - The theoretical explanation of IGA's impact on spectral bias is also provided. **Weaknesses**: - The notation for $\Theta$ in Theorem 3.2 is unclear. - There is limited discussion regarding training time. Other Comments Or Suggestions: - There is an extra parenthesis in Theorem 3.1; “(max(gi(λ))” should be written as “max(gi(λ))”. 
- The explanation of spectral bias could be clearer, similar to the one provided in [2] ("Fourier features enable networks to learn high-frequency functions in low-dimensional domains"). - The term gi(λi) should be defined prior to its introduction in Equation (2). Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Discussion on training time *** Thanks for your overall recognition of our method, experimental design, and analysis. Following your suggestion, we provide a further discussion about training time of our method as a supplement to Table 6 in our Appendix. Please refer to our response **“Time and memory analysis”** to Reviewer 7Wmr. The complete results can be found in: https://anonymous.4open.science/r/iga-inr-icml2025-3A4D/time_memory.md. Overall, IGA averages 1.56× the training time and 1.34× the memory of the baseline but achieves greater improvements (on average, **2.0× those of prior FR and BN**, as shown in the main text). Compared to the vanilla adjustment method by full eNTK matrix, **IGA saves at least 1/4 of the training time**. Thus, IGA improves fitting accuracy by enhancing training dynamics without significantly increasing time. The additional training time overhead comes from the matrix decomposition. As you mentioned, computing the eNTK and performing eigenvalue decomposition for the population data is intractable due to both memory consumption and computational complexity. Thanks to our theoretical foundations (Theorem 3.1 & 3.2), we only need to decompose the eNTK matrix of the sampled data and inductively generalize them to the population data, which allows for a significant reduction in computational cost while maintaining the effectiveness of spectral bias mitigation. Table 6 in our Appendix shows that a small amount of the sampled data can also lead to a nontrivial improvement. # Discussion on potential solutions *** Thanks for your constructive advice. We acknowledge INT as a potentially insightful approach and have cited it in the line 014 (right column) of our paper. We will include a following discussion of such methods in the final version. However, the problems that INT aims to address and its use of eNTK are different from ours. 
INT aims to address **the costly training of INRs via sample selection**, while our IGA aims to **mitigate the spectral bias of INRs via training dynamics adjustment**. Based on the nonparametric teaching perspective, the eNTK in INT is used to compute the functional gradient; samples with larger functional gradients are prioritized for selection. **Therefore, specific values are not required for INT; only the relative magnitudes matter.** They found that the functional gradient is positively correlated with the discrepancy between the MLP output and the target function; thus, direct computation of the eNTK can be avoided. In contrast, for IGA, based on the linear dynamics perspective, the eigenvalues of the eNTK matrix control the convergence speeds of the MLP along the corresponding eigenvectors; the uneven distribution of eigenvalues leads to varying convergence speeds across feature directions, which is considered one of the potential causes of spectral bias. IGA aims to equalize these eigenvalues of the underlying eNTK matrix during training to balance the convergence speeds and overcome spectral bias. **Therefore, explicit computation of specific eigenvalues and the corresponding eigenvectors of the eNTK matrix is required for IGA.** As you mentioned, the computational cost of the eNTK is enormous, so IGA adopts the inductive generalization of the eNTK-based gradient transformation matrix estimated from the sampled data. This avoids computing the eNTK matrix for the entire dataset, significantly reducing computational overhead while still providing a non-trivial improvement. Compared to the vanilla method, i.e., the full eNTK matrix, this **saves at least 1/4 of the training time**, as mentioned in the answer "Discussion on training time". # Response for writing-related comments *** Thanks for your patient review of our theorems and proofs. $\Theta(\cdot)$ in Theorem 3.2 of the main text denotes a tight bound. 
Concretely, take $\epsilon_2=\Theta(m^{-3/2})$ in line 215, right column, as an example: it indicates that as the width $m$ grows large, $\epsilon_2$ behaves asymptotically like $m^{-3/2}$ up to constant factors. That is, as stated in the formal version of Theorem 3.2 (Section A.4 of the Appendix), $\epsilon_2=\frac{\eta k^3 R_0^2}{m^{3/2}}$, where $\eta, k, R_0$ are constants. Following your suggestions, we will remove the extra parenthesis in Theorem 3.1 (line 186) for readability, define $g_i(\lambda)$ before Equation 2, and refine our explanation of spectral bias for clarity.
Summary: The goal of the paper is to mitigate spectral bias in implicit neural representations by changing the training dynamics. The paper considers the well-known connection between the neural tangent kernel (NTK) and the linear training dynamics, which reproduces spectral bias through the eigenvalues of the NTK matrix. The main idea is to introduce a preconditioning matrix into the gradient updates during training, where the preconditioner is designed to compensate for the spectral bias in the NTK. There are two theoretical contributions required to realize this idea practically: (1) showing that the empirical NTK is a good approximation of the NTK as network width increases, since the empirical NTK is more practical to compute, and (2) showing a method to approximate the empirical NTK via batchwise computations that are computationally tractable even for large datasets. Combining these theoretical contributions, the paper proposes a practical algorithm called Inductive Gradient Adjustment (IGA) that can be applied to existing INR architectures to mitigate spectral bias during training. Claims And Evidence: Of the three bullet point contribution claims at the end of the introduction, the first two seem redundant with each other (i.e., they could be combined into one bullet). An alternative suggestion would be to have these two claims be separated into a theoretical contribution and an algorithmic contribution, since (as described in the summary section of the review) there are interesting theoretical contributions that can be separated from (but help justify and make practical) the algorithmic idea of using a preconditioner matrix to correct the spectrum of the NTK during training. The theoretical claims are well validated by the first two toy experiments, and the improvements on the image fitting task are compelling real-world validation. Methods And Evaluation Criteria: Overall I find the methods and evaluation criteria are clear and well thought out. 
The toy experiments do validate the theory, and the image fitting results are compelling. However, the 3D shape and radiance field results are very similar to prior work, with only a marginal improvement. Theoretical Claims: I appreciate the structure of how the theoretical contributions are introduced. Section 3.2 gives a clear overview of what the theoretical contributions are and how they will be put together to create the IGA training procedure. That said, the informal statement of Theorem 3.1 is a bit confusing in that it’s not entirely clear where the assumptions end and the statement of the result begins. It’s also not entirely clear where the functions g_i come from; can they be chosen at will from the set of all Lipschitz functions? The informal statement of Theorem 3.2 also uses some notation that is not entirely clear. In particular, there is a statement “...such that [quantity A in absolute value], [quantity B in norm] < epsilon.” Does this mean that max(quantity A in absolute value, quantity B in norm) < epsilon? I’m also a bit confused why there are brackets (starting at p and ending at r_e) in the main result, since if these were removed (and if p were reordered) then we would have exactly the j_i’th row of \tilde K_e in the expression, which might be clearer to read. Experimental Designs Or Analyses: The first experiment is a toy 1D signal fitting experiment (in Figure 1) to verify the theoretical claims. In this toy setting where the analytical NTK and empirical NTK are both tractable, this experiment shows that the proposed NTK approximation does provide similar performance to the empirical NTK, which itself does slightly worse but still decently similarly compared to the analytical NTK. This experiment also shows that the approximation quality improves as network width increases from 1024 to 4096. 
The second experiment (Figures 2 and 3) also uses a 1D toy function with known frequency decomposition, and validates the theoretical prediction that changing the number of NTK eigenvalues that are adjusted does have qualitatively the expected impact on spectral bias. The remaining experiments show compelling improvements on 2D image fitting and more incremental improvements on 3D shape and radiance field fitting tasks. Supplementary Material: The supplement includes more formal statements and proofs of the theoretical results as well as additional exposition of the IGA method, ablation studies, and per-image/scene results. It appears thorough, though I did not check carefully. One suggestion here would be to allot more space to Figure 9, since the heatmaps are a bit too small to read (the size of heatmaps in Figure 10 is good). Relation To Broader Scientific Literature: The second paragraph of the introduction draws a distinction between methods that mitigate spectral bias through model architecture versus through training procedure, and seems to imply that it’s simpler or easier to modify training procedure. I’m not sure why this would be true; many of the existing architecture-based INRs use very simple changes to a standard MLP, often just changing the activation function (e.g. SIREN, WIRE) or adding an input embedding (e.g. Fourier features). Later in the introduction the authors make the point that their proposed IGA training strategy can be used in conjunction with these architectural modifications for further benefit; to me this is a stronger justification for the proposed method. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A; discussed in the other sections of the review. Other Comments Or Suggestions: The theoretical contribution is, in my view, stronger than the introduction of the paper led me to expect. 
In particular, the idea of approximating the empirical NTK to make it practical as a gradient preconditioner is clever and elegant, so I’d like to see it mentioned earlier. Overall the writing is straightforward to follow, but some of the writing would benefit from copy editing. For example, the last sentence of the abstract might intend to say “tailoring the impacts of spectral bias” instead of “tailored impacts on spectral bias”? There is also some opportunity to tighten up the writing and remove redundancy. For example, in the left column of page 2 there are 6 sentences that describe “purposefully” overcoming spectral bias by adjusting the empirical NTK spectrum. On line 120 (left column), “improve” should be “improves.” Section 3.1 is a helpful and well-written introduction to spectral bias from an NTK perspective. I might suggest separating it into a “background” section rather than having it as part of the “method” section, since it is a summary of important background rather than a novel contribution. Figure 1 caption: “on time” should be “in time.” Figure 3: The line colors are difficult to distinguish without zooming in (particularly the blue and purple lines). The lines might be more easily distinguished either by changing colors or by adding some variation in line style (e.g. dotted, dashed, markers). Figure 4: The result is compelling, but to improve presentation I suggest (1) including the ground truth image, and (2) including the acronym definitions in the caption. Questions For Authors: It would be helpful to provide a bit of exposition about the g_i below equation 2. Are these free parameters to be chosen when constructing the preconditioning matrix S, or are they constrained or derived in some way? Also, please refer to some questions embedded in other parts of the review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Answer for 3D shape and radiance field results *** Thanks for recognizing our experiments and evaluation approach. We further analyze IGA by comparing it with Fourier Reparameterization (FR) and Batch Normalization (BN), two key training dynamics adjustment methods for mitigating INR spectral bias. For the 3D shape task, compared to the vanilla baseline models, FR and BN achieve an average IoU improvement of $9 \times 10^{-4}$ and $6 \times 10^{-4}$, respectively, across five scenes. IGA further achieves an IoU improvement of $4.1 \times 10^{-3}$, which is on average **5$\times$ that of FR and BN**. For the neural radiance field, FR and BN achieve improvements of 0.12dB and 0.14dB in PSNR, respectively. The average improvement of IGA is up to 0.24dB, which is nearly **2$\times$ that of FR and BN**, as mentioned in lines 417-418, right column. Please note that the improvements achieved by all these methods are non-trivial, especially considering that they are achieved without altering the network inference structure or inference speed. # Clarification of theorems *** Thanks for recognizing our theoretical contributions and careful reading. We apologize for any ambiguity in the statement. For Theorem 3.1, there are two assumptions. The first one is that $v_i^{\top}\tilde{v}_i >0$. This assumption is easy to satisfy in practice, as both $\tilde{K}$ and $K$ are real symmetric matrices, and $\tilde{K}$ gradually converges to $K$, leading to increasingly similar eigenvectors. The second one is an upper bound on the learning rate $\eta$; this assumption is made to satisfy the requirements of the linear dynamics analysis. For Theorem 3.2, the statement is indeed meant to convey that max(quantity A in absolute value, quantity B in norm) < epsilon. As you suggested, removing the brackets '[]' and reordering $p$ would make the expression clearer and exactly result in the $j_i$-th row of $\tilde{K}_e$. 
**We will incorporate the above suggestions into the final version.**
# Discussion about the Lipschitz functions $g_i$
***
Thanks again for your careful reading of our theorems and insightful comments. We will revise Eq. 2 to provide more exposition of $g_i$. Initially, the functions $g_i, i=1,..., N$ are introduced to represent the potential eigenvalue adjustment functions. During the analysis, we found that **a Lipschitz continuity condition on these functions serves as a simple yet effective condition for the equivalence between $\tilde{K}$-based and $K$-based adjustments**. This condition ensures that, for a set of N Lipschitz functions, there exists a specific Lipschitz constant upper bound, allowing for the existence of a width $m$ such that all adjusted eNTK eigenvalues converge to their corresponding potential adjusted NTK eigenvalues. Following this theoretical analysis, in practice we simply map the largest $n$ eigenvalues of $\tilde{K}$ to the $(n+1)$-th largest eigenvalue of $\tilde{K}$, keeping the remaining eigenvalues unchanged (please refer to Eq. 5 in our paper). The corresponding operation for the potential corresponding $K$ is mapping the largest $n$ eigenvalues of $K$ to the $(n+1)$-th largest eigenvalue of $K$, with the others remaining unchanged. In this setting, the potential errors of these mapped $n$ eigenvalues are dominated by the $(n+1)$-th largest eigenvalue. As the eigenvalues from the $(n+1)$-th largest to the smallest are kept unchanged, it follows that their $g_i$ functions are the identity function, with a Lipschitz constant equal to 1. Our extensive results and empirical analysis show that the aforementioned simple $g_i$ setting effectively balances the convergence speeds of different frequency components.
# Answer for relation to Broader Scientific Literature
***
Thanks for your thoughtful comments. 
The second paragraph indeed aims to highlight the distinction between training dynamics-based methods and architecture-based approaches such as SIREN and WIRE. As you pointed out, our method can be combined with architectural modifications for further benefits, providing a stronger justification. We will revise this in the final version.
# Refinements on Writing
***
Thanks for your valuable writing suggestions. As you pointed out, the first two contributions at the end of the introduction may appear somewhat redundant. We appreciate your suggestion to distinguish **theoretical and algorithmic contributions**. We will revise the manuscript to better highlight our theoretical insights. Following your suggestions, we will **highlight the empirical NTK approximation earlier** and restructure Section 3.1 into a **“Background” section**. Fig. 9 will be **resized** to match Fig. 10, and **ground truth** will be added to Fig. 4. **Abbreviation definitions** will be included in captions, and **line distinctions** in Fig. 3 will be adjusted. Minor typos and redundancies such as "improves" will also be carefully revised.
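As a quick illustration of the $g_i$ mapping described in the rebuttal above (clip the largest $n$ eigenvalues of a symmetric kernel matrix to its $(n+1)$-th largest, leave the rest unchanged), here is a minimal numpy sketch; the function name and the toy matrix are illustrative, not the authors' implementation:

```python
import numpy as np

def adjust_eigenspectrum(K, n):
    """Map the largest n eigenvalues of a symmetric kernel matrix to its
    (n+1)-th largest eigenvalue, leaving the rest unchanged (the identity
    g_i with Lipschitz constant 1 below rank n, as in the rebuttal)."""
    eigvals, eigvecs = np.linalg.eigh(K)   # eigh returns ascending order
    eigvals = eigvals[::-1].copy()         # sort descending
    eigvecs = eigvecs[:, ::-1]
    eigvals[:n] = eigvals[n]               # clip top-n to the (n+1)-th value
    return (eigvecs * eigvals) @ eigvecs.T # rebuild V diag(lambda') V^T

# toy symmetric PSD matrix standing in for the (e)NTK
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
K = A @ A.T
K_adj = adjust_eigenspectrum(K, n=2)
top = np.sort(np.linalg.eigvalsh(K_adj))[::-1]
assert np.allclose(top[0], top[2])  # three largest eigenvalues now coincide
```

Flattening only the top of the spectrum slows the fastest-converging modes to the pace of the $(n+1)$-th one while leaving the remaining dynamics untouched, which matches the "balancing convergence speeds" intent described above.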
Summary: The paper presents a Neural Tangent Kernel-based approach to improving the spectral bias of implicit neural representations, which have been shown to be biased towards low frequencies. The paper summarizes the NTK theory and proposes a way to estimate the K matrix using a subset of the training samples, making this estimation tractable compared to computing the actual matrix. This matrix can then be used, similarly to previous work, to steer the training of INRs into having less bias. Theoretical guarantees for the estimation are provided, as well as experiments. The experiments are: two simple experiments on function approximation, useful for an analysis of the proposed approach but not for drawing practical conclusions; an experiment on image representation; an experiment on 3D shape representation; an experiment on neural radiance fields. All the experiments show that, in the proposed settings, the method can be applied to existing methods and outperform them. ## Update after rebuttal The rebuttal addressed most of my concerns, especially the ones about practicality in terms of time and memory requirements. I still believe an auto-decoder-based 3D shape experiment would be more effective and useful than the one proposed, but overall I propose acceptance of the paper. Claims And Evidence: The claims are mostly supported by the shown evidence. The only exception, in my opinion, being the claim at page 2 line 97, where the proposed method is labeled as "practical". While the analyses and experiments indeed show that the method works in the tested scenarios and with the tested baselines, for the method to be convincingly practical an analysis of its additional time and memory requirements would be needed. A partial analysis is provided in the supplementary, Table 6. However, it is only shown for one experiment, it does not mention memory requirements, and the time impact is shown to be highly sensitive to the choice of p. 
I believe a more detailed analysis should be provided for all experiments, with data about time and memory requirements being added to the tables in the main paper. Methods And Evaluation Criteria: Yes, the method seems well grounded and justified, providing tangible benefits. Theoretical Claims: I checked the claims to the best of my abilities; however, I could not verify the proofs due to my limited expertise in NTK theory. Experimental Designs Or Analyses: Yes. The experiments are mostly well designed, and they include several standard scenarios, baselines and metrics, which overall are convincing about the potential of the proposed approach. E1) As mentioned above, a time and memory requirements analysis is required to support the claim about practicality, but it's currently very limited and relegated to the appendix. E2) The methods tested in the main paper (ReLU MLP, MLP with PE, Siren, Vanilla Nerf) do show that the proposed approach improves on vanilla approaches; however, more recent and complex methods have been proposed, and a few of them should be used to convince the reader about the practical usability of the proposed framework. The supplementary additionally shows Gauss, WIRE and FINER for image representation. MFN [1]/BACON [2] should be considered too, for shape representation as well, and a more recent and powerful NeRF model should be added. I believe these results belong in the main paper, as they serve to support the claims made by the authors. E3) Additionally, the experiment on 3D shape representation is not fully convincing for 2 reasons. 1) While IoU is an important metric, Chamfer Distance should be reported as well. In my experience they can behave differently, and they are both considered standard in this field. 2) As far as I understand, a simple setting is shown where each network is overfitted to a single shape. 
This is not considered a very useful scenario in practice, since INRs are usually used in an auto-decoder fashion to learn multiple shapes. In the auto-decoder scenario, different methods can actually behave quite differently, so this would be a much more useful experimental setting. [1] Multiplicative Filter Networks, Rizal Fathony, Anit Kumar Sahu, Devin Willmott, J Zico Kolter, ICLR 2021 [2] BACON: Band-limited Coordinate Networks for Multiscale Scene Representation, David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein Supplementary Material: Yes, but not in detail. Sections B to I. Relation To Broader Scientific Literature: The theoretical contributions seem well contextualised in the literature. My only point is, as mentioned above, about additional experimental comparisons. Essential References Not Discussed: To the best of my knowledge, no, except for the additional experimental methods mentioned above. Other Strengths And Weaknesses: Strengths: S1) The proposed method seems original and well grounded in its analysis S2) The paper is well written and easy to follow Weaknesses: W1) Experimental section does not fully convince of the validity of the method (see above) Other Comments Or Suggestions: These did not impact the review: Page 1 Line 33-35 column 2: it's not clear which methods have an impact on complexity. Some of the mentioned ones don't, as far as I know (such as SIREN) Page 6 Line 322 column 1: "fixed last layer in" -> "fixed last layer as in"? Page 6 Line 298-301 column 2: I've found the phrasing confusing Questions For Authors: I would like the authors to comment on my questions about the experimental analysis Code Of Conduct: Affirmed. Overall Recommendation: 4
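For reference, the IoU metric discussed in E3 above is typically computed on boolean occupancy grids; a minimal sketch with illustrative names, not the paper's evaluation code:

```python
import numpy as np

def occupancy_iou(occ_a, occ_b):
    """Intersection-over-Union of two boolean occupancy grids."""
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return inter / union

# two overlapping 4x4x4 slabs: layers {0,1} vs layers {1,2}
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
iou = occupancy_iou(a, b)  # intersection is 16 voxels, union is 48
```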
Rebuttal 1: Rebuttal:
# Time and memory analysis
***
Thanks for suggesting a more detailed time and memory requirements analysis. We conducted a measurement of the training time and memory of IGA across all experiments. For example, in the **1D simple function approximation**, the vanilla SIREN takes 28ms/iter (ms/iter denotes milliseconds per iteration) with 1009MB memory; SIREN+IGA takes 45ms/iter with 1575MB memory; adjustment (w/o IGA) by the full eNTK matrix, i.e., the eNTK matrix obtained from the population data, takes 198ms/iter with 5881MB memory. In the **2D image approximation**, the vanilla PE takes 74ms/iter with 3609MB memory; PE+IGA takes 87ms/iter with 4167MB memory. The full eNTK adjustment requires >100GB memory, which is practically infeasible. In the **3D shape representation**, the vanilla ReLU takes 113ms/iter with 2163MB memory; ReLU+IGA takes 163ms/iter with 2231MB memory. In the **5D neural radiance fields**, the vanilla NeRF takes 160ms/iter with 9451MB memory; NeRF+IGA takes 286ms/iter with 11033MB memory. The complete results can be found in: https://anonymous.4open.science/r/iga-inr-icml2025-3A4D/time_memory.md. Overall, IGA averages 1.56× the training time and 1.34× the memory of the baseline but achieves greater improvements (on average, **2.0× those of prior FR and BN**, as shown in the main text). Compared to the full eNTK matrix, IGA **saves at least 1/4 of the training time and memory**. Thus, our method improves fitting accuracy by enhancing training dynamics without significantly increasing time or memory. Notably, since IGA achieves the same results in fewer iterations, it often reaches results comparable to the baseline methods in comparable or even shorter wall-clock time. We present examples in the above link.
# Practical issue
***
Enhancing network performance by improving training dynamics without affecting inference holds significant practical value. 
It helps improve inference performance in cases with limited computational resources. NTK theory has made significant progress in analyzing the training dynamics and spectral bias of INRs and shows the theoretical potential to adjust dynamics. However, directly applying it to most INR models for better performance remains impractical due to the intractability of the NTK matrix. **IGA provides two practical solutions**: first, it proves and validates the effectiveness of the eNTK (which is generally computable) in adjusting training dynamics; second, it introduces inductive generalization of the transformation matrix derived from the eNTK matrix of sampled data to adjust the overall training dynamics. With these two solutions, IGA achieves greater improvements (on average, **2.0× those of prior FR and BN**, as shown in the main text) without significant additional costs, requiring only 45% more training time and memory on average compared to the baseline. Please note that FR and BN also cost more training time and memory but yield only minor improvements. Thus, IGA is a practical and effective training dynamics adjustment method for INR tasks.
# Additional experiments
***
Thanks for suggesting more models to further show IGA's practical usability. As you mentioned, we have included 2D results of IGA with Gauss, WIRE, and FINER in our Appendix and now provide 3D shape results. Due to time constraints, and considering that MFN is the backbone of BACON, we extend IGA to MFN for both 2D and 3D tasks. Furthermore, for the neural radiance fields, we extend IGA to the well-known DVGO. The results are listed in the table at the following link: https://anonymous.4open.science/r/iga-inr-icml2025-3A4D/new_exp.md. Previous training dynamics methods, i.e., FR and BN, are also compared. From the numerical results, **IGA consistently outperforms FR and BN across 2D, 3D, and 5D tasks**, aligning with the main paper. These results further support our claims. 
We will add these results to the main paper.
# Chamfer Distance
***
Thanks for your insightful comments about 3D shape representation. Following your suggestion, we computed the Chamfer Distance using the code from “Occupancy Networks” and reported the results at the link: https://anonymous.4open.science/r/iga-inr-icml2025-3A4D/chamfer_distance.md. As you mentioned, Chamfer Distance and IoU can behave differently. Despite this difference, **IGA still outperforms the baseline and other training dynamics adjustment methods, FR and BN.**
# Discussion on 3D Shape Representation
***
Thanks for your valuable suggestions on the 3D task. The setting of 3D shape representation in the main text is simple but has been adopted by many classic INR works to test the representation capability of INR models, such as PE, SIREN, MFN, Gauss, WIRE, and FINER. Recent works on the training dynamics of INRs, such as FR and BN, have also used this setting to evaluate the effectiveness of their methods. Therefore, this setting is suitable for demonstrating that IGA can better improve training dynamics to enhance INR's representation performance for 3D shapes.
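A minimal version of the symmetric Chamfer Distance discussed in this thread can be sketched as follows. Definitions vary (squared vs. plain Euclidean distances, sum vs. mean of the two directions); this illustrative numpy sketch uses mean plain distances and is not the Occupancy Networks code the authors used:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer Distance between point sets P (N,3) and Q (M,3):
    mean nearest-neighbor Euclidean distance in both directions."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = P + np.array([0.0, 0.5, 0.0])   # every point shifted by 0.5 along y
cd = chamfer_distance(P, Q)         # 0.5 in each direction -> 1.0 total
```

The O(N·M) pairwise matrix is fine for toy sets; practical implementations use a KD-tree for the nearest-neighbor queries.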
MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design
Accept (poster)
Summary: Mixture-of-experts LLMs are becoming important for reducing training cost without sacrificing final model quality. With more MoE LLMs being released, it is important to optimize their serving performance. Quantization is an important technique for optimizing LLM serving by using low-precision data types to store the parameters. The authors propose to use different quantization schemes (i.e., the low-precision data type for parameters and activations, and the group size for channel-wise quantization) at the **linear-block** level instead of the expert level. Besides, the authors also implement more efficient kernels to execute the quantized layers. Experiments show that the proposed method MxMoE achieves 2.4 lower Wikitext-2 perplexity than GPTQ at 2.25-bit and delivers up to 3.4x speedup over full precision. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: Efficient execution of quantized MoE LLMs. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** - Provides an efficient implementation for quantized MoE layers. **Weaknesses** - The novelty of quantizing at the block level is incremental. - Although the authors implemented new kernels to support the proposed quantized MoE layer, these contributions seem more like engineering improvements than groundbreaking innovations. **Details** The novelty of quantizing at the block level is incremental. As the authors mention in the background section, prior works like MC-MoE have already observed that different experts have varying levels of significance or importance and could be quantized into different low-precision data types. These works also employ linear programming to determine the optimal quantization strategy. The new kernel implementation appears to be more of an engineering improvement. 
Since the quantization granularity is at the linear level, it is natural to parallelize different experts with different quantization schemes within a single kernel by using separate thread blocks to execute these layers or by utilizing different CUDA streams to launch distinct kernels. The innovations in this aspect seem more like engineering contributions to the community rather than fundamental algorithmic advancements. The accuracy does not show significant improvement in the W3.5-A16 setting compared to the uniform baseline GPTQ. From Table 1, MxMoE (W3.5-A16) achieves similar average accuracy to GPTQ (W3.5-A16), and MxMoE only demonstrates better accuracy at even lower precision (e.g., 2.25-bit). If MxMoE or similar methods that use different quantized types for different experts only prove effective at very low precision (<3-bit), their impact and contribution would be considerably diminished. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback. We would like to provide further clarification on several aspects of our work that you have mentioned:

1. Novelty of the Approach and Similarities with Prior Work
- **Linear-Block-Level Quantization**: Our approach is driven by empirical observations rather than a straightforward extension of prior work. Figure 1(a) highlights significant sensitivity differences between linear blocks, and Table 3 confirms the effectiveness of our approach.
- **Objective: Balancing Model Accuracy and Hardware Efficiency**: Unlike prior work that solely optimizes model accuracy, MxMoE jointly considers both accuracy and system efficiency. Hardware-friendly quantization strategies improve efficiency but often degrade accuracy (Figure 6). Our objective function is designed to incorporate both quantization perturbation and the runtime cost of a mixed-precision scheme. Accurately modeling runtime cost is nontrivial, requiring deep analysis of hardware characteristics and parallel algorithm design, as detailed in Section 4.2.
- **Scope: Weight-Only vs. Weight + Activation Quantization**: MC-MoE focuses on extremely low-bit weight-only quantization, whereas MxMoE optimally allocates bit-widths for both activation and weight parameters. This approach enables MxMoE to leverage low-precision arithmetic units on GPUs more effectively (Figure 5).
- **Performance: Theoretical vs. Real Speedup**: Previous methods relied on existing kernels like HQQ, achieving only theoretical speedups without real-world reductions in wall-clock time (Figure 2). In contrast, MxMoE is designed from the ground up for hardware efficiency, leading to actual runtime improvements.
- **Automation: End-to-End Mixed-Precision Pipeline**: MxMoE is the first framework to fully automate the generation of mixed-precision schemes and fused operators tailored to the generated scheme, which no prior work achieves.

2. MxMoE Is Engineering Optimization
- MxMoE is not simply providing one kernel but rather an automated method for mixed-precision allocation and the generation of customized fused operators. The design and optimization of mixed-precision GEMM has been an ongoing area of research [3][4], which continues to push the boundaries of quantization research. MxMoE addresses the more challenging problem of mixed-precision GroupGEMM, a much more complex problem that goes beyond simple parallelization of different-precision tasks in thread blocks or streams.
- In fact, we initially implemented MxMoE with multi-stream execution, but unfortunately we found it could not fully exploit the hardware. We show the comparison (RTX 4090, Qwen1.5, W4A16 and W8A8 mixed MoE-Block):

| #Tokens 1024 | TFLOPS |
|-|-|
| FP16 | 70.117 |
| W4A16 | 119.32 |
| W8A8 | 140.95 |
| MxMoE | 163.20 |
| multi-stream | 142.19 |

The results demonstrate that our approach achieves superior efficiency.
- Assigning tasks to CTAs is fundamentally the minimum makespan problem, an NP-hard challenge. "Parallelizing different experts with different quantization schemes within a single kernel by using separate thread blocks" won't work, as it suffers from workload imbalance. To address this, we design a tile scheduler that distributes workload evenly among all CTAs (Line 295).

3. Experimental Results
- At 3.5 bits, quantization perturbation is relatively limited. Both GPTQ and MxMoE perform well at this setting. As the bit-width decreases, the accuracy advantage of MxMoE becomes more evident. This extreme low-bit compression is highly relevant for resource-constrained model deployment, as explored in several works such as [1][2].
- MxMoE performs bit-width allocation for both activations and weights. As demonstrated in Table 1 and Figure 5, MxMoE shows a significant accuracy advantage over existing methods in weight-activation quantization, pushing the accuracy-efficiency Pareto frontier to new heights. 
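The minimum-makespan framing mentioned above (balancing heterogeneous-precision tiles across CTAs) is classically approximated with a greedy longest-processing-time-first heuristic; a toy sketch with made-up tile costs, not MxMoE's actual tile scheduler:

```python
import heapq

def lpt_schedule(tile_costs, num_ctas):
    """Greedy longest-processing-time-first: sort tiles by descending cost
    and assign each to the currently least-loaded worker. A classic
    4/3-approximation for the NP-hard minimum-makespan problem."""
    loads = [(0.0, cta) for cta in range(num_ctas)]
    heapq.heapify(loads)
    assignment = {cta: [] for cta in range(num_ctas)}
    for tile, cost in sorted(enumerate(tile_costs), key=lambda t: -t[1]):
        load, cta = heapq.heappop(loads)      # least-loaded CTA
        assignment[cta].append(tile)
        heapq.heappush(loads, (load + cost, cta))
    return assignment, max(load for load, _ in loads)

# hypothetical relative costs of mixed-precision tiles
costs = [4, 4, 4, 8, 8, 2, 2, 2]
assignment, makespan = lpt_schedule(costs, num_ctas=3)
```

With these costs the greedy schedule reaches a makespan of 12 against a lower bound of ceil(34/3) = 12, i.e. it happens to be optimal here.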
In conclusion, we believe **MxMoE introduces novel contributions in both algorithmic design and system-level optimization**. We sincerely appreciate the reviewer’s feedback and the opportunity to clarify our contributions. We hope our responses have effectively addressed the concerns raised. Given these clarifications, we hope the reviewer may reconsider the overall evaluation of our work. We would be happy to further discuss any remaining questions or concerns.\
[1] Ma, Shuming, et al. "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits." CoRR (2024).\
[2] Yuan, Zhihang, et al. "PB-LLM: Partially Binarized Large Language Models." The Twelfth International Conference on Learning Representations.\
[3] Wang, Lei, et al. "Ladder: Enabling Efficient {Low-Precision} Deep Learning Computing through Hardware-aware Tensor Transformation." 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24).\
[4] Lin, Yujun, et al. "Qserve: W4a8kv4 quantization and system co-design for efficient llm serving." arXiv preprint arXiv:2405.04532 (2024). --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my concerns. I have decided to change my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for increasing your score. We appreciate the time and effort you have spent providing valuable feedback. Best regards, Authors.
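The sensitivity-driven bit-width allocation discussed in this thread can be illustrated with a toy exhaustive allocator. The perturbation proxy (sensitivity divided by 2^bits), the names, and the numbers below are illustrative assumptions, not MxMoE's actual objective or solver (which also models runtime cost):

```python
import itertools

def allocate_bits(sensitivities, choices, avg_budget):
    """Toy exhaustive allocator: pick a bit-width per linear block to
    minimize a perturbation proxy subject to an average bit budget."""
    n = len(sensitivities)
    best, best_cost = None, float("inf")
    for combo in itertools.product(choices, repeat=n):
        if sum(combo) / n > avg_budget:   # enforce the average-bit budget
            continue
        cost = sum(s / 2 ** b for s, b in zip(sensitivities, combo))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best

# one highly sensitive block (e.g. a down_proj with activation outliers)
sens = [10.0, 1.0, 1.0, 1.0]
bits = allocate_bits(sens, choices=(4, 8), avg_budget=5)  # -> (8, 4, 4, 4)
```

The budget of 5 average bits admits at most one 8-bit block among four, and the allocator spends it on the most sensitive one, which mirrors the W5A5 rationale given in the rebuttal below. Real allocators replace the exhaustive search with linear programming.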
Summary: This paper presents a framework for exploring the design space of mixed-precision quantization in mixture-of-experts (MoE) models. It considers the variation in quantization sensitivity of linear blocks within models, allocating different bitwidths based on sensitivity. Additionally, it takes into account the frequency of block activation to apply different quantization strategies. The proposed mixed-precision approach is claimed to achieve a speedup over uniform quantization while maintaining equivalent accuracy. The method is evaluated using DeepSeek, Mixtral, and Qwen MoE architectures, with comparisons across various quantization methods. Claims And Evidence: The primary claim is that the proposed mixed-precision quantization framework improves inference speed compared to uniform quantization at the same accuracy level. The claim is supported by experimental results demonstrating speedups on multiple MoE models. However, some key methodological aspects, such as how sensitivity and activation frequency are incorporated into the quantization strategy, could be elaborated further. Additionally, the paper should provide more explanation of the quantization methods used in comparison, such as GPTQ and QuaRot. Methods And Evaluation Criteria: The evaluation is based on well-known MoE models (DeepSeek, Mixtral, and Qwen) and considers several quantization techniques. The datasets used for evaluation appear appropriate, and the comparisons are reasonably extensive. However, more clarity is needed in describing how different quantization configurations are selected. Additionally, while the paper repeatedly mentions hardware-friendly quantization, it does not explicitly define what hardware efficiency entails—whether it refers to latency, throughput, or other tradeoffs. Theoretical Claims: The paper does not present any theoretical claims, and therefore, there are no proofs to verify. Its contributions are primarily empirical. 
Experimental Designs Or Analyses: The experimental setup appears reasonable, but certain details are missing. Specifically, the paper describes the allocator, kernel generator, and task scheduler in Section 4.1 and Figure 1, but their implementation details remain unclear. Some information is presented in Section 4.3, but it does not sufficiently explain how these components function together. A more detailed breakdown of the experimental setup would be beneficial for reproducibility and understanding. Supplementary Material: The paper has no supplementary material. Relation To Broader Scientific Literature: The paper references several relevant works but lacks sufficient background discussion on some key concepts. Specifically, more context on GPTQ and QuaRot, two quantization methods used in the experiments, would be helpful. Similarly, the features, differences, and challenges of the selected MoE models (DeepSeek, Mixtral, and Qwen) should be elaborated upon to provide a clearer motivation for their inclusion. 
Other Comments Or Suggestions: * Clarify the definition of hardware-friendly quantization—does it prioritize latency, throughput, or another metric? * Reduce redundancy in discussions of parameter sensitivity, activation frequency, and hardware characteristics. Instead, elaborate on how they are measured and how they affect performance. * Provide clearer explanations of the quantization methods used in comparison. * Expand the descriptions of the DeepSeek, Mixtral, and Qwen MoE architectures, highlighting their relevance to this study. Questions For Authors: * Can you clarify what hardware-friendly quantization means in your context? * Could you provide more details on the allocator, kernel generator, and task scheduler implementation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer’s engagement.
Summary: This paper presents MxMoE, a mixed-precision quantization framework for MoE models. The development of MxMoE is driven by three key factors: 1. Parameter Sensitivity: linear blocks exhibit significant variability in their sensitivity to quantization. 2. Expert Activation Frequencies: activation frequencies of experts vary considerably across MoE blocks. 3. Hardware Characteristics: the weight-activation quantization scheme can be optimized to align with its actual arithmetic intensity for enhanced performance. ## Update after rebuttal I asked some questions regarding the rebuttal, and the authors' response makes sense. I have maintained my score. Claims And Evidence: yes Methods And Evaluation Criteria: The benchmarks and evaluation criteria are widely used to assess the performance and computational efficiency of quantized models, for instance the baseline method QuaRot [1]. To thoroughly evaluate computational throughput, we recommend that the authors incorporate additional benchmarks with varied length distributions. [1] https://proceedings.neurips.cc/paper_files/paper/2024/hash/b5b939436789f76f08b9d0da5e81af7c-Abstract-Conference.html Theoretical Claims: This paper does not contain proofs for theoretical claims. Experimental Designs Or Analyses: To the best of my knowledge in this field, most of the experimental designs or analyses are sound. I still have concerns regarding why MxMoE uses the 5-5 setting in Table 1 for weight-activation quantization. Could the observed improvements simply be attributed to the 1-bit increase in precision? I would appreciate it if the authors could clarify the rationale behind that. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: I have not come across any papers that share the key contributions of this one. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: There are some typos for the authors' further improvement, such as the incorrect use of capitalization in line 194 and Figure 3. Captions can be further refined; for instance, directly presenting a configuration like g128 and g-1 is not reader-friendly for those unfamiliar with the field. Questions For Authors: [1] Why does MxMoE use the 5-5 setting in Table 1 for weight-activation quantization? Could the observed improvements simply be attributed to the 1-bit increase in precision? (See Section "Experimental Designs Or Analyses") [2] Can the authors incorporate additional benchmarks with varied length distributions to conduct throughput-related tests? (See Section "Methods And Evaluation Criteria") Code Of Conduct: Affirmed. Overall Recommendation: 3
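For readers unfamiliar with the g128 / g-1 notation the reviewer mentions, a minimal round-to-nearest (RTN) symmetric fake-quantization sketch; all names are illustrative, and this is not MxMoE's kernel code:

```python
import numpy as np

def rtn_quantize(w, bits=4, group_size=-1):
    """Round-to-nearest symmetric fake-quantization of a weight matrix.
    group_size=-1 ('g-1'): per-channel, one scale per output row;
    group_size=128 ('g128'): one scale per 128 consecutive weights."""
    out_ch, in_ch = w.shape
    g = in_ch if group_size == -1 else group_size
    grouped = w.reshape(out_ch, in_ch // g, g)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(grouped).max(axis=-1, keepdims=True) / qmax  # max-abs scaling
    q = np.clip(np.round(grouped / scale), -qmax - 1, qmax)     # integer codes
    return (q * scale).reshape(out_ch, in_ch)                   # dequantized

w = np.random.default_rng(0).standard_normal((4, 256))
err4 = np.abs(rtn_quantize(w, bits=4, group_size=128) - w).mean()
err8 = np.abs(rtn_quantize(w, bits=8, group_size=128) - w).mean()
assert err8 < err4  # more bits -> finer grid -> lower rounding error
```

Smaller groups give each scale fewer weights to cover, so outliers distort fewer neighbors, at the cost of storing more scales.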
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our work. Your insightful questions and suggestions have been instrumental in enhancing the quality of our manuscript. Below, we address your concerns in detail:

1. Rationale for the W5A5 Configuration in Table 1
- **Why does the "strange" W5A5 setting arise?** This configuration is intentional. During bitwidth allocation, we set an average 5-bit activation and weight budget for MxMoE. On our experimental platform (RTX 4090), Tensor Cores support 4-bit and 8-bit operations, allowing both W4A4 and W8A8 configurations to benefit from low-precision Tensor Core acceleration. The additional 1-bit budget provides MxMoE with greater flexibility in bitwidth allocation, meaning that certain activations are assigned 8-bit precision while others remain at 4-bit. Although the average bitwidth increases by only 1 bit, MxMoE automatically identifies activations that are more sensitive to quantization through sensitivity analysis and assigns them a higher bitwidth to mitigate accuracy degradation. This exemplifies the advantage of mixed-precision quantization over uniform quantization.
- **Does the accuracy gain stem solely from the additional 1-bit?** Yes. The improved accuracy of W5A5 over W4A4 is primarily due to extreme outliers in the input activations of the down_proj layer—a phenomenon well-documented in prior research (e.g., [1, 2]). Unlike standard outlier features, these activations are particularly difficult to quantize effectively at 4 bits, leading to precision loss. MxMoE mitigates this issue by allocating higher bitwidths to such activations, thereby preserving model accuracy. Our findings align with prior work [2], which demonstrates that suppressing extreme outliers substantially improves post-quantization model performance. We will clarify this rationale in the revised manuscript and provide a case study on bitwidth allocation.

2. Throughput Evaluation with Varied Length Distributions
We agree that evaluating computational throughput across diverse input lengths is essential for real-world applicability. In response to your suggestion, we have expanded our throughput experiments. Due to time constraints, we conducted experiments on DeepSeekV2-Lite using the Humaneval-X dataset. In this study, we employed the exact same MxMoE (W5A5) configuration as in Table 1 and Figure 5. To assess performance across varied sequence lengths, we set the batch size to 1—ensuring that input length is varied, unlike in the compute-bound scenarios in Figure 5. We then processed the entire dataset, computing the average computational throughput (total FLOPs divided by execution time) and comparing it to FP16 performance. The length distribution of the dataset can be found at https://huggingface.co/datasets/THUDM/humaneval-x.

| Dataset | FP16 TOPS | MxMoE(W5A5) TOPS | Speedup |
|-|-|-|-|
| humaneval-x | 38.19 | 107.36 | 2.8 |

As shown in the results, although the speedup is slightly lower than in the fully compute-bound scenario, MxMoE (W5A5) maintains a strong acceleration ratio even on datasets with varied-length short sequences. This highlights the robustness of the mixed-precision approach. We will expand this experiment in the revised manuscript and add more models and dataset tests. Thank you very much for your suggestions!

3. Typos and Figure Refinements
We sincerely appreciate your careful review and apologize for the typographical errors. These will be corrected in the revised manuscript. Additionally, we will refine the caption of Figure 3 to explicitly define terms such as g128 (quantization group size of 128) and g-1 (per-channel/token quantization) to enhance clarity. Thanks again for your advice!

Once again, we deeply appreciate your valuable feedback. Your suggestions have been instrumental in improving the quality of our manuscript. 
If you have any further suggestions or concerns, we would be delighted to discuss them! [1] Sun, Mingjie, et al. Massive Activations in Large Language Models. First Conference on Language Modeling.\ [2] Lin, Haokun, et al. Duquant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. NeurIPS 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have reviewed both the rebuttal and the paper for some time. It seems there may be some misunderstanding regarding my question. I was asking whether the performance gains over the baselines are due to the additional 1-bit precision. If your answer is yes, and you acknowledge that a higher bit width for activation is crucial, could you please include another heterogeneous quantization strategy while maintaining the 5-5 setting to help convince me of MxMoE’s effectiveness? This would allow me to maintain a positive overall assessment. Thanks again. ### Update: The reply makes sense to me. Thanks to all the authors for their efforts. I kept my score positive. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback. In response, we conducted two experiments. Firstly, we test Qwen1.5-MoE with the model perplexity on wikitext-2 (sequence length: 4096) to analyze the effects of various quantization bit settings (RTN per-channel/token symmetric quantization). For reference, full precision perplexity is **6.791**. 
The results are shown in the table below (rows: weight bits; columns: activation bits): Perplexity (Lower is Better) | #Bits (W-ACT) | 4 | 5 | 6 | 7 | 8 | |:-------------:|:---------:|:------:|:------:|:-----:|:-----:| | 4 | 68079.039 | 41.433 | 11.298 | 9.406 | 8.068 | | 5 | 12305.585 | 38.707 | 9.715 | 8.169 | 7.335 | | 6 | 14251.822 | 26.297 | 9.216 | 8.196 | 7.204 | | 7 | 18151.474 | 34.775 | 9.747 | 8.182 | 7.325 | | 8 | 19091.917 | 38.99 | 9.525 | 8.26 | 7.278 | Observation: Increasing the activation bitwidth dramatically reduces the perplexity, while the benefits of increasing the weight bitwidth are comparably marginal. In particular, **moving from 4-bit to 5-bit activations results in significant performance improvement**. While further increasing the bitwidth beyond 5 bits continues to lower the perplexity, the gains become less pronounced. The experimental results in the table confirm the effectiveness of a heterogeneous quantization strategy. To better illustrate the effectiveness of MxMoE, we conducted an additional experiment (Qwen1.5-MoE). In the experiments below, both Quarot and MxMoE were evaluated under RTN-quantized weight settings (fully aligned with the Quarot-RTN configuration in [1]. This decision was made because GPTQ is too slow for processing weights). The performance of Quarot under various precision (uniform quantization) settings is summarized below: | Setting | w4a4 | w5a5 | w6a6 | w7a7 | w8a8 | |:----------:|:------:|:-----:|:----:|:-----:|:-----:| | Perplexity | 36.385 | 7.998 | 6.99 | 6.852 | 6.814 | MxMoE(W5A5-RTN): **7.160** Conclusion: 1. **Quarot also shows a marked improvement when moving to a W5A5 configuration compared to W4A4** 2. Quarot still underperforms relative to MxMoE. This difference is attributable to MxMoE's ability to **identify sensitive parameters and allocate higher precision to protect them (e.g. 
8-bit for sensitive activations, 4-bit for non-sensitive activations)**, thereby better mitigating the quantization perturbation. 3. MxMoE achieves wall-clock time reduction under the W5A5 setting through its mixed-precision strategy, while **uniform W5A5, W6A6, and W7A7 quantization schemes are not supported by modern hardware (e.g., GPUs)**. We sincerely appreciate your prompt, thoughtful feedback and particularly value your insightful observation about methodological details. In the revised version, we will include an in-depth case study that discusses these experimental results and further illustrates the effectiveness of MxMoE. Thank you again for your constructive feedback and for engaging in this detailed review process. We are happy to provide any additional information or experiments to address any remaining concerns! [1] https://proceedings.neurips.cc/paper_files/paper/2024/hash/b5b939436789f76f08b9d0da5e81af7c-Abstract-Conference.html
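The sensitivity-guided bitwidth allocation that recurs throughout this thread (8 bits for the most quantization-sensitive activations, 4 bits for the rest, under an average-bit budget) can be sketched as a toy greedy rule. This is a hypothetical illustration (the function name, sensitivity scores, and greedy heuristic are invented), not MxMoE's actual ILP-based allocator:

```python
def allocate_bits(sensitivities, budget_avg=5.0, low=4, high=8):
    """Toy greedy allocator: give `high` bits to the most quantization-
    sensitive tensors and `low` bits to the rest, keeping the mean
    bitwidth within `budget_avg` (hypothetical; MxMoE solves an ILP)."""
    n = len(sensitivities)
    # How many tensors can be promoted to `high` bits within the budget.
    n_high = int(n * (budget_avg - low) / (high - low))
    order = sorted(range(n), key=lambda i: sensitivities[i], reverse=True)
    bits = [low] * n
    for i in order[:n_high]:
        bits[i] = high
    return bits

# Four tensors with an average budget of 5 bits: only the most sensitive
# tensor is promoted to 8 bits; the others stay at 4 bits (mean = 5.0).
bits = allocate_bits([0.9, 0.1, 0.8, 0.2])
print(bits)  # [8, 4, 4, 4]
```

With `budget_avg=4.0` the toy rule degenerates to uniform 4-bit allocation, matching the intuition that the extra average bit is what buys the allocation flexibility.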
Summary: The paper introduces MxMoE, a mixed-precision quantization framework tailored for Mixture-of-Experts (MoE) models, aiming to address deployment challenges posed by their large memory footprint and computational demands. Key insights include: 1. Heterogeneous quantization sensitivity: Linear blocks within MoE experts exhibit varying sensitivity to bitwidth reduction. 2. Divergent expert activation frequencies: Computational characteristics (e.g., memory-bound vs. compute-bound operations) differ across experts. MxMoE optimizes bitwidth allocation at the linear-block granularity (rather than expert-level) and generates specialized GPU kernels for parallel execution of mixed-precision Group-GEMM operations. It formulates the problem as an Integer Linear Programming (ILP) task, balancing quantization-induced accuracy loss and runtime efficiency. Evaluations on models like DeepSeek-V2-Lite and Mixtral-8×7B show MxMoE achieves 2.4× lower perplexity than GPTQ at 2.25-bit and 3.4× speedup over full-precision execution, outperforming uniform quantization baselines. Claims And Evidence: Yes. Methods And Evaluation Criteria: The trade-off parameter $r$ requires tuning for different models and use cases, which is not thoroughly explored. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: There is a lack of validity analysis experiment for each part of the method, especially Micro-Kernel Specialization and Resource Configuration. Supplementary Material: No Relation To Broader Scientific Literature: Not sure. Essential References Not Discussed: Not sure. Other Strengths And Weaknesses: 1. Novel granularity: Linear-block-level quantization allocation improves accuracy-efficiency trade-offs compared to expert-level or uniform approaches. 2. Hardware-algorithm co-design: Integrates parameter sensitivity, activation patterns, and hardware constraints (e.g., roofline model) into a unified framework. 3. 
Performance gains: Demonstrates significant improvements in perplexity and throughput across multiple models and workloads (memory/compute-bound). Other Comments Or Suggestions: No. Questions For Authors: Please see "Experimental Designs Or Analyses" and "Methods And Evaluation Criteria" Code Of Conduct: Affirmed. Overall Recommendation: 3
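The ILP balancing of quantization-induced accuracy loss against runtime efficiency, as summarized in this review, can be illustrated with a toy exhaustive search. Everything here (the per-block costs, the candidate sets, and the $r$-weighted objective) is invented for illustration and is not the paper's actual formulation:

```python
from itertools import product

# Per-block candidate configs as (quant_loss, runtime_cost) pairs:
# a fast-but-lossy option versus a slow-but-accurate one per linear block.
candidates = [
    [(0.50, 1.0), (0.05, 2.0)],   # block 0
    [(0.40, 1.0), (0.02, 2.5)],   # block 1
]

def solve(r):
    """Pick one config per block minimizing an r-weighted blend of total
    quantization loss and total runtime cost (toy stand-in for the ILP)."""
    return min(
        product(*candidates),
        key=lambda cfg: r * sum(c[0] for c in cfg)
                        + (1 - r) * sum(c[1] for c in cfg),
    )

# r -> 1 prioritizes accuracy; r -> 0 prioritizes speed.
assert solve(1.0) == ((0.05, 2.0), (0.02, 2.5))
assert solve(0.0) == ((0.50, 1.0), (0.40, 1.0))
```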
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s constructive feedback. Below, we address the two key concerns raised: 1. Exploration of Trade-off Parameter $r$ for Different Models and Use Cases Our ablation study (Figure 6) explores the impact of the hyper-parameter $r$ on model accuracy and hardware efficiency. Our analysis demonstrates that increasing $r$ prioritizes accuracy (e.g., lower perplexity) at the expense of efficiency (e.g., lower throughput). **MxMoE can automatically adjust $r$ according to the use case**: there are typically constraints such as memory budget, precision budget, and latency budget in model quantization, and MxMoE satisfies these constraints by automatically adjusting $r$. Additionally, users may manually tune $r$ to balance hardware efficiency and model accuracy, selecting a suitable value based on their specific requirements. Basically, if a user prioritizes higher accuracy, $r$ should be set closer to 1, whereas if better speed is desired, $r$ should be closer to 0. While Figure 6 presents results for a single model, we emphasize that the observed trend is consistent across all evaluated architectures. **For simplicity and reproducibility, we adopt $r=0.75$ as the default value in all experiments, except for low-bit weight-only quantization**, where we set $r=1.0$ (Lines 363, 373, 378). This exception aligns with edge deployment scenarios, where memory constraints dominate, making it crucial to maximize compression while preserving accuracy. 2. Validity Analysis for Method Components **Micro-Kernel Specialization** MxMoE mitigates the combinatorial explosion of mixed-precision configurations by automating kernel generation (Line 289). To illustrate the effectiveness of our approach, we compare our strategy with two alternative solutions: - **Developing a universal kernel to handle all precision combinations**: This approach would compromise kernel performance. 
We provide a breakdown demonstrating the limitations of this method relative to our micro-kernel specialization. Specifically, the kernel for W4A4-per-channel could theoretically share the same software pipeline with W4A4-group128, but enforcing universality significantly degrades performance (test shape $[8192, 8192, 8192]$): | Kernel Type | W4A4_per-channel TOPS | W4A4_group128 TOPS | |-| - |-| | W4A4_per-channel(Specialized) | 1070.5303| N/A | | W4A4_group128(Specialized)| N/A | 667.3349| | Unified Kernel| 929.1997 | 412.0268 | Unifying the two pipelines requires introducing runtime condition checks, which hinder loop unrolling in the MAC-loop. Moreover, to support group-size=128, the per-channel kernel’s tile-size selection is constrained, making configurations such as tile_k=256 infeasible. - **Developing separate kernels for each configuration**: While handcrafted kernels could match performance, they require substantial engineering effort. If a given hardware platform supports five quantization candidates (e.g., w2a6, w4a16, w8a8, w4a4, w4a4 with group-size 128), implementing individual kernels for all possible configurations would require $5!=120$ kernels! In contrast, our micro-kernel specialization approach requires implementing only 5 configurable micro-kernels, which are automatically combined by the kernel generator to form optimized fused operators. **Resource Configuration** To resolve resource mismatches caused by heterogeneous micro-kernels (e.g., varying warp counts per thread block, as shown in Figure 4), we employ the following strategies: - Max-resource allocation: Ensuring all parallel units reach the resource ceiling for correctness. - Slice-K optimization: This optimization improves shared memory utilization and achieves a speedup compared to a baseline without careful configuration. We provide a case study: W4A16 per-channel quantization with/without resource configuration optimization (test shape $[16, 8192, 8192]$). 
| Method | Best Config| W4A16 TOPS | | -- | - | -- | | $\mathrm{MxMoE}$ | Tile: [16, 64, 128]; Warp: [1,2,2] | 74.8983 | | $\mathrm{- Resource Configuration}$ | Tile: [16, 128, 128]; Warp: [1,4,1] | 52.4288 | The result shows that with resource configuration optimization, MxMoE automatically discovered a better tile configuration (assigning 2 warps along the k-dimension, which is not a common configuration in previous kernels such as Marlin [1]), which brings a 42% speedup. We will enhance the revised manuscript with additional ablation studies and quantitative comparisons to reinforce these claims. We sincerely appreciate the reviewer’s valuable suggestions, which we believe further strengthen the technical rigor and reproducibility of our work. [1] Frantar, Elias, et al. "Marlin: Mixed-precision auto-regressive parallel inference on large language models." Proceedings of the 30th ACM SIGPLAN 2025.
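The kernel-count arithmetic from the micro-kernel discussion above can be reproduced as a quick check (a toy verification, not part of the paper's tooling):

```python
import math

# Handcrafted kernels for every arrangement of 5 quantization candidates,
# as counted in the rebuttal, versus only 5 configurable micro-kernels
# under the proposed specialization approach.
handcrafted = math.factorial(5)
micro_kernels = 5
print(handcrafted, micro_kernels)  # 120 5
```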
Revisiting Convergence: Shuffling Complexity Beyond Lipschitz Smoothness
Accept (poster)
Summary: This paper establishes convergence of random reshuffling methods under an $\ell$-smoothness condition, where $\ell$ is a function rather than a constant. Claims And Evidence: New results that improve our understanding of random reshuffling approaches are potentially interesting, as random reshuffling is heavily used in practice. This paper does a good job of reviewing existing literature and placing their result in the right context. Essentially they broaden the coverage of convergence results for RR methods to a new smoothness criterion that wasn't previously covered. It is a fairly general condition and it covers several problems of interest. In terms of impact, although this is a useful contribution to the literature it's not ground breaking, and doesn't fundamentally change our understanding of RR methods, so I would lean towards an accept but not a strong accept. Methods And Evaluation Criteria: N/A Theoretical Claims: I have not checked any theory from the Appendix. Results in the main body of the paper are precisely stated and make sense. Experimental Designs Or Analyses: Experiments seem reasonable and address the content of the paper. Error bars are shown, graphs are clear. For this sort of theory paper experiments are optional and don't need to address large scale problems. Supplementary Material: No Relation To Broader Scientific Literature: See Above Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Please check that all uses of the term Lipschitz refer to Lipschitz smoothness, as there is a distinction between a function being Lipschitz and Lipschitz smooth. In some places (Line 80) when a function is described as non-Lipschitz the authors mean non-Lipschitz smooth. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
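For readers unfamiliar with the condition, a hedged sketch of generalized $\ell$-smoothness in the form used by Li et al. (2023), cited elsewhere in these reviews, is given below; the paper's exact definition may differ:

```latex
% One common formalization of generalized smoothness (following
% Li et al., 2023; the paper's exact definition may differ):
% for a nondecreasing, sub-quadratic \ell : [0,\infty) \to (0,\infty),
% a twice-differentiable f is \ell-smooth if
\|\nabla^2 f(w)\| \le \ell\big(\|\nabla f(w)\|\big) \quad \text{for all } w.
% Special cases: \ell(u) \equiv L recovers standard L-smoothness, and
% \ell(u) = L_0 + L_1 u recovers (L_0, L_1)-smoothness.
```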
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contribution and careful attention to details. We will make modifications as suggested by the reviewer.
Summary: This paper investigates shuffling-type gradient methods without assuming Lipschitz smoothness, which is often not satisfied in practice for many machine learning models. The authors propose stepsize strategies that enables convergence guarantees under a more general smoothness condition called "$\ell$-smoothness." They prove convergence rates for nonconvex, strongly convex, and general convex cases under both random reshuffling and arbitrary shuffling schemes. The paper provides theoretical contributions that match or extend existing best-known rates while requiring weaker smoothness assumptions, and validates the approach with experimental results. Claims And Evidence: The central claims about convergence rates under ℓ-smoothness are supported by theoretical analyses. While the authors demonstrate through counterexamples why Lipschitz smoothness is too restrictive in practice, the practical impact of their theoretical contribution is less clear. Given that prior work has already established that shuffling SGD can outperform vanilla SGD, the experimental value in this paper should be focused differently. The core theoretical contribution is extending convergence guarantees to the ℓ-smoothness setting, which is more general than Lipschitz smoothness. A more meaningful experimental approach would have been to: 1. Demonstrate convergence on problems that specifically violate Lipschitz smoothness but satisfy ℓ-smoothness, which they do attempt with their DRO and phase retrieval examples, but remains unclear for the Image Classification. 2. Compare performance when using their theoretically-derived stepsizes versus other common choices to validate whether their analysis leads to practically superior parameter settings. 3. 
Test the limits of the approach by examining problems with varying degrees of non-Lipschitz behavior (different values of the parameter p in the ℓ-smoothness condition) Simply showing that shuffling-type methods outperform vanilla SGD repeats what's already known rather than validating the unique aspects of this paper's theoretical contribution. The experiments would be more compelling if they directly connected to the paper's novel analytical insights rather than reestablishing the general benefits of shuffling. Methods And Evaluation Criteria: The proposed methods are theoretically sound, but the assumptions are under discussed. Notably, Assumption 4.3 (which relates the variance of component gradients to the norm of the full gradient) appears unconventional, as I have not encountered it in previous literature, and it potentially represents a strong constraint on the problem class. The experimental findings are discussed in the Claims And Evidence section of this review. Theoretical Claims: I roughly checked the proof of Theorem 4.4, and it appears rigorous. Experimental Designs Or Analyses: The experimental section provides some validation of the theoretical results, except for some limitations proposed already. Supplementary Material: I examined the initial sections of the supplementary material, reviewing the proofs and supporting lemmas up through Theorem 4.4. Relation To Broader Scientific Literature: The work builds on existing literature in shuffling-type gradient methods and generalized smoothness conditions. While it makes connections to both areas, the novel contribution is primarily in combining these two streams of research rather than developing fundamentally new insights in either area. Essential References Not Discussed: I am not aware of any essential references that have been overlooked in this submission. Other Strengths And Weaknesses: Strengths: 1. The paper addresses an important gap by relaxing the Lipschitz smoothness assumption 2. 
The theoretical results are mathematically sound and extend previous work. In particular, the analysis involving stopping time seems to be a novel contribution. 3. The experimental results do show some benefits of shuffling-type methods Weaknesses: 1. Except for the novel usage of stopping time, the theoretical novelty is somewhat incremental, building heavily on existing approaches 2. The dependency on 1/δ is polynomial rather than logarithmic, which is a significant limitation 3. The experimental validation doesn't sufficiently isolate the impact of the paper's specific contributions, and the connection between theory and practice is not firmly established 4. The constants G' in the arbitrary shuffling results could be impractically large Other Comments Or Suggestions: The proof techniques would benefit from more detailed exposition in the main text, while the supplementary proofs could be refined for clarity and precision. Additionally, providing more intuitive explanations and practical discussions about why ℓ-smoothness matters in real-world applications would significantly strengthen the motivation for this work. Questions For Authors: Do those counterexamples satisfy (L0, L1)-smoothness? What is the motivation to further relax (L0, L1)-smoothness to $\ell$ smoothness? Can the (theoretical/empirical) results be generalized to other common strategies, like diminishing stepsizes? Code Of Conduct: Affirmed. Overall Recommendation: 3
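For context on the question about Assumption 4.3: bounded-variance conditions in this literature are typically written as below, and a growth-relaxed variant is one plausible form of an assumption "relating the variance of component gradients to the norm of the full gradient" (hypothetical notation; the paper's exact statement may differ):

```latex
% Standard bounded component variance for F(w) = (1/n) \sum_i F_i(w):
\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla F_i(w) - \nabla F(w)\big\|^2 \le \sigma^2 .
% A relaxed growth variant tying the variance to the full gradient norm
% (one plausible reading of Assumption 4.3 as paraphrased in this review):
\frac{1}{n}\sum_{i=1}^{n}\big\|\nabla F_i(w) - \nabla F(w)\big\|^2
  \le \sigma^2 + \kappa\,\big\|\nabla F(w)\big\|^2 .
```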
Rebuttal 1: Rebuttal: Thanks for the careful reading and constructive feedback. Below we respond to each point in detail: **Assumption 4.3:** Since component gradients behave as gradient estimates of the full gradient, this assumption can be viewed as a generalization of the more common assumption $\mathbb{E}[\|\nabla F(w;\xi)-\nabla F(w)\|^2]\leq \sigma^2$, which is used in most $\ell$-smooth work, e.g., [1], [2]. **W1:** We want to mention that another key novelty is our analysis for the condition $t<\tau$, where each step's analysis must account simultaneously for both the event $t<\tau$ and the preceding steps within the same epoch due to shuffling. This dual conditioning has not been addressed by prior work that focuses only on either shuffling or $\ell$-smoothness. Additionally, Assumption 4.3 is relaxed compared to other $\ell$-smooth literature. **W2:** Achieving a logarithmic dependency on $1/\delta$ remains an open challenge under $\ell$-smoothness, and establishing such a result would constitute a significant advancement. As noted in Remark 4.5, most existing works under $\ell$-smooth assumptions exhibit polynomial dependencies on $1/\delta$. Intuitively, under $\ell$-smoothness, the parameter $\delta$ accounts for bad cases from both variance and smoothness, whereas $L$-smooth scenarios only consider the former. Thus, significantly improving this dependency is nontrivial and would require substantial theoretical breakthroughs. **W3:** We agree that the experiments could better support our theory following the advice in the 'Claims And Evidence' section. However, this can be hard to implement in practice, since the function $\ell$ and constant $p$ are hard to determine. We leave it as future work to design a better algorithm that works in practice. **W4:** We agree with that. As we mentioned in Section 4.3, the potentially large constant $G'$ in Theorems 4.9 and 4.12 is indeed undesirable. 
These theorems are intended as initial steps toward a broader analysis without any variance assumptions. **Q1:** These counterexamples generally do not necessarily satisfy the $(L_0,L_1)$-smoothness conditions. One motivation to move from $(L_0,L_1)$-smoothness to generalized $\ell$-smoothness is to cover functions such as double exponential functions $f(x)=a^{b^x}$ or rational functions $f(x)=P(x)/Q(x)$, where $P$ and $Q$ are polynomials, which satisfy the $\ell$-smoothness condition but not $(L_0,L_1)$-smoothness. **Q2:** Yes, we can use the theoretical results and give diminishing stepsizes. As suggested in Theorem 4.4, as long as the stepsizes satisfy the constraints, they can be either constant or diminishing. [1] Li, Haochuan, Alexander Rakhlin, and Ali Jadbabaie. "Convergence of Adam under relaxed assumptions." Advances in Neural Information Processing Systems 36 (2023): 52166-52196. [2] Xian, Wenhan, Ziyi Chen, and Heng Huang. "Delving into the convergence of generalized smooth minimax optimization." Forty-first International Conference on Machine Learning. 2024.
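The point in Q2 (that the stepsize constraints of Theorem 4.4 admit both constant and diminishing schedules) concerns the standard random-reshuffling loop, which can be sketched minimally as follows. The toy objective and stepsize are invented for illustration; this is not the paper's exact Algorithm 1:

```python
import random

def rr_sgd(grad_i, w, n, epochs, stepsize):
    """Random-reshuffling SGD on F(w) = (1/n) * sum_i F_i(w).
    `stepsize(t)` may be constant or diminishing in the epoch index t."""
    for t in range(epochs):
        perm = list(range(n))
        random.shuffle(perm)          # fresh permutation each epoch (RR)
        for i in perm:
            w = w - stepsize(t) * grad_i(i, w)
    return w

# Toy quadratic components F_i(w) = 0.5 * (w - a_i)^2; minimizer = mean(a).
a = [1.0, 2.0, 3.0, 4.0]
grad = lambda i, w: w - a[i]
random.seed(0)
w_final = rr_sgd(grad, 0.0, n=len(a), epochs=200, stepsize=lambda t: 0.1)
# w_final approaches mean(a) = 2.5, up to an O(stepsize) neighborhood
```

Swapping `stepsize=lambda t: 0.1` for a diminishing schedule such as `lambda t: 0.5 / (t + 1)` leaves the loop unchanged, which is the flexibility the rebuttal's Q2 answer refers to.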
Summary: - Most of the existing literature on shuffling SGD establishes the convergence rate under traditional gradient lipschitz continuity, which is a condition that does not hold in neural networks. To address this problem, the paper studies the convergence rate of shuffling SGD under the generalized smoothness assumption. - The paper proves the convergence rates of shuffling SGD for strongly convex, convex, and nonconvex functions, each under both random reshuffling (RR) and arbitrary shuffling schemes. The derived convergence rates recover the current best-known convergence rates established under the traditional gradient Lipschitzness assumption. Finally, the paper validates its theoretical findings by toy experiments. --- **Update after Rebuttal** Firstly, I apologize for not being actively engaged during the rebuttal phase. Below, I summarize my current perspective on the paper: - My initial main concern regarding this paper was related to the technical novelty; I originally viewed the paper as combining existing techniques from shuffling SGD and general Lipschitz continuity. After carefully reviewing the authors' response and revisiting the paper, I find that the authors have addressed my concerns effectively. - The authors' response for the dependency on $p$ in Theorems 4.9 and 4.12 fully resolves my concern. However, the current paper remains slightly confusing. Specifically, in Section 4.1 (nonconvex case), both RR and arbitrary permutation schemes employ the bounded gradient assumption. Yet, for the strongly convex and non-strongly convex cases, RR results are presented under the bounded gradient assumption, whereas results for the arbitrary permutation scheme omit this assumption. If there is no particular reason for excluding the theorems on arbitrary scheme under bounded gradient assumption, I suggest the authors include these as well. Overall, I raise my score to 3 and lean toward acceptance. 
Claims And Evidence: Most claims are well-supported by theorems and propositions. Methods And Evaluation Criteria: This paper is purely theoretical and does not involve empirical evaluation or benchmark datasets. Theoretical Claims: I briefly checked the proofs of Theorems 4.4, 4.6, 4.8, and 4.9, and did not identify any significant flaws. Experimental Designs Or Analyses: The paper includes experimental results, but they are primarily intended to verify the theoretical findings rather than serve as a main contribution. As a result, I did not thoroughly review the experimental design and analysis. Supplementary Material: Since this paper is purely theoretical, I did not review the supplementary material. Relation To Broader Scientific Literature: While most prior work on shuffled SGD, such as results on random reshuffling, assumes Lipschitz-smooth gradients to establish convergence rates, this paper generalizes the framework by considering a weaker smoothness assumption. This generalization enhances the applicability of shuffling SGD, particularly in the context of neural network training. Essential References Not Discussed: Essential references are appropriately cited and discussed in the paper. Other Strengths And Weaknesses: Strengths - The paper derives the convergence rate of shuffling SGD under a weaker assumption (generalized smoothness) than prior works (Lipschitz continuous gradient). - The paper establishes the convergence rates for nonconvex, convex, and strongly-convex cases. The proofs seem correct and sound. - The gradient assumption (Assumption 4.3) is relaxed compared to prior works on shuffling SGD. Weakness - The main weakness of this paper is the novelty. In my view, this paper is simply a combination of two well-established topics—shuffling SGD and generalized smoothness—without introducing new technical analysis. 
In my understanding, the proofs of the theorems rely on the introduction of the random variable $\tau$ to leverage local smoothness properties (just like [Li et al., 2023]), which are then applied to existing lemmas from the shuffling SGD literature. As a result, this work seems to be a straightforward application of existing results rather than a fundamentally novel contribution. - While this paper expresses the convergence rates in terms of $n$ and $\epsilon$, recent studies on shuffling SGD ([Liu et al., 2024, Koloskova et al., 2024]) provide a more detailed characterization of convergence rates, including additional parameters such as $F(w_0)-F^*$, $\mu$ (strong convexity), $\sigma$. It would improve the completeness of the results if this paper also expressed the rates in a similarly detailed manner. Other Comments Or Suggestions: Typos: - In line 191R, “and” is missing in the end of the line - In line 258R, “ondom” → “on dom” - In line 285L, “Algorithm 1 arbitrary scheme” → “Algorithm 1 with arbitrary scheme” Questions For Authors: Q1. I have a question regarding Theorems 4.9 and 4.12. Unlike Theorems 4.4, 4.6, 4.8, and 4.11, these two theorems do not include $p$ in their convergence rates. Why is this the case? Are these convergence rates tight? It seems more natural for these theorems to also depend on $p$. Q2. In Theorem 4.9, is the step size choice $\eta_t = \frac{6 \log T}{\mu n T}$ correct? Compared to the step size in Theorem 4.8, this choice is smaller by a factor of $n$. Could you clarify this discrepancy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your detailed and helpful comments. Please find our responses below: **W1:** We appreciate your perspective but respectfully disagree that the paper is merely combining existing results. As we emphasize in Section 4.2, the main technical challenge arises specifically from analyzing the case when $t<\tau$. Each step in our analysis is conditioned not only on the event $t<\tau$ but also on all previous steps within the same epoch, due to the shuffling method. This dual conditioning has not been addressed by prior work that focuses only on either shuffling or $\ell$-smoothness. Additionally, our results hold under the relaxed gradient assumption (Assumption 4.3), and even without it, thereby broadening their scope compared to existing $\ell$-smooth literature. We hope the reviewer can reconsider the novelty of our contribution in this context. Moreover, even if the novelty aspect were debated, considering the popularity of shuffling algorithms and restrictive nature of Lipschitz smoothness assumptions, our contributions extend theoretical guarantees to a broader class of problems. Thus, we respectfully suggest that novelty alone should not be considered a decisive reason for rejection. **W2:** We agree and will explicitly add the additional parameters for completeness. Thank you for this suggestion. **Q1:** Thanks for bringing up this important point. We apologize for any confusion caused. Indeed, complexities in Theorems 4.9 and 4.12 should not be directly compared to those in Theorems 4.8 and 4.11 due to the potentially large constant $G'$, which implicitly depends on $p$, as discussed in Section 4.3. The results in 4.9 and 4.12 are meant to be initial analyses for scenarios without the variance assumption. 
For completeness, if we did apply variance Assumption 4.3 and follow proof in Theorem 4.6, we could derive corresponding complexities of $\mathcal{O}(n^{\frac{p}{2}+1}\epsilon^{-\frac{1}{2}})$ and $\mathcal{O}(n^{\frac{p}{2}+1}\epsilon^{-\frac{3}{2}})$ for strongly convex and non-strongly convex cases respectively, with arbitrary shuffling scheme. **Q2:** Thank you for catching this typo. You are correct—the proper step size should be $\eta_t=\frac{6\log(T)}{\mu T}$. The learning rate currently listed in the draft mistakenly corresponds to the inner-loop step size. We appreciate your attention to details.
Summary: This paper studies convergence upper bounds of shuffling-based SGD on finite-sum minimization problems, focusing on random reshuffling SGD as well as SGD with arbitrary permutation (i.e., theorems that hold for any choice of permutations). The key contribution of this paper is the extension of standard (Lipschitz) smoothness assumption on the component functions to a generalized assumption named $\ell$-smoothness. For $\ell$-smooth functions with sub-quadratic $\ell$, the paper carries out convergence analysis of random reshuffling and arbitrary permutation-based SGD on nonconvex, strongly convex, general convex functions. Numerical experiments are carried out to compare performance of with-replacement SGD and shuffling-based SGD variants. ## Update after rebuttal As I stated in my Rebuttal Comment, the authors' response addressed most of the questions I had, and I raised my score to 3 to reflect this. Having said that, I believe the paper has room for improvement in terms of clarity; I hope that the authors reflect the clarifications (e.g., on the dependence of different rates on $p$) in the next revision. Claims And Evidence: - Comments on theoretical claims are deferred to the "Strengths and Weaknesses" section. - The authors present several experiments, but unfortunately the performance gap between with-replacement SGD vs shuffling-based variants does not look significant except Figure 1. I doubt it's fair to claim better performance of random-reshuffling SGD or fixed-shuffling SGD based on Figures 2 and 3. Also, for these experimental results to corroborate the theoretical results, random-reshuffling SGD should have converged the fastest across different settings; however, it seems that there is no clear winner. Methods And Evaluation Criteria: This paper does not propose a new method, and it analyzes existing methods theoretically under a relaxed set of assumptions. 
There are empirical evaluation results comparing the convergence speed of different methods, and I think the criteria for evaluation are sound. Theoretical Claims: I unfortunately did not have the time to check the details of the proofs in the supplementary material. I hope the authors provide a better intuition on why the requirement on the number of epochs $T = \Omega (poly(1/\delta))$ is difficult to remove (which I point out below in the Strengths and Weaknesses section). Experimental Designs Or Analyses: The design of experiments comparing the performance of with-replacement SGD and three variants of shuffling-based SGD looks quite standard to me, hence no issue identified. Supplementary Material: The supplementary material is mainly about omitted proofs. I unfortunately did not have the time to check the details of the proofs in the supplementary material. Relation To Broader Scientific Literature: This paper studies popular variants of SGD, so it may have some broader impact on other scientific areas that involve optimization. Essential References Not Discussed: The paper seems to include most of the essential references, but the authors should mention Mishchenko et al (2020) and Ahn et al (2020) when they discuss prior works on nonconvex optimization. Mishchenko et al are one of the first groups of people to prove convergence rates for nonconvex smooth optimization, and Ahn et al (2020) study nonconvex Polyak-Łojasiewicz functions. Also, it'd be more complete if the paper cites "Convergence of Random Reshuffling Under The Kurdyka-Łojasiewicz Inequality" by Li et al (2023). Other Strengths And Weaknesses: Strengths - Extending the existing analyses of shuffling-based SGD to a wider function class of $\ell$-smooth functions is definitely meaningful. - The authors honestly and directly discuss limitations of their results, which I appreciate. 
Weaknesses - First of all, to be strict, the paper violates the template because the authors omitted the placeholder for author names. Unlike other papers, I don't see "Anonymous Authors" right below the title, as well as the footnote (attached to the placeholder) at the bottom of page 1. - The biggest weakness of the results presented in this paper is the dependence of the epoch count $T$ on $1/\delta$. Each of the theorems on random reshuffling comes with an unfortunate $T = \Omega(\mathrm{poly}(1/\delta))$ requirement on $T$, implying that for tiny choices of $\delta$ we would need to run a ton of epochs to meet the requirement. Given that dependencies on $1/\delta$ in high-probability guarantees are typically poly-logarithmic, this is a big shortcoming in my opinion. The paper seems to build upon Theorem 5.3 of Li et al (2023a), and I understand that the existing theorem shares the same limitation. However, this doesn't mean that the improvement of poly to polylog dependence on $1/\delta$ is impossible, so the authors should discuss this in the paper. - Some of the theorems have missing assumptions, although minor. In Theorem 4.11, we need to additionally assume that a finite $w_*$ exists. For Theorem 4.12, is $G'$ always guaranteed to be finite even when the sublevel set is unbounded? I guess so, but I'm not 100% sure. - I find the proof sketch confusing. The authors claim that they demonstrate that smoothness is maintained with high probability along the training trajectory, but smoothness is a global property of a function, and has nothing to do with a specific trajectory. Other Comments Or Suggestions: - In Line 102, the term RGA is used before being introduced a few lines below. Questions For Authors: 1. Is there any hope for improving dependencies on $1/\delta$? 2. For Theorems 4.9 and 4.12, why do these theorems have no dependence on $p$?
Due to the absence of $p$, it looks to me that the epoch complexity of any arbitrary scheme is better than random reshuffling whenever $p > 1$, which does not sound right to me. Can you clarify why? 3. After most theorems, the authors discuss step size choices and the resulting epoch/gradient computation complexities. It is quite difficult to follow why these particular choices of step sizes are used and how different dependencies are derived. Specifically, I failed to derive the $-\frac{p}{2-p}$ terms that arise in the exponents of $\delta$ when specifying the dependency of $T$ on $1/\delta$. Can you elaborate on such derivations? 4. When talking about "one possible step size" in Lines 207 and 242, I see no dependence on $p$, which is different from the choice of $\eta$ in Lines 198 and 238 that involve $p$. Are the authors talking about the special case of $p=0$ here? Code Of Conduct: Affirmed. Overall Recommendation: 3
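For readers unfamiliar with the schemes this review compares, the update rules are easy to state; below is a generic toy sketch (ours, on a least-squares instance, not the paper's experiments) contrasting random reshuffling, which redraws the pass order each epoch, with an arbitrary-permutation scheme, which would fix or adversarially choose the order:

```python
import numpy as np

# One epoch of shuffling-based SGD on f(w) = (1/n) sum_i f_i(w) with
# least-squares components f_i(w) = 0.5 * (a_i . w - b_i)^2.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d)          # consistent system, so the optimum fits exactly

def grad(w, i):
    return (A[i] @ w - b[i]) * A[i]

def epoch(w, eta, perm):
    """One pass over all n components in the order given by `perm`."""
    for i in perm:
        w = w - eta * grad(w, i)
    return w

w = np.zeros(d)
for _ in range(200):
    # Random reshuffling: fresh permutation every epoch.  An arbitrary
    # permutation scheme would instead use a fixed (or adversarial) order;
    # with-replacement SGD would sample indices i.i.d. instead.
    w = epoch(w, 0.01, rng.permutation(n))

assert np.linalg.norm(A @ w - b) < 1e-2   # converged on this toy problem
```

The paper's theorems concern how many such epochs $T$ are needed under $\ell$-smoothness; this sketch only fixes the mechanics being discussed.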
Rebuttal 1: Rebuttal: Thanks for the careful reading and constructive feedback. Below we respond to each point in detail: **Q1:** Achieving a logarithmic dependency on $1/\delta$ remains an open challenge under $\ell$-smoothness, and establishing such a result would constitute a significant advancement. As noted in Remark 4.5, most existing works under $\ell$-smooth assumptions exhibit polynomial dependencies on $1/\delta$. The only exception is in Theorem 4.1 of [1], where independence between steps allows for a log dependence; however, they turn to polynomial dependence in Theorem 6.2 a few pages later, after the independence is lost. That means no logarithmic complexity has been achieved with dependence between steps under $\ell$-smoothness. Intuitively, under $\ell$-smoothness, the parameter $\delta$ accounts for bad cases from both variance and smoothness, whereas $L$-smooth scenarios only consider the former. Thus, significantly improving this dependency is nontrivial and would require substantial theoretical breakthroughs. We respectfully ask the reviewer to reconsider whether this aspect should be regarded as a major weakness, given the current state-of-the-art under similar assumptions. **Q2:** Thanks for highlighting this; we apologize for the confusion. The complexity bounds in Theorems 4.9 and 4.12 should not be directly compared to those in Theorems 4.8 and 4.11. The constant $G'$ in Theorems 4.9 and 4.12 can potentially be very large (as we mention in Section 4.3), and the bounds are implicitly influenced by $p$ through $G'$. These results serve as initial steps toward analyzing cases with no variance assumptions whatsoever.
For completeness, if we were to adopt the variance Assumption 4.3 (similar to Theorem 4.6), we could derive results following the proof of Theorem 4.6, yielding total gradient evaluation complexities of $\mathcal{O}(n^{\frac{p}{2}+1}\epsilon^{-\frac{1}{2}})$ and $\mathcal{O}(n^{\frac{p}{2}+1}\epsilon^{-\frac{3}{2}})$ for strongly convex and non-strongly convex cases with an arbitrary scheme, respectively. **Q3:** To clarify the dependency of $T$ on $\delta$: note first that we have $H=\mathcal{O}(\delta^{-1})$, which implies $G=\mathcal{O}(\delta^{-\frac{1}{2-p}})$ and consequently $L=\mathcal{O}(\delta^{-\frac{p}{2-p}})$. Combining these with the constraints $\eta^3 T \leq \mathcal{O}(L^{-2})$ and $T \geq \frac{32\Delta_1}{\eta\delta\epsilon^2}$ gives the specified polynomial dependencies. Sorry for the confusion here; we will clarify this reasoning further in the appendix of a revision. **Q4:** Not really. The parameter $p$ is implicitly included within $L$, as we have $L=\mathcal{O}(n^{\frac{p}{2}})$. **W3:** Thanks for catching the missing assumption here. $G'$ is indeed finite, but we need to add a short proof of that; we will add both in a revision. **W4:** Sorry for the confusion from the wording. By "smooth along the training trajectory," we meant that smoothness conditions hold between consecutive points during training, not necessarily globally across the entire trajectory. We recognize this wording was imprecise and will clarify this point explicitly to avoid misunderstandings. We greatly appreciate the reviewer’s insights and suggestions, which have significantly improved the clarity of our manuscript. [1] Li, H., Rakhlin, A., Jadbabaie, A. (2023). Convergence of Adam under relaxed assumptions. --- Rebuttal Comment 1.1: Comment: Thank you very much for the response.
The response addresses most of the questions I had, and I hope that the authors reflect the clarifications in the next revision (i.e., making the $p$ dependencies more explicit and making the RR vs arbitrary scheme rates more comparable). Although I still have some reservations due to the dependence on $1/\delta$, I feel more positive about the paper. I have decided to raise my score to 3. One more suggestion: The paper's title is a bit too general and does not clearly reflect the key messages/contributions. I recommend changing the title in the next revision. --- Reply to Comment 1.1.1: Comment: Thank you so much for the positive reconsideration and helpful suggestions! We will definitely address these points explicitly in the revised version and get a clearer title for the paper. Thanks again for your time and insights.
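For concreteness, the constraint combination sketched in the authors' answer to Q3 can be unpacked as follows. This is our own rough reconstruction (constants dropped, and the exponent of $\epsilon$ is illustrative; the exact exponents depend on the theorem in question), showing one way a $-\frac{p}{2-p}$ term enters the exponent of $\delta$:

```latex
% Take the largest step size permitted by the first constraint:
%   \eta^3 T \le \mathcal{O}(L^{-2}) \;\Rightarrow\; \eta = \Theta\!\big((L^2 T)^{-1/3}\big).
% Substituting into T \ge \tfrac{32\Delta_1}{\eta\delta\epsilon^2}:
T \;\gtrsim\; \frac{(L^2 T)^{1/3}}{\delta\epsilon^2}
\;\Longrightarrow\;
T^{2/3} \;\gtrsim\; \frac{L^{2/3}}{\delta\epsilon^2}
\;\Longrightarrow\;
T \;\gtrsim\; \frac{L}{\delta^{3/2}\epsilon^3},
\qquad
L = \mathcal{O}\big(\delta^{-\frac{p}{2-p}}\big)
\;\Longrightarrow\;
T = \Omega\big(\delta^{-\frac{3}{2}-\frac{p}{2-p}}\,\epsilon^{-3}\big).
```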
Compact Matrix Quantum Group Equivariant Neural Networks
Accept (poster)
Summary: The authors consider the problem of extending group equivariant neural networks to compact quantum groups. Compact quantum groups are a generalization of commutative groups. In commutative groups, the complex-valued functions on the group form a commutative C*-algebra. In contrast, the functions on compact quantum groups form a noncommutative C*-algebra. Non-commutative C*-algebras arise in many applications, including quantum mechanics and statistical mechanics, and have a non-commutative geometry associated with them. ## update after rebuttal Thank you for these potential applications. I have adjusted my score accordingly, but will defer to the other reviewers' judgment as I am not very familiar with the applications of this subject area. Claims And Evidence: The authors do a nice job explaining noncommutative geometry, and constructing a neural network for what the authors refer to as “easy” compact quantum groups. However, the authors should better explain why one might want to construct such a neural network: What are the applications of compact quantum groups and noncommutative geometries, and what are the applications of such a neural network? Methods And Evaluation Criteria: It is a bit unclear to me what would be the application of the authors' proposed neural networks, and what the main previous works they are improving on (even theoretical improvements). Moreover, it would be helpful to provide experiments, as this would allow comparison to the performance of previous work, and might also provide a real-world example where an improvement can be shown. This would make it easier to evaluate the significance of the paper. Theoretical Claims: I did not carefully check the proofs in the appendix.
Experimental Designs Or Analyses: It would be helpful to provide experiments, as this would allow comparison to the performance of previous work, and might also provide a real-world example where an improvement can be shown. This would make it easier to evaluate the significance of the paper. Supplementary Material: I did not carefully check the proofs in the appendix. Relation To Broader Scientific Literature: It is a bit unclear to me what would be the application of the authors' proposed neural networks, and what the main previous works they are improving on (even theoretical improvements). Explaining this better would make it easier to effectively evaluate the paper. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see answers to the above questions. Other Comments Or Suggestions: N/A Questions For Authors: It would be helpful if the authors could explain why one might want to construct such a neural network: What are the applications of compact quantum groups and noncommutative geometries, and what are the applications of such a neural network? Also, it would be helpful to give a more direct comparison and specify the improvements over prior work, if at all possible. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their critique of our work. We are glad that the reviewer has recognised our main theoretical contribution (namely, constructing a neural network for easy compact matrix quantum groups and characterising its weight matrices), and that they felt that we explained non-commutative geometry well, which was not a trivial task given its technical nature. The reviewer questions why one would want to construct such a neural network, as well as its potential applications. We wish to emphasise that the particular constructions behind our formalism have already appeared directly in real-world applications. In addition to the tasks that we listed in the introduction to our paper, we note that quantum symmetric groups $S_n^+$ have been studied in relation to the Potts model, a type of physical model in statistical mechanics, by Goswami and Hossain [1]; in elementary particle physics on $n$ half-spin particles by Woronowicz [2] (where the author notes on page 120 that interchanging particles is no longer given by the swap operator, and in effect states that the symmetry object is $S_n^+$ – this is discussed further in Weber [3, Section 3.2]); and they have also appeared in the study of non-local games in quantum information theory by Lupini et al. [4]. We have also noted in our response to Reviewer Admb that in AI, for data in the form of a set or a graph, modelling quantum symmetries with our networks could offer advantages over classical approaches that use group equivariant neural networks. We feel that we could form a concise argument along these lines to provide the more intuitive explanations and motivation for the theoretical results contained within our paper that the reviewer would like to see. The reviewer also asks us to give a more direct comparison and specify the improvements over prior work.
Our work builds upon a body of work in group equivariance where the authors derive weight tying schemes to guarantee equivariance to a particular group, such as [5, 6, 7, 8, 9]; however, the key difference and major contribution of our work is that we guarantee equivariance to compact matrix quantum groups, which generalise groups in some sense, by deriving weight tying schemes for certain compact matrix quantum groups (the “easy” ones). Studying equivariance to compact matrix quantum groups and obtaining a characterisation of these weight matrices is our main theoretical improvement over these previous works, since quantum group equivariance has not been explored before in the machine learning literature. The ability to model such equivariance is needed because, as the previously stated applications demonstrate, only quantum groups (and not groups) can model the relevant symmetries in these tasks. Finally, on the subject of experiments, we would like to emphasise that although our contribution is predominantly a theoretical one, we recognise that experiments on real-world data would provide valuable insights. Given that this is the first work on quantum group equivariance, we have struggled to find an appropriate dataset that we could apply these networks to (and consequently no benchmarks exist, hence even if we had an appropriate dataset we would not be able to demonstrate an improvement over any previous methods since they don’t exist). However, we look forward to demonstrating the broader potential of our methods in future work. [1] Debashish Goswami and S. K. Asfaq Hossain (2022). Quantum symmetry on Potts model. J. Math. Phys. 63, 043504. [2] Stanisław Lech Woronowicz, Twisted SU(2) Group. An Example of a Non-Commutative Differential Calculus. Publ. Res. Inst. Math. Sci. 23 (1987), no. 1, pp. 117–181 [3] Weber, M. (2020). Quantum symmetry. Snapshots of modern mathematics from Oberwolfach, 5.
[4] Lupini, M., Mančinska, L., Roberson, D.E.: Nonlocal games and quantum permutation groups. J. Funct. Anal. 279(5), 108592 (2020) [5] Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. (2017). Deep Sets. In Advances in Neural Information Processing Systems. [6] Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. (2019). Invariant and Equivariant Graph Networks. In International Conference on Learning Representations. [7] Ravanbakhsh, S., Schneider, J., and Poczos, B. (2017). Equivariance Through Parameter-Sharing. In Proceedings of the 34th International Conference on Machine Learning. [8] Godfrey, C., Rawson, M. G., Brown, D., and Kvinge, H. (2023) Fast computation of permutation equivariant layers with the partition algebra. In ICLR 2023 Workshop on Physics for Machine Learning. [9] Pearce-Crump, E. (2023) Brauer’s Group Equivariant Neural Networks. In Proceedings of the 40th International Conference on Machine Learning.
Summary: The authors propose a new type of equivariant neural network for handling symmetries described by compact matrix quantum groups and obtain new weight matrices for these groups. The new methods are motivated by the study of non-commutative geometry's symmetries (i.e., quantum symmetries) appearing in quantum groups. To be more precise, they use Woronowicz–Tannaka–Krein duality to construct neural networks equivariant to quantum symmetric data and characterize all the weight matrices needed for those models. Claims And Evidence: The claims are supported with appropriate evidence as far as I could understand the material Methods And Evaluation Criteria: Yes, no issues regarding this matter Theoretical Claims: I checked the theoretical claims and didn't find any issues as far as I understood the material Experimental Designs Or Analyses: No experiments Supplementary Material: No, unfortunately didn't check them Relation To Broader Scientific Literature: No issues regarding this matter Essential References Not Discussed: The authors did a comprehensive literature review. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: pros: - the paper introduces a new class of equivariant neural networks for compact quantum groups with applications in physics (explained in the paper) cons: - the material is super technical and less accessible to most of the community Questions For Authors: - line 143: what do you mean by having elements generating the algebra? can you clarify this a bit in the paper - what is the computational complexity of running Procedure 1? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We are pleased that they recognise that our work “introduces a new class of equivariant neural networks for compact quantum groups with applications in physics, as explained in the paper.” We are also glad that they found the claims to be supported with appropriate evidence, and that we gave a comprehensive literature review. While our work is necessarily technical due to its solid mathematical foundation, we have taken deliberate steps to enhance its accessibility. This includes providing concrete examples within the main text and appendix as well as a comprehensive appendix with all of the necessary background material to aid the general reader’s understanding. To answer the reviewer’s questions: - on line 143, we mean that the compact matrix quantum group A is the universal C*-algebra $C^*(u_{i,j})$ having no relations (as in Definition B.22) – we will clarify this in an amended version of the paper. - the computational complexity of Procedure 1 depends on several factors, including the specific compact matrix quantum group being considered, the implementation of the algorithm, and how the two-coloured diagrams are generated. Because of these variables, it is not possible to give a single, unified complexity measure. The complexity will generally be influenced by the size of the matrices and the specific structure of the compact matrix quantum group in question. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you so much for your response to my comments. I keep supporting this paper though my confidence is limited due to my relatively sparse knowledge in this field.
Summary: - This work addresses the limitation that group equivariant neural networks cannot learn from data with a non-commutative geometry. - Specifically, it derives the existence of compact matrix quantum group equivariant neural networks, a new type of equivariant neural network that encodes symmetries described by compact matrix quantum groups - It characterizes the weight matrices that appear in compact matrix quantum group equivariant neural networks for the easy compact matrix quantum groups, obtaining characterizations of equivariant weight matrices for some compact matrix groups that have not appeared in the previous literature. Claims And Evidence: All claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: It is a purely theoretical work, and the proposed compact matrix quantum group equivariant neural network can handle data with non-commutative geometries in theory. Theoretical Claims: I checked only the proofs for the claims and theorems in the main body of the paper, and found no errors. Experimental Designs Or Analyses: There is no experimental design or analysis entailed in this work. Supplementary Material: I did not review the appendix due to the limited time. Relation To Broader Scientific Literature: The main result of the paper is the derivation of a new class of neural networks, termed compact matrix quantum group equivariant neural networks, that allow for learning from data with intrinsically non-commutative characteristics. Essential References Not Discussed: No. Other Strengths And Weaknesses: - The paper has a top presentation quality. - The contribution of the paper is limited to the theoretical level, but still original and significant in that it can potentially give rise to neural networks that address data with non-commutative features. Other Comments Or Suggestions: I don't have any comments or suggestions.
Questions For Authors: - Could you please elaborate on how to train a compact matrix quantum group equivariant neural network with the standard backpropagation algorithm in the case of, for example, easy compact matrix quantum groups? - Are there any scenarios in the AI domain where quantum group equivariant neural networks could potentially be superior to group equivariant neural networks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful and encouraging critique of our work. We are delighted that they recognise the “original and significant” contributions that are contained in our paper, and that they found the paper to have a “top presentation quality”. We felt that the reviewer really understood the purpose of our paper and, in particular, how our work “addresses the limitation that group equivariant neural networks cannot learn from data with a non-commutative geometry.” To answer the reviewer’s questions: - The training of our proposed networks for the easy compact matrix quantum groups with the standard backpropagation algorithm works in exactly the same way as for a standard MLP: the weight matrices have numerical entries (weights) in the same way as standard MLPs – the only difference is that the requirement to achieve equivariance (for a particular easy compact matrix quantum group) induces a certain amount of weight tying in the weight matrices themselves (and we have shown exactly how these weight matrices are constructed in Theorem 7.10 of our paper). - While we anticipate that major applications of our networks will be in physics, we also see potential in AI domains. As we have discussed in our response to Reviewer sFDU, the underlying constructions of our formalism have already appeared in real-world applications where only quantum groups (and not groups) can model the relevant symmetries. Within AI, our work provides a quantum-symmetric analogue of established models such as Deep Sets [1] and Invariant and Equivariant Graph Networks [2], since, for example, $S_n^+$ is the quantum symmetric group on a set of $n$ objects, and $S_n^+$ can also be used to model the quantum symmetries of a graph with $n$ nodes. Hence, for data in the form of a set or a graph, modelling quantum symmetries with our networks could offer advantages over classical approaches that use group equivariant neural networks. 
[1] Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. (2017). Deep Sets. In Advances in Neural Information Processing Systems. [2] Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. (2019). Invariant and Equivariant Graph Networks. In International Conference on Learning Representations.
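The weight-tying mechanism this rebuttal describes can be made concrete in the classical special case it cites: for Deep Sets [1], every $S_n$-equivariant linear layer on $\mathbb{R}^n$ is forced into the two-parameter form $W = aI + b\mathbf{1}\mathbf{1}^\top$, and backpropagation only ever updates $a$ and $b$. A minimal numpy sketch (ours, not the authors' code; the quantum setting replaces permutation matrices with the quantum group's representation):

```python
import numpy as np

# Weight tying induced by equivariance, in the classical Deep Sets case:
# every S_n-equivariant linear map R^n -> R^n has the form
#   W = a * I + b * (1 1^T),
# so only the two scalars a and b are learnable, exactly as in a standard
# MLP with a fixed tying pattern.

n = 5
rng = np.random.default_rng(0)
a, b = rng.normal(size=2)               # the two learnable weights
W = a * np.eye(n) + b * np.ones((n, n))

# Equivariance check: applying W commutes with any permutation matrix P.
x = rng.normal(size=n)
P = np.eye(n)[rng.permutation(n)]
assert np.allclose(W @ (P @ x), P @ (W @ x))
```

Per the rebuttal, Theorem 7.10 of the paper plays the analogous role of supplying the fixed spanning set of matrices for each easy compact matrix quantum group.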
Summary: The authors present the definition of a class of equivariant neural networks which are equivariant with respect to a more general definition of group, namely a quantum matrix group. The strategy relies on describing the non-commutative geometry by using non-commutative C* algebras. The authors show that they explicitly derive weight representations for a class of simple quantum groups. Claims And Evidence: See below. Methods And Evaluation Criteria: - Theoretical Claims: There are several theoretical claims. I am not very familiar with the field but I did not encounter any issues. Experimental Designs Or Analyses: While there are some examples at the end of the paper, I do not seem to have found any experiment involving learning from data. I think this would be very important to justify the feasibility of the proposed method, since the claim is that it defines the weight matrices for a linear layer. Supplementary Material: I did not review the SI. Relation To Broader Scientific Literature: To make the paper more accessible to a more general audience, and to convey the message that the formalism is indeed useful to solve concrete problems in various fields, it would be good to review how some more standard work using (non-commutative) group theory or algebras can be understood from the perspective of the presented formalism, for instance - Le et al, "Parameterized hypercomplex graph neural networks for graph classification" - De Bortoli et al, "Riemannian score-based generative modelling" It would be interesting to see if there is any application or extension of the proposed formalism for generative modeling on non-commutative spaces (e.g., SO(N) here) - Bertolini et al, "Generative Modeling on Lie Groups via Euclidean Generalized Score Matching" - Zhu et al, "Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups" Essential References Not Discussed: - Other Strengths And Weaknesses: The paper is very well written despite its technical
nature. Other Comments Or Suggestions: - Questions For Authors: - Given that SO(3) (or any SO(N)) is a non-commutative group, how do traditional equivariant networks fit into the proposed construction? - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We are pleased that they found the paper to be “very well written despite its technical nature” and that they did not encounter any issues with our theoretical contributions. We are also pleased that the reviewer is excited about the potential applications of our work to generative modelling on non-commutative spaces: we would be very keen to see if the ML community is able to build upon the formalism that we have proposed in our paper to demonstrate such applications in practice. We welcome the feedback to try to make the paper more accessible to a more general audience. This has naturally been quite a challenging task given the high technical requirements that are needed to be able to understand its content – as the reviewer has noted, we have included a number of examples of the construction both in the main paper and the appendix and we have also supplemented them in the appendix with comprehensive background sections on all the material that is needed to understand the work. However, we will look for additional ways to help enhance the paper’s clarity, impact, and intuitive accessibility in a revised edition of our paper. We wish to highlight that although our contribution is predominantly a theoretical one, an empirical test for the feasibility of equivariance in the weight matrices is not required, since the weight matrices are derived mathematically and are proven to be equivariant – this is Theorem 7.10 in our paper. Our work establishes a theoretical foundation that enables future empirical studies, providing a framework for researchers to explore its applications in real-world machine learning tasks. 
The reviewer would like us “to convey the message that the formalism is indeed useful to solve concrete problems in various fields.” We wish to emphasise that the particular constructions behind our formalism have already appeared directly in real-world applications, which we have highlighted in our response to Reviewer sFDU. We note that in these applications, only quantum groups (and not groups) can model the relevant symmetries. We have also noted in our response to Reviewer Admb that in AI, for data in the form of a set or a graph, modelling quantum symmetries with our networks could offer advantages over classical approaches that use group equivariant neural networks. Finally, on the reviewer’s main question about $SO(3)$ and $SO(n)$: - In order for equivariant networks for $SO(3)$ (or any $SO(n)$) to fit into the proposed construction, we need the layers to be a tensor power of $\mathbb{R}^n$ or $\mathbb{C}^n$ – this is by the design of the proposed construction in Section 6. However, the characterisation of these networks has already been shown by Pearce-Crump in [1], so their work fits perfectly into our construction as a special case. We will add a comment on this point in the main paper itself. [1] Pearce-Crump, E. (2023). Brauer’s Group Equivariant Neural Networks. Proceedings of the 40th International Conference on Machine Learning, PMLR 202:27461-27482, 2023.
Bridging Layout and RTL: Knowledge Distillation based Timing Prediction
Accept (spotlight poster)
Summary: This paper presents a cross-stage knowledge distillation framework for timing prediction. Under this framework, the layout-aware teacher model distills layout characteristics to an RTL-level student model. Experimental results demonstrate significant improvement compared with other prediction models. Claims And Evidence: The claims seem supported by experiments. Experiments in Table 3 clearly demonstrate the benefit of both distillation and finetuning, with distillation being more important. Table 4 further demonstrates the importance of multi-granularity knowledge distillation. Methods And Evaluation Criteria: The benchmarks used seem unclear. The authors claimed to use 2k RTL designs sourced from open-source projects. No further details are presented. It also remains unclear why data similar to the open-source versions of prior work was not used. Theoretical Claims: The paper does not make any theoretical claims. Experimental Designs Or Analyses: Experiments seem solid. Ablation studies are well conducted. Supplementary Material: I did not fully review the supplementary materials. Relation To Broader Scientific Literature: The authors improve upon prior work by distilling layout-aware models to RTL code level models. Authors were able to present improvements compared with prior works. Essential References Not Discussed: None. Other Strengths And Weaknesses: The authors' idea of distilling layout-aware models to RTL-level models is novel and effective. However, the dataset and tests used in the paper remain unclear. More comparison with prior work could also benefit the paper. Other Comments Or Suggestions: None. Questions For Authors: (1) Can the authors clarify the dataset used for training/testing? Why not directly use already open-source data from prior work? Will new data (and models) from authors be open-sourced? (2) How did the authors conduct experiments with MasterRTL and RTL-Timer? Were they retrained on the same data?
(3) Can the authors compare with more existing prior work other than from Fang? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Dear Reviewer wjHP,** We sincerely appreciate the reviewer's positive remarks and valuable suggestions regarding our work, particularly concerning the sources of our benchmark data, the open-source plan, the fairness of comparisons with MasterRTL/RTL-Timer, and the discussion on comparison with other models. Below, we provide detailed answers to the points you raised. ## 1. Benchmark and Dataset Explanation We have carefully chosen not to rely exclusively on certain existing publicly available datasets (e.g., ISCAS-89, ITC-99) because they are limited in size and diversity relative to our specific RTL-Layout prediction goals. For some large circuits, such as large-scale CPUs, there may be many duplicate or essentially identical paths in the circuit; since timing analysis is primarily associated with paths, this makes the path distribution of the existing datasets not particularly satisfactory. We wanted to minimize the occurrence of duplicate paths in the datasets and to make the dataset distribution as even and diverse as possible, in order to allow more physical information to be learned by our model. Instead, we collected more circuits of **different types, functions, and sizes** to reflect real-world industrial needs, including small arithmetic blocks, DSP modules, RISC-V subsystems, etc. This makes the prediction task harder, but we think it's more practical, pervasive, industrially valuable, and closer to the needs of actual industrial processes. These 2000+ designs come from GitHub, OpenCore, Hugging Face, and the open-source RISC-V project, ensuring broad functionality and design complexity coverage. The construction and cleaning of the dataset took us close to 5 months (see the reply to Reviewer DZ14 for more information on Layout data construction). ## 2.
Open-Source Plan of Dataset and Codes Furthermore, we have partially anonymized and **released a subset of these designs and codes on public repositories (https://anonymous.4open.science/r/icml2025RTL-CBFD/)**; however, the size of back-end data and the space limitations on anonymous platforms prevent a complete release at this stage. We will expand our open dataset, aiming to foster community progress in AI4EDA. We will keep enhancing our open-source repository (both the dataset and the code) to facilitate replication and iterative improvement by the community. Thanks to its graph-oriented modeling and distillation-based learning, we strongly believe our framework can be generalized to more designs. ## 3. Comparisons with MasterRTL and RTL-Timer All results reported in the paper were derived by re-training and evaluating both MasterRTL [1] and RTL-Timer [2] on precisely the same dataset and evaluation protocols used by our method. Specifically: 1. Both are open-source implementations, and we reproduced them faithfully on our data, adhering to the authors' original procedures. 2. We used the **same train/validation/test partitioning and ground-truth labels** for all methods. 3. We evaluated using identical error metrics (MAPE, PCC, R²) for consistency. ## 4. Comparison with Other Models We appreciate your recommendation to compare more thoroughly with related methods. Given that MasterRTL and RTL-Timer are among the most competitive open-source approaches for RTL-level timing prediction, demonstrating improvements over them provides a fair and substantial benchmark. According to prior works (e.g., [3], [4]), they didn't show the same performance as MasterRTL and RTL-Timer (e.g., > 10% gap in MAPE). We chose MasterRTL and RTL-Timer as the strongest public baselines. ## Conclusion Thanks again for the reviewer's supportive review and for highlighting the significance of benchmark diversity, fair comparisons, and open-source work. 
We will refine the discussion on data sources, continue expanding our dataset for broader coverage, and work to open-source the code with the full dataset. We hope the clarifications and updates will further underscore the practicality and robustness of our method. *We greatly value the reviewer's feedback and look forward to any additional suggestions. Looking forward to getting a higher rating from you!* ## References [1] Fang, Wenji, et al. "Masterrtl: A pre-synthesis ppa estimation framework for any rtl design." *2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD)*. IEEE, 2023. [2] Fang, Wenji, et al. "Annotating slack directly on your verilog: Fine-grained rtl timing evaluation for early optimization." *Proceedings of the 61st ACM/IEEE Design Automation Conference*. 2024. [3] Xu, Ceyu, Chris Kjellqvist, and Lisa Wu Wills. "SNS's not a synthesizer: a deep-learning-based synthesis predictor." *Proceedings of the 49th Annual International Symposium on Computer Architecture*. 2022. [4] Sengupta, Prianka, et al. "How good is your Verilog RTL code? A quick answer from machine learning." *Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design*. 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal addressing the concerns regarding benchmark diversity. However, I do feel that incorporating more baselines would greatly strengthen the paper. It would also be interesting to see whether the proposed knowledge distillation approach would benefit other works/models such as [3,4]. --- Reply to Comment 1.1.1: Comment: **Dear Reviewer wjHP,** Thanks for your thoughtful insights on incorporating additional baselines and investigating how our knowledge distillation (KD) framework might assist other models. We agree that broadening baseline comparisons and exploring the general applicability of our KD approach will enrich the ML for EDA community. 
Below, we offer details on additional baselines and applying our KD design within a path-based model like SNS. ### 1. Incorporating More Baselines As you point out, extending our experiments with additional baselines strengthens our work. We considered SNS [3] and similar approaches [4], but their unreleased code/data made fair reproduction difficult. Hence, we re-implemented SNS on our dataset and integrated our KD strategy into the re-implementation as closely as possible. Notably, SNS includes randomizing methods (e.g., DFS), causing random performance fluctuations; we thus provide one representative outcome. **SNS Performance (Without Distillation):**

```
                    PCC    R²     MAPE
Arrival Time (AT):  0.33  -0.76   83%
WNS:                0.68   0.45   70%
TNS:                0.41  -0.18   85%
```

It shows that SNS's performance is considerably lower than RTLDistil's. The relatively poor performance is due to SNS employing a path-based approach, which struggles to capture complex physical properties (e.g., RC parasitics) and surrounding circuit information critical for accurate layout-level timing predictions. ### 2. Applying Knowledge Distillation to SNS By adapting our multi-granularity KD to the path-based SNS model, we implemented partial versions of the subgraph- and global-level distillation, but could not implement node-level distillation due to the structure of the model itself: 1. **Node-Level Distillation** - *Challenge:* SNS encodes circuits as sampled paths using a lightweight Transformer. Registers appear across multiple paths, lacking a unified per-register embedding. Thus, straightforward one-to-one node distillation is hardly feasible. 2. **Subgraph-Level Distillation** - *Feasibility:* Paths sharing the same sink or source register can be grouped to approximate local cones. We create a subgraph embedding by aggregating the path Transformers' hidden states and align these embeddings with the teacher's subgraph outputs. 3. 
**Global-Level Distillation** - *Straightforward:* SNS produces a global design-level prediction, so applying a global-level Smooth L1 loss between teacher and student representations is straightforward. **SNS Performance (With Distillation):**

```
                    PCC    R²     MAPE
Arrival Time (AT):  0.71   0.52   41%
WNS:                0.73   0.68   53%
TNS:                0.81   0.70   55%
```

This significantly improves SNS, yet remains below RTLDistil due to fundamental architectural constraints. It exhibits limitations in single-point AT prediction, possibly due to the lack of good node-level distillation. The path-based approach inherently struggles to capture the surrounding circuit context, critical for layout-level timing prediction. ### 3. Inherent Constraints of Path-Based Methods Our findings underscore how path-based algorithms, typified by SNS, face inherent challenges in capturing complex layout- and design-level interactions: - **Limited Circuit Context:** Accurate layout timing is determined by a range of geometrical and parasitic factors that extend beyond single paths. - **Physical Parameter Integration:** Resistance, capacitance, coupling, and congestion effects must be comprehensively integrated. Path-based sampling often oversimplifies these interdependencies. - **Long-Distance Dependence:** Surrounding circuit information outside the direct path significantly affects timing accuracy. Path-based methods often find it difficult to capture these long-term dependencies. By contrast, our GNN-based RTLDistil framework, combined with the domain-specific asynchronous forward-reverse propagation, captures entire local and global contexts, thus embedding physical knowledge more holistically into the distilled student model. ### 4. Concluding Remarks We appreciate your encouragement to explore broader baselines and experiment with KD in other modeling paradigms. These endeavors confirm that our approach is both *general*—improving other frameworks—and *powerful* when used in a fully GNN-based setting. 
This further highlights KD's strong potential to speed up and refine EDA tasks, especially under the "left-shift" paradigm. Moving forward, we will: - Refine our multi-granularity KD framework for potential collaboration with more EDA problems. - Expand our open-source resources to facilitate reproducibility and development. Your deep consideration clarifies how our KD approach can influence broader ML and EDA challenges. *Thanks again for your time and valuable feedback. We hope these additional experiments and explanations address your concerns. We very much hope to earn your affirmation and a higher rating!*
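For reference, the PCC, R², and MAPE figures quoted throughout this thread follow their standard definitions; a minimal sketch (not the authors' evaluation code, and assuming nonzero ground-truth values for MAPE):

```python
import numpy as np

def timing_metrics(y_true, y_pred):
    """Standard-definition PCC, R^2, and MAPE (assumes nonzero targets)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    pcc = np.corrcoef(y_true, y_pred)[0, 1]                # Pearson correlation
    ss_res = np.sum((y_true - y_pred) ** 2)                # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)         # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mape = np.mean(np.abs((y_true - y_pred) / y_true))     # mean absolute percentage error
    return pcc, r2, mape
```

Note that R² can be negative (as in the SNS results above) when the predictor does worse than simply predicting the mean.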
Summary: The paper proposes RTLDistil, a framework aimed at bridging the gap between early-stage RTL timing prediction and accurate layout-level timing analysis. The method leverages a dual-model setup: a high-capacity teacher model operating on a layout netlist graph and a lightweight student model working on an RTL-level Simple Operator Graph (SOG). The core idea is to use multi-granularity knowledge distillation that encompasses node-, subgraph-, and global-level alignments, along with an asynchronous forward-reverse propagation strategy to transfer precise physical characteristics from the teacher to the student. The authors claim that RTLDistil reduces prediction errors compared to previous methods such as MasterRTL and RTL-Timer. Claims And Evidence: The claims in the paper are generally well-supported by corresponding analyses and experiments. Building on this, I have the following question: - To facilitate an effective "left-shift" in the design process, it would be beneficial to compare the results with analytical STA methods in terms of computational efficiency. This comparison would provide readers with a clearer understanding of the achieved accuracy-efficiency trade-off. Methods And Evaluation Criteria: Overall, I believe the proposed method is a valuable exploration in bridging the gap between early-stage RTL timing prediction and accurate layout-level timing analysis. The approach appears well-reasoned. However, I have the following concerns regarding its technical contributions: - Forward-Reverse Propagation Strategy: The bidirectional propagation mechanism, which aggregates fan-in and fan-out information, closely resembles standard bidirectional message-passing techniques widely used in GNNs for RTL [1]. The asynchronous adaptation seems like a minor modification, making the technical contribution of this aspect relatively limited. 
- Multi-Granularity Alignment: Aligning features at the node, subgraph, and global levels is a common technique in GNNs, as seen in [2]. While its integration within a knowledge distillation framework tailored for EDA is interesting, it represents an incremental improvement rather than a fundamentally novel contribution. [1] Lopera, Daniela Sánchez, and Wolfgang Ecker. "Applying GNNs to timing estimation at RTL." Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design. 2022. [2] Zhang, Muhan, and Pan Li. "Nested graph neural networks." Advances in Neural Information Processing Systems 34 (2021): 15734-15747. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: - The performance achieved by the teacher model still appears relatively limited. A more in-depth analysis of the teacher model’s quality and the current limitations of the student model’s performance would be beneficial. It would be helpful to clarify whether these limitations stem from the teacher model’s capability, the learning process, or other factors. - How are the losses across the three granularities balanced? Is this parameter sensitive to performance? What is the computational cost of a grid search? Additionally, is the grid search a one-time process, or does it need to be adjusted for different models, tasks, or datasets? Supplementary Material: I've checked all the supple submitted. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: It would be helpful to add a detailed discussion on the similarities and differences in terms of method and results of many closely related papers on the same topic, such as: [1] Moravej, Reza, et al. "The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation." arXiv preprint arXiv:2411.00843 (2024). [2] Zhong, Ruizhe, et al. 
"Preroutgnn for timing prediction with order preserving partition: Global circuit pre-training, local delay learning and attentional cell modeling." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 15. 2024. [3] DynamicRTL: RTL Representation Learning for Dynamic Circuit Behavior Other Strengths And Weaknesses: Please refer to the above sections. Other Comments Or Suggestions: Please refer to the above sections. Questions For Authors: - Although the paper demonstrates improvements over baseline approaches, to what extent do the current results meaningfully enable the claimed 'left-shift' of the design process? A more detailed analysis of the achieved results, supported by relevant references, would be beneficial. - This may be an ambitious question, but I am curious about how RTLDistil performs on large-scale industrial designs (e.g., SoCs with millions of gates). The experiments primarily focus on medium-sized designs, yet scalability to larger designs is crucial for practical adoption. - In Table 4, why do different distillation objectives lead to varying results for RTLDistil? Specifically, why does the w/ Node objective yield a promising MAPE while other metrics show less favorable outcomes? Are there insights into the underlying distribution of the achieved results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Dear Reviewer 55gZ,** Thanks for your insightful comments. Your feedback greatly helped us refine our work. Below, we address your major concerns. ## 1. Efficiency vs. STA and Left-Shift The entire flow from RTL to Layout to STA completion usually takes hours to days under rough quantitative estimation. Our method achieves several orders of magnitude faster runtime (see our response to Reviewer DZ14 for runtime details), with PCC 0.92, R² 0.85, and MAPE 16.87% for AT prediction—sufficient for early-stage RTL optimization [1], allowing the design flow to shift left. ## 2. Domain-specific Asynchronous Forward-Reverse Propagation While bidirectional GNNs exist, we focus on an asynchronous forward-reverse approach tailored to timing semantics and physical contexts. This provides a deeper coupling between AI modeling and timing prediction domain-specific requirements: - We incorporate multi-round, asynchronous updates to capture the long-range dependencies of registers, cells, and wires. - This schema includes a form of "reverse constraint" flow, reflecting the physical insight that timing slack on downstream cells can impose constraints on upstream segments, especially under realistic RC conditions. - Ablation results (Appendix) confirm simpler forward-only or fewer propagation rounds significantly degrade accuracy. This is caused by insufficient access to information about the surrounding circuit. Our design goes beyond standard message passing to encode domain-specific timing semantics. ## 3. Multi-Granularity Distillation: Tailored to Circuit Structures Our contribution lies in adapting multi-granularity distillation to **EDA-specific adaptation**: - **Tailoring to timing.** Each granularity naturally corresponds to specific circuit structures relevant to timing: node-level for register endpoints, subgraph-level for critical local paths (fan-in/out cones), and global-level for capturing overall congestion (e.g., WNS/TNS). 
- **Simultaneous cross-stage distillation.** By tuning the student at three granularities during distillation under the teacher, we alleviate the lack of physical cues at RTL by implicitly embedding critical layout-level physical information into the RTL GNN while focusing on the circuit's local and overall situation. As the reviewer noted, different distillation goals exhibit different behaviors because each of the three granularities plays a different role. As shown in Table 4, removing any one or two granularities reduces performance, confirming their complementary roles. The w/ Node variant performs well on AT's MAPE because it ignores global structure and inter-path relationships and focuses only on registers. This yields relatively accurate numerical values (MAPE) but weaker correlation metrics (PCC, R²) than the full model, so it does not give the best overall results. ## 4. Teacher Model Limits and Distillation Balance Our layout-level teacher is more accurate than the student but less precise than full STA due to the complexity of real RC extraction and inherent GNN limitations. The teacher's precision naturally bounds the student's accuracy. Regarding the grid search over the distillation-loss weights (α, β, γ), we observed small fluctuations across different metrics. We believe the optimal balance varies with data properties, circuit complexity, and task objective focus. Via a coarse-grained grid search (with low computational overhead), we found that equal weights (α = β = γ) give the best multi-task average performance and make the ablation experiments cleaner. ## 5. Distinction from Related Work Compared to prior methods: - [2] uses LLMs but lacks scalability and physical layout fidelity. - [3] operates at layout stage, limiting early RTL optimization. - [4] targets dynamic runtime behavior, not static timing. Our method uniquely shifts accurate physical timing prediction to the RTL stage via cross-stage distillation. ## 6. 
Scalability to Industrial Designs We are scaling to large designs (e.g., BOOM). While back-end optimization of million-gate designs is very demanding regarding runtime (more than days) and hardware resources, our graph-based architecture has shown scalability in principle, and we will endeavor to report results as large circuits are available. ## Conclusion Thanks for your constructive input, which helped us better articulate our approach's novelty and practical utility. *We appreciate your time and look forward to further suggestions. Hope to get a higher rating from you!* ## References [1] Fang, Wenji, et al. "Annotating slack directly on your verilog: Fine-grained rtl timing evaluation for early optimization." [2] Moravej, Reza, et al. "The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation." [3] Zhong, Ruizhe, et al. "Preroutgnn for timing prediction with order preserving partition: Global circuit pre-training, local delay learning and attentional cell modeling." [4] DynamicRTL: RTL Representation Learning for Dynamic Circuit Behavior
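The equal-weight combination of the three alignment terms described in point 4 can be sketched as follows. This is a schematic illustration only: the dict layout and embedding shapes are hypothetical, the Smooth L1 form follows the standard piecewise definition, and equal weights mirror the α = β = γ choice reported above.

```python
import numpy as np

def smooth_l1(x, y, beta=1.0):
    """Standard Smooth L1 (Huber-style) loss, averaged over elements."""
    d = np.abs(x - y)
    return np.mean(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta))

def distill_loss(student, teacher, alpha=1.0, beta_w=1.0, gamma=1.0):
    """Combine node-, subgraph-, and global-level alignment terms.

    `student` / `teacher` are dicts of hypothetical embeddings:
      'node'   : per-register embeddings, shape (num_nodes, d)
      'sub'    : per-cone embeddings,     shape (num_cones, d)
      'global' : design-level embedding,  shape (d,)
    """
    return (alpha  * smooth_l1(student['node'],   teacher['node'])
          + beta_w * smooth_l1(student['sub'],    teacher['sub'])
          + gamma  * smooth_l1(student['global'], teacher['global']))
```

With matching teacher and student embeddings the loss is zero; during training, minimizing it pulls the student's RTL-level representations toward the layout-level teacher's at all three granularities.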
Summary: The paper describes a multi-level distillation framework to train a tool that can predict final timing characteristics of a synthesis flow from an RTL-level description. The paper provides interesting detailed analysis of their results and argues that their results are much more accurate than prior efforts, many of which stop at an intermediate "gate-level" representation. Claims And Evidence: The paper argues their accuracy is much better than SOTA algorithms on a range of circuits. They argue and use ablation studies to evaluate the benefits of their multi-hierarchical approach. Their reported numbers are indeed impressive and their approach IMHO has value. However, I think the evidence is not convincing. In particular, the results from the final layout of a circuit depend on a myriad of parameters specific to the particular place-and-route tool, the defined characteristics of the clock tree, and whether or not certain optimizations are enabled (e.g., joint clock and data optimization), which can yield a run-time vs performance trade-off. A small change in initial area utilization during placement, for example, can yield quite significant differences post place-and-route. The description of the experiments does not describe these tool parameters in any detail, and this reviewer is left to assume that the parameters are kept the same between training and evaluation. This suggests that the model they train may only be good for predicting timing for this specific set of tool parameters. Given experts often tweak these parameters from design to design, this issue should at the very least be discussed. I have no doubt that their approach has the potential benefits they claim, but I think the actual numbers they provide should be explained and the limitations of their approach should be more fully described. The paper also claims that the models are computationally efficient, but does not provide any run-time analysis of the models. This should be added. 
Methods And Evaluation Criteria: The set of benchmark circuits is only vaguely described. In most EDA/CAD papers, results for each individual benchmark are included, and this is missing from this paper. I would have expected to see this in an anonymous git or a table in the appendix. Not sure why this is not here, but it makes reproducing these results quite difficult. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: See above. Supplementary Material: I reviewed the supplemental material and appreciate the more detailed correlation studies. These results give me more confidence that their approach has merit, but the above stated issues should still be addressed. Relation To Broader Scientific Literature: This paper is fundamentally about using ML for EDA and in that domain seems like reasonable work. It is not clear how useful or interesting it will be to the ICML community, but I am reviewing it as if it is. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. My score has been adjusted to account for the very nice rebuttal the authors wrote. Thanks for the clarification. Questions For Authors: Please address the questions I raised above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Dear Reviewer DZ14,** Thank you for your constructive comments and recognition of our work's core contributions. We greatly appreciate your thoughtful feedback, which has helped us further refine and clarify several key aspects of our methodology. Below, we provide a structured response to your concerns. ## 1. Multiple EDA Back-end Parameters for Iterative Design Optimization In constructing our dataset, we designed a unified, fully-automated back-end flow using state-of-the-art commercial tools—Synopsys Design Compiler and Cadence Innovus—with a consistent set of optimization switches (e.g., gate sizing, buffer insertion, cell movement, etc.). However, the circuits in our dataset were **not finalized under a single fixed configuration**. For each design and each back-end optimization-related parameter (e.g., density thresholds, routing constraints, clock constraints), we **automatically tried multiple sets of values and iteratively explored multiple different configurations**, often conducting **tens of design runs**, until the circuit reached a state where: - Placement density no longer increased, and - Timing metrics converged stably through repeated optimization. This convergence point is a practical proxy for physical design quality, reflecting an optimization level comparable to manually refined industrial flows. By doing so, we avoid biasing our dataset toward a singular "super-convergent" setting and instead generate diverse yet quality layouts more representative of industrial standards. Importantly, this means our model is **not tuned to predict timing under a specific tool configuration**, but rather aims to approximate the **best achievable timing performance** after realistic optimization—an objective more aligned with industrial design goals. ## 2. 
Industrial Relevance and Methodological Scalability We acknowledge that experienced experts in the industry may pursue further manual tuning to push timing closure closer to its theoretical optimum. However, given our goal of building a large RTL-to-Layout database covering many diverse designs, it is tough to explore every possible EDA configuration or foundry setting exhaustively. While our approach may not guarantee absolute optimality for each circuit, it reflects a **robust and converged implementation quality**, providing a meaningful basis for our timing prediction framework. By generating datasets through iterative exploration of multiple parameter settings until convergence, our multi-granularity cross-stage distillation framework maintains strong generality across various designs, process nodes, and EDA flows. The student model is designed to be easily extensible—new designs, tools, or manufacturing conditions can be seamlessly integrated. Rather than overfitting to a specific toolchain or configuration, our framework demonstrates that cross-stage learning is broadly applicable and practical. Once more finely tuned industrial data become available, our method can directly incorporate such data to enhance model accuracy further. ## 3. Runtime and Computational Efficiency Performing the full RTL-to-Layout timing analysis flow—including synthesis, place-and-route, STA, etc.—typically consumes **hours or even days** for large circuits. In contrast, our GNN-based predictor operates at the RTL level, completing end-to-end inference in **seconds to a few minutes** for circuits ranging in size from thousands to millions of gates. For example, on a 400K-gate CPU design running on an A100 GPU server: - Logic synthesis takes ~30 minutes - Backend optimization (e.g. place-and-route, etc.) 
takes over 6 hours - STA requires ~33 minutes By comparison: - SOG extraction: ~65 seconds - Final GNN inference: <100 seconds This enables a **speedup of over 10×**, even against standalone STA, and far more when compared to the entire back-end flow. Moreover, our GNN model supports **parallel processing**, enabling efficient handling of large-scale circuits and making it viable for early design-stage performance estimation. ## 4. Data Availability and Reproducibility To support reproducibility and peer evaluation, we have anonymized and **released a subset of our code and data through an anonymous repository (https://anonymous.4open.science/r/icml2025RTL-CBFD/)**. Due to the large file sizes and platform storage limitations, only a partial dataset is available at this stage (a clearer description of the data aspects is included in the reply to Reviewer wjHP; please check it out). We intend to release the **full dataset and code** upon paper acceptance, ensuring transparency and enabling broader research engagement. ## Conclusion Thanks for your thoughtful feedback. Your comments have helped us improve the clarity of our presentation, particularly around our data collection methodology, the generalizability of the proposed framework, and its practical runtime advantages. *We sincerely appreciate your review and very much hope to earn your affirmation and a higher rating!*
On the Duality between Gradient Transformations and Adapters
Accept (poster)
Summary: The authors derive an equivalence between LoRA / adapter training methods, which optimise a low rank addition to weight matrices to reduce optimiser memory usage, and methods that project gradients themselves down to low rank (such as Galore), also to reduce optimiser memory usage. Their result generalises more specific cases that have been shown in recent works. They then investigate the practical insights that might be gained from this observation, performing various LLM pretraining experiments with a range of adapter / gradient projection techniques, and propose a simple modification in the distributed training scenario motivated by the correspondence they prove, which improves performance. ### Update after rebuttals I have maintained my score of "Strong accept" - please see rebuttal comment Claims And Evidence: - The primary claim of the paper is that they prove that adapter methods are equivalent to gradient low-rank projection methods, which is true. Their theorem covers a general case that applies to most of the recent works such as lora and galore. - Their secondary claim of utilising this theory to develop a better algorithm in the distributed training case is also backed up by good experiments, with relatively large models (1B). Methods And Evaluation Criteria: - The proposed theorem and corresponding proof are relevant to the well-known lora and galore algorithms. In particular, they assume that the gradients are projected down to a low-rank subspace using a matrix $\mathbf S$, and the optimiser steps are then projected back up to the full space using its transpose $\mathbf S^T$. This is the appropriate setup when $\mathbf S$ has orthonormal rows / columns, or in the case where $\mathbb E[\mathbf S \mathbf S^T]=\mathbf I$. 
I would also be interested to know whether a similar equivalence can be proved in the case where you project back up using the Moore-Penrose pseudo-inverse, which is equivalent to $\mathbf S^T$ when $\mathbf S$ is orthogonal but not in general. However, this case might not be of practical relevance anyway, as the Moore-Penrose inverse would be an expensive step to compute as part of an optimisation loop. - The comparison between various adapter and gradient transformation methods in Table 2 was interesting, though I wasn't sure how this was directly relevant to the equivalence they proved. - The proposed modification to training in the distributed setting definitely sounds clever, since they use different projections for the different workers to ensure that the overall parameter updates are roughly full rank. However, this seems to be specifically in the one-sided LoRA setting, and I'm not sure whether the problems they fix are relevant in the more usual two-sided LoRA setting. Theoretical Claims: I checked all the proofs in Appendix A. Theorem 1's proof is a neat proof by induction that takes less than a page. The other two proofs, for proposition 2 and corollary 3, are straightforward applications of Theorem 1 using Kronecker factored matrices, but are still useful to have written in full. Experimental Designs Or Analyses: The experimental setup is sound, pretraining LLama models on large text datasets with LoRA adapters from scratch. I was slightly confused by what the authors meant by the "Transformer++" architecture, as I could not find this in the cited LLama paper. Supplementary Material: I checked the proofs in the appendix. Relation To Broader Scientific Literature: The discussed schemes of lora, galore, relora etc. are all very relevant and discussed methods in the current PEFT literature. 
As mentioned in this paper, previous papers have proved equivalences between some of these schemes, such as one-sided lora and galore, and this paper serves to generalise these proofs to a much wider setting that includes many of these schemes. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: ### Other strengths - I think the proposed equivalence is very useful, as many PEFT schemes like this have been proposed, and it turns out that in many cases they are equivalent. ### Other weaknesses - The experiments in the paper, other than the distributed training setting, do not seem entirely relevant to the main point of the paper. However, since the main point of the paper is the proved equivalence, the experiments are of less importance. Other Comments Or Suggestions: - Please define the notation $\text{vec}^{-1}$ first used on line 195. - I didn't fully understand the points about why different schemes need different persisted matrices in Table 1. Slightly more explanation would be helpful. - I also found the discussion around Table 4 slightly confusing. It seems to show that in the baseline, having more workers is beneficial, but in the lora experiments, having more workers is detrimental? - It would be useful to clarify what the metrics are in all table captions (I think they are all test perplexity?). - In proof of theorem 1, I found equation 5 hard to follow, but it was just the chain rule. It might be worth slightly rewriting this small segment to make it clearer. - Typo: in line 759, "imilarly" Questions For Authors: - What is the Transformer++ architecture? Perhaps I have missed something, but I cannot find it in Touvron et al 2023. - Is it possible to prove a similar equivalence where you use the moore-penrose pseudo-inverse to project back up? (I.e. your low rank projection isn't necessarily orthogonal). 
- (not very important but I was curious): Do you have an explanation for why the SVD curve in Figure 1 plateaus over time? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your review! We address your questions below: ### Key points > Is it possible to prove a similar equivalence where you use the moore-penrose pseudo-inverse to project back up? Great question! It is possible that an equivalence exists if one is not using a LoRA adapter. For example, one could imagine defining a PyTorch primitive function whose forward pass behaves like a regular (one-sided) LoRA adapter $S^\top A$, but whose backward pass computes the gradient of $A$ using the Moore-Penrose inverse instead of $S^\top$. In principle this should be equivalent to training with GaLore (with a Moore-Penrose up-projection) in the optimization trajectory sense, though we are not quite sure what the mathematical interpretation of this would be. > The comparison between various adapter and gradient transformation methods in Table 2 was interesting, though I wasn't sure how this was directly relevant to the equivalence they proved. The main aim of Table 2 is to compare (empirically) different choices of $S$, specifically to understand the trade-off between test PPL and estimated memory usage. The results suggest that Rademacher matrices offer better memory efficiency at a slight PPL penalty, and randomized semi-orthogonal matrices offer an even smaller penalty, though perhaps not a memory improvement (since it is not as clear how one could rematerialize them efficiently). > The proposed modification to training in the distributed setting definitely sounds clever [...] however, this seems to be specifically in the one-sided LoRA setting [...] Yes, you are correct---this result is specific to the one-sided LoRA setting. We focused on this setting because this is what the theorems we prove focus on. 
That said, we also tried running a full two-sided distributed training setting (which would be equivalent to distributed ReLoRA/LTE; 20.97 PPL [200M] and 13.72 PPL [1.3B]) and found it to underperform the one-sided LoRA with worker-aware initialization. We will include these results in the next version of the paper. > Do you have an explanation for why the SVD curve in Figure 1 plateaus over time? Great question! We suspect that towards the end of training, since the learning rate has annealed, the model is actually shifting very little in parameter space. This means that the gradient is also shifting fairly little. This means that our SVD-based gradient estimator, which is optimal (in the Frobenius error sense) when applied to the gradient the SVD was computed on, remains fairly optimal. > I was slightly confused by what the authors meant by the "Transformer++" architecture Thanks for pointing this out. This part is unclear, and we will fix this in the next version of the paper. By Transformer++ all we mean is a Transformer with the changes made in the Llama architecture, e.g., RMSNorm instead of LayerNorm, SwiGLU activations, rotary embeddings (some papers use this term). ### Other comments/suggestions Thank you for your comments here—we will address them all in the next version of the paper. We also include a brief explanation/clarification where appropriate: > Please define the notation vec^-1 first used on line 195. If $\text{vec}(\cdot)$ sends a matrix to its vectorized form (i.e., we stacked up the columns), then $\text{vec}^{-1}(\cdot)$ takes a big stacked vector and unstacks it into a matrix of the right size. > I didn't fully understand the points about why different schemes need different persisted matrices in Table 1 This point is related to your earlier question about Table 2. Table 1 is trying to be explicit about the cases where it should be possible to just persist the seed that generates a matrix, and regenerate it on-the-fly as needed. 
This means additional memory is saved since we can get away with just storing the seed for that matrix (~4 bytes), as opposed to storing the matrix itself. > I also found the discussion around Table 4 slightly confusing. It seems to show that in the baseline, having more workers is beneficial, but in the lora experiments, having more workers is detrimental? Great question---In Table 4 we simultaneously vary the number of workers and the rank of the gradient transformation/LoRA adapter, except in the case of the baseline, where we only vary the number of workers. Since we allow all workers to train for the same number of tokens, this means that for the baseline, increasing the number of workers increases the effective number of total tokens we train on *without* any penalty (i.e., reducing the rank). So the baseline will always benefit from more workers without any real negative effects, whereas the other approaches will suffer from the rank being decreased as the number of workers is increased. We mention this briefly in the caption, but we will make sure to make this clearer. > It would be useful to clarify what the metrics are in all table captions (I think they are all test perplexity?). That’s right, they are all test perplexity. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have also read the other reviews and responses, and would like to maintain my score of "Strong Accept". I think this is an excellent piece of work, and the equivalence proved is interesting and should help combine previously separate research directions.
Summary: This paper studied the connection between weight-efficient optimization and gradient-efficient optimization of transformers, and found that optimizing a model in a compressed gradient space is equivalent to optimizing just an additive low-rank adapter. Through theoretical analysis and empirical study, the authors showed the equivalence between these two lines of efficient optimization. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. The theoretical derivations are solid to me. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: This paper revisits previous memory-efficient training methods [1, 2, 3] for transformers, aiming to discover the connections and build equivalences between them, both theoretically and empirically. [1] Zhao, Jiawei, et al. "GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection." International Conference on Machine Learning. PMLR, 2024. [2] Hu, Edward J., et al. "LoRA: Low-Rank Adaptation of Large Language Models." International Conference on Learning Representations. 2022. [3] Lialin, Vladislav, et al. "ReLoRA: High-Rank Training Through Low-Rank Updates." International Conference on Learning Representations. 2024. Essential References Not Discussed: No Other Strengths And Weaknesses: ### Strengths: - The writing is clear and easy to follow. - The derivations make sense to me. ### Weaknesses: - The empirical study is not clear enough to me, regarding supporting the previous claim about the equivalence between weight-efficient optimization and gradient-efficient optimization. Table 2 is more like a performance comparison instead of showing any connections between different methods. Besides, the analysis in this table is also ambiguous. Other Comments Or Suggestions: No Questions For Authors: 1. Could you further explain your findings from experiments in Sec 4.1?
Specifically, how do these results support the equivalence between weight-efficient optimization and gradient-efficient optimization, and how is this equivalence used to build more efficient training methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your response! We will break down your question into parts: > Could you further explain your findings from experiments in Sec 4.1? The key goal of Sec 4.1 is to understand the interplay between the choice of gradient transformation S and overall performance. One important aspect of this interplay is that some transformations are expensive to update (e.g., the SVD transformation works well according to the GaLore paper, but requires running an SVD which is expensive), and others take more memory to store (e.g., the SVD transformation requires you to persist that matrix in memory, but if you are using random Gaussian matrices you just need to store a seed that you will use to regenerate this matrix—this means the whole matrix can be compressed into a handful of bytes). Table 1 tries to summarize these trade-offs conceptually. For each gradient transformation method, we describe what the parameters (of linear layers) in the model would look like using this approach, which of these parameters would be trained/frozen, and which of these need to be persisted in memory. Table 2 shows the empirical results (in terms of PPL on a held-out subset of SlimPajama) of training 200M and 1B using each of these methods; since the LoRA formulation is amenable to quantizing the frozen weight, we also include for each method the results using INT8 quantization and NF4 quantization. Beyond the test PPL, we also report the estimated memory usage of each approach. The most important point of Table 2 is that, at a minor loss of PPL (i.e., +0.1-0.3 PPL), one can use other transformations that offer memory efficiency (e.g., Rademacher gradient transformations). The other takeaways from the table are that (i) while two-sided gradient transformations may be intuitively nice, they don’t seem to work as well, and (ii) INT8 quantization gives you memory savings without any meaningful PPL degradation, but NF4 starts to incur a non-trivial PPL penalty. 
But we agree that section 4.1 could be explained better and will make sure to expand on the above points in the next iteration of the paper. > The empirical study is not clear enough to me, regarding supporting the previous claim about the equivalence between weight-efficient optimization and gradient-efficient optimization. Table 2 is more like a performance comparison instead of showing any connections between different methods > [...] how do these results support the equivalence between weight-efficient optimization and gradient-efficient optimization Since we prove that there is a mathematical equivalence between training with gradient transformations and training with a weight transformation (aka., a linear, one-sided adapter), the goal of our experiments is *not* to empirically verify this equivalence (since this would be redundant), but instead to leverage it by exploring, e.g., the interplay between choice of gradient transformation and empirical language modeling performance. That said, we will add a small scale experiment in the appendix where this equivalence is validated empirically, by showing that the loss curves of two toy models remain the same during training. > [...] and how is this equivalence used to build more efficient training methods? Beyond the new gradient transformations we propose in Table 2, our biggest contribution in terms of more efficient training methods is in combining gradient transformations with distributed training. In particular, in Table 3 we show that making different workers have different gradient transformations *and* making these transformations pairwise orthogonal (i.e., the distributed random method) is better than having them be orthogonal (but not pairwise orthogonal) or having them be the same across workers. 
In Table 4, we also study the effect of varying the number of workers, and find that the benefits of the distributed random method become more pronounced as the model is partitioned across more and more workers (each training a smaller slice of the model).
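As a preview of the planned toy-model validation mentioned above, here is a minimal sketch of the equivalence for a toy quadratic loss with SGD and a fixed Gaussian down-projection (all names, dimensions, and the loss itself are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lr, steps = 8, 3, 0.01, 100

# Toy quadratic loss L(w) = 0.5 * ||M w - b||^2 over a flattened parameter vector.
M = rng.normal(size=(d, d)) / np.sqrt(d)
b = rng.normal(size=d)
grad = lambda w: M.T @ (M @ w - b)

S = rng.normal(size=(r, d)) / np.sqrt(d)   # fixed linear gradient transformation
w0 = rng.normal(size=d)

# View 1 (gradient transformation, GaLore-style): compress the gradient with S,
# take the SGD step in the compressed space, project back up with S^T.
w = w0.copy()
for _ in range(steps):
    w = w - lr * S.T @ (S @ grad(w))

# View 2 (one-sided adapter, LoRA-style): freeze w0 and train only the adapter a,
# with effective parameters w0 + S^T a; the chain rule gives dL/da = S dL/dw.
a = np.zeros(r)
for _ in range(steps):
    a = a - lr * S @ grad(w0 + S.T @ a)

# The two optimization trajectories coincide step by step.
assert np.allclose(w, w0 + S.T @ a)
```

The same induction that underlies this check is what the formal proof generalizes to arbitrary parameter vectors and stateful optimizers.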
Summary: This paper explores a memory-efficient approach to neural network optimization by mapping gradients into a lower-dimensional space, reducing the memory overhead of both gradient accumulation and optimizer states. After performing updates in this reduced space, parameters are returned to the original space via the transpose of the linear transformation. The authors demonstrate that optimizing in this transformed gradient space is equivalent to reparameterizing the model through a linear adapter—which additively modifies the parameters—and training only the adapter. By employing a Kronecker-factorization of this linear map, the paper further clarifies the connection between GaLore and LoRA, showing they coincide (for one-sided LoRA) under this unified perspective. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed method is interesting, but more datasets need to be explored. Theoretical Claims: Yes, and they seemed correct to me. Experimental Designs Or Analyses: Yes, though different model sizes need to be investigated, in comparison with related works, to show whether the method scales. Supplementary Material: Yes: the proof, some experiments, and the architectural details. Relation To Broader Scientific Literature: The idea is interesting and could be practical in terms of scale. Essential References Not Discussed: - Other Strengths And Weaknesses: The paper has comprehensive related works. It is also well written and well organized. Other Comments Or Suggestions: - Questions For Authors: I wrote more comments above, but how will the convergence (or potential collapse) of the method behave? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review! Our main objective in this paper was to explore pretraining, and thus we selected a pretraining dataset that matches the distribution of pretraining data (SlimPajama, which is based on RedPajama, that tried to match the pretraining data used by Llama, one of the most successful open-source LLMs). In terms of model size, because of compute constraints in an academic setting, we scaled as large as we could go (1B). As much as we’d love to scale further, we cannot do a pretraining run of a larger scale on our resources. > I wrote more comments above but how the convergence or the collapse of the method will be with this method? Besides the standard convergence results that apply to ML models of this scale, we note that, due to the nature of our equivalence, the theorems in the GaLore paper should still apply (subject to those theorems’ conditions, e.g., any restrictions on the choice of projection). Empirically, we find that all approaches we consider have similar convergence behavior.
Summary: This paper investigates the duality between linear gradient transformations and adapter-based reparameterizations in memory-efficient LLM training. In essence, it shows that applying linear transformations to gradients (as in GaLore) is equivalent to reparameterizing the model via adapters (like one-sided LoRA). This connection is interesting, as it unifies several existing methods and could potentially inspire new techniques. However, while the comprehensive summary and explicit unification of existing methods are very helpful, the conceptual insight is already somewhat clear from the literature, and the contributions beyond what is already known appear incremental. Claims And Evidence: The paper claims that its generalized duality result not only recovers the known equivalence between GaLore and one-sided LoRA but also extends to more general settings. Although the authors provide a comprehensive summary of various optimizers and their relationships (as seen in Table 1), the result itself is rather straightforward. In the empirical evaluation, the best-performing method on the 1B model is still the gradient SVD adapter version of GaLore, and when compared to LoQT, the only difference is a slight change in quantization precision. These findings, while interesting, do not convincingly demonstrate fundamentally new methods that offer a significant improvement over existing ones. Methods And Evaluation Criteria: There are a few minor limitations. See "Claims And Evidence" and "Experimental Designs Or Analyses". Theoretical Claims: The paper's theoretical result—the equivalence between gradient transformations and adapter reparameterizations—is elegant but not surprising. Generalizing this duality appears to be a natural extension of previous work, and it does not suggest fundamentally new techniques.
In particular, the derivation does not yield any novel optimizer that outperforms existing ones, but rather shows that the methods are different views of the same underlying mechanism. Experimental Designs Or Analyses: It seems that in the pretraining experiment, the authors mostly follow the setup (same number of tokens, etc.) of the GaLore paper (perhaps with a different context length, but this is not specified in the paper). However, according to the 1B experiment, all baselines uniformly perform much better than in the original setup. I would appreciate a comparison under the same setup to give readers a sense of how well these methods perform. Moreover, while ReLoRA is presented as a strong baseline in the paper, its performance in prior benchmarks has generally been weak. This discrepancy raises questions about whether the reported improvements are robust and representative. Supplementary Material: I reviewed the proof of the Theorem. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: A notable strength of this work is its focus on an important direction—understanding the equivalence between optimizers and weight-space transformations in memory-efficient training. This is a valuable perspective that could potentially lead to new insights. On the downside, the technical contributions are incremental. The generalization of the duality between GaLore and LoRA is quite straightforward, and more importantly, it did not lead to new research insights (in terms of predicting new methods beyond what is known). The empirical results, particularly on the largest benchmark, do not convincingly demonstrate the intended claims. Other Comments Or Suggestions: N/A Questions For Authors: - How do you justify the experimental setup, particularly for the 1B model, where all baselines perform uniformly better than reported in previous work?
Would a side-by-side comparison under the exact same setup offer different insights? - Given that ReLoRA is shown as a strong baseline in your paper, how do you reconcile this with prior benchmarks where ReLoRA has performed poorly? - Can you comment on whether the duality result leads to any fundamentally new methods or improvements beyond providing a unifying view of existing techniques? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review! We identify three key criticisms pointed out in your review, and address them below: ### 1: Conceptual insights in the duality derivation are simple Indeed, while the essence of our derivation was observed previously in the literature, our derivation is more general; it considers a linear adapter applied to any parameter vector and it takes into account optimizer choice (previous works were only showing the equivalence for parameter matrices of linear layers and SGD, a non-stateful optimizer). This view is productive: it allows us to establish a connection between Kronecker-factored gradient transformations and recent two-sided variants of LoRA adapters which have been found to perform well for parameter-efficient finetuning, in particular MoRA, PMSS, and LoRA-XS. This is, to the best of our knowledge, novel in the literature. We think that this more general result is a worthwhile contribution to the literature. ### 2: The empirical contributions borne out by the insights of the derivation are incremental. A key criticism seems to be that despite proposing other gradient transformations (e.g., Rademacher, randomized semi-orthogonal), the SVD transformation (i.e., GaLore) is still the highest performing transformation in Table 2 in terms of PPL in the 1B case. However, in Table 2 we see that the SVD transformation underperforms the proposed approaches in the more memory-efficient NF4 quantization (i.e., QLoRA) setting: the Rademacher and Random-Orthogonal projections in this setting perform nontrivially better. Furthermore, in the 1B setting without quantization (or with INT8 quantization), the gap between the SVD and some of the other approaches is fairly modest (e.g., 0.1-0.3 PPL), and it is important to take into account that some of these are more memory efficient than the SVD transformation (see memory column in Table 2 for, e.g., Rademacher transformations).
We also kindly note that a large portion of our paper was dedicated to the application of the gradient transformation–adapter duality to distributed training, though it seems this wasn’t mentioned in the review. In particular, we show that choosing the transformations so they are worker-aware leads to better results than making them randomly different across workers or making them the same across workers. Finally, we want to emphasize that the goal of the paper wasn’t necessarily to achieve state-of-the-art memory-efficient pretraining results, but more to give a flavor for how our duality can unify existing approaches and lead to new techniques. ### 3: Discrepancy between our results and those in the GaLore paper First, we’d like to stress that our setup is different from the GaLore paper in many respects, so it is hard to do a direct comparison. For example, we use a different dataset (SlimPajama instead of C4), we use gradient accumulation, our maximum sequence lengths are substantially higher (2K instead of 256), and our optimizer hyperparameters are not the default Adam hyperparameters, but instead are more consistent with values used in more recent, large LLM training runs (we use the ones for Llama-2 and OLMO). More importantly, our decision to use a different setup was deliberate: Since we focus on pretraining, we want to be in the setting that most closely resembles recent LLM training runs. Therefore, we do not see the discrepancies between our results and those in the GaLore paper as a reflection that our setup is suboptimal; rather, we see our results, in part, as an attempt to apply GaLore to a different setting. We also note that such nontrivial differences in performances due to different training setups is common: for example, the Flora paper’s replication of GaLore (in Table 6 of the appendix) also shows substantially different results. ## Answers to individual questions 1. See point 3. 2. This is a great question—we were also puzzled by this. 
Here, we actually think it is worthwhile taking a step back and looking at the results and models more broadly. Mathematically (from our duality derivation), it is clear that GaLore (and thus one-sided LoRA training) partitions the parameter space and trains only along a subspace of it. This reduces the effective number of parameters in our model, which would usually lead to a performance hit. Further, from our derivation, ReLoRA can be seen as GaLore where the projection is learnt by the optimizer, so unless this small number of new parameters introduces a lot more instability, we arguably shouldn't expect the results to change much. Both of these observations manifest in our results. From this lens, we think our results are arguably what we'd expect a priori. While we cannot be sure why previous work doesn't also show this same trend, we suspect that this may have to do with the differences in the experimental setup (see point 3 above). 3. See point 2.
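To make the "worker-aware" transformations discussed above concrete, here is a minimal sketch (dimensions and names are illustrative, not our exact implementation): slicing one random orthogonal basis across workers makes the per-worker down-projections pairwise orthogonal, so each worker's up-projected update lives in a disjoint subspace and the combined update is full rank.

```python
import numpy as np

# Hypothetical dimensions: 4 workers, each training a rank-3 slice of a
# 12-dimensional (flattened) parameter space; requires r * n_workers <= d.
d, r, n_workers = 12, 3, 4
rng = np.random.default_rng(0)

# Draw one random orthogonal basis and give each worker a disjoint block
# of its columns as that worker's down-projection S_i (shape r x d).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
S = [Q[:, i * r:(i + 1) * r].T for i in range(n_workers)]

# Pairwise orthogonality: S_i S_j^T = 0 for i != j, while each S_i has
# orthonormal rows (S_i S_i^T = I).
for i in range(n_workers):
    for j in range(n_workers):
        target = np.eye(r) if i == j else np.zeros((r, r))
        assert np.allclose(S[i] @ S[j].T, target)

# Stacking all workers' projections yields a full-rank map, so the overall
# parameter update is not confined to any single rank-r subspace.
assert np.linalg.matrix_rank(np.vstack(S)) == n_workers * r
```

By contrast, independently drawn random projections would overlap with high probability, and identical projections across workers would confine every update to the same rank-r subspace.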
Causality Inspired Federated Learning for OOD Generalization
Accept (poster)
Summary: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Claims And Evidence: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Methods And Evaluation Criteria: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Theoretical Claims: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Experimental Designs Or Analyses: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Supplementary Material: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Relation To Broader Scientific Literature: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Essential References Not Discussed: I am not familiar with the field of this submission. I suggest the AC find another reviewer and disregard my comments. Apologies for the inconvenience. Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your rigorous and responsible review.
Summary: The authors identify a limitation in federated learning, arguing that existing methods primarily capture invariant causal features across clients/environments while failing to account for variant causal features. To address this, they propose a method designed to capture both invariant and variant causal features—both of which are direct causes of the target variable, y. Their proposed architecture, FedUni, is trained to extract all possible causal features from input data, acknowledging that some extracted features may be non-causal. To mitigate the risk of incorporating spurious correlations, they introduce a feature compressor. Additionally, they incorporate a causal intervention module on the client side, leveraging a counterfactual generator to create counterfactual examples. The authors provide extensive experiments and theoretical analysis, claiming that their method significantly enhances out-of-distribution (OOD) generalization. Claims And Evidence: While the approach appears well-motivated, the effectiveness of capturing variant causal features—especially under real-world federated settings—remains an open question. A deeper examination of the robustness of their counterfactual generation and feature compression mechanisms would be valuable in assessing the practical impact of this work. Methods And Evaluation Criteria: The motivation and proposed method are well-founded, and the choice of benchmark datasets is appropriate. Theoretical Claims: No concern. Experimental Designs Or Analyses: Experiments are reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: It is relevant to the broader machine learning community. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: The paper has a good motivation. However, it lacks clarity in explaining and supporting each component.
In lines 281–283, the authors state, "Firstly, with the causal intervention module, Z_S can be eliminated by minimizing the causal effect of environmental changes." However, this claim seems questionable, as the authors themselves acknowledge that some detected causal features may, in fact, be non-causal. It remains unclear how misidentified non-causal features are effectively reduced while true causal features are amplified. Further clarification on this mechanism would be beneficial. Other Comments Or Suggestions: Already mentioned in above. Questions For Authors: Please address my comment in "Other Strengths And Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: In response to the reviewer's concern that “some detected causal features may, in fact, be non-causal,” we clarify that the features described above are indeed causal. The misunderstanding appears to stem from our use of the misleading term “**fake causal features**” to denote causal features that **exist in general but are absent in the current image**. Importantly, “fake causal features” do not refer to ***misidentified*** non-causal features. Perhaps a more precise term would be “**inactive causal features**” and we will replace “fake causal features” with “inactive causal features” in the final version. More specifically, after eliminating the non-causal features $Z_S$ in lines 281–283, the remaining causal features $Z_C^g$ can be further divided into **active causal features** $Z_C^U$ and **inactive causal features** $Z_F^U$. The feature compressor then performs adaptive feature selection based on the test data distribution $U$, retaining the active causal features $Z_C^U$ and discarding the inactive causal features $Z_F^U$ that are not present in the current data. For example, a *cat paw* is a valid causal feature for classifying a cat. However, in an image where only a *cat head* is visible, the *cat paw* becomes an inactive causal feature. Inactive causal features may confuse the model, leading to performance degradation, so they should be excluded from the final features. Thus, the introduction of the counterfactual generator enables the elimination of non-causal features $Z_S$, while the feature compressor eliminates inactive causal features $Z_F^U$ and amplifies active causal features $Z_C^U$.
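As a schematic toy illustration of the selection step described above (this is not the actual FedUni architecture; the activity score and threshold are purely hypothetical), the feature compressor can be thought of as an input-dependent mask over the causal features that survive intervention:

```python
import numpy as np

# Toy illustration (hypothetical names/threshold, not the paper's model):
# given the causal features Z_C^g remaining after intervention, an
# input-dependent mask keeps the active causal features Z_C^U and zeroes
# the inactive ones Z_F^U (e.g., "cat paw" in a head-only image).
rng = np.random.default_rng(0)
z_causal = rng.normal(size=6)               # candidate causal features Z_C^g
activity = np.abs(z_causal)                 # hypothetical activity score
mask = (activity > 0.5).astype(float)       # 1 = feature active in this input
z_compressed = mask * z_causal              # final features after compression

assert np.all(z_compressed[mask == 0] == 0)                     # Z_F^U dropped
assert np.all(z_compressed[mask == 1] == z_causal[mask == 1])   # Z_C^U kept
```

In FedUni the selection is learned and conditioned on the data distribution, but the end effect is the same as this mask: inactive causal features are excluded from the final representation.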
Summary: This paper addresses the challenge of OOD generalization in FL by rethinking how causal features are extracted across clients. The authors argue that instead of limiting the global feature extractor to invariant features (like the traditional FL methods), it should capture the union of all causal features present in the data, thereby preserving richer and more diverse information that can enhance OOD performance by leveraging both collaborative training and targeted causal interventions. The proposed FedUni comprises three core components. 1) A comprehensive feature extractor is designed to identify all potential causal features from any input across clients. 2) A fake causal feature compressor is employed to filter out client-specific or spurious features that do not contribute to the target task. 3) A causal intervention module on the client side uses a trainable counterfactual generator to create modified examples that simulate distribution shifts while preserving semantic content. Experimental results on different datasets and theoretical analysis are provided in the paper. Claims And Evidence: The process of capturing causal features while rejecting fake ones relies heavily on the performance of a counterfactual generator. This separation could be challenging in practice and dependent on the model type, architecture, size, hyperparameters, etc. In this paper, a very simplistic model of two convolution layers along with some normalization was considered; but how can we justify using such a simplistic model for this purpose, especially for images of practical size (not just small pixel sizes)? Methods And Evaluation Criteria: Yes. Theoretical Claims: Seem to be correct. Experimental Designs Or Analyses: 1. The experimental evaluation in this paper is undermined by the use of outdated and overly simplistic models, namely ResNet-18, a one-hidden-layer MLP, and AlexNet.
These choices do not reflect the current SOTA architectures (ResNet-50/101, ViT, etc.) that are standard in top-tier research. These modern architectures have fundamentally different inductive biases, scalability, and computational characteristics. Therefore, the results currently presented in this paper may not reliably generalize to realistic settings. 2. The number of clients (N=10) seems too small to capture real-world scenarios. In many recent papers on FL, many more clients are often considered, e.g., 100. 3. Key practical issues of FL such as non-IID data and client sampling have not been considered. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: The problem of OOD generalization can also be addressed through personalized FL (PFL). The authors seem to claim that the PFL approach is limited, citing a couple of works such as (Tang 2023, 2024). But the literature on PFL is very rich: some works explicitly tackle the issue of OOD generalization, while numerous others tackle it implicitly (but possibly effectively). A few examples are: [Ref 1] Exploiting Personalized Invariance for Better Out-of-distribution Generalization in Federated Learning [Ref 2] Personalized Federated Learning with Contextualized Generalization [Ref 3] Ditto: Fair and Robust Personalized Federated Learning I am hoping to see a fair and proper comparison against these PFL approaches. Essential References Not Discussed: See above comment. Relevant references on PFL for OOD generalization should be cited (and if possible, the performance should be compared). Also, FOOGD: Federated Collaboration for Both Out-of-distribution Generalization and Detection (NeurIPS 2024) should be considered (and if possible, the performance should be compared). Other Strengths And Weaknesses: Another weakness: In the paper, the intervention module is a key component, which generates counterfactual examples that simulate distribution shifts.
However, it is not clear how the authors can guarantee that these generated examples accurately simulate distribution shifts without altering the underlying semantics, or how sensitive the method is to potential failures of proper generation. In Appendix E, the authors clarify that the generator is a simplistic model composed of two convolution layers and some adaptive normalization. But I am not convinced that this overly simplistic network can generate high-quality, realistic samples that simulate distribution shifts without changing the underlying semantics. Other Comments Or Suggestions: The proposed method can be computationally demanding on the client side due to the extra modules, which raises questions of scalability. In the current experimental setup, the model architectures and the number of clients are all small. More experiments in large-scale or highly heterogeneous federated settings would help assess the scalability of the proposed scheme. Questions For Authors: My concerns and questions have been raised in the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your suggestions, and we will include the citation you pointed out in the final version.

**Q1: Experimental Designs**

We conduct additional experiments in response to the suggestions about experimental designs.

**Add Updated Model Architectures**: Following the reviewer's suggestion, we include experiments with ResNet-50 and ViT on the PACS dataset, and our method still achieves superior performance. Notably, the models selected in the original submission were guided by **the experimental settings of existing works** to ensure a fair comparison.

|ResNet50|Art|Cartoon|Photo|Sketch|Avg|
|-|-|-|-|-|-|
|FedAvg|75.07|75.47|92.80|76.20|79.88|
|FedSR|74.27|71.60|92.13|67.87|76.47|
|FedGD-GA|77.47|72.80|**94.27**|69.73|78.57|
|FedIIR|74.80|71.87|91.33|71.87|77.47|
|Ours|**80.27**|**75.87**|93.27|**79.60**|**82.25**|

|ViT-Base-32|Art|Cartoon|Photo|Sketch|Avg|
|-|-|-|-|-|-|
|FedAvg|80.01|74.53|93.07|69.73|79.33|
|FedSR|73.60|68.13|87.33|65.07|73.53|
|FedGD-GA|80.34|71.27|93.07|65.87|77.63|
|FedIIR|78.93|72.67|87.87|68.93|77.10|
|Ours|**80.40**|**74.60**|**94.93**|**78.13**|**82.02**|

**Increase Client Number and Add Client Sampling**: We added experiments on the CMNIST dataset with 100 clients, adopting the widely used random client sampling strategy of selecting 20 clients per round for training.

|Test Acc (%)|FedAvg|FedProx|Scaffold|Moon|FedSR|FedIIR|FedDG-GA|FedSDR|Ours|
|-|-|-|-|-|-|-|-|-|-|
|ID|97.65|97.86|98.25|97.66|98.45|96.04|94.5|98.19|**99.61**|
|OOD|76.48|76.59|83.51|76.79|80.08|72.69|73.86|76.77|**93.87**|

**Consider Non-IID Data**: The reviewer suggested considering non-IID data. However, we **have already considered non-IID settings**. The experimental results presented in Tables 1 and 2 on page 7 of the original submission were all obtained under non-IID conditions, addressing both class imbalance and covariate shift. A detailed description of the non-IID setup can be found in Appendix D.2.
**Q2: Broader Scientific Literature and References**

We closely follow the reviewer's suggestion to conduct comparisons, and we will cite these works in the final version. As we had **cited the advanced versions** from the same team (Tang 2023, 2024), the earlier versions Refs [1, 2] were not included in the original submission. We evaluate the suggested methods on the CMNIST dataset with 100 clients. We abbreviate the method in Ref [1] as DRC. Compared to FOOGD, Ditto, and CG-PFL, our method explicitly considers the extraction of causal features. In contrast to DRC, which focuses on preserving causal features specific to individual clients, our approach retains causal features across all clients. The experimental results further demonstrate the effectiveness of our motivation.

|Test Acc (%)|DRC [ref 1]|CG-PFL [ref 2]|Ditto [ref 3]|Ours|
|-|-|-|-|-|
|ID|97.17|98.04|96.15|**99.61**|
|OOD|76.75|79.94|65.23|**93.87**|

| |Test Acc (%)|
|-|-|
|FOOGD|79.61|
|Ours|**90.26**|

**Q3: Question about Counterfactual Generator**

**How semantic information is preserved:** We need to clarify that in the original submission we explicitly addressed this issue during model training by incorporating a loss term $L_{REG}$, which encourages the preservation of semantic information. In particular, we provide a detailed explanation in Section 4.1(2) *Preserve semantic information* and offer a theoretical proof of its effectiveness in Lemma 4.3 on page 5.

**Generation Sensitivity:** We need to clarify that in the original submission we defined the hyperparameter $\alpha$ to measure sensitivity to generation quality, where $\alpha = 0$ indicates generation without semantic constraints, and larger values of $\alpha$ impose stronger constraints. As shown in Fig. 7(a) on page 8 of the original submission, the accuracy only drops from 90.4% to 88.5% when switching from the best setting to a setting without semantic constraints—a gap of only 1.9%.
This demonstrates that FedUni is robust with respect to generation quality and hyperparameters.

**Why the simple model architecture works:** Our counterfactual generator does not generate images from noise; instead, it **adds noise (i.e., distribution shifts) to the original image**. Since the original image remains part of the input, a simple model architecture is sufficient to simulate distribution shifts. Furthermore, this architecture is widely used in style transfer methods [1, 2, 3] and has been shown to efficiently generate distribution shifts while preserving semantic information.

[1] Huang, et al. Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. 2017.
[2] Guo, et al. Single Domain Generalization via Unsupervised Diversity Probe. 2023.
[3] Yang, et al. Practical Single Domain Generalization via Training-time and Test-time Learning. 2024.

**Q4: Computation Cost**

Please refer to our response to Reviewer FyJy 4. FedUni achieves significantly higher accuracy while maintaining computational cost comparable to many state-of-the-art approaches.
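To make the adaptive-normalization idea discussed in this rebuttal concrete, here is a minimal, hypothetical numpy sketch (our illustration, not the authors' actual generator): it perturbs the per-channel statistics of a feature map, shifting its "style" while leaving the normalized spatial pattern, a proxy for semantic content, intact.

```python
import numpy as np

def adain_perturb(x, rng, strength=0.5):
    """Shift the per-channel statistics of a (C, H, W) feature map while
    keeping the normalized spatial pattern (the 'content') unchanged.
    Hypothetical sketch in the spirit of AdaIN-style perturbations."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True) + 1e-6
    normalized = (x - mean) / std                      # content
    # simulate a distribution shift by jittering the channel statistics
    new_mean = mean + strength * rng.standard_normal(mean.shape)
    new_std = std * np.exp(strength * rng.standard_normal(std.shape))
    return normalized * new_std + new_mean

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
y = adain_perturb(x, rng)
# each output channel is a positive affine transform of the input channel,
# so the per-channel spatial pattern is preserved almost exactly:
for c in range(3):
    assert np.corrcoef(x[c].ravel(), y[c].ravel())[0, 1] > 0.99
```

Because the perturbation is a per-channel positive affine map, the spatial layout of each channel is untouched, which is one way to see why such a small network can inject distribution shift without destroying semantics.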
Summary: The paper proposes FedUni, a framework for federated learning. The framework extracts causal features from different clients during training and flexibly selects the features applicable to the target client. It differs from existing methods in that it does not use a fixed shared feature pool. The paper provides experiments showing the improvement brought by the novel design. Claims And Evidence: The theoretical claims are well-supported by proofs and explanations. I found them easy to follow and understand. The novel specification of the SCM looks natural to me, and the advantages are well presented in the experiments. Methods And Evaluation Criteria: The proposed method and evaluation make sense for the specific problem. Theoretical Claims: I checked Lemmas 3.2, 3.4, and 4.2. For Lemmas 3.2 and 3.4, the techniques are pretty standard and look correct to me. I think the $\approx$ notation looks hand-wavy in Lemma 4.2, so I suggest making a more rigorous argument. Experimental Designs Or Analyses: The design (Sec 5.1) and analysis (Sec 5.2) both look solid to me. Supplementary Material: I did not review the supplementary materials. Relation To Broader Scientific Literature: The design of fake causal features is a smart and novel idea. It may be useful for federated learning tasks where the target client requires domain knowledge (not just common knowledge) from a specific client during training. Yet for tasks where the training clients are very similar to each other, the method may not make a large difference despite requiring more computational cost. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I like the presentation of the paper. Though it is mathematically dense and has a lot of content both theoretically and empirically, it is not a burden to read. It could be helpful to make the text font in some figures (Fig 5, 7) larger. Other Comments Or Suggestions: N/A Questions For Authors: 1.
What is the computational cost of the current method? I see it is briefly mentioned in Appendix G. I’m wondering if the authors could provide details from either experiments (actual run time) or theories (time complexity/asymptotic convergence rate). 2. How should practitioners know when to choose FedUni over the prior methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable suggestions; we will enlarge the text font in Figs. 5 and 7 in the final version.

**Q1: Computational Cost**

We provide the run time per communication round and a theoretical proof of the convergence rate. Due to space limitations, we give a brief proof here; the full proof will be included in the appendix of the final version. First, we measured the runtime per communication round (20 clients with 5 local training epochs) on two commonly used GPUs. As illustrated in the table below, FedUni achieves approximately 19% higher accuracy than the second-best baseline, while maintaining computational cost comparable to many state-of-the-art approaches.

|Run Time (s)|FedAvg|FedProx|Scaffold|Moon|FedSR|FedIIR|FedDG-GA|FedSDR|Ours|
|-|-|-|-|-|-|-|-|-|-|
|NVIDIA GeForce RTX 2080|1.33|1.87|1.94|2.04|2.12|1.95|3.05|5.02|3.26|
|NVIDIA GeForce RTX 3090|0.77|0.92|0.91|1.13|1.53|0.86|1.40|2.60|1.57|
|**Test Acc (%) (OOD-avg)**|68.53|68.23|68.47|73.92|72.69|68.99|69.02|68.72|**88.03**|

In addition, we present a brief proof of the convergence rate, showing that our convergence speed is also comparable to that of other methods. We assume that the objective functions $(F_{1}, \cdots, F_{N})$ satisfy $L$-smoothness and $\mu$-strong convexity. The variances of the stochastic gradients are bounded, and the expected squared norms are also bounded. Meanwhile, $\Gamma=F-\sum_{k=1}^{N} p_{k} F_{k}$ is used to quantify the degree of non-IID data. In the case of full device participation, we choose the parameters $\kappa=\frac{L}{\mu}$, $\gamma = \max\{8\kappa, E\}$, and the learning rate $\eta_{t}=\frac{2}{\mu(\gamma + t)}$.
Our proposed algorithm then satisfies $\mathbb{E}[F(w_{T})]-F^* \leq \frac{\kappa}{\gamma+T-1}\left(\frac{2B}{\mu}+\frac{\mu \gamma}{2} \mathbb{E}\|w_{1}-w^*\|^{2}\right)$, where $B=\sum_{k=1}^{N} p_{k}^{2} \sigma_{k}^{2}+6 L \Gamma+8(E-1)^{2} G^{2}$, $\sigma_{k}^{2}$ denotes the upper bound on the variance of the stochastic gradient on the $k$-th device, and $E$ denotes the number of local updates performed by each device between two communications. This indicates that when all devices participate and the stated conditions are met, our proposed method converges to the vicinity of the global optimal solution at a rate of $O(\frac{1}{T})$.

**Q2: When to choose FedUni**

The main advantage of FedUni is its ability to extract and retain all causal features from any input, making it well-suited for scenarios with **unknown test data distributions** and **client heterogeneity**.

**Unknown test data distributions:** Our method is inherently **distribution-agnostic**, performing robustly in both ID and OOD settings. In contrast, existing approaches typically focus on either the ID or the OOD setting. ID-based methods inevitably exploit spurious correlations in the training data, undermining OOD generalization, while OOD-based methods usually discard client-specific knowledge, reducing ID discrimination ability. FedUni addresses both issues concurrently by incorporating a counterfactual generator to enhance OOD generalization and a feature compressor to preserve valuable client-specific information.

**Client heterogeneity:** Our method is particularly effective in settings with data heterogeneity among clients, where existing approaches may discard valuable client-specific knowledge. Our method preserves a broader set of causal features, leading to improved model performance.

**Q3: Performance Gains in the Homogeneous Setting**

The reviewer pointed out that when the training clients are similar, FedUni may not yield significant improvements.
We conducted experiments in the homogeneous setting (on the CMNIST dataset). FedUni still outperforms the other approaches. Whether to use FedUni in the homogeneous setting depends on the trade-off between performance gain and computational cost.

|Test Acc (%)|FedAvg|FedProx|Scaffold|Moon|FedSR|FedIIR|FedDG-GA|FedSDR|Ours|
|-|-|-|-|-|-|-|-|-|-|
|ID|97.08|96.68|97.06|97.08|97.42|96.07|96.92|95.18|**98.16**|
|OOD|85.27|85.46|87.83|86.20|86.18|90.25|88.24|89.81|**97.30**|

**Q4: More Rigorous Argument for Lemma 4.2**

Thanks for your review! We will revise Lemma 4.2 into a more rigorous form in the final version. Inspired by [1], Lemma 4.2 relies on adopting specific $l$-norms to approximate the entropy terms in the coefficient of the constraint. In particular, we rely on two main approximations. First, for the conditional entropy, $H(x'|x)=-\mathbb{E}_X[\log P(x'|x)] \approx \mathbb{E}_X[\|x-\psi(x)\|_1]$, where the approximation in the last step amounts to assigning an $l_1$-Laplace distribution with identity covariance to the conditional probabilities: $P(x'|x)= \mathcal{L}(x'; x, I)\propto \exp(-\|x'-x\|_1)$. Similarly, for $H(x')$, we can derive $H(x')\approx\mathbb{E}_X[\|\psi(x)\|_1]$.

[1] Pedro Savarese, et al. Information-Theoretic Segmentation by Inpainting Error Maximization. 2021.
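As a numerical sanity check on the Laplace-based entropy approximation used in this rebuttal (our illustration in numpy, not from the paper): under a unit-scale Laplace model, the conditional entropy equals the expected $l_1$ distance plus the constant $d\log 2$, so the $l_1$ surrogate differs from the true entropy only by an additive constant.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 200_000
x = np.array([0.5, -1.0, 2.0])                 # conditioning point (the mean)
x_prime = rng.laplace(loc=x, scale=1.0, size=(n, d))

# exact per-sample negative log-density of a unit-scale Laplace:
#   -log p(x'|x) = ||x' - x||_1 + d * log 2
neg_log_p = np.abs(x_prime - x).sum(axis=1) + d * np.log(2.0)
mc_entropy = neg_log_p.mean()                  # Monte Carlo estimate of H(x'|x)
l1_term = np.abs(x_prime - x).sum(axis=1).mean()

# closed form: H(x'|x) = E[||x'-x||_1] + d*log 2 = d*(1 + log 2)
assert np.isclose(mc_entropy, d * (1 + np.log(2.0)), atol=0.02)
assert np.isclose(mc_entropy, l1_term + d * np.log(2.0))
```

This is why minimizing $\mathbb{E}[\|x-\psi(x)\|_1]$ is equivalent, up to a constant, to minimizing the conditional entropy under the assumed Laplace model.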
On the Convergence of Continuous Single-timescale Actor-critic
Accept (poster)
Summary: The paper presents a finite-time convergence analysis of the single-timescale, single-loop actor-critic algorithm with Markovian sampling in the discounted-reward, continuous state-action space setting. The main contributions include (a) extending to continuous state-action spaces using an operator-based analysis; (b) Markovian sampling for both actor and critic; and (c) a novel Lyapunov function to analyze the actor and critic simultaneously. Claims And Evidence: While the problem setting (continuous state-action space; single timescale; Markovian sampling) considered in this paper is interesting and important to gain a theoretical understanding of, I believe the authors overclaim the novelty of the contributions of the paper in the following ways: - new operator-based analysis to handle intricacies arising from the uncountable state space - the lemmas presented in Appendix B are a very straightforward extension from the discrete to the continuous state-action space - Markovian sampling in the discounted setting has already been analyzed in Section 4 of [1] - the Lyapunov function used to analyze the error of the critic and actor together in Theorem D.3 is akin to the interconnected system analysis in Theorem 3.5 of previous work [2]. [1] Towards Understanding Asynchronous Advantage Actor-critic: Convergence and Linear Speedup. H. Shen, K. Zhang, M. Hong, T. Chen. IEEE Transactions on Signal Processing, 2023. [2] Finite-Time Analysis of Single-Timescale Actor-Critic. X. Chen, L. Zhao. NeurIPS, 2023. Methods And Evaluation Criteria: N/A Theoretical Claims: All theoretical claims are sound and are presented with well-written proofs. Experimental Designs Or Analyses: N/A Supplementary Material: Yes, I reviewed all lemmas and theorems in the supplementary material.
Relation To Broader Scientific Literature: While the problem being considered is interesting, I believe the analysis techniques used and the resultant theoretical claims are a combination of methods used in prior literature. See section on Claims and Evidence for further details. Essential References Not Discussed: While Shen et al. 2023 has been discussed in the Introduction, I believe it should also be added to Table 1. Other Strengths And Weaknesses: - Continuous state action spaces are an important problem to consider - The paper is well written and easy to follow with appropriate background material presented and proofs well organized Other Comments Or Suggestions: N/A Questions For Authors: Please correct me if I am mistaken in my understanding of the contributions of the paper. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **(Claim 1: on new operator-based analysis.)** Operator-based analysis is employed not only to handle the lemmas in Appendix B but also to accommodate the continuous distribution throughout the proof. For instance, it is used in establishing Propositions 3.1 and 3.2. Prior to our work, it remained unclear how to extend single-timescale actor-critic methods to continuous action spaces. This is precisely why (Chen et al., 2021) and (Chen \& Zhao, 2024) consider continuous state spaces but restrict the action space to be finite—an uncommon and limiting assumption. In analyzing the most challenging case of single-timescale actor-critic methods, one must rely on several previously established lemmas (see Appendix B) that state the regularity of the problem. However, these lemmas, derived for a finite action space, assume Lipschitz constants that scale with the number of actions ($|\mathcal{A}|$), which become meaningless in the case of an uncountable continuous space. We observe that the dependence on $|\mathcal{A}|$ stems from the need to bound the evolution of the stochastic processes defined in Eq. (12) and Eq. (13) over the action space. This evolution is mainly governed by the policy $\pi_{\theta}$. We demonstrate (formally established in Proposition 4.4) that if the total variation distance between two policies is Lipschitz continuous, then the evolution of the stochastic process in the action space can be effectively controlled without assuming the finiteness of the action set. This novel insight and development allow us to extend several foundational lemmas (Appendix B) to the continuous action setting. The extension is further enabled by the proposed operator-based analysis, which conveniently handles the continuous distribution throughout the proof, including in the verification of Propositions 3.1 and 3.2.
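As a one-line illustration of why a TV-Lipschitz policy class can replace the $|\mathcal{A}|$ dependence (our paraphrase of the mechanism, not the exact statement of Proposition 4.4): for any bounded test function $f$ and any state $s$,

```latex
\Big| \mathbb{E}_{a\sim\pi_{\theta}(\cdot|s)}[f(s,a)]
    - \mathbb{E}_{a\sim\pi_{\theta'}(\cdot|s)}[f(s,a)] \Big|
\;\le\; 2\,\|f\|_{\infty}\,
        d_{\mathrm{TV}}\big(\pi_{\theta}(\cdot|s),\,\pi_{\theta'}(\cdot|s)\big)
\;\le\; 2\,\|f\|_{\infty}\, L_{\pi}\,\|\theta-\theta'\|,
```

so the resulting Lipschitz constant involves only $\|f\|_{\infty}$ and $L_{\pi}$, with no factor that grows with the cardinality of the action set.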
**(Claim 2: on Markovian sampling in the discounted setting.)** We acknowledge that Markovian sampling in the discounted setting has been analyzed in [1]. Nonetheless, this does not weaken our contribution. First, several key propositions essential for our analysis are not presented in [1]. For instance, Proposition 3.1 plays a central role in our framework, yet it is not established in [1]. Even Proposition 3.2, while stated in [1], is assumed without proof. In contrast, we rigorously establish these results within a continuous setting. Furthermore, our work addresses Markovian sampling under a **single-timescale** formulation, whereas [1] considers the **two-timescale case**. As a result, the treatment of Markovian noise in our analysis is fundamentally different from those in [1]. In fact, our considered setting is significantly more challenging than the two-timescale case and requires the development of new analysis techniques. We acknowledge that [1] is a valuable contribution. As emphasized in the Introduction, the analyses of single-timescale and two-timescale actor-critic methods are fundamentally different. Since Table 1 is intended to compare only single-timescale methods, we did not include [1] in the comparison. **(Claim 3: on Lyapunov function.)** We respectfully disagree with the claim that our use of a Lyapunov function is akin to the interconnected system approach. In our analysis, we sum the critic and actor errors into a unified loss function $\mathcal{L}$ and convert all error terms into a single inequality involving $\mathcal{L}$, as shown in Eq. (33). In contrast, (Chen \& Zhao, 2024) track three distinct error terms and construct an interconnected system of three coupled inequalities. It remains unclear whether the latter case admits a Lyapunov-like formulation that combines the three errors into a single function and establishes a unified inequality. 
That said, we agree that from a more abstract mathematical standpoint, all such approaches can be interpreted as sequences of intricate inequality manipulations.
Summary: The paper analyzes the single-timescale actor-critic with TD(0) updates for the critic and REINFORCE with a baseline for the actor, with linear function approximation for continuous state-action spaces. The samples are taken from a single (Markovian) trajectory, while a generative model that enables an independent sample from the given initial state distribution is assumed. Under standard assumptions on the Lipschitz continuity of the policy parameterization and uniform ergodicity, convergence to a stationary point up to a function approximation error was proved. Claims And Evidence: It is a theoretical paper, and the claims are supported with corresponding statements along with their proofs. Methods And Evaluation Criteria: There is no numerical study in the paper. Theoretical Claims: I checked the proofs of the main theorems. They seem to be correct. Experimental Designs Or Analyses: No experimental designs or analyses in the paper. Supplementary Material: I checked the proofs of Theorem 5.1 and Proposition 3.1. Relation To Broader Scientific Literature: The paper complements the theoretical study of single-timescale actor-critic methods in two directions: (1) continuous/uncountable state-action spaces, and (2) Markovian sampling. The main challenges that are inherent to continuous state-action spaces could be elaborated further. In its current form, given the uniform Lipschitz continuity of the parametric policies, linear function approximation, and uniform ergodicity, the analysis seems similar to that of actor-critic methods for countable state-action spaces. What makes it particularly challenging and different in this particular setting, and how were these challenges addressed? This could be very important. Secondly, the Markovian sampling for $\widehat{O}_t$ in (9) requires independent samples from the initial state distribution $\eta$ with probability $(1-\gamma)$. If one has access to this mechanism, i.i.d. sampling can also be performed.
As such, the need to use such a simulator weakens the sampling argument in this paper. Can this requirement to have i.i.d. samples from $\eta$ be removed? Essential References Not Discussed: The list of references seems sufficient. Other Strengths And Weaknesses: Please see my previous comments regarding the sampling. Regarding Assumption 4.3, does a specific class of policies (e.g., log-linear) satisfy these Lipschitz continuity claims? Other Comments Or Suggestions: - The use of linear function approximation to address large state-action spaces can be specified earlier. It is mentioned for the first time in Section 3, which is a bit late. - In (7), (9), and Algorithm 1 (Lines 4 and 5), the time index of the policy parameter $\theta$ was not given. I guess it should be $\theta_t$ in these equations. Questions For Authors: The analysis in this paper mainly establishes convergence to a stationary point. The natural policy gradient approach (Kakade, 2001), (Cen et al., 2020), (Agarwal et al., 2021) can provide finite-time optimality guarantees up to the usual function approximation errors. Is it possible to extend this approach to the natural actor-critic (NAC) setting? In those results the iteration complexity grows at a rate $O(\log|A|)$, and it would be quite interesting to see whether it is possible to achieve convergence to an optimal policy in a continuous action space where $|A|=\infty$. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **(Q1: challenges in dealing with continuous action space.)** Previous results derived for a finite action space assume Lipschitz constants that scale with the number of actions ($|\mathcal{A}|$), which become meaningless in the case of the uncountable continuous space. We observe that the dependence on $|\mathcal{A}|$ stems from the need to bound the evolution of the stochastic processes defined in Eq. (12) and Eq. (13) over the action space. This evolution is mainly governed by the policy $\pi_{\theta}$. We demonstrate (formally established in Proposition 4.4) that if the total variation distance between two policies is Lipschitz continuous, then the evolution of the stochastic process in the action space can be effectively controlled without assuming the finiteness of the action set. This novel insight and development allow us to extend several foundational lemmas (Appendix B) to the continuous action setting. The extension is further enabled by the proposed operator-based analysis, which conveniently handles the continuous distribution throughout the proof, including in the verification of Propositions 3.1 and 3.2. **(Q2: requirement to have i.i.d. samples from $\eta$.)** Thank you for your question. Note that the existing analysis requires i.i.d. samples from the discounted state visitation distribution for updating the actor, not from the initial distribution. The former is unknown, and it is difficult to sample from it directly, even approximately, in a simulator given its form. The initial distribution is a predefined distribution over states, and i.i.d. sampling from it is easy. The key challenge here is how to sample from the unknown discounted state visitation distribution approximately. The assumption of a known initial distribution is standard in the discounted setting. As shown in Eq.
(3), the objective function is defined as the integral of the value function $V_\theta(s)$, weighted by the initial state distribution $\eta$, over the state space. Accordingly, we assume access to $\eta$, which allows i.i.d. sampling from this distribution. This requirement might be removed by identifying a class of behavior policies for state sampling that are guaranteed to provide a good approximation to the true value functions and policy gradient estimate. The existing Markov chain $(\pi_\theta, \widehat{P})$ may serve as a special example for examining the desired characteristics that such distributions should satisfy. This will be the subject of future research. **(Q3: on Assumption 4.3.)** Log-linear policies satisfy this assumption under the condition that the feature map $\phi(s,a)$ is bounded and the action space is finite. Moreover, as noted in our manuscript, certain continuous policy classes—such as the uniform distribution, truncated Gaussian distribution, and Beta distribution—can also satisfy this assumption. **(On other suggestions.)** Thanks for your advice. We will mention the use of linear function approximation to address large state-action spaces earlier, and we will fix the time index of the policy parameter $\theta$, which should be $\theta_t$. Thank you for highlighting this interesting problem. Establishing optimality guarantees for AC or NAC methods typically requires the underlying optimization problem to be convex or to satisfy properties such as gradient domination. We view the convergence of AC and NAC to an optimal policy in continuous action spaces as an important direction for future work.
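The geometric-reset mechanism discussed in this exchange, resetting to the initial distribution with probability $1-\gamma$ and otherwise following the transition kernel, can be checked on a toy chain. The following is a hypothetical numpy sketch for illustration only (the 2-state chain and all names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
P = np.array([[0.9, 0.1],      # transition matrix under a fixed policy
              [0.2, 0.8]])
eta = np.array([1.0, 0.0])     # initial state distribution

# exact discounted visitation: d = (1 - gamma) * eta (I - gamma P)^{-1}
d_exact = (1 - gamma) * eta @ np.linalg.inv(np.eye(2) - gamma * P)

# simulate: with prob (1 - gamma) reset from eta, else step with P
s, counts, n = 0, np.zeros(2), 200_000
for _ in range(n):
    counts[s] += 1
    if rng.random() < 1 - gamma:
        s = 0 if rng.random() < eta[0] else 1
    else:
        s = 0 if rng.random() < P[s, 0] else 1
d_mc = counts / n

# the chain's empirical occupancy matches the discounted visitation
assert np.allclose(d_mc, d_exact, atol=0.015)
```

The reset chain's stationary distribution solves $d = (1-\gamma)\eta + \gamma\, d P$, which is exactly the discounted state-visitation distribution, illustrating why i.i.d. access to $\eta$ (but not to the visitation distribution itself) suffices for this sampling scheme.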
Summary: This paper addresses the theoretical understanding of single-timescale actor-critic (AC) algorithms in continuous state-action spaces, a widely used reinforcement learning (RL) approach for continuous control tasks such as robotics. While actor-critic methods have demonstrated empirical success, existing theoretical analyses largely focus on finite state-action spaces and impractical simplifications like double-loop updates or two-timescale learning, which introduce artificial decoupling between the actor and critic. The paper aims to close this gap by establishing finite-time convergence guarantees for the canonical single-timescale AC algorithm with Markovian sampling, where both the actor and critic update simultaneously. To achieve this, the authors use a Lyapunov-based convergence analysis framework, which offers a unified and less conservative characterization of both the actor and the critic. Claims And Evidence: This is a theoretical paper, so the claim is that they provide a state-of-the-art sample complexity for the actor-critic algorithm with a single loop, no i.i.d. assumption, and continuous state-action spaces. Methods And Evaluation Criteria: No methods or evaluations are given as the paper is theoretical in nature. Theoretical Claims: The theoretical claims of the paper seem to be sound. Experimental Designs Or Analyses: There are no experiments performed as the paper is purely theoretical in nature. Supplementary Material: I have gone through the proofs. Relation To Broader Scientific Literature: The paper essentially extends the analyses of (Olshevsky & Gharesifard, 2023) and (Chen & Zhao, 2024). It describes a Lyapunov analysis framework similar to the one in (Tian et al., 2023) to obtain global convergence for a single-loop actor-critic with continuous state-action spaces. Convergence of actor-critic with multi-layer neural networks, H. Tian, A. Olshevsky, Y. Paschalidis, NeurIPS 2023.
Essential References Not Discussed: To my knowledge, there do not seem to be any significant references missed by the authors. Other Strengths And Weaknesses: The one place I think there can be improvement is in the linear function approximation assumption on the value function. That makes the result, in my opinion, less relevant than some existing ones where the value function is not restricted to linear functions. Other Comments Or Suggestions: I would suggest the authors explore removing the linear function assumption on the value function by implementing a local linearization method as laid out in (Ke et al., 2024). An improved finite-time analysis of temporal difference learning with deep neural networks, Z. Ke, Z. Wen, J. Zhang, ICML 2024. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **(On removing linear function assumption)** Thank you for your suggestion regarding the removal of the linear function approximation assumption for the value function. We are aware of recent works (e.g., Tian et al., 2023; Ke et al., 2024) that employ deep neural networks for value function approximation. However, these approaches often rely on additional assumptions (e.g., Tian et al., 2023) to ensure theoretical tractability. We plan to further investigate these results and explore how to extend our analysis to broader classes of function approximators under milder assumptions.
Summary: This paper considers the problem of analyzing actor-critic algorithms for the discounted, continuous spaces setting when the actor and critic updates occur on the same timescale. The key idea in the analysis is to sample from two distinct processes: a "discounted process" corresponding to the discounted state occupancy measures of the sequence of policies, and an "undiscounted process" corresponding to the undiscounted state occupancy measures of those policies (the "discounted/undiscounted process" terminology is mine, not the paper's). Samples from the discounted process are used to perform actor updates, while samples from the undiscounted process are used to perform critic updates in Algorithm 1. Convergence results that recover state-of-the-art rates proved for less general settings are provided for Algorithm 1 under the assumption that linear function approximation is used for the critic. --- **Post-rebuttal comment:** after the author rebuttal, I maintain that the proposed approach does not achieve the "holy grail" described in **Strengths** below. Nonetheless, after the rebuttal more clearly situated their analysis within that of the literature on discounted AC analyses, I am more convinced that the paper provides a useful, meaningful step in this direction. I am increasing my score accordingly. Claims And Evidence: The claims made are broadly supported by the results provided, with the caveats discussed in the Weaknesses part of **Strengths and Weaknesses** below. Methods And Evaluation Criteria: n/a Theoretical Claims: The proof sketch outlined in the main body appears sound. I did not verify the results in the appendix in detail. Experimental Designs Or Analyses: n/a Supplementary Material: I skimmed but did not check the appendix in detail. Relation To Broader Scientific Literature: The paper proposes and analyzes a single-timescale -- but not single-sample -- actor-critic algorithm for continuous spaces. 
The related works section adequately situates these results within the relevant context. Essential References Not Discussed: None, to my knowledge. Other Strengths And Weaknesses: **Strengths** Finite-time analysis of actor-critic algorithms has seen a great deal of activity over the past several years. The holy grail of such analyses is to establish finite-time convergence of the actor-critic algorithm most common in practice: a single sample generated from the same stochastic process is used to update both the actor and the critic (single-sample), and the actor and critic update stepsizes are constant factor multiples of one another (single-timescale). This has been achieved for the average-reward setting, but to my knowledge has remained open for the discounted setting. This paper partially addresses this gap, and the topic of the paper is therefore definitely of interest to the theoretical reinforcement learning (RL) community. Though the scheme of eq. (9) for sampling from the discounted occupancy measure has long been known (see, e.g., the thesis [Konda, 2002] or more recently [Zhang et al., 2020a]), the "operator-based analysis" used to establish the counterpart of the uniform ergodicity property provided in Proposition 3.2 is interesting and the proposition itself is of potentially broader usefulness to the RL researchers interested in policy gradient analyses. **Weaknesses** My primary concern is the nature of sampling procedure introduced in Algorithm 1 to forcibly and artificially "decouple" the discounted and undiscounted processes used in the actor and critic estimation procedures. Due to this, the algorithm under consideration is not single-sample, and no longer tracks the "holy grail" of finite-time actor-critic analyses described in **Strengths** above. 
One of the attractive features of the analysis of [Chen & Zhao, 2024] for the average-reward setting is its ability to directly cope with the coupling between the actor and critic in the single-sample regime and not resort to artificial means to force decoupling. The approach of the present work falls short of this, analyzing an impractical algorithm that does not resemble what is used in practice. This negatively impacts the significance of the contribution and its potential relevance to the community. Other Comments Or Suggestions: n/a Questions For Authors: 1. The "new operator-based analysis" results presented in Appendix B appear to be restatements of existing results (e.g., from [Chen & Zhao, 2024], [Chen et al., 2021], [Zhang et al., 2020a]). What are the key technical innovations that were required in extending these results to the continuous state and action space setting? 2. If line 5 of Algorithm 1 is correct as is, then the sequence $\{ \hat{O}_t \}$ does not follow the desired discounted process from eq. (13). Should $s_t$ be replaced by $\hat{s}_t$ in line 5 of Algorithm 1? 3. Algorithm 1 appears to require that two distinct stochastic processes $\{ O_t \}$ and $\{ \hat{O}_t \}$ be simulated in parallel and that the simulator used to generate $\{ \hat{O}_t \}$ allow arbitrary resets. Is this correct? If so, can you comment on how this algorithm relates to others considered in the literature? 4. If the current work is not "single-sample" for the reasons described above, can you elaborate on either (i) how the current work lays important foundations for subsequent development of a "holy grail" single-sample, single-timescale analysis, or (ii) why a single-sample analysis is not possible for the discounted setting. If the current work *is* single-sample, can you explain how? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **(Weakness \& Q4: on single-sample)** 1. The terminology "single-sample" follows the seminal work [Olshevsky \& Gharesifard, 2023], where they directly assume sampling from the visitation distribution and the stationary distribution for updating the actor and critic, respectively. It refers to the fact that at each iteration, the critic and the actor are each updated using a single sample. 2. Our analysis can accommodate the case where "a single sample generated from the same stochastic process is used to update both the actor and the critic", as suggested by the reviewer. To achieve this, we can modify the sampling scheme so that the critic is also updated using the same samples from the Markov chain $(\pi_\theta, \hat{P})$ as the actor. We would like to highlight that our theoretical analysis would still apply under this alternative sampling scheme. The current presentation of sampling from $(\pi_\theta, P)$ for the critic update simply follows the existing setup in the literature of on-policy actor-critic analysis. The proposed modification can be viewed as an extension to analyze a special off-policy version. 3. **Sampling from two Markov processes does not force the decoupling or simplify the analysis.** In fact, our analysis directly copes with the coupling between the actor and critic, which can be seen in the Proof Sketch. In particular, the term $I_5$ in the critic error analysis is coupled with the actor error and is ultimately bounded by $\mathcal{O}(\mathbb{E}[\\|\Delta_t\\| \\|\nabla J(\theta_t)\\|])$ in Eq.(26). Conversely, $I_4$ in the actor error analysis is coupled with the critic error and simplifies to $\mathcal{O}(\mathbb{E}[\\|\nabla J(\theta_t)\\| \\|\Delta_t\\|])$ in Eq.(30). Due to this mutual dependence, we define a novel Lyapunov function that captures both the actor and the critic errors, enabling a unified analysis of their convergence. 4. 
**Our work is by far the most practical analysis**, in the sense that we address the most widely used single-timescale _discounted reward formulation_, and our analysis does not require sampling from unknown distributions (i.e., the stationary distribution and the discounted state visitation distribution). Moreover, as mentioned in Point 2, our analysis still holds with minor modifications to accommodate the case of using the same samples for updating both critic and actor. **(Q1: Key innovations for extending previous results to continuous space)** Previous results derived for finite action spaces assume Lipschitz constants that scale with the number of actions ($|\mathcal{A}|$), which, however, become meaningless in the case of an uncountable continuous space. We observe that the dependence on $|\mathcal{A}|$ stems from the need to bound the evolution of the stochastic processes defined in Eq. (12) and Eq. (13) over the action space. This evolution is mainly governed by the policy $\pi_{\theta}$. We demonstrate (formally established in Proposition 4.4) that if the total variation distance between two policies is Lipschitz continuous, then the evolution of the stochastic process in the action space can be effectively controlled without assuming the finiteness of the action set. This novel insight and development allow us to extend several foundational lemmas (Appendix B) to the continuous action setting. Such an extension is further enabled by the proposed operator-based analysis, which conveniently handles the continuous distribution throughout the proof, including in the verification of Propositions 3.1 and 3.2. **(Q2: on the notation)** Thanks for pointing out the typo. It should be $\hat{P}(\cdot | \hat{s}_t, \hat{a}_t)$. **(Q3: Relation to existing actor-critic algorithms analyzed in the literature)** Yes, it is correct. This is the same sampling scheme also employed in [Shen et al. 2023]. 
However, their analysis only handles the two-timescale AC algorithm on finite state-action spaces. In Konda's thesis, this artificial MDP is only constructed to utilize the property that its average reward equals the original MDP's discounted reward induced by $(\pi_\theta, P)$. It is not analyzed for convergence. In addition, [Zhang et al. 2020a] employ a different sampling scheme to approximate the discounted occupancy measure. However, their approach relies on a multi-sample procedure. Moreover, they also require a simulator with a state-reset function. Existing literature on analyzing AC requires samples from the unknown visitation distribution for updating the actor, which requires a simulator as well. [Shen et al. 2023] needs to assume the same simulator as ours. For i.i.d. sampling [Chen et al., 2021; Olshevsky \& Gharesifard (2023)], one has to run the simulator for sufficiently many steps under the current policy $\pi_{\theta_t}$ to approximate the visitation distribution. This inevitably takes a very long time and requires significant modification of the simulator as well. But our setting only requires a reset, which is much simpler and more time-efficient.
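The reset-based scheme discussed above admits a minimal numerical sketch: reset to the initial distribution with probability $1-\gamma$ and otherwise follow the transition kernel, so that the stationary distribution of the auxiliary chain is the discounted occupancy measure. The two-state chain, transition matrix, and discount factor below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Sketch (hypothetical 2-state chain) of reset-based sampling from the
# discounted occupancy measure d_gamma(s) = (1-gamma) * sum_t gamma^t Pr(s_t = s):
# at each step, reset to the initial distribution with probability 1-gamma,
# otherwise follow the transition kernel P. The stationary distribution of this
# auxiliary chain is exactly d_gamma.
rng = np.random.default_rng(0)
gamma = 0.9
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # hypothetical transition matrix
mu0 = np.array([1.0, 0.0])        # initial state distribution

# Closed form: d_gamma = (1 - gamma) * mu0 (I - gamma P)^{-1}
d_exact = (1 - gamma) * mu0 @ np.linalg.inv(np.eye(2) - gamma * P)

# Monte Carlo estimate via the reset chain
s, counts, n_steps = 0, np.zeros(2), 200_000
for _ in range(n_steps):
    counts[s] += 1
    if rng.random() < 1 - gamma:
        s = rng.choice(2, p=mu0)   # geometric reset
    else:
        s = rng.choice(2, p=P[s])  # ordinary transition
d_mc = counts / n_steps

assert np.allclose(d_mc, d_exact, atol=0.02)
```

The only simulator capability the sampler needs is the reset itself, which matches the rebuttal's point that a reset is much cheaper than approximating the visitation distribution by long rollouts.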
POROver: Improving Safety and Reducing Overrefusal in Large Language Models with Overgeneration and Preference Optimization
Accept (poster)
Summary: Making LLMs behave in a safe fashion, a major research concern, often comes with unwanted side effects such as overrefusal of prompts that may seem unsafe. This paper makes two contributions: 1. It shows an improvement when using finetuning data overgenerated from a more advanced teacher LLM. 2. It presents a preference optimization algorithm that further improves performance, reducing overrefusals while maintaining safety. ## Update after rebuttal Because I didn't find any substantive issues with the paper during the review process, my positive assessment of the paper continues. Claims And Evidence: Yes, benchmark results across several teacher and student models provide robust evidence for claims regarding improvement and accuracy. Methods And Evaluation Criteria: Yes, the evaluation builds off of existing relevant datasets and uses LlamaGuard, which is well established. Theoretical Claims: The claims of this paper are empirical, not theoretical. Experimental Designs Or Analyses: Algorithm 1 appears to be a valid method for generating preference data. Use of multiple student and teacher models, as well as tuning the containment threshold and the increase in ASD, all give me confidence in the robustness of the results. Supplementary Material: Yes, the appendix appears complete, and contains the prompt examples I had hoped to see while reading through the main text of the paper. Relation To Broader Scientific Literature: This paper contributes meaningfully to the tradeoff between safety and helpfulness in LLM alignment. It also applies preference optimization (an established technique) in a novel context. Essential References Not Discussed: None Other Strengths And Weaknesses: While the contributions are novel, the real-world applications are the greatest strength of the paper. Overrefusals cause problems for users of LLMs every day, and mitigating them without impacting safety is valuable. 
Other Comments Or Suggestions: I would change the 'toxic question example' in the appendix figure 7 to something a little more explicitly toxic. ## Update The authors' updated question is more appropriate and illustrative. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
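The review above notes that Algorithm 1 is a method for generating preference data. As a purely hypothetical sketch (not the paper's actual Algorithm 1, which uses stronger judges such as LlamaGuard), refusal-aware preference pairs could be assembled along these lines: for prompts judged safe (e.g. seemingly toxic but benign), a compliant completion is preferred over a refusal, while for genuinely toxic prompts the preference is reversed. The `is_refusal` keyword detector below is a toy stand-in.

```python
# Hypothetical sketch of refusal-aware preference-pair construction in the
# spirit of POROver; the paper's actual Algorithm 1 and judges may differ.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

def is_refusal(completion: str) -> bool:
    # Toy keyword-based stand-in for a proper refusal classifier.
    text = completion.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def make_pairs(prompt: str, completions: list, prompt_is_safe: bool) -> list:
    refusals = [c for c in completions if is_refusal(c)]
    answers = [c for c in completions if not is_refusal(c)]
    pairs = []
    for answer in answers:
        for refusal in refusals:
            # Safe prompts: prefer the answer; toxic prompts: prefer the refusal.
            chosen, rejected = (answer, refusal) if prompt_is_safe else (refusal, answer)
            pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

pairs = make_pairs(
    "How do I kill a Python process?",  # seemingly toxic, actually benign
    ["Use `kill <pid>` or `pkill -f script.py`.",
     "I'm sorry, I can't help with that."],
    prompt_is_safe=True,
)
assert pairs[0]["chosen"].startswith("Use")
```

Overgenerating multiple completions per prompt (as the paper does with a stronger teacher) makes it more likely that both a refusal and a compliant answer exist for each prompt, so that a pair can be formed at all.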
Rebuttal 1: Rebuttal: We thank the reviewer for their review and comments. We will answer the points raised individually. __1. (Other Comments Or Suggestions) I would change the 'toxic question example' in the appendix figure 7 to something a little more explicitly toxic.__ We thank the reviewer for this suggestion. In our revision, we will include the following example which is more explicitly toxic. It targets personal privacy. https://drive.google.com/file/d/1X8yfzFXwMTwxgXzXglaiV8cNAqYIBted/view?usp=sharing Thank you very much for your valuable time and thoughtful review! We hope you would consider increasing your score if we have addressed your suggestion. Please let us know if you have additional comments and we are happy to follow up. Thanks!
Summary: This paper addresses the challenges of balancing safety and usefulness in large language models. It explores the effects of using more advanced teacher models to generate completions for instruction finetuning. Their main contributions include: - They show that using more advanced teacher models (e.g. GPT-4o) to overgenerate completions for both general-purpose and toxic prompts during instruction finetuning improves the safety-usefulness trade-off in student models. - The paper introduces POROver, a post-finetuning method that complements safety finetuning that reduces the overrefusal rate while maintaining high safety levels. The authors evaluate their method on multiple model families (Llama-3 and Phi-3) and sizes (3B to 8B) and come to similar conclusions for all models tested. ## Update After Rebuttal The authors added some additional experimental results for a wider variety of model families, as well as some additional jailbreaking results that added to the strength of the overall results. Claims And Evidence: Overall, the claims made in the paper are generally supported by extensive experimentation, however, there are a few areas where the evidence could be strengthened: - The claim that POROver maintains "comparable safety levels" while increasing usefulness seems to be supported, but the slight decrease in safety could be discussed more explicitly. - The generalizability of the results across different model families is demonstrated, but the sample size (two model families) is relatively small. Including results from more diverse model families would strengthen this claim. Methods And Evaluation Criteria: The use of established benchmarks like AlpacaEval, OR-Bench, and XSTest provide a solid foundation for their evaluations and seem appropriate for what the paper attempts to show. The choice of GPT-4o as the teacher model is completely reasonable, but comparing results with other advanced models (e.g. 
Claude, Gemini) could provide more insight into how well the approach generalizes. Theoretical Claims: This paper does not make any significant theoretical claims and is primarily empirical in nature. Experimental Designs Or Analyses: The experimental designs appear sound and well executed. They use appropriate statistical measures and isolate the effects of different components during their experimentation. A few observations: - The human evaluation on XSTest Seemingly Toxic dataset (Appendix F.1) is a strong addition, validating the automated metrics. - I appreciated the analysis of saturation with ASD to show the data efficiency of the approach. Supplementary Material: I reviewed the supplementary material found in Appendices D, E, and F. I appreciate the extensive experimentation shown by this extra information and that I could find the details of most of their experiments here. Relation To Broader Scientific Literature: This paper builds on several important areas in the field of LLM safety and alignment such as instruction finetuning, safety-utility trade-off, preference optimization methods, and attempting to reduce overrefusal. Getting the desired balance between safety and usefulness in models is an extremely important field of research as models continue to improve and gain new capabilities. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: Strengths: - Extensive experimentation to demonstrate their claims. - The proposed methods are practical and show significant improvements over baselines. - The release of their generated datasets is valuable for further research in the community. - The paper's proposed method POROver can be used with a variety of preference optimization methods. Weaknesses: - While the results are promising, the discussion of the long-term generalization of the proposed methods to real-world tasks could be explored further. 
- While the paper does cover two model families across sizes from 3B to 8B, the sample size is still relatively small. Including a wider range of model architectures and sizes could show the potential generalizability of the proposed approaches. Other Comments Or Suggestions: Typos: - On line 103, "and increase its **the** Not-Overrefusal Rate..." - On line 256, "on the same initial model instance **instance** of reach set..." - On line 309, "with a modest reduction **...** usefulness" (in usefulness?) - Figures 3, 9, and 11 each have a typo where it says "safey" rather than "safety." - Table 5's caption says "annoted" (annotated?) Questions For Authors: - The paper focuses on GPT-4o as the teacher model. Have you experimented with other advanced models as teachers, and if so, how do the results compare? - How does the computational cost of your approach scale with model size? Do you anticipate any challenges in applying these methods to even larger language models? - Your POROver method shows promising results in reducing overrefusal. Have you investigated how this approach performs on more diverse or challenging types of prompts beyond the benchmarks used in the paper? For example, how does it handle ambiguous requests or prompts that require more nuanced ethical reasoning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their review and comments. We will answer the points raised individually. __1. (Claims And Evidence) The slight decrease in safety after POROver could be discussed more explicitly.__ We thank the reviewer for this suggestion. We believe the slight decrease in safety may stem from our grid search over the containment threshold parameter. The intermediate values we tested (0.01, 0.03, 0.1) correspond to slightly different points along the safety–usefulness curve. This suggests that our current choices for the containment threshold may be suboptimal, and finer or adaptive tuning could further improve results. In future work, we plan to explore automated methods for optimizing the containment threshold to better balance safety and usefulness. We will add this discussion to our revised manuscript. __2. Including a wider range of model architectures and sizes for student and teacher models could show the potential generalizability of the proposed approaches.__ New student models: We have expanded our analysis to include Falcon-7B as an additional student model, increasing the total number of model families to 3. To further broaden the range of student model sizes evaluated, we also include results for Llama-3.2-11B. Therefore, our analysis now covers student models ranging from 3B to 11B in size. We leave the exploration of models larger than 11B to future work, as we did not have access to sufficient compute resources at this time. Falcon-7B results: (To be included before April 2nd) Llama-3.2-11B results: (To be included before April 2nd) New teacher model: We agree that incorporating additional teacher models could provide deeper insight into the generalizability of our methods. Moreover, relying solely on a proprietary teacher model like GPT-4o may limit the broader real-world applicability of our approach and reduce its impact. 
To address these concerns, we have expanded our analysis to include results using Llama-3-70B, an open-weight model, as the teacher. Llama-3-70B results: (To be included before April 2nd) While the exact metric values for the student models vary slightly, all of our main conclusions remain consistent. We believe that including an open-weight teacher model as well as more student models strengthens the robustness of our findings, reduces dependency on proprietary models, and enhances the practical applicability of our methods. __3. While the results are promising, the discussion of the long-term generalization of the proposed methods to real-world tasks could be explored further.__ We thank the reviewer for raising this important point. One key aspect of long-term generalization to real-world tasks is robustness to adversarial prompts, as real-world usage often exposes models to unexpected or malicious inputs. To evaluate this, we have added a detailed adversarial robustness analysis. Specifically, we test all of our student models against three adversarial attack methods: Prompt Automatic Iterative Refinement (PAIR), Greedy Coordinate Gradient (GCG), and hand-crafted jailbreaks from Jailbreak Chat (JBC), using the behavioral prompts from the JailBreakBench benchmark. Jailbreaking results: https://drive.google.com/file/d/13aVd3igdd7JGZ_5PdfQRdcWOkF2Q19UU/view?usp=sharing For our supervised finetuning approach—i.e., overgeneration with better teacher models—we find that it significantly improves adversarial robustness (measured by attack success rate) against GCG, PAIR, and JBC. We further show that our preference optimization method does not compromise adversarial robustness across any of the three attack types. This demonstrates that it effectively reduces overrefusal without degrading safety under real-world adversarial conditions. 
We think that these observations offer strong empirical support for the real-world reliability and generalization of our methods, especially in adversarial and high-stakes deployment settings. __4. (Questions for Authors) How does the computational cost of your approach scale with model size? Do you anticipate any challenges in applying these methods to even larger language models?__ Among the model sizes we have examined, we found that training the larger models requires more GPU hours, although their convergence times remain similar. We have already included a numerical analysis in Appendix D of our original manuscript. Other challenges associated with larger models include increased GPU memory requirements and the need for more complex code to enable distributed training. We will discuss these challenges in our revised manuscript. Thank you very much for your valuable time and thoughtful review! We hope you would consider increasing your score if we have addressed your suggestions and concerns. Please let us know if you have additional comments and we are happy to follow up. Thanks! --- Rebuttal Comment 1.1: Comment: I appreciate the added discussion points about the level of safety after using POROver, especially the jailbreaking results, which would be a beneficial addition to the paper's results. I acknowledge the discussion of the addition of more model sizes and architectures, but would love to see the results from adding those models to the experiments. I'm grateful for the response about computational cost and about where to find more information on those metrics. I appreciate the authors' response and revisions, and after reviewing, I have decided to keep my original score of a 4. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their response and appreciate their approval of our answers regarding points 1, 3, and 4 we have made in our initial response. 
__Regarding point 2, i.e., including a wider range of model architectures and sizes for student and teacher models:__ New student models' results: Falcon-3-7B: https://drive.google.com/file/d/1uW52KRIDrEHehRgrY4RV-yAV-R1PrUo6/view?usp=sharing Llama-3.2-11B: https://drive.google.com/file/d/1NRtRs8fD8kho9EXTQq3VjHYaFeNv3Pu8/view?usp=share_link As we have stated previously, while the exact metric values for the student models vary, all of our main conclusions remain consistent in both Falcon-3-7B and Llama-3.2-11B. We will add these results to our revised manuscript. New teacher model results: Llama-3-70B: https://drive.google.com/file/d/1riqK5Cprl9VycQRov5cCTnmRAVg_2w4B/view?usp=sharing We note that we used Llama-3.1-8B as the student model for the Llama-3-70B teacher. Llama-3-70B, as an open-weight model, still offers substantial improvements over older teacher models (e.g., GPT-3 or GPT-3.5) and serves as a highly effective teacher for our methods. Thus, all of our main conclusions remain consistent with Llama-3-70B as the teacher. Compared to our experiments using GPT-4o, we observe that GPT-4o—being a less overrefusing model [1]—leads to student models that exhibit lower overrefusal, as expected. [1] Cui, Justin, et al. “OR-Bench: An Over-Refusal Benchmark for Large Language Models.” ArXiv.org, 2024, arxiv.org/abs/2405.20947. Once again, we believe that including an open-weight teacher model as well as more student models strengthens the robustness of our findings, reduces dependency on proprietary models, and enhances the practical applicability of our methods. We will add these results and discussions to our revised manuscript. Thank you so much for reviewing our responses and acknowledging them! Also, thank you for the typo reminders, we will fix them in our revised manuscript. We hope you would consider raising your score if you feel we have addressed your remaining suggestions. If you have any further comments, please let us know—we are happy to follow up. 
Thanks again!
Behavior-Regularized Diffusion Policy Optimization for Offline Reinforcement Learning
Accept (poster)
Summary: This paper proposes a diffusion model optimization method based on multi-diffusion-step regularization, which is different from previous behavior-regularized policy methods. Claims And Evidence: Most of the claims in this paper are well supported by theory and experiments, but some parts remain difficult to understand. Please refer to the questions and comments. Methods And Evaluation Criteria: This paper compares several classic offline RL methods. The choice of experimental environments is appropriate. Theoretical Claims: I have read the theoretical parts related to the paper, including both the main body and the appendix. However, some questions still remain. I have summarized them in the questions and comments. Experimental Designs Or Analyses: Please refer to the questions and comments for experimental concerns. Supplementary Material: I have read the appendix, especially the results in Appendix C. Relation To Broader Scientific Literature: Previous studies on behavior-regularized policy optimization often apply end-to-end regularization directly on the policy, i.e., focusing on the output actions. In contrast, for expressive diffusion models, this paper considers applying regularization at each generation step of the diffusion model, achieving better-constrained policy optimization. Essential References Not Discussed: None Other Strengths And Weaknesses: Please refer to the questions and comments for review concerns. Other Comments Or Suggestions: 1. In line 817: Regarding the generation process as a decision sequence whose final-step reward is $Q(s, a^0)$, the optimal value functions satisfy $V^{\star,s}_0=Q(s,a^0)$, $Q^{\star,s}_0=Q(s,a^0)$, and $V^{\star,s}_{n}(a^n)=\eta\log \mathbb{E}_{a^{n-1}}[\exp(Q^{\star,s}_n(a^n)/\eta)]$, $Q^{\star,s}_n(a^n)=0+1.0\cdot\mathbb{E}_{a^{n-1}}[V^{\star,s}_{n-1}(a^{n-1})]$. (1) The formula for $V^{\star,s}_n(a^n)$ differs from the formula in line 820. 
(2) The recursion involves expectations over $a^{n-1}, a^{n-2},\ldots$, but I cannot find these expectations in Equation (26). I suggest the authors show the details of the derivation here to help readers understand these important results. 2. In Algorithm 1: If I understand correctly, when updating the related networks, the training process contains the following steps: 1) Calculate the value $Q(s, a^0)$ and use the result as the target to train the value $V(s, a^0)$; 2) Get perturbed data $a^n$ by performing the forward diffusion process; 3) Run diffusion once to obtain $a^{n-1}$ and use $a^n$ and $a^{n-1}$ to update the $V$ function; 4) Calculate the Q target $R+\gamma V(s, a^n)$ and update the Q function parameters. If the above process is right, I suggest the authors revise Algorithm 1 to make the training process much clearer. Questions For Authors: 1. To better understand the background of diffusion models and correspond to the contents mentioned in line 142, right column, I suggest writing Equation (7) as $q_{n-1|n}(x^{n-1}|x^n,x^0)$ rather than $q_{n-1|n}(x^{n-1}|x^n)$. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful feedback, which has greatly improved our work. We hope the following clarifications can further address your concerns and enhance your evaluation of our paper. **Q1: About the proof in Theorem C.1** First, we would like to restate the question to address any misunderstandings. Let $Q(s, \cdot)$ denote the ending reward in the diffusion MDP with a discount factor of 1. The reviewer considers the following relationships between the value functions: $$\begin{aligned} V^{\star, s}\_0&=Q(s, a^0)\\\\ Q^{\star, s}\_0&=Q(s, a^0)\\\\ V^{\star, s}\_n(a^n) &= \eta\log\mathbb{E}\_{a^{n-1}}\left[\exp(Q^{\star, s}\_n(a^n)/\eta)\right]\\\\ Q^{\star, s}\_n(a^n)&=0 + 1.0\mathbb{E}\_{a^{n-1}}\left[V^{*, s}\_{n-1}(a^{n-1})\right] \end{aligned}$$ which seems to contradict our results in line 820. However, we believe the main confusion here is that in the diffusion MDP, the “state” is actually the tuple $(s, a^{n})$ and the state transition is implicit. The policy follows the Gaussian distribution $p^{\pi,s}(a^{n-1}|a^n)$, and upon selecting action $a^{n-1}$, the state instantly transitions to $(s, a^{n-1})$ deterministically. 
Therefore, the $Q$ function of the diffusion MDP should be defined w.r.t. the “state” $(s, a^n)$ and action $a^{n-1}$, leading to the corrected relationships: $$\begin{aligned} V^{\star, s}\_0(a^0)&=Q(s, a^0)\\\\ V^{\star, s}\_n(a^n) &= \eta\log\mathbb{E}\_{a^{n-1}}\left[\exp(Q^{\star, s}\_n(a^n, a^{n-1})/\eta)\right]\\\\ Q^{\star, s}\_{n}(a^n, a^{n-1})&=0 + 1.0V^{\star, s}\_{n-1}(a^{n-1}) \end{aligned}$$ Substituting the last equation into the second leads to the formulation in line 820: $$V^{\star, s}\_n(a^n) = \eta\log\mathbb{E}\_{a^{n-1}}\left[\exp(V^{\star, s}\_{n-1}(a^{n-1})/\eta)\right].$$ As for question (2), expanding the recursion gives $$\begin{aligned} V^{\star, s}\_n(a^n) &= \eta\log\mathbb{E}\_{a^{n-1}\sim p^{\nu,s}\_{n-1|n}}\left[\exp(V^{\star, s}\_{n-1}(a^{n-1})/\eta)\right]\\\\ &= \eta\log\mathbb{E}\_{a^{n-1}\sim p^{\nu,s}\_{n-1|n},a^{n-2}\sim p^{\nu,s}\_{n-2|n-1}}\left[\exp(V^{\star, s}\_{n-2}(a^{n-2})/\eta)\right]\\\\ &=\ldots\\\\ &=\eta\log\mathbb{E}\_{a^{n-1}\sim p^{\nu,s}\_{n-1|n}, \ldots, a^0\sim p^{\nu,s}\_{0|1}}\left[\exp(V^{\star, s}\_{0}(a^{0})/\eta)\right]\\\\ &=\eta\log\int p^{\nu,s}\_{0,1,\ldots,n-1|n}(a^0, a^1, \ldots, a^{n-1}|a^n)\exp(V^{\star, s}\_0(a^0)/\eta)\mathrm{d} a^{n-1}a^{n-2}\ldots a^{0}\\\\ &=\eta\log\int p^{\nu,s}\_{0|n}(a^0|a^n)\exp(V^{\star, s}\_0(a^0)/\eta)\mathrm{d}a^{0}\\\\ &=\eta\log\mathbb{E}\_{a^{0}\sim p^{\nu,s}\_{0|n}}\left[\exp(V^{\star, s}\_{0}(a^0)/\eta)\right] \end{aligned}$$ where the second-to-last equality follows from marginalizing over the intermediate actions $a^1, a^2, \ldots, a^{n-1}$. We acknowledge that the derivation in the appendix is somewhat vague and will revise it to explicitly incorporate a proof akin to MaxEnt RL theory for clarity. **Q2: About the update of value networks.** The detailed update procedure of the value networks consists of two steps: 1) The first step is updating $Q^{\pi}$. 
To calculate the target, we use the actor diffusion policy to generate paths $a'^{0:N}$ at the next state $s'$, calculate the target value $Q(s', a')$ and the accumulated penalties along the path $\sum_{n=1}^N\ell^{\pi,s'}_{n}(a^n)$, and perform a temporal difference update as per Eq. 12; 2) The second step is updating $V^{\pi,s}$. This is achieved by sampling $n$ and $a^n$ using the forward process, performing one-step diffusion to obtain $a^{n-1}$, and updating according to Eq. 14. For $n=1$, we directly use $Q(s, a^0)$ as the target, rather than additionally regressing the output of $V^{s}_0(\cdot)$ to $Q(s, \cdot)$. Crucially, the update of $Q^{\pi}$ does not depend on $V^{\pi,s}_N$. The diagram of the update is presented in Figure 3. Compared to other methods, the additional cost w.r.t. value function training comes from the second step, which is a constant that does not scale with the number of diffusion steps (see Figure 10). **Q3: Eq. 7 should be $q\_{n-1|n}(x^{n-1}|x^n,x^0)$** We appreciate the suggestion and would like to note that the actual distribution we wish to approximate is the posterior without conditioning on $a^0$, as $a^0$ is unknown during generation. When $a^0$ is given, the posterior distribution $q\_{n-1|n, 0}$ is tractable, with its analytical form being: $$q\_{n-1|n, 0}(a^{n-1}|a^n, a^0)=\mathcal{N}\left(a^{n-1}; \frac{\sqrt{\bar{\alpha}\_{n-1}}\beta_n}{1-\bar{\alpha}\_n}a^0+\frac{\sqrt{\alpha\_n}(1-\bar{\alpha}\_{n-1})}{1-\bar{\alpha}\_n}a^n, \sigma\_n I\right).$$ Since the exact $q\_{n-1|n}$ is intractable, we turn to a parameterized distribution $p^\theta\_{n-1|n}$ and optimize it towards $q\_{n-1|n, 0}$ via Eq. 10. As shown in DDPM, this training objective yields $p^\theta\_{n-1|n} \approx q\_{n-1|n}$. We recognize that the current presentation lacks clarity and will revise the text to explicitly articulate the connection between $q\_{n-1|n}$ and $q\_{n-1|n, 0}$ in the next version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their explanations. 
Most of my concerns are addressed. Based on the response, I have several questions: 1. When updating Q, you have sampled the action path $a^{0:N}$ and obtained the KL divergence terms $l_n(a^n)$, so why don't you use $\sum_{i=1}^{n} l(a^i)+Q(a^0)$ directly as the target of $V(a^n)$? 2. The training of V is bootstrapped (Equation (14)). Under random sampling of $a^n$, could this training approach lead to instability in V, since a given $a^n$ is unlikely to be sampled? 3. Have the authors considered treating the generation process as a low-level MDP and the RL MDP as a high-level MDP? In this case, the divergence in the generation process could be viewed as the reward of the low-level MDP, resulting in a reward sequence such as $...,r(s,a),-l_N,-l_{N-1},...,-l_1,r(s',a'),...$, and directly training Q and $\pi$ on this type of reward sequence. --- Reply to Comment 1.1.1: Comment: We are glad to know that most of the concerns have been addressed. Below, we provide detailed answers to the remaining questions, and we hope these can further support your evaluation of our work. 1. Why don't you reuse the already sampled diffusion path and use $Q(a^0)-\eta\sum_{i=1}^{n}l^{\pi,s}(a^i)$ to update $V^{\pi,s}(a^n)$? We sincerely appreciate your suggestion. We acknowledge that taking $Q(a^0)-\eta\sum\_{i=1}^{n}l^{\pi,s}(a^i)$ as the target is also a theoretically valid solution. The rationale behind using a separate sampling process to compute the target for $V^{\pi,s}\_n(a^n)$ is twofold: 1) Using single-step diffusion generation and bootstrapping from $V^{\pi,s}\_{n-1}(a^{n-1})$ is more consistent with how we update the diffusion policy; 2) Our approach supports multi-sample estimation of the target. Specifically, by sampling multiple $a^{n-1}$ and averaging their values $V\_{n-1}(a^{n-1})$, we achieve a more accurate approximation of the expected value in Eq. 14. In practice, we sampled N=10 actions $a^{n-1}$ for each $a^n$.
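The multi-sample target just described could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the one-step denoiser is stubbed with a Gaussian around $a^n$, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_step_denoise(a_n, n, num_samples, sigma=0.1):
    """Hypothetical one-step reverse diffusion p(a^{n-1} | a^n): here a
    Gaussian centered at a_n, standing in for the learned denoiser."""
    return a_n + sigma * rng.standard_normal((num_samples, a_n.shape[-1]))

def v_target(a_n, n, value_fn, num_samples=10):
    """Multi-sample bootstrap target for V_n(a^n): average V_{n-1} over
    several one-step denoised actions, which lowers the variance of the
    target compared to reusing a single diffusion path."""
    candidates = one_step_denoise(a_n, n, num_samples)
    return float(np.mean([value_fn(c, n - 1) for c in candidates]))
```

Averaging over `num_samples` candidates is what distinguishes this scheme from the single-path alternative discussed next.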
However, reusing the diffusion path gives only a single-sample estimate of the target value for each $a^n$, which may lead to higher variance. We present the learning curves (averaged over 3 random seeds) of our update scheme (labeled 'Ours') and the proposed alternative (labeled 'New') in this link: https://f005.backblazeb2.com/file/bdpo-review/rebuttal_abla_valueupdate.pdf , which reveals that our method achieves slightly faster convergence and more stable performance. 2. Since $a^n$ are randomly sampled, will the training approach cause instability in $V$? We plot the output values of $V$ in the following anonymous link: https://f005.backblazeb2.com/file/bdpo-review/rebuttal_v_values.pdf . Please note that the value network output may fluctuate somewhat in the first 50k steps, because we begin training the policy after 50k steps. Overall, we found that the value network is very stable. We attribute this stability to three key factors: 1) as discussed earlier, we sample N=10 actions and take their average value as the target, which produces estimates with lower variance; 2) the value function $V$ is optimized on *noisy actions* $a^n$, which are generated by perturbing dataset actions $a^0$ with Gaussian noise and therefore have infinite support over the action space. In this sense, $V$ is optimized on intermediate actions from all over the action space; 3) as evidenced in Figure 4 and Figure 11, the value function exhibits much smoother outputs over the action space at higher noise levels, which inherently reduces variance in value function updates. 3. Have the authors considered treating the generation process and the RL MDP as the low-level and high-level MDP, respectively? Yes, we have explored this approach and trained a unified value function $Q(s, a, n)$ on the extended reward sequence.
The most intriguing property of this formulation is that the value function update no longer requires sampling the full diffusion path -- instead, it only needs a single diffusion step, thereby reducing the computational overhead of critic learning to a constant that does not scale with the number of diffusion steps. However, in preliminary experiments, we observed that this formulation consistently underperforms our method. The primary reason is that unfolding the generation MDP into the RL MDP essentially extends the horizon by a factor of N (the number of diffusion steps), making TD learning over such an extended horizon significantly more challenging. In contrast, our approach maintains the original horizon of the environment MDP for the $Q$-function, while treating the diffusion MDP as "branches" from the environment MDP. This design ensures more stable training and better performance (curves are presented in Figure 12). Alternatively, our method can be interpreted as employing an N-step return as the target of $Q$. Specifically, we compute the cumulative sum of rewards and penalties over N steps of the diffusion MDP and then bootstrap from the subsequent environment state.
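As a rough sketch of this N-step-return view: the target combines the environment reward, the $\eta$-weighted penalties accumulated along the diffusion path, and a bootstrap from the next environment state. The exact sign conventions and placement of the discount factor below are our assumptions, not taken verbatim from Eq. 12.

```python
import numpy as np

def q_target(reward, diffusion_penalties, q_next, eta=1.0, gamma=0.99):
    """N-step-return view of the Q target: environment reward, plus a
    discounted bootstrap from the next state with the eta-weighted KL
    penalties of the diffusion path that generated the next action
    subtracted off."""
    return reward + gamma * (q_next - eta * float(np.sum(diffusion_penalties)))
```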
Summary: This paper uses diffusion policies for offline RL, and the main idea is to take the rollout quality of the diffusion process as extra regularization. In other words, for behavior cloning, the paper proposes to measure the similarity between the demonstration action and the learned action by comparing the distributions of trajectories generated by the reverse procedure of the diffusion process. They show that this difference can be accumulated as another value function and is useful for behavior cloning. Claims And Evidence: 1. To justify the proposed method, the paper says that optimizing Eq. 11 is equivalent to Eq. 1, which means that the proposed method would behave like a behavior cloning method that takes Eq. 1 as its objective. However, what we are really interested in is the optimality condition under the offline RL setting; in other words, can an optimal policy be delivered by optimizing Eq. 11? 2. The proposed method provides a stronger regularization on the divergence between the new policy and the behavior policy via the pathwise constraint. This would make the policy more likely to suffer from the bad behaviors generated by a sub-optimal behavior policy. Could the authors discuss this point further? 3. The authors consider the value of the actions generated during the diffusion procedure; however, such actions are artificial and never actually taken in practice, so how can you ensure that they appear in the offline dataset (Assumption 4.3)? In this sense, many OOD actions would be evaluated by the proposed method; how is the algorithm's stability ensured? Methods And Evaluation Criteria: yes Theoretical Claims: 1. Assumption 4.3. This assumption is like the well-known concentrability assumption, which means that the dataset should cover the visitation states of any policy from the candidate set.
When the set is large enough, the required coverage of the dataset becomes very large, which is hard to realize in practice. As a result, such assumptions have been abandoned in recent works, where only an optimal-coverage assumption is imposed. However, in our opinion, Assumption 4.3 is even stronger than the concentrability assumption, because it assumes coverage at each diffusion step. This makes the theoretical results hard to generalize to real-world scenarios. 2. The 'optimal policy' in Theorem 4.2 seems different from that in other works, such as [Bellman-consistent Pessimism for Offline Reinforcement Learning], where the optimal policy is the globally optimal policy in the true MDP. Therefore, Theorem 4.2 only guarantees the equivalence of Eq. (1) and (11), in my opinion, which is not as interesting as an analysis of the optimality of the learned policy. 3. Lack of theoretical comparison with other works, especially with works about diffusion-based policies. Experimental Designs Or Analyses: Table 1 lacks results on the 'random' and 'expert' benchmarks, which are important as well. Supplementary Material: n/a Relation To Broader Scientific Literature: see below Essential References Not Discussed: 1. In the deep learning literature on teacher-student networks, there are several works about recording the learning trajectory of the teacher network and asking the student network to imitate that learning behavior. This paper obviously follows the same idea: the new policy should imitate the diffusion trajectory of the behavior policy. So I think the paper lacks references to such works. Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for providing valuable feedback. Below, we provide further clarification and results to address your concerns, and we hope these materials can enhance your evaluation of our paper. **Due to the space constraint, we will post our discussion of the connection between BDPO and the broader literature (Q6 and Q8) in follow-up responses.** You can also access the responses in the anonymous link: https://f005.backblazeb2.com/file/bdpo-review/Q6-andQ8.txt . **Q1: Can an optimal policy be delivered by optimizing Eq. 11?** We would like to clarify that Eq. 1 is also a **well-established RL objective widely adopted in the literature**, and therefore optimizing Eq. 11 delivers optimal policies with respect to Eq. 1. Please refer to our response to Q5 for details. **Q2: The pathwise KL regularization is stronger, making the policy suffer from sub-optimal data?** Yes, BDPO additionally regularizes intermediate actions along the diffusion path. However, we emphasize that this is **not a stronger regularization**, as we have established the equivalence between the pathwise and action-wise constraints in Theorem 4.2. Given this guarantee, the pathwise constraint is in fact preferable, as it enables finer-grained control over the generation process. Furthermore, when the behavior policy is suboptimal, the regularization strength $\eta$ can be decreased to allow greater exploitation of the value function. **Q3: Intermediate actions are not present in the dataset.** When training the value functions $V^{\pi, s}\_n$ for $n>0$, we optimize them using actions $a^n$ generated by first sampling clean actions $a^0$ from the dataset and then perturbing them using the forward diffusion process $a^n\sim q\_{n|0}(a^0)$. Consequently, **the action support for $V^{\pi,s}\_{n}$ with $n > 0$ is infinite and spans the entire action space** due to the unbounded support of the Gaussian noise distribution.
The only risk of OOD evaluation comes from querying $V^{\pi,s}_0(a^0)$ for some $a^0$ generated by the actor. However, this challenge is inherent to all policy iteration methods and can be effectively mitigated by tuning the regularization strength. **Q4: Is the assumption about concentrability stronger?** We clarify that Assumption 4.3 is made only to ensure the boundedness of the KL divergence in our theory, and it is not the concentrability assumption used in the works the reviewer mentioned. Similar assumptions are also made in papers that incorporate behavior regularization -- for example, SAC assumes $|\mathcal{A}| < \infty$ to ensure the policy entropy is bounded. Regarding concentrability, as discussed in Q3, the marginal distribution $p^{\nu,s}_n$ at $n>0$ has infinite support over the action space due to the Gaussian perturbations. Consequently, the concentrability requirement reduces to the base case of $n=0$, meaning BDPO does not impose stronger concentrability assumptions than other methods. Our primary contribution is providing a practical implementation of behavior-regularized RL for diffusion-based policies, rather than introducing new theoretical bounds. **Q5: The optimal policy differs from other works.** The optimal policy differs because we consider the **behavior-regularized RL framework** (Eq. 1), which augments the standard RL objective with a KL divergence. This framework helps us shape the policy and is widely adopted in RL. For example, when $\nu$ is specified as the uniform distribution or the data-collection policy, the framework becomes MaxEnt RL (used by SAC) or regularized RL (used by offline RL methods like ReBRAC and XQL), respectively. Other offline RL algorithms (e.g., TD3-BC, AWAC) can also be categorized into this framework if we omit the KL divergence term during policy evaluation. This framework is also adopted by applications like RLHF, where KL regularization towards the reference model prevents model collapse.
Therefore, we believe it is important to study this framework. **Q6: Lack of theoretical comparison with other diffusion-based works.** See the follow-up responses or the link. **Q7: Experimental results on random and expert datasets.** We compare BDPO against several baselines, including the previous model-free SOTA, ReBRAC, and the best-performing diffusion-based method, DAC. All results are averaged over 4 seeds. |Dataset|IQL|CQL|ReBRAC|DAC|BDPO| |-|-|-|-|-|-| |hc-random|19.5|31.1|29.5|28.6|28.6±0.9| |hc-expert|95.5|97.3|105.9|103.4|105.5±2.4| |hop-random|10.1|5.3|8.1|8.4|15.0±9.7| |hop-expert|108.8|106.5|100.1|98.6|110.2±2.3| |walk-random|11.3|5.1|18.4|4.1|0.61±0.15| |walk-expert|96.9|109.3|112.3|113.5|110.4±0.1| |sum|342.1|354.6|374.3|356.6|374.4| Overall, BDPO matches SOTA performance (ReBRAC) and outperforms DAC, though the margin is narrow. Note that model-free offline RL methods tend to struggle on random datasets due to the inferior data quality. **Q8: Lack of references about imitation learning.** See the follow-up responses or the link.
Summary: The paper introduces Behavior-Regularized Diffusion Policy Optimization (BDPO), a framework for offline RL that integrates diffusion-based policies with behavior regularization. The key innovation is formulating KL regularization across all diffusion steps instead of only on the final result, enabling more efficient computation. The authors propose a two-time-scale actor-critic algorithm that optimizes value functions, which further improves computational efficiency and training stability. Theoretical results establish equivalence between the pathwise KL regularization and the standard KL-regularized RL objective. Experiments on synthetic 2D tasks and D4RL benchmarks demonstrate superior performance compared to baseline methods, particularly in locomotion and antmaze navigation tasks. ## update after rebuttal The rebuttal has addressed most of my concerns. Given that the paper is strong with relatively minor deficits, I have decided to maintain my original rating of weak accept. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I have checked all proofs for theoretical claims. Experimental Designs Or Analyses: Yes. I have checked all experimental designs and analyses. Supplementary Material: No. Relation To Broader Scientific Literature: The work builds on two key strands: behavior regularization and diffusion policies. - Behavior regularization is commonly used in both online and offline RL to shape the policy toward a desired objective. Specifically, previous offline RL works usually utilize the KL divergence between the learning policy and the behavior policy. This paper extends behavior regularization to intermediate diffusion steps, while keeping computational efficiency high through the two-time-scale optimization scheme. This scheme resembles DPPO but incorporates a penalty at each diffusion step.
- Diffusion policies model RL policies with diffusion models instead of traditional Gaussian policies. As diffusion models have better capacity for complicated multi-modal distributions, they have surpassed their Gaussian-policy counterparts. This paper proposes a novel regularization and training scheme to further improve the performance of diffusion policies, while keeping the training complexity relatively low. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths - The method is novel in incorporating behavior regularization at each diffusion step, while maintaining training efficiency with the two-time-scale scheme. - The theoretical proof is clear and well-written, linking pathwise KL to standard KL regularization and continuous-time SDEs. - The empirical results across diverse tasks are strong. Weaknesses: see questions. Other Comments Or Suggestions: Here are some possible typos: - Title of Figure 8: "Illuration" -> "Illustration". - Title of Figure 7(a): $\beta$ -> $\eta$. - Eq. (18), $V(s,a)$ -> $V(a^n)$. Questions For Authors: - The bi-level TD-learning framework is proposed to avoid preserving the computation graph of all diffusion steps as in Diffusion-QL. I wonder whether you have tried the optimization in EDP[1], which improves the efficiency of Diffusion-QL and also avoids preserving the whole computation graph. Is their method inconsistent with your behavior regularization objective? - As you use an ensemble of Q networks, how do you select the action in evaluation by the highest Q value? Do you use the mean of these Q values? Also, could you please report the inference time of BDPO and compare it with your main baselines? - For the two key hyperparameters $\eta$ and $\rho$: - In what range do you search $\eta$ and $\rho$? - Figure 7(a) shows that the trend of BDPO is completely different w.r.t. $\eta$ and $\rho$ in the Halfcheetah and Walker2d tasks, though they are both medium datasets.
Does this stem from some properties of these two environments? Also, is this trend similar on datasets other than medium (e.g., medium-replay)? - It seems that smaller $\eta$ in your tested range leads to better performance in Halfcheetah-medium. Could you please test $\eta=0$, i.e., no behavior regularization? The same for $\rho=0$, i.e., no LCB penalty. - It seems that BDPO is sensitive to $\rho$, as shown in Figure 7(b). Adjusting $\rho$ from $1.0$ to $0.75$ or $1.5$ leads to a significant drop in performance on Walker2d-medium. Could you please explain this phenomenon? - Finally, as exhaustively tuning hyperparameters is impractical in many real tasks, could you please provide some insight into how to choose hyperparameters, or at least shrink the search range, for unseen tasks based on their task properties? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive feedback. Below is further clarification regarding the reviewer's concerns. **Q1: Using EDP for policy optimization?** Our policy improvement objective is to maximize the expected Q-values while also minimizing the KL divergence: $$\max\_{p^{s,\pi}}\ \mathbb{E}\_{a^0\sim p^{s,\pi}\_0}[Q(s, a^0)]-\eta \mathrm{KL}\left[p^{s,\pi}\_{0:N}\|p^{s,\nu}\_{0:N}\right]$$ Let us ignore the second KL term for now, since EDP implements this constraint using the diffusion loss. For the first term, in order to circumvent backpropagating the gradient of Q through the diffusion path, EDP introduces the following action approximation: $$\hat{a} = \frac{1}{\sqrt{\bar{\alpha}\_n}}a^n - \frac{\sqrt{1-\bar{\alpha}\_n}}{\sqrt{\bar{\alpha}\_n}}\epsilon^{\pi,s}(a^n, n)$$ where $\epsilon^{\pi,s}$ is the output of the score network. Afterwards, they use $Q(s, \hat{a})$ as the approximation for $\mathbb{E}\_{a^0\sim p^{s,\pi}\_{0|n}}[Q(s, a^0)]$. However, we emphasize that this approximation is **inexact and biased**. The action approximation is essentially the mean of the posterior distribution: $$\hat{a}=\mathbb{E}\_{a^0\sim p^{\pi,s}\_{0|n}}\left[a^0\right]$$ meaning that they are using $Q(s, \mathbb{E}\_{a^0\sim p^{\pi,s}\_{0|n}}\left[a^0\right])$ to approximate $\mathbb{E}\_{a^0\sim p^{s,\pi}\_{0|n}}[Q(s, a^0)]$, which is biased and inconsistent with our theory. **Q2: How to select the action with the highest Q value? What about the inference time?** Yes, we use the average of the ensemble Q networks to select the actions. During inference, BDPO first generates $N_a=10$ candidate actions **in parallel**, calculates their $Q$-values and selects the best one.
The following table presents the inference latency per state, averaged over 100K trials: |Algorithm|Inference Time (ms)| |:-----:|:-----:| |BDPO (JAX)|0.310| |DAC (JAX)|0.298| |Diffusion-QL (JAX)|0.251| |Diffusion-QL (PyTorch)|1.22| |DTQL (PyTorch)|0.411| |QGPO (PyTorch)|5.73| We found that the inference cost of BDPO is comparable to DAC, which also generates 10 actions and selects the one with the highest Q-value. The JAX implementation of Diffusion-QL is faster than BDPO, since it only generates one action and does not query the Q-values. The PyTorch version of Diffusion-QL is much slower. For DTQL, since its policy network is simply a one-step policy, its inference time is comparable to BDPO's. Finally, QGPO requires taking the gradient of the Q-value network to calculate the guidance, which results in heavy computational overhead during inference. **Q3: About hyperparameters** - **The range of parameter sweeping for $\eta$ and $\rho$?** For locomotion tasks, we swept $\eta$ and $\rho$ over {0.5, 1.0, 1.5, 2.0}. For antmaze tasks, we swept $\rho$ over {0.5, 0.8, 1.0} and $\eta$ over {1.0, 5.0, 10.0}. - **The different trends w.r.t. the hyperparameters in halfcheetah and walker2d tasks, and what about other datasets like medium-replay?** The sensitivity analysis of $\eta$ and $\rho$ for medium-replay tasks is provided in the following anonymous links: https://f005.backblazeb2.com/file/bdpo-review/rebuttal_abla_eta.pdf and https://f005.backblazeb2.com/file/bdpo-review/rebuttal_abla_rho.pdf . Overall, we found that the trend is similar across most of the datasets, that is, excessively large or small $\rho$ and $\eta$ may result in fluctuation or degradation in performance. The halfcheetah-medium-v2 dataset seems to be an exception in that it is more tolerant of extreme parameter values. - **Setting $\eta=0$ or $\rho=0$?** For results with $\rho=0$, please refer to the above link to the ablation w.r.t. $\rho$.
For results with $\eta=0$, we additionally provide curves on medium-expert datasets, and the results are provided in the anonymous link: https://f005.backblazeb2.com/file/bdpo-review/rebuttal_eta0.pdf . The lower-confidence-bound technique is conceptually similar to the commonly adopted double Q-network trick that penalizes the Q-values of OOD actions, and therefore, when there is no LCB penalty, the performance drops sharply due to severe over-estimation. Setting $\eta=0$ also results in performance fluctuation and degradation on the walker datasets due to insufficient constraint. However, on the halfcheetah datasets, $\eta=0$ improves performance, likely because the LCB already penalizes out-of-distribution (OOD) actions effectively. - **About the sensitivity to $\rho$ and advice on choosing hyperparameters?** After inspecting the training details, we observed that an excessively small $\rho$ leads to severe over-estimation of Q-values, while an excessively large $\rho$ causes severe under-estimation instead, both of which result in performance degradation. We found that a $\rho$ value yielding stable value estimates generally correlates with strong performance. Thus, we recommend first adjusting $\rho$ until stable value estimates are achieved, and then gradually decreasing $\eta$ to strike a balance between robustness and performance.
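For concreteness, the best-of-candidates inference described in Q2 above could be sketched roughly as follows. The sampler and Q networks here are hypothetical stand-ins for the trained diffusion policy and ensemble, not the actual implementation.

```python
import numpy as np

def select_action(state, sample_actions, q_ensemble, num_candidates=10):
    """Best-of-N inference: sample candidate actions in parallel from the
    (stand-in) diffusion policy, score each with the mean of the Q
    ensemble, and return the highest-scoring candidate."""
    candidates = sample_actions(state, num_candidates)            # (N, act_dim)
    scores = np.mean([q(state, candidates) for q in q_ensemble], axis=0)
    return candidates[int(np.argmax(scores))]
```

Scoring with the ensemble mean mirrors the averaged-Q selection described in the rebuttal; the sampling of candidates is embarrassingly parallel, which keeps the added latency small.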
Is Noise Conditioning Necessary for Denoising Generative Models?
Accept (poster)
Summary: This paper investigates the necessity of noise conditioning in diffusion models. It provides a theoretical analysis of the effects of removing noise conditioning and presents error bounds. The analysis shows that, under mild conditions, the errors resulting from the removal of noise conditioning are relatively small. The paper further conducts experiments to validate these findings. The results indicate that while noise conditioning is beneficial for enhancing sample quality, it is not a critical factor for denoising generative models. ## update after rebuttal I thank the authors for their responses. I have read all reviews and the authors' responses. I will maintain my scores. Claims And Evidence: The claims are generally clear and supported by theoretical and empirical analysis. I only have a few questions: 1. Without the noise conditioning, how are the models trained? That is, is Eq. 7 the objective for training? If yes, how do we sample $\mathbf{z}$ and compute the expectation? If no, is it that we still train models with Eq. 2, but only replace $NN(\mathbf{z}|t)$ with $NN(\mathbf{z})$? 2. If we still use Eq. 2 for training and only replace $NN(\mathbf{z}|t)$ with $NN(\mathbf{z})$, we actually still use the noise conditioning. That is, $t$ is implicitly used since $\mathbf{z} = a(t)\mathbf{x} + b(t)\epsilon$. Methods And Evaluation Criteria: The proposed methods make sense. Theoretical Claims: I checked the proof in Appendix B and did not find issues. Experimental Designs Or Analyses: The experimental results are sound. One question I have is: 1. The paper states that the error bounds are related to the feature dimension $d$. In the image experiments, $d$ is much greater than $1/t$, so the statements hold. It would also be helpful to vary $d$ and see how the error bounds are impacted. Supplementary Material: I checked the proof in Appendix B.
Relation To Broader Scientific Literature: The paper mainly explores the impact of noise conditioning in denoising diffusion generative models. Its analysis and results can be generalized to many different variants of diffusion models, e.g., flow matching and consistency models. Essential References Not Discussed: Related works are sufficiently discussed. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: 1. I am curious about the objective of removing the noise conditioning. That is, it seems that it will not reduce training/inference time too much, but the model performance may be impacted, i.e., in Table 2, most methods are worse. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks a lot for the insightful feedback and the supportive comments on our work! **1. Definition & benefits of removing noise conditioning** **Reviewer FoCJ:** `Without the noise conditioning, how are the models trained? … is it that we still train models with Eq. 2, but only replace` $NN_{\theta}(\mathbf{z}\mid t)$ `with` $NN_{\theta}(\mathbf{z})$`?` Regarding the question: Correct. Without noise conditioning, the neural network is still trained with Eq. 2, but with $NN_{\theta}(\mathbf{z}|t)$ replaced by $NN_{\theta}(\mathbf{z})$. **Reviewer FoCJ:** `… we actually still use the noise conditioning. That is, the t is implicitly used since` $z=a(t)x+b(t)\epsilon$. This is the main message we wanted to convey: the noisy image contains information about the noise level $t$, and this is why *explicit* noise conditioning in the neural net is not needed. **Reviewer FoCJ:** `I am curious about the objective of removing the noise conditioning`. Our motivation is rooted in curiosity about the necessity of noise conditioning, which is the "common wisdom" in most previous works on denoising generative models. In our opinion, the value of removing noise conditioning lies in challenging this common wisdom, as well as in its theoretical implications. Finally, as a direct result, some models (such as DiT+FM, as we share with Reviewers gvd2 and 7y3v) can enjoy improvements from the removal of noise conditioning. **2. Low-dimensional behavior** Reviewer FoCJ suggests investigating model behavior in the scenario where the dimension $d$ is low, since our theory assumes a large enough $d$. This suggestion is very insightful. Inspired by it, we explored low-dimensional cases, where the approximation $d\gg 1$ does not hold and our theoretical analysis no longer applies. Specifically, we did experiments on the toy "two moons" dataset (which has $d=2$) with Flow Matching (i.e.
flowing from a standard Gaussian to the two-moons dataset) models, and visualized the generated samples in the noise-conditional and noise-unconditional settings. The results are shown in the figure below: [reb — Postimages](https://postimg.cc/dZwWS6hF) By visualizing the generated samples, one can see that removing noise conditioning leads to a significant performance drop in low-dimensional cases, preventing proper modeling of the distribution.
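For concreteness, the noise-unconditional training described in point 1 can be sketched as follows. This is a minimal sketch, not the paper's code: the schedule functions `a`, `b` and the stub model are hypothetical placeholders for the terms in Eq. 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_loss_no_t(model, x0, a, b):
    """One denoising-loss evaluation without noise conditioning: the network
    sees only the noisy sample z = a(t) x + b(t) eps; t is used to build z
    but is never passed to the network (NN(z) instead of NN(z | t))."""
    t = rng.uniform(size=(x0.shape[0], 1))    # noise level, implicit in z only
    eps = rng.standard_normal(x0.shape)
    z = a(t) * x0 + b(t) * eps
    pred = model(z)                           # no t argument
    return float(np.mean((pred - eps) ** 2))
```

The only change from the conditioned variant is the missing `t` argument to `model`; the noise level still shapes the input `z`, which is exactly why the network can infer it implicitly.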
Summary: This paper investigates whether diffusion models, which are typically noise-conditioning networks, can be converted into noise-unconditional networks. It finds that many models are not significantly affected by the removal of noise conditioning, and in the case of Rectified Flow (RF) models, performance even improves. The paper provides a theoretical analysis suggesting that removing noise conditioning does not introduce significant errors. Additionally, it proposes a noise-unconditional variant of EDM that maintains FID performance. Claims And Evidence: The core idea of the paper is strong, but the analysis lacks exploration across a broader set of models. The experimental setup has low variance, raising concerns that some results might be due to chance. The model-wise analysis is limited, as the models selected are among the simplest available. One issue is that diffusion sampling often involves classifier-free guidance, which introduces instability through extrapolation in $x_t$. Noise-unconditioning may exacerbate these instabilities. For example, Pan et al. (2023) shows that increasing classifier-free guidance from 1 to 2 to 3 significantly degrades performance in a diffusion inversion setting. Similarly, models like Latent Diffusion Models (LDM) and DiT might exhibit unpredictable behavior under noise-unconditioning. Another key concern is why 1-RF performs better without noise conditioning. It is likely due to 1-RF's nature: it inherently learns the shortest path between the data distribution and Gaussian noise, which aligns well with the diffusion ODE trajectory, making noise conditioning unnecessary. However, this explanation is missing from the main text. Despite these gaps, Figures 2, 3, and 4 effectively support the logical flow between the paper’s core ideas and conclusions. Refs: Pan, Zhihong, et al. "Effective real image editing with accelerated iterative diffusion inversion." ICCV 2023. 
Methods And Evaluation Criteria: The evaluation is somewhat limited. While FID is a reasonable choice given the research focus, relying solely on FID without additional measures is problematic. Running multiple trials and reporting the standard deviation of FID scores could improve the robustness of the evaluation. Theoretical Claims: The theoretical statements are mostly reasonable, but the paper mixes different types of diffusion models without clearly distinguishing them. Table 1 properly differentiates DDIM, EDM, and FM, but Statements 1 and 2 focus on FM, while Section 5 is heavily EDM-oriented. Since Table 2 shows that DDIM, EDM, and FM behave differently under noise-unconditioning, a general claim that "DMs work without noise conditioning" oversimplifies the issue. Instead, the results suggest that the effects of noise-unconditioning depend on the underlying diffusion formulation, requiring a more detailed explanation that is currently missing. Experimental Designs Or Analyses: Sections 4.2 and 4.3, along with Figure 3, focus primarily on FM models. However, to support similar claims for DDIM and EDM, equivalent experiments should be conducted for those models. As mentioned earlier, the analysis does not sufficiently highlight model-specific differences, making some conclusions appear oversimplified. Supplementary Material: No, I did not review it. Relation To Broader Scientific Literature: The ideas could be extended to conditional diffusion models (with varying classifier-free guidance settings) and latent diffusion models. Essential References Not Discussed: The difference between DDIM failing and 1-RF succeeding likely stems from DDIM's trajectory curvature. A more detailed analysis of this could strengthen the paper. Here is one paper that could help the discussion: Lee, Sangyun, Beomsu Kim, and Jong Chul Ye. "Minimizing trajectory curvature of ODE-based generative models." ICML 2023. Other Strengths And Weaknesses: The paper is well-written.
Other Comments Or Suggestions: None. Questions For Authors: In Figure 7, (a) shows 1-RF achieving an FID of 3.01, yet (b) achieves 2.58 and (d) achieves 2.61. Does this suggest that the baseline result of 3.01 was an outlier due to variance in training or sampling? Would running more trials and reporting standard deviations clarify whether 3.01 was a statistical anomaly? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the constructive feedback and supportive comments. Here, we address the concerns regarding classifier-free guidance (CFG), experimental random variance, model-wise analysis, and the explanation of the performance of 1-RF.

**1. Latent Diffusion (DiT) & Classifier-free Guidance**

Reviewer 7y3v commented, `One issue is that diffusion sampling often involves classifier-free guidance, which introduces instability through extrapolation in `$x_t$`. Noise-unconditioning may exacerbate these instabilities`. This comment on the risk of instability in generation with classifier-free guidance is very valuable. Also, we believe that incorporating larger-scale experiments such as Latent Diffusion (DiT) will definitely make our work more solid. To address the concern, we conducted experiments with DiT + FM (i.e., SiT) on the ImageNet 256x256 dataset with CFG. Here, all experiments use the same configuration as the original paper, using the Euler sampler with 250 steps. The results demonstrate that our findings extend successfully to larger scales:

| Model: DiT-B/2 + FM | | |
| --- | --- | --- |
| CFG Scale | FID w/ t | FID w/o t |
| 2.0 | *9.36* | 10.66 |
| 2.5 | _**8.03**_ | 8.15 |
| 2.7 | 8.24 | _**7.96**_ |
| 3.0 | 8.88 | _8.15_ |
| 3.5 | 10.29 | _9.09_ |
| 4.0 | 11.81 | _10.28_ |

Notably, removing noise conditioning improves performance at optimal CFG scales, and for many different CFG scales removing noise conditioning improves performance. We can see that in the setting of FM on a large-scale dataset, the behavior of removing noise conditioning remains consistent with the experiments in our paper. These results show that it is feasible to extend our conclusions to large-scale diffusion models, and our observations are robust with respect to classifier-free guidance.

**2. Statistical Robustness of the Results**

**Reviewer 7y3v:** `In Figure 7, (a) shows 1-RF achieving an FID of 3.01, yet (b) achieves 2.58 and (d) achieves 2.61. 
Does this suggest that the baseline result of 3.01 was an outlier due to variance in training or sampling?` To address this concern, we report the variance of FID over 5 trials with different random seeds (top row), as well as over three different training checkpoints (bottom row). Results are as follows; these numbers correspond to the last row in Figure 7(a).

| Model: FM (1-RF) | (a) | (b) | (c) | (d) |
| --- | --- | --- | --- | --- |
| different sample seeds | $3.01\pm 0.02$ | $2.58\pm 0.02$ | $2.65\pm 0.02$ | $2.61\pm 0.01$ |
| different checkpoints | $2.98\pm 0.03$ | $2.62\pm 0.03$ | $2.66\pm 0.01$ | $2.61\pm 0.02$ |

Note that here (a), (b), (c), and (d) are different settings. These results confirm that the performance enhancement is not due to a statistical anomaly; the generation quality measured by FID-50k is robust enough for the evaluation.

**3. Oversimplification?**

**Reviewer 7y3v:** `In Statement 1 and 2, the focus is on FM, while Section 5 is heavily EDM-oriented… oversimplifies the issue`. We clarify that our analysis is valid for general cases. While we used the FM formulation as our primary example, our analysis extends to all listed diffusion models. We chose FM for its notational simplicity and clarity, avoiding unnecessary complexity while maintaining theoretical rigor.

**4. Explanations on 1-RF: Curvature of the Path**

We greatly appreciate your insightful perspective on how 1-RF inherently aligns with the diffusion ODE trajectory. Our theory mainly focuses on bounding the error of removing noise conditioning, demonstrating that most models can work with the removal of noise conditioning. We have to modestly admit that our theory is incapable of explaining why 1-RF becomes better when the noise conditioning is removed. 
Nevertheless, we believe that using trajectory curvature to connect the noise-unconditional model's performance to specific diffusion formulations is a very promising future direction, and we would like to explore this topic in more depth in follow-up work.

---

Rebuttal Comment 1.1: Comment: Thank you for the solid responses to points 1 and 2. I appreciate the clear answers and additional experiments. Regarding points 3 and 4, I still believe that the differences between FM and other diffusion models in how they respond to the removal of noise conditioning deserve deeper investigation. Exploring these differences could lead to a more complete understanding and a stronger contribution overall. I see point 3 as a key weakness of the current version of the paper. While the authors seem to agree that point 4 might offer a potential direction to address this limitation, it wasn't directly addressed in the rebuttal. For these reasons, I would like to maintain my score.

---

Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments and for recognizing the value of our responses to points (1) and (2). We are glad that the additional experiments and variance analysis were helpful and provided further support for our findings, making our results more solid and comprehensive. Regarding points (3) and (4), we appreciate the insightful suggestions. Our theoretical analysis is intended to be broadly applicable across diffusion formulations, and FM was used as a representative example for clarity. We acknowledge that model-specific behaviors deserve deeper investigation, and such exploration represents a promising direction for future work. Also, while our theory does not directly explain why 1-RF improves without noise conditioning, we agree that understanding such behavior — possibly through geometric perspectives like trajectory curvature — is an exciting avenue to pursue going forward. 
We sincerely thank you again for the constructive feedback. We have made concrete efforts to address the concerns through new experiments, statistical validation, and clarification of theoretical scope. We also engaged thoughtfully with the broader conceptual points raised, and hope our responses reflect the care with which we approached this work.
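As an aside on the variance reporting in point 2 above: the mean ± std summaries are straightforward to reproduce from raw per-seed FID runs. A minimal sketch, using hypothetical per-seed readings (not the authors' actual raw numbers):

```python
import statistics

# Hypothetical FID readings over 5 sampling seeds for one setting; a
# "2.58 +/- 0.02"-style summary is just mean +/- sample std.
fids = [2.56, 2.58, 2.59, 2.57, 2.60]
mean = statistics.mean(fids)
std = statistics.stdev(fids)  # sample standard deviation (n - 1)
print(f"{mean:.2f} +/- {std:.2f}")  # -> 2.58 +/- 0.02
```

With only five trials the sample std is a noisy estimate itself, but it is enough to distinguish the reported gap (3.01 vs. 2.58) from seed-level fluctuation (~0.02).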
Summary: The paper tries to debunk a common belief among diffusion model practitioners that time-conditioning of the model is necessary for a diffusion model. The authors take both a theoretical and an experimental approach to address this issue. The paper mainly focuses on theoretical reasoning rather than practice, which is demonstrated only on the CIFAR-10 generation problem, although various samplers are chosen for demonstration. Claims And Evidence: I believe that the paper's main questioning of the necessity of time-conditioning is valid and might open a new area of research that will benefit generative model communities. I am content with the theoretical justification of the paper and the choice of various sampling mechanisms. However, I strongly suspect that the CIFAR-10 generation task and commercial large-scale latent diffusion models do not share the same level of difficulty, and extending the "success" in the CIFAR-10 domain to large-scale LDMs should be done with extreme care. I still believe that the paper's main question is well addressed. However, unless we have actual experimental results for large-scale commercial diffusion models, the practical value of this claim is not strong enough. I do not oppose rejecting this so far, and I want to hear others' opinions regarding this, including the authors'. Methods And Evaluation Criteria: As I have mentioned in the previous section, claims for diffusion models should be made with extreme care and should be supported by practical large-scale latent diffusion models of any kind (e.g., Stable Diffusion, or other image/audio/text diffusion models). Unless at least one of these experiments is conducted, I cannot give a higher score than 3. I do not think we need every single large-scale experiment, but I do think that CIFAR-10 is not enough, given the small size of the domain, the simplicity of the manifold, and the extreme sparsity of the dataset compared to large-scale data such as LAION-5B. 
Theoretical Claims: I have found no error in the theoretical claims. However, I would like to humbly admit that I may have missed some errors. Experimental Designs Or Analyses: It is a redundant statement, but I am content with the discussion of the various samplers. However, I am not fully content with the domain of generation. Supplementary Material: Yes, I have checked the supplementary material, including theoretical claims. Relation To Broader Scientific Literature: The problem tackles the necessity of a noise-level condition for multi-step denoising frameworks. If the paper's claim is valid in general denoising problems, this can be extended to deep learning-based solutions for ill-posed problems in general. Essential References Not Discussed: It would be nice to include DPM-Solver and DPM-Solver++-type samplers as well, since these are still well-used samplers in practice, and also these samplers use multiple tabbed filtering. The performance of these types of samplers might be closely related to DDIM's failure as well. Other Strengths And Weaknesses: Overall, the paper raises an important question on the necessity of the time-condition dependency of the model. This claim reveals that the architecture of current diffusion models may only be suboptimal. I am also happy that the paper is equipped with rich theoretical justification and experimental justification with various types of samplers. Regarding weakness claims, please refer to other sections of this review. Other Comments Or Suggestions: - The figures seem not to address the main claim of the paper well enough. For example, in Figure 1, I had to look multiple times to realize that the core idea is that diffusion models may just work well with t removed. This type of presentation issue exists in every figure in the manuscript, and I recommend updating them for better clarity. 
Questions For Authors: - Is there any practical problem in adopting this method in large-scale diffusion models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the thoughtful feedback and recognition of our theoretical contributions. Below, we address the concerns regarding experimental scope and practical applicability to large-scale diffusion models.

**1. Large-scale experiment**

To address reviewer gvd2's concerns on experimental scale, we conducted additional large-scale experiments with DiT + FM (i.e., SiT) [1, 2] on the ImageNet 256x256 dataset. All experiments use the same configuration as the original paper, using the Euler sampler with 250 steps. The results demonstrate that our findings extend successfully to larger scales:

| Model: DiT-B/2 + FM | | |
| --- | --- | --- |
| CFG Scale | FID w/ t | FID w/o t |
| 2.0 | *9.36* | 10.66 |
| 2.5 | _**8.03**_ | 8.15 |
| 2.7 | 8.24 | _**7.96**_ |
| 3.0 | 8.88 | _8.15_ |
| 3.5 | 10.29 | _9.09_ |
| 4.0 | 11.81 | _10.28_ |

Notably, removing noise conditioning improves performance at optimal CFG scales. Also, for many different CFG scales, removing noise conditioning improves performance. We can see that in the setting of FM on a large-scale dataset, the behavior of removing noise conditioning remains consistent with the experiments in our paper.

**2. Performance on Other Types of Samplers**

**Reviewer gvd2:** `"It would be nice to include DPM-Solver, DPM-Solver++-type samplers as well, since these are still well-used samplers in practice, and also these samplers use multiple tabbed filtering. The performance for these type of samplers might be closely related to DDIM's failure as well."`

This is a very interesting extension to our current work, generalizing our theory and experiments to a wider range of commonly used sampling methods. Regarding the DPM-Solver equation:

$$ x_{t_i}=\frac{\alpha({t_i})}{\alpha(t_{i-1})}x_{t_{i-1}}-\sigma({t_i})(e^{h_i}-1)\epsilon_{\theta}(x_{t_{i-1}},t_{i-1}) $$

(Eq. 3.7 in [3]), which, in our notation (Eq. 
4 in our paper), becomes

$$ \kappa_i = \frac{\alpha_{i+1}}{\alpha_{i}},\quad \eta_i = -\sigma_{i+1}(e^{h_{i+1}}-1),\quad \zeta_i=0 $$

This $\kappa_i$ for DPM-Solver behaves very similarly to DDIM's (which has $\kappa_i=\sqrt{\frac{\alpha_{i+1}}{\alpha_i}}$), which leads to a large error bound (the term $\prod \kappa_i$ dominates the error bound, and $\prod \kappa_i$ approaches infinity due to division by 0). Thus, by our theory we can **expect** bad performance, similar to DDIM, in the noise-unconditional scenario. To further confirm this intuition, we performed numerical calculations of the bound values as well as FID evaluations for both DPM-Solver and DPM-Solver++ (which has a similar formulation to DPM-Solver but uses $x$-prediction). The bound values of both are on the order of $10^6$ (the same order as DDIM), and sampling with DPM-Solver fails to generate good images (FID $>50$). These results show that our theoretical error bound (Section 4.4) can qualitatively predict the performance accurately, demonstrating that our theory generalizes even to these carefully designed samplers.

**3. Other Suggestions**

We appreciate the feedback regarding figure clarity and will incorporate the suggestions to enhance the presentation of our results.

**References**

1. Peebles, William, and Saining Xie. "Scalable diffusion models with transformers." *Proceedings of the IEEE/CVF International Conference on Computer Vision*. 2023.
2. Ma, Nanye, et al. "SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers." *European Conference on Computer Vision*. Cham: Springer Nature Switzerland, 2024.
3. Lu, Cheng, et al. "DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps." *Advances in Neural Information Processing Systems* 35 (2022): 5775-5787.
4. Lu, Cheng, et al. "DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models." *arXiv preprint arXiv:2211.01095* (2022). 
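The κ-product argument above can be illustrated numerically. This is a minimal sketch under an assumed VP-style signal schedule (a hypothetical stand-in, not the paper's exact schedule): the product $\prod \kappa_i$ telescopes to $\alpha_N/\alpha_0$, which blows up as the initial signal scale approaches zero.

```python
import numpy as np

# Hypothetical VP-style signal scale over N sampling steps, going from
# nearly pure noise (alpha ~ 0) at the start to clean data (alpha ~ 1).
N = 100
t = np.linspace(1.0, 0.0, N + 1)
alpha = np.cos(0.5 * np.pi * t) ** 2 + 1e-8  # small floor avoids exact zero

# DPM-Solver-style kappa_i = alpha_{i+1} / alpha_i: the product telescopes
# to alpha_N / alpha_0 and diverges as alpha_0 -> 0, consistent with the
# large bound values discussed above.
kappa = alpha[1:] / alpha[:-1]
prod_kappa = float(np.prod(kappa))
print(f"prod kappa (DPM-Solver-like): {prod_kappa:.2e}")

# A sampler with kappa_i close to 1 keeps the product bounded instead.
prod_unit = float(np.prod(np.ones(N)))
print(f"prod kappa (kappa_i = 1): {prod_unit}")
```

The contrast between a telescoping product dominated by a near-zero denominator and a product of near-unit factors is the qualitative distinction the rebuttal draws between DDIM/DPM-Solver and the better-behaved samplers.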
--- Rebuttal Comment 1.1: Comment: Thank you for the comment. I have read the other reviews and the authors responses on them, too. I believe this paper is a well-established work that questions the necessity of time-conditioning in diffusion models. It would be better to extend the discussion in Section 4.4 on why DDIM and DPM-Solvers fail with this method and why the other samplers gain advantage even if the noise level condition is removed. I will maintain my score as weak acceptance.
Summary: This paper analyzes noise conditional diffusion models (DMs) and develops theory supporting the viability of noise unconditional DMs. Empirical evidence supports the author's theoretical claims and demonstrates that noise unconditional DMs are capable of performance similar to noise conditional DMs. This challenges pre-existing notions that noise conditioning is fundamentally necessary for DMs to function. Claims And Evidence: Yes, all claims in the paper are well supported both by theoretical proofs and empirical studies. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have reviewed the proofs / theoretical claims in the main paper and they appear sound to me. Experimental Designs Or Analyses: Yes, the empirical studies appear valid to me. Supplementary Material: I briefly reviewed the proofs / derivations and additional experimental details in the supplementary materials . Relation To Broader Scientific Literature: This paper challenges the existing notion that noise conditioning is fundamental to the high performance of DMs. Essential References Not Discussed: The finding in Section 4.1 (i.e., that the expectation over multiple realizations of z is an effective target) is very reminiscent of Noise2Noise [1]. Including a reference to [1] would provide additional context around using the expectation over multiple noisy realizations as a target. **References** 1. Lehtinen, Jaakko, et al. "Noise2Noise: Learning image restoration without clean data." arXiv preprint arXiv:1803.04189 (2018). Other Strengths And Weaknesses: **Strengths** 1. Novelty. I find the author's idea to use multiple realizations of z as an effective target to train an unconditional DM to be novel. 2. The theory and proofs explore the viability of noise unconditional DMs, as well as their error bounds 3. Impact. The theory is well supported by empirical evidence and unconditional DMs (somewhat surprisingly) perform similarly to noise conditional DMs. 
This is an important finding for the community because it challenges pre-existing notions about DMs. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for the thoughtful review and positive feedback regarding our work’s novelty and impact! We agree that incorporating the suggested reference *Noise2Noise: Learning image restoration without clean data* will provide valuable context for the derivation in section 4.1. We will revise the manuscript to include this reference.
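The Noise2Noise connection raised in this review can be demonstrated in a few lines: under squared error, the best constant prediction against many noisy realizations of a target is their mean, which is why an expectation over realizations of z is an effective training target. A toy sketch, illustrative only and not the paper's actual training setup:

```python
import numpy as np

# Minimizing MSE against noisy realizations drives the prediction toward
# their mean: the optimal constant predictor under squared error is the
# sample mean of the targets, which converges to the clean value.
rng = np.random.default_rng(0)
clean = 3.0
noisy_targets = clean + rng.normal(0.0, 1.0, size=100_000)

# Closed-form MSE minimizer over a constant p: p* = mean(targets).
p_star = noisy_targets.mean()
print(f"p* = {p_star:.3f}")  # close to the clean value 3.0
```

The same principle extends from a scalar to a regressor: training against noisy targets fits the conditional expectation, without ever observing a clean target.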
Learning Efficient Robotic Garment Manipulation with Standardization
Accept (poster)
Summary: The authors present APS-Net, a unified framework for garment manipulation that integrates both unfolding and standardization. APS-Net employs a dual-arm, multi-primitive policy to unfold crumpled garments and ensure standardization, which facilitates downstream tasks like folding. Experimental results show that APS-Net outperforms baselines in both simulation and real-world tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: See Other Strengths And Weaknesses. Supplementary Material: Yes Relation To Broader Scientific Literature: Compared to previous works, APS-Net integrates both garment unfolding and standardization into a unified framework. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. The authors conduct real-world experiments and provide detailed analyses of the experiments. 2. APS-Net introduces a dual-arm, multi-primitive policy and integrates garment unfolding and standardization into a unified framework, offering notable improvement over prior work such as P&P and FlingBot. Weaknesses 1. The proposed approach is straightforward yet complicated. The framework is designed around three metrics (coverage, IoU, keypoint distance), with a corresponding three-encoder, six-decoder architecture (each encoder is paired with both fling and P&P decoders). The contribution and novelty in the pipeline architecture may be limited. 2. While the Introduction is well-written and easy to follow, the Methods section could be improved for clarity, as some details are confusing (see Questions). 3. The same metrics (coverage, IoU, keypoint distance) used in the experiments are also employed during the training of APS-Net. It is unclear whether these metrics are also used in the training of other baseline methods. If not, the comparison might be unfair. Other Comments Or Suggestions: In Line 262, a space is missing after the period. Questions For Authors: 1. 
What does $k$ represent? The paper does not define this variable. 2. Figure 15: In Step 1 and Step 2, why do the shapes of the spatial action maps in the first row not match the shapes in the second row? 3. The text and equations suggest that value maps are aggregations of scores for each $\langle x, y, \theta, w \rangle$, implying a high-dimensional representation (dimensionality > 3). However, the visualized value maps in Figure 2 appear to be three-dimensional (xyz space), which is confusing. Could the authors provide some clarification? Additionally, should $\theta$ be discrete? It is not explicitly mentioned how $\theta$ is handled. 4. Section 3.2 explains how the grasp points are generated but does not explain how place points are determined. 5. The concept of 'midpoint (x, y) between two grasp points' is used for the pick&place action (in Equation 9); however, it seems only one arm is used for the pick&place action, so it is unclear what the midpoint represents in this context. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful feedback, insightful questions, and recognition of our work's strengths. Below we address the comments point-by-point:

## Weaknesses

### 1. Framework Architecture Complexity

We understand the concern about APS-Net's complexity. However, it is specifically designed for the factorized reward function, with each component essential for guiding fling and p&p actions in garment standardization. Below, we explain its necessity and novelty. The three encoders (coverage, IoU, and keypoint distance) directly target the core challenges in garment manipulation:

- Coverage Encoder: Guides the fling action to quickly unfold crumpled garments.
- IoU Encoder: Guides fine-grained p&p adjustments of near-flattened garments for consistent shape and orientation.
- Keypoint Encoder: Ensures visibility of key garment features (e.g., collar, sleeve, hem).

Removing any encoder causes a performance drop, as shown in the ablation studies (Table 4). Thus, this design is not arbitrary, but a novel architecture tailored to garment unfolding. To further validate our design, we conducted experiments with an unfactorized network structure (a single-encoder variant with two decoders (fling and p&p) and a weighted sum of the same metrics), resulting in poorer performance (see new Table 4.1).

Table 4.1 Unfactorized network structure

| Method | COV (%) | IOU (%) | KD |
|--------------|---------|---------|--------|
| Unfactorized | 87.5 | 73.7 | 1.995 |

### 3. Clarifying Metric Usage

We used the same reward function for the FlingBot and P&P baselines as for ours, with results shown in Table 1 under "S-Fling" and "P&P." We also compare this with FlingBot's original method, which used only coverage as the reward. These comparisons ensure a fair evaluation.

## Questions

### 1. Undefined Variable (k)

The variable k was a typo; the correct variable is m (m ∈ {fling, p&p}). This will be corrected in the revision. 
### 2. Spatial Action Map Shapes in Figure 15

The shape differences in spatial action maps arise from applying a spatial action mask (SAM), detailed in Section 3.4 and Appendix B. For dual-arm fling, SAM filters infeasible actions, such as regions with potential arm collisions or where only one arm can grasp the garment. For example, Figures 13(a) and 13(b) show the original garment and the corresponding mask, while 13(d) shows the mask applied, excluding infeasible regions. Therefore, the valid mask does not match the garment's original shape. For single-arm p&p, collision avoidance is unnecessary, so the mask is aligned to the shape of the garment (see Figure 13(e)).

### 3. Dimension Gap and Meaning of θ

The value maps are 3D, with the third dimension representing the number of transformations $T_n$ (scaling and rotation). The correct formulation for equation (8) is:

$$ ⟨x, y, i⟩ ← APSNet(T_n) = argmax(V_{f_{\max}}^{\max}) $$

Once i is determined, the values of w and θ are retrieved from the i-th entry of $T_n$:

$$ w, θ = T_n[i] $$

Based on the above reasoning, we obtain ⟨x, y, θ, w⟩. As for θ, it is indeed discrete: in Appendix E, we mention that θ undergoes 16 rotations, covering 360°.

### 4. Determination of Place Points

Based on the definition (see Figure 1), for single-arm p&p, both the grasp and place points need to be predicted. As defined in Equation (9) (when m = p&p), the grasp point is (x, y), and the place point is (x − w, y). For dual-arm fling, only two grasp points are predicted, while place points follow a predefined swinging trajectory (Equation 4.1). As defined in Equation (9) (when m = fling), the grasp points are (x + w, y) and (x − w, y). During execution, both arms follow the trajectory (lift, forward/backward motion, place), adjusting acceleration and velocity at each stage to complete the fling motion. 
$$ fling = [(0,0,h_l) \to (0,f,h_l) \to (0,f-b,h_l) \to (0,f,h_p)] \quad \text{(4.1)} $$

where $h_l$ is the lift height, $f$ is the forward swing distance, $b$ is the backward swing range, and $h_p$ is the placement height.

### 5. Clarifying Midpoint in Equation (9)

For dual-arm fling, directly predicting both grasp points is challenging due to collision avoidance. Instead, we reframe the problem by predicting the midpoint (x, y), angle θ (rotation), and grasp width w, with collision avoidance achieved by adjusting w. For single-arm p&p, collision avoidance is not required. However, to maintain consistency with dual-arm fling and provide a unified output from the same network, we define (x, y) as the pick point, while θ and w represent the rotation and the length between the pick and place points. Therefore, we generalize the two cases as solving for ⟨x, y, θ, w⟩, and determine the operation points for the two action primitives through Equation (9).

## Suggestions

### Space missing

We will correct this in the revised manuscript and will be more attentive to formatting details in the future.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response. The clarifications and explanations provided have addressed most of my concerns. However, based on the results from the new Table 4.1, it appears that the architecture of the three-encoder, six-decoder setup plays a more important role than the weighted sum of the metrics (i.e., the reward function). Additionally, the results of this new ablation study appear to be indistinguishable from the baseline CLOTH FUNNELS (Table 1). This may suggest that the performance of the proposed method is more attributable to technical refinements than to an innovative contribution. Could the authors provide further clarification on this point?

---

Reply to Comment 1.1.1: Comment: Thank you very much for taking your valuable time to read our response and acknowledging most of our previous clarifications. 
We sincerely apologize for previously not addressing your concern sufficiently clearly. Below, we provide a more detailed clarification regarding your specific question. We agree with your insightful observation based on Table 4.1 that the three-encoder, six-decoder architecture contributes to overall performance. However, we would like to emphasize that the proposed weighted reward function plays the central role in driving the improvements. To demonstrate this more clearly, we have conducted additional ablation experiments (see new Table 4.2), where the model is trained using only individual reward metrics rather than the proposed weighted combination. These experiments showed a substantial drop in performance compared to the results obtained using the weighted-sum approach. Therefore, although the architecture contributes to the overall performance, the weighted reward function remains essential and has a dominant effect on the model's behavior. Moreover, the architecture was specifically designed to support the weighted reward, serving as a tailored solution for learning from complex visual signals. Together, the architecture and reward function constitute complementary innovations at the core of our method.

Table 4.1 Unfactorized network structure

| Method | COV (%) | IOU (%) | KD |
|--------------|---------|---------|--------|
| Unfactorized | 87.5 | 73.7 | 1.995 |

Table 4.2 Individual reward metrics

| Method | COV (%) | IOU (%) | KD |
|--------|---------|---------|--------|
| Cov | 92.6 | 53.4 | 2.698 |
| IOU | 79.9 | 66.2 | 2.583 |
| KEYPOINT | 68.9 | 55.1 | 2.621 |

Regarding your concern about the apparent similarity between the performance of our ablation study (Table 4.1) and the CLOTH FUNNELS baseline, we would like to clarify the following. In Table 4.1, we used a much simpler architecture—only one encoder and two decoders—than CLOTH FUNNELS, yet achieved comparable performance. 
This result demonstrates that, even with a simplified architecture, the weighted-reward method can maintain performance at the baseline level. More importantly, when employing our full architecture (three encoders and six decoders) with the proposed weighted reward, performance significantly surpasses CLOTH FUNNELS (see Table 1). This level of improvement is unlikely to be achieved by minor technical optimizations or training tricks alone. Furthermore, our method leverages a weighted combination of evaluation metrics, enabling it to compute rewards directly from visual observations (RGB-D data). In contrast, prior approaches such as Lin et al. [1], CLOTH FUNNELS [2], and Deng et al. [3] rely on particle-based cloth representations to compute rewards or losses for cloth manipulation tasks. While such particle data is readily available in simulated environments like SoftGym, it is difficult to acquire in the real world. By relying solely on visual inputs, our method avoids the sim-to-real gap introduced by inaccessible simulation-specific data, resulting in better generalization and improved robustness in real-world cloth manipulation tasks. Therefore, our approach should not be regarded as a minor technical refinement, but rather as a principled and practical innovation that overcomes fundamental limitations of prior work. Importantly, our method naturally reveals garment keypoints during the flattening process. This significantly simplifies the challenge posed by infinite degrees of freedom. Furthermore, identifying these garment keypoints is crucially beneficial to downstream tasks, such as garment dressing and hanging, thereby clearly demonstrating the broader applicability and scalability of our method compared to existing approaches. In summary, our method is not a minor technical adjustment, but a response to fundamental challenges in cloth manipulation, integrating reward design and architecture to offer a more robust and generalizable solution. 
We sincerely thank the reviewer once again for the thoughtful and constructive feedback. Your comments have helped us to more clearly articulate the contributions and motivations of our work. We greatly appreciate your time and effort in reviewing our paper. Thank you!

### Reference

[1] Lin X, Wang Y, Huang Z, et al. Learning visible connectivity dynamics for cloth smoothing. CoRL 2022.
[2] Canberk A, Chi C, Ha H, et al. Cloth funnels: Canonicalized-alignment for multi-purpose garment manipulation. ICRA 2023.
[3] Deng Y, Mo K, Xia C, et al. Learning language-conditioned deformable object manipulation with graph dynamics. ICRA 2024.
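The ⟨x, y, θ, w⟩ decoding described in the reply to Question 3 above can be sketched as follows. The map sizes, grasp widths, and transformation table `T_n` here are hypothetical stand-ins for illustration, not APS-Net's actual configuration:

```python
import numpy as np

# A 3D value volume with one (H, W) map per transformation in T_n.
# Each T_n entry pairs a grasp width with one of 16 discrete rotations.
H, W = 64, 64
thetas = np.arange(16) * (360.0 / 16)      # 16 rotations covering 360 degrees
widths = [0.2, 0.3]                        # example grasp widths (hypothetical)
T_n = [(w, th) for w in widths for th in thetas]

rng = np.random.default_rng(0)
value_maps = rng.random((len(T_n), H, W))  # placeholder network output

# argmax over the 3D volume yields <x, y, i> ...
i, y, x = np.unravel_index(np.argmax(value_maps), value_maps.shape)
# ... and w, theta are read back from the i-th entry of T_n.
w, theta = T_n[i]
print(f"x={x}, y={y}, theta={theta}, w={w}")
```

This makes the dimensionality question concrete: the volume itself is 3D, and the extra parameters (θ, w) are recovered by table lookup rather than by extra value-map axes.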
Summary: In this paper, the authors present an RL framework for garment manipulation, which consists of two stages: standardization and folding. For the standardization stage, a two-primitive policy of fling and pick-and-place is trained using a factorized reward function, which includes garment coverage, keypoint distance, and IoU. After standardization, folding is performed using a keypoint detection-based method. Two real-world datasets are collected to improve the real-world performance. The authors compare their method with four baseline methods in simulation, and their method achieves SOTA in three metrics. The authors further conduct ablation studies to verify the effectiveness of each module, and they conduct unfolding and folding experiments in the real world. Their method outperforms other methods in terms of success rate. Claims And Evidence: Yes, the effectiveness of their method and each module is verified through experiments and ablation studies. Methods And Evaluation Criteria: The main novelty of this method is the learning-based primitive selection and the factorized reward function. It can be considered a technical improvement over existing pipelines, though there is no innovative contribution. The performance in simulation and the real world shows the effectiveness of their method. The experiments are extensive, and the criteria are widely-used metrics for evaluating performance. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The experimental designs are good and extensive, and the analyses verify the effectiveness of their method. Supplementary Material: A website link is given, where some real experiments are shown. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: The key difference between the proposed method and existing methods needs further discussion and explanation. 
For example, it is claimed in the related work that "their sim-to-real gap limits real-world applicability", but it is unclear how the proposed method addresses the domain gap in the standardization stage. Other Comments Or Suggestions: The layout of the figures could benefit from improvement. Currently, it is disorganized and makes it difficult to follow the logical flow. Questions For Authors: What are the definitions of $R_c$, $R_I$, and $R_K$ in Eq. (10)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We truly appreciate the time and effort you invested in reviewing our paper. Thank you for recognizing the effectiveness of our two-stage RL framework for garment manipulation, particularly the learning-based primitive selection and factorized reward design. Below, we respond to your comments point-by-point: ### 1. Sim-to-real gap Our method minimizes this gap by incorporating several strategies to enhance simulation realism. Firstly, we use RGBD images rather than RGB alone, as the combination of color and depth provides a more accurate representation of the garment's position and shape. Depth data enhances spatial alignment, reducing the sim-to-real gap and improving real-world transferability. Secondly, in simulation, we use SoftGym to model cloth dynamics, with Blender 3D rendering realistic cloth colors based on HSV values sampled from real-world materials. The cloth's mass (ranging from 0.2 kg to 2.0 kg) and internal stiffness (0.85 kg/s² to 0.95 kg/s²) are varied to reflect real-world cloth properties. Thirdly, we use procedurally generated normal maps to simulate wrinkles and surface details, further enhancing simulation realism. These strategies—RGBD images, realistic cloth properties, detailed surface textures—help minimize the sim-to-real gap, ensuring effective transfer to real-world garment manipulation. ### 2. Layout of the Figures Thank you for your feedback on the figure organization. We recognize the importance of a clear and structured presentation and will revise the layout in the updated version to improve the flow and organization of the figures. ### 3. Definition of $R_c$, $R_I$, and $R_K$ in Equation (10): The terms $R_c$, $R_I$, and $R_K$ in Eq. (10) represent the different components of the factorized reward function, and their definitions can be found in Appendix A. We will update the revised version to clearly show where these definitions are located. ### 4.
Clarification of the Innovative Contributions

We sincerely appreciate the reviewer's thoughtful feedback and recognition of our method's effectiveness. Below, we clarify the innovative aspects of our approach. Existing garment unfolding methods rely primarily on learning-based static pick-and-place strategies [1][2], which require numerous interactions, or dual-arm approaches [3][4] that maximize coverage without standardizing garment orientation and shape, which is key for downstream tasks such as folding and packing. Motivated by an extensive literature review and practical needs, our goal is not only to rapidly unfold garments, but to standardize their final pose. To achieve this, our proposed dual-arm, multi-primitive policy combines dynamic fling actions for efficient unfolding with precise pick-and-place operations for standardization. To effectively train this policy, we introduce a novel factorized reward function (Cov, KD, and IoU metrics) and a specialized multi-encoder-decoder architecture that intelligently selects primitives based on garment state. Moreover, we incorporate a Spatial Action Mask (Section 3.4, Appendix B) to filter infeasible actions and an Action Optimization Module that enhances fling point selection, both validated through extensive ablation studies (see Table 4). By integrating these innovations, our approach generalizes effectively across various garment types (pants, skirts, long sleeves), as shown in Table 2. Further downstream folding experiments confirm that our standardized poses significantly improve performance (see Table 6). Finally, extensive real-world experiments on a dual-UR5 robot validate the robustness and practical applicability of our approach. ### References [1] Lin X, Wang Y, Huang Z, et al. Learning visible connectivity dynamics for cloth smoothing. CoRL 2022. [2] Wu R, Ning C, Dong H. Learning foresightful dense visual affordance for deformable object manipulation. ICCV 2023. [3] Ha H, Song S.
Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding. CoRL 2022. [4] He C, Meng L, Sun Z, et al. Fabricfolding: Learning efficient fabric folding without expert demonstrations. Robotica 2024. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed responses. My concerns have been well addressed.
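For readers following the Cov/KD/IoU discussion in this thread, here is an illustrative composition of the three reward factors named in the rebuttal (coverage, IoU, keypoint distance). This is not the paper's Eq. (10), whose exact definitions live in its Appendix A; the weighted-sum form, the weights, and the negation of keypoint distance are assumptions for illustration only.

```python
import numpy as np

def mask_iou(pred_mask, target_mask):
    """IoU between the achieved garment mask and the target (standardized) mask."""
    a = np.asarray(pred_mask, dtype=bool)
    b = np.asarray(target_mask, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def coverage(mask, flattened_area):
    """Fraction of the fully flattened garment area currently visible."""
    return float(np.asarray(mask, dtype=bool).sum() / flattened_area)

def factorized_reward(cov, iou, keypoint_dist, w=(1.0, 1.0, 1.0)):
    """Illustrative weighted combination of the three factors; keypoint
    distance is negated so that smaller distances score higher.
    NOT the paper's Eq. (10)."""
    return w[0] * cov + w[1] * iou - w[2] * keypoint_dist
```

A policy update would then score each post-action garment state with `factorized_reward(coverage(...), mask_iou(...), kd)`.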
Summary: This paper introduces a novel robotic garment manipulation system with standardization, which achieves better performance than previous frameworks on this challenging robotics task. Claims And Evidence: 1. Standardization of the garment manipulation task can enhance performance. 2. However, standardization may restrict generalizability. It is also hard to set up various standardizations for many different kinds of clothes in the real world. 3. Detecting keypoints and selecting corresponding predefined skills can help resolve this issue, but it still restricts generalizability. Methods And Evaluation Criteria: 1. Using predefined unfolding regions and manipulation skills as the standardization. 2. Using the proposed APS-Net to detect folded/unfolded states and to detect keypoints. 3. Using IoU, coverage, and keypoint distance as inputs to select the correct keypoints and predefined actions. This framework uses a lot of predefined information, namely target bounding edges and skills. This reduces generalizability, and it is hard to define everything for different kinds of clothes. The folding strategy seems very simple. It is not convincing that initial keypoints alone are enough for the robot to accomplish dynamic motions, where the motion of the deformable object carries considerable uncertainty. More analysis showing that a keypoint-based strategy alone is sufficient would be helpful. Theoretical Claims: Not a theoretical paper. Experimental Designs Or Analyses: Comprehensive experiments are conducted in both simulation and the real world, for both folding and unfolding. However, it is not clear how folding success is evaluated; it might be subjective.
Supplementary Material: More details about the experimental setup and results are shown in the supplementary material. Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: This framework uses a lot of predefined information, namely target bounding edges and skills. This reduces generalizability, and it is hard to define everything for different kinds of clothes. Other Comments Or Suggestions: The figures are too large and contain too many details; the words are too small, and it is almost impossible to see anything without zooming in on the page. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their insightful feedback and for recognizing the novelty of our standardized robotic garment manipulation system. Below, we address the key concerns raised, particularly regarding generalizability, the simplicity of the folding strategy, and evaluation metrics. ### 1. Concern About Predefined Information and Generalizability Regarding target bounding edges: We agree that target bounding edges are used in our method, but their role is to act as an alignment metric for garment standardization. Once the model is trained, target bounding edges are no longer needed. Additionally, garments can be simply categorized into common types (e.g., skirts, pants, long sleeves), which share similar features within the same category. In simulation, garments are modeled as grids of particles, and the ground truth positions are accessible, allowing for easy computation of bounding edges. Thus, manual annotation of boundaries is not needed for different garment types. Our model, trained in simulation, generalizes across these categories, as demonstrated in our experiments on pants and skirts (see Figure 8). Regarding predefined skills: Selecting the appropriate action primitive is crucial in robotic manipulation, with pick-and-place being commonly used for rigid objects. For clothes, we designed two primitive actions (fling and pick-and-place) that enable rapid flattening and standardization. We tested these skills on various garment types (e.g., skirts, pants, long sleeves) without any additional parameter adjustments, and the model adapted well — demonstrating that these skills do not impact generalization. ### 2. Keypoint-Based Strategy and Generalizability Our keypoint-based reward serves as an optional metric to enhance garment feature visibility (e.g., collar/sleeve alignment) during standardized unfolding, but is not fundamental to the core method. 
The keypoints are obtained directly from simulation without requiring trained detectors, preserving generalizability across garment categories during training. The core contribution of our work is standardized unfolding, which benefits downstream tasks like folding, ironing, and packing. To evaluate its advantages, we used a keypoint-based folding task. However, this approach may face limitations in terms of generalizability to unseen garment categories due to structural variations in garments. Our future research will focus on adapting folding models to generalize across categories. Furthermore, even without the keypoint-based reward metric, our method performs well (see ablation study, Table 4), with generalization limits applying only to folding validation, not to unfolding. ### 3. Simplicity of the Folding Strategy As explained in Question 2, folding is not the main focus of our research; it is a validation task. To this end, we adopted a simple keypoint-based folding strategy, which is sufficient to showcase the advantages of our standardized unfolding. We believe this choice strikes a balance between simplicity and functionality for the purpose of validation. ### 4. Evaluation of Folding Success In simulation: The cloth is modeled as a grid of particles, and we can access the ground truth positions of these particles. We use the mean particle distance between the cloth states achieved and the desired target state. If the average particle distance error is less than 0.03, we consider the folding to be successful. In the real world: We evaluate performance quantitatively using the Mean Intersection over Union (MIoU) between the cloth masks achieved and the human demonstrator. If the MIoU exceeds 0.8, we consider the folding to be successful. We will clarify these evaluation metrics in the revised manuscript. ### 5. Figure Quality and Presentation We appreciate the feedback on the figure. 
We have simplified Figures 2 and 3 and enlarged the text for better clarity. The details of the changes are in the following link: https://github.com/hellohaia/img/blob/main/1.pdf --- Rebuttal Comment 1.1: Comment: Thanks for your response and clarifications. All of my questions have been addressed
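The folding success criteria quoted in the rebuttal above (mean particle distance below 0.03 in simulation; mask IoU above 0.8 against the human demonstration in the real world) could be sketched as follows. Function names and array shapes are assumptions; only the two thresholds come from the rebuttal.

```python
import numpy as np

def folding_success_sim(achieved_particles, target_particles, tol=0.03):
    """Simulation criterion from the rebuttal: the mean per-particle distance
    between the achieved and target cloth states must be below 0.03."""
    dists = np.linalg.norm(
        np.asarray(achieved_particles) - np.asarray(target_particles), axis=-1
    )
    return float(dists.mean()) < tol

def folding_success_real(achieved_mask, demo_mask, thresh=0.8):
    """Real-world criterion from the rebuttal: the IoU between the achieved
    cloth mask and the human demonstrator's mask must exceed 0.8."""
    a = np.asarray(achieved_mask, dtype=bool)
    b = np.asarray(demo_mask, dtype=bool)
    union = np.logical_or(a, b).sum()
    iou = float(np.logical_and(a, b).sum() / union) if union else 0.0
    return iou > thresh
```

Particle positions are read directly from the simulator's ground truth, so the simulation check needs no perception stack; the real-world check only needs the two segmentation masks.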
Summary: This paper introduces APS-Net, a novel framework for robotic garment manipulation that seeks to both unfold garments and align them into standardized orientations—essentially “standardizing” them as part of the unfolding process. Unlike many existing solutions that focus on either single-arm quasi-static approaches or dynamic actions that only maximize coverage, APS-Net combines dynamic dual-arm fling actions for fast unfolding with more precise pick-and-place (p&p) actions for alignment. The authors propose a factorized reward function incorporating garment coverage, intersection-over-union (IoU), and keypoint positioning, guiding the system to flatten garments while preserving meaningful geometry for downstream tasks like folding. Claims And Evidence: This paper claims three main things: (1) that combining fling and pick-and-place yields more efficient and accurate garment flattening, (2) that a novel factorized reward function leads to superior performance, and (3) that standardization (i.e., aligning shape/orientation) meaningfully benefits downstream tasks such as folding. Across their experiments, these claims appear largely substantiated. First, their method clearly improves coverage and IoU metrics relative to baselines like single-arm pick-and-place or exclusively fling-based approaches. The factorized reward containing coverage, IoU, and keypoint distance likewise shows consistent gains in shaping the final garment state. Ablation studies confirm that omitting parts of the reward (e.g., excluding IoU or ignoring keypoint alignment) harms performance. Methods And Evaluation Criteria: The evaluation criteria include coverage, IoU, and keypoint distance; these are standard evaluation metrics in the garment manipulation literature, so they make sense. Theoretical Claims: This is not a theory paper.
Experimental Designs Or Analyses: The experiments are structured around evaluating coverage, IoU, and keypoint distance after each rollout, alongside real robot demonstrations. The environment in simulation uses the SoftGym framework with PyFleX, which is a well-established simulator for deformable objects. The tasks are repeated with multiple random initial configurations, and baseline comparisons are made with relevant prior works (including single-arm pick-and-place, a standard fling-based method, and a state-of-the-art cloth manipulation baseline). Supplementary Material: I did review the supplementary material; there was helpful information on the robot system as well as implementation details on the reward functions. Relation To Broader Scientific Literature: This work extends a rich body of literature on robot garment folding. The system presented in this work is more "complete" and effective than prior well-known works in the literature, such as FlingBot. Essential References Not Discussed: All essential related works are discussed to the best of my knowledge. Other Strengths And Weaknesses: Weaknesses: 1. While real-world tests are present, the paper might have benefited from more quantitative comparisons across different cloth weights or textures to fully test fling reliability. 2. The method's reliance on overhead segmentation and keypoint detection might fail if the garment color is too close to the table or if extreme wrinkles obscure important garment keypoints. I'd like to see the method stress-tested under more extreme visual conditions. 3. Given that recent works have demonstrated competent garment folding capabilities from pure end-to-end approaches (e.g., pi-0), I think some discussion of these approaches is warranted. Other Comments Or Suggestions: N/A. Questions For Authors: See my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you have dedicated to evaluating our work, and we are grateful for your recognition of our key contribution—the integration of dynamic flinging with precision pick&place for garment standardization, and the factorized reward function. Below, we provide a point-by-point response to the comments, incorporating additional experiments and analyses. ## Weaknesses ### 1. Fling Reliability Across Cloth Weights and Textures In simulation, we tested cloth masses ranging from 0.2 kg to 2.0 kg, with internal stiffness values between 0.85 kg/s² and 0.95 kg/s². Procedurally generated normal maps were used to simulate wrinkles and surface details. In the real world, we experimented with garments of various weights, from lightweight long-sleeve shirts (e.g., Instance 2 in Figure 16) to heavier sweaters (e.g., Instance 4 in Figure 16). In terms of texture, we evaluated garments with complex patterns (e.g., Instances 1 and 3 in Figure 16). The results demonstrated that the fling motion operated reliably across all conditions. Therefore, variations in cloth weight and texture do not affect fling reliability. ### 2. Stress Testing the Method under Extreme Visual Conditions We tested our method on two garments whose colors are similar to the table surface, running 10 trials each in both crumpled and smooth states, to evaluate performance under extreme conditions. The results demonstrate that, even in these challenging scenarios, our segmentation algorithm successfully identifies the garment region (see Figure 1 in https://github.com/hellohaia/img/blob/main/1.pdf). However, keypoint detection performs poorly under these conditions, as shown in Table 1.1.
In the future, we will explore using depth images for both keypoint detection and garment segmentation under extreme conditions, which would mitigate the impact of color similarity.

Table 1.1 Results of Segmentation and Keypoint Detection.

| Method | Success rate |
|--------------------|--------------|
| Segmentation | 10/10 |
| Keypoint detection | 0/10 |

### 3. Discussion of End-to-End Approaches for Garment Folding Recent works, including end-to-end approaches like SpeedFolding [1] and UniFolding [2], have demonstrated competent garment folding capabilities. Below, we discuss their methods: SpeedFolding is a bimanual system that folds crumpled garments based on user-defined lines. While it performs well on short-sleeve garments, it doesn't generalize to other types. Additionally, it requires 4,300 human-annotated training samples, which is labor-intensive. Its performance degrades with highly crumpled garments, as key areas are occluded, and it struggles with controlling garment orientation during unfolding, limiting standardization. UniFolding uses the UFONet neural network to integrate unfolding and folding into a single policy. Tested on long- and short-sleeve shirts, it hasn't been generalized to other garment types. Its performance degrades with highly crumpled garments, as keypoints can be obscured. It relies on labor-intensive human demonstrations in virtual reality for data collection. While it flattens garments, it cannot standardize them, and the model tends to focus on manipulating the sleeves, often causing the collar to roll in, preventing full flattening. Our work introduces a dual-arm, multi-primitive policy to quickly unfold crumpled garments with standardization, improving downstream tasks like folding, ironing, and packing. Unlike these methods, our model is trained without human-collected data in simulation and achieves zero-shot transfer, delivering strong real-world performance. ## References [1] Y. Avigal et al.
Speedfolding: Learning efficient bimanual folding of garments. IROS 2022. [2] H. Xue et al. Unifolding: towards sample-efficient, scalable, and generalizable robotic garment folding. CoRL 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response -- I will maintain my original acceptance score.
Effective and Efficient Masked Image Generation Models
Accept (poster)
Summary: This paper introduces eMIGM, a unified framework that integrates Masked Image Generation Models and Masked Diffusion Models into a single mathematical formulation. The authors categorize the possible design choices into training and sampling processes to optimize performance and efficiency. By leveraging a time interval strategy for Classifier-Free Guidance (CFG) and replacing the fake class token with a mask token, eMIGM achieves comparable performance in image generation while significantly reducing function evaluations (NFEs). Experimental results demonstrate superior efficiency and sample quality on ImageNet 256×256 and 512×512 compared to existing models, including diffusion-based and masked modeling approaches. Claims And Evidence: The claims made in the paper are mostly supported by clear and convincing evidence: 1. The paper presents a mathematical framework that unifies Masked Image Generation and Masked Diffusion Models, showing that different approaches can be expressed under a generalized loss function. Empirical validation demonstrates that this framework enables systematic exploration of design choices, leading to performance and efficiency improvements. 2. Instead of using a traditional fake class token, the paper proposes replacing it with a mask token in Classifier-Free Guidance (CFG). This modification improves conditional generation by preventing performance degradation caused by fake class tokens, as confirmed through ablation studies. 3. The paper introduces a time interval approach to CFG, where guidance is applied selectively in later sampling steps. This method leads to better FID scores and reduces computational cost, as demonstrated in experiments. 4. (Minor Limitation) Some hyperparameter choices (e.g., mask scheduling functions, weighting strategies) are selected based on empirical tuning rather than theoretical analysis, making it unclear whether the observed improvements generalize beyond the tested configurations.
Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are generally appropriate. eMIGM is evaluated on ImageNet 256x256 and 512x512, which are widely used benchmarks for generative models. The paper primarily uses Frechet Inception Distance (FID) to assess image quality and Number of Function Evaluations (NFE) to measure computational efficiency. The proposed time interval strategy is systematically tested through ablation studies and experiments on different mask schedules, providing strong empirical validation. The use of NFE as an efficiency metric is well-justified, as it directly reflects computational cost. However, a limitation is that the evaluation relies solely on FID and NFE, without incorporating additional metrics such as Inception Score (IS), Recall, or Precision, which could provide a more comprehensive assessment of model performance. Theoretical Claims: All of the mathematical derivations are reasonable, and appropriate references and equivalences are provided, ensuring no issues. Additionally, after reviewing Section 3: Unifying Masked Image Generation and Appendix A: Equivalence of the Masking Strategies of MaskGIT and MDM, no logical errors or inconsistencies were found in the formulation and derivations. The mathematical framework presented in the paper is sound and well-supported by empirical validation. Experimental Designs Or Analyses: The paper effectively demonstrates its validity through a well-structured experimental design. To support the possible design choices, Figures 2 and 3 present experimental comparisons of weighting functions, mask schedules, and model architectures, clearly illustrating their impact on performance. To validate image generation performance, Tables 2 and 3 compare eMIGM against a diverse set of generation models, including Diffusion Models, Consistency Models, GANs, ARs, and Masked Models, establishing its performance superiority and scalability, further supported by Figure 4. 
However, most design choices are based solely on empirical selection of the best-performing methods, with limited analysis on why these choices lead to improvements. Additionally, while Consistency Models are included in the comparison tables, their differences from eMIGM are not explicitly discussed in the main text. It would be valuable to clarify how eMIGM differs from these models and what makes it superior. Supplementary Material: I Reviewed experimental details and provided equivalences. Relation To Broader Scientific Literature: This paper is highly relevant to the broader literature on Masked Image Generation and Masked Diffusion Models: It extends prior work on masked generative transformers (MaskGIT) and masked Diffusion models (MDMs) by providing a unified framework​. This paper aligns with research on efficiency improvements in generative models, focusing on reducing NFEs while maintaining sample quality. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: I believe this paper effectively unifies two approaches into a single framework, systematically categorizing possible design choices to achieve optimal performance. However, rather than introducing entirely new design choices, the paper primarily selects options already used in existing models. While the modifications, such as replacing the fake class token with a mask token in CFG and introducing the time interval strategy, are well-supported by experimental results, the analysis of why these choices improve performance could be further elaborated. Due to this, the paper's overall contribution and novelty may look somewhat limited. I encourage the authors to address these aspects during the rebuttal period. Questions For Authors: 1. In Figure 2(e), the results appear unstable between epochs 300-400. Do you have experimental results for longer training epochs? 
While it is clear that CFG with Mask achieves a faster convergence, wouldn’t the final convergence point be the same if trained for a sufficiently long time? 2. Although the experimental results include Consistency Models, the main context does not explore them in detail. What do you think are the differences between eMIGM and Consistency Models? 3. In MaskGIT [1], various mask scheduling options are explored (e.g., cubic, square, square root, logarithmic). This paper considers a more limited selection—was this choice guided by empirical findings or specific constraints? Would exploring additional schedules further improve performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
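The FID metric this review leans on reduces to a closed-form Fréchet distance between two Gaussians fitted to Inception features. A minimal sketch, simplified to diagonal covariances (an assumption; the general formula replaces the elementwise square root with a matrix square root of the covariance product):

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2 * sqrt(var1 * var2)).
    Real FID pipelines use full covariance matrices of Inception features."""
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    diff = mu1 - mu2
    return float(diff @ diff + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
```

Identical feature distributions give a distance of zero; the score grows as the feature means and variances drift apart.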
Rebuttal 1: Rebuttal: We thank reviewer 5pkV for the interest and acknowledgement of our contributions and the valuable comments. We respond below to your questions and concerns. > **Minor Limit** Generalizability of improvements beyond tested configurations unclear Empirical analysis is a well-established approach in developing complex models. Prior works [a, b] similarly employed ablation studies and targeted experiments to optimize their designs. Through systematic testing of components and hyperparameters, they demonstrated the value of rigorous empirical analysis, even when optimal settings may be context-dependent. Building on this precedent, our study validates specific configurations and provides a framework for systematically exploring and understanding design choices in masked generative models. We believe this methodological contribution offers significant value to the research community. Besides, eMIGM performs effectively with both VAE (ImageNet 256x256) and DC-AE (ImageNet 512x512) tokenizers despite their distinct latent spaces. This adaptability highlights the transferability of our design principles across different data representations. [a]“Elucidating the Design Space of Diffusion-Based Generative Models.” [b]“Analyzing and Improving the Training Dynamics of Diffusion Models.” > **Exp1:** More evaluation metrics We've now evaluated our models using additional metrics (sFID, IS, Precision, and Recall). These comprehensive results are presented in our response to reviewer DbgT's Weakness2 and will be included in the revised paper. > **Sugg1:** novelty and analysis of the choices of training We sincerely appreciate your recognition of our unified framework, well-structured experimental design and results. Regarding your concerns about novelty and analysis, we would like to clarify two key points: 1. Our hyperparameter tuning follows an empirical approach, aligning with established practices in VAR and MAR. 
This practical methodology remains valuable as it yields effective, demonstrable results. 2. Our modifications stem from careful analysis of MDM's behavior. For instance, the time interval strategy was motivated by our observation that MDM's generation process is irreversible, making early-stage guidance less effective, a finding we quantitatively validated in Appendix C. Similarly, replacing fake class tokens with mask tokens in CFG was informed by the fact that MDM has seen more mask tokens during training, making them a more natural choice than the fake class tokens used in diffusion models. We appreciate your suggestion and will expand our analysis of these design choices in the revision. Our key contribution is a unified framework integrating masked generative transformers and masked diffusion models, advancing masked generative modeling through systematic experimentation and analysis.

> **Q1:** Further training of Figure 2(e)

To address this question, we trained standard CFG for 800 epochs. Below we compare its FID scores with mask CFG:

| Model | NFE | FID |
|-|-|-|
| eMIGM-B with standard CFG | 16x1.2 | 2.76 |
| eMIGM-B with mask CFG | 16x1.2 | 2.79 |
| eMIGM-B with standard CFG | 128x1.35 | 2.32 |
| eMIGM-B with mask CFG | 128x1.35 | 2.32 |

The results show both approaches achieve equivalent final performance, with CFG with Mask converging more rapidly as shown in Fig. 2(e) of our paper.

> **Q2:** eMIGM vs Consistency Models

Consistency models and eMIGM employ distinct approaches to generation. Consistency models establish direct mappings from noise to data via consistency constraints, enabling single-step sampling. In contrast, eMIGM utilizes masked prediction, beginning with fully masked images and iteratively revealing content through progressive unmasking. This architectural distinction enables eMIGM to optimally balance generation quality and computational efficiency by modulating function evaluations.
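The time-interval guidance discussed in the responses above (skip classifier-free guidance during early, largely irreversible steps; apply it only late in sampling) could be sketched as below. The function name, guidance scale, and threshold are illustrative assumptions, not values from the paper; per the rebuttal, the unconditional branch would condition on the mask token rather than a fake class token.

```python
import numpy as np

def guided_logits(cond_logits, uncond_logits, t, scale=1.5, t_start=0.7):
    """Time-interval classifier-free guidance sketch.

    `t` runs from 0 (start of sampling) to 1; guidance is skipped while
    t < t_start and the standard CFG combination is applied afterwards.
    `uncond_logits` is assumed to come from the mask-token condition."""
    cond = np.asarray(cond_logits, float)
    uncond = np.asarray(uncond_logits, float)
    if t < t_start:
        return cond  # early steps: no guidance applied
    return uncond + scale * (cond - uncond)
```

Skipping the unconditional forward pass for early steps is also what reduces the effective NFE count relative to applying CFG at every step.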
> **Q3:** choice of mask schedule

We explored three mask schedules (linear, cosine, and exponential) with constraints $\gamma_0 \approx 0$ and $\gamma_1 \approx 1$. While linear schedules are common in MDM text generation, we found concave functions like cosine performed better for images due to their information redundancy: higher mask ratios during training provide stronger learning signals. With $w(t)=1$, the exponential schedule slightly outperformed cosine, becoming our default. Based on your suggestion, we developed a log-exp schedule ($\gamma_t=\frac{\log\left(1 + (e^5 - 1) \cdot t\right)}{5}$) that balances mask ratios by reducing both high and low masking extremes. Using the setup from Figure 2(b) with $w(t)=1$, we present FID results below.

| Epoch | Linear | Cosine | Exp | Log-Exp |
|-|-|-|-|-|
| 100 | 38.66 | 24.99 | 28.63 | 25.38 |
| 200 | 30.55 | 16.70 | 17.97 | 11.81 |
| 300 | 24.55 | 15.00 | 11.57 | 12.48 |
| 400 | 24.96 | 12.39 | 11.90 | 9.91 |

The log-exp schedule shows better convergence and performance, validating the benefits of exploring new masking approaches. We appreciate this insightful suggestion and will incorporate these findings in our revision. --- Rebuttal Comment 1.1: Comment: Thank you for addressing our concerns within the short rebuttal period. Our main concerns were whether the design choices were made with proper analysis, and the issue of limited novelty. However, the authors' justifications have sufficiently addressed these concerns. In addition, during the rebuttal period, the authors provided additional experimental results with various evaluation metrics and introduced a mask scheduling strategy, which further strengthens the contribution of the work. Given these clarifications and additions, I would like to change my score to Weak accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 5pkV, We are sincerely grateful for your insightful comments and your decision to update the rating to 'weak accept'. We highly appreciate it. Best regards, Authors
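The log-exp schedule quoted in the rebuttal above is fully specified, so its endpoint constraints can be checked directly. A minimal sketch, with the linear schedule for comparison (the generic parameter `k=5` matches the rebuttal's formula; other function names are assumptions):

```python
import math

def gamma_linear(t):
    """Linear mask schedule: fraction of tokens masked at time t in [0, 1]."""
    return t

def gamma_log_exp(t, k=5.0):
    """Log-exp schedule from the rebuttal: gamma_t = log(1 + (e^5 - 1) t) / 5.
    Satisfies gamma_0 = 0 and gamma_1 = 1, and is concave in t."""
    return math.log(1.0 + (math.exp(k) - 1.0) * t) / k
```

Being concave, the log-exp curve lies above the linear one at every interior point, i.e. it keeps the mask ratio high for more of the schedule, which is the "stronger learning signal" behavior the rebuttal attributes to concave schedules.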
Summary:
The paper proposes a unified framework that integrates masked image generation models (e.g., MaskGIT) and masked diffusion models. The authors systematically explore the design space of training and sampling strategies to improve both performance and efficiency. The proposed model, eMIGM, achieves state-of-the-art performance on ImageNet (256×256 and 512×512) while requiring significantly fewer function evaluations (NFEs) compared to continuous diffusion models. Empirical results show that eMIGM outperforms VAR and approaches state-of-the-art diffusion models like REPA, while requiring less than 40% of the NFEs.

Claims And Evidence:
The paper makes several claims, and most are well-supported by empirical evidence:
1. Unified framework for masked image modeling and diffusion models.
2. eMIGM achieves better performance than VAR and is competitive with state-of-the-art diffusion models.
3. Proposed sampling strategies (e.g., time interval classifier-free guidance) improve efficiency.
4. Scaling eMIGM improves performance.

However, the claim that eMIGM is "comparable" to continuous diffusion models could be more rigorously defined.

Methods And Evaluation Criteria:
1. The methods used (masked image modeling, diffusion-based loss, classifier-free guidance) are appropriate for the problem.
2. Evaluation is conducted on ImageNet (256×256 and 512×512) using FID scores.
3. Comparisons with strong baselines (MaskGIT, MAR, diffusion models, VAR) are comprehensive.
4. The choice of FID as the evaluation metric is standard for generative models.

Theoretical Claims:
1. The paper presents a mathematical unification of masked image generation and masked diffusion models.
2. The equivalence between MaskGIT's loss and MDM's loss is derived in Appendix A.
3. The mathematical formulation appears correct, though there is nothing new.

Experimental Designs Or Analyses:
1. Experiments are well-designed, with systematic exploration of training/sampling design choices.
2. Ablation studies (Figures 2 & 3) effectively analyze the contributions of different components.
3. The comparison against state-of-the-art methods is thorough, but it would be beneficial to include additional baselines (e.g., GANs) in more direct comparisons.

Supplementary Material:
The appendices provide some details, including derivations (Appendix A) and additional experimental details (Appendices B, C).

Relation To Broader Scientific Literature:
1. The work extends prior efforts in masked image modeling (MaskGIT, MAR) and diffusion models.
2. The proposed unified framework contributes to understanding the connection between masked models and diffusion models.

Essential References Not Discussed:
A reference is missing in the masked generative transformer area [1].

[1] Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis, ICLR 2025.

Other Strengths And Weaknesses:
Strengths:
1. Novel unified framework for masked image modeling and diffusion models.
2. Strong empirical results on ImageNet, with significant efficiency improvements.
3. Comprehensive ablation studies exploring design choices in training and sampling.
4. Well-written and clear methodology, with detailed appendices.

Weaknesses:
1. Limited discussion of failure cases: what types of images does eMIGM struggle with?
2. Lack of qualitative comparisons against diffusion models and VAR beyond FID scores.

Other Comments Or Suggestions:
I will be happy to raise my score if all my concerns are resolved, and vice versa.

Questions For Authors:
1. How does eMIGM perform on out-of-distribution datasets? Would the efficiency gains generalize to different image distributions?
2. What are the limitations of eMIGM compared to continuous diffusion models beyond NFEs? Are there failure cases where continuous diffusion models outperform eMIGM?
3. Would a hybrid approach combining eMIGM with diffusion models further improve performance? Could additional modifications bridge the gap to state-of-the-art diffusion performance?
4. How does eMIGM compare in terms of robustness to adversarial perturbations? Could eMIGM be more or less vulnerable compared to diffusion models?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank reviewer DbgT for the interest and acknowledgement of our contributions and the valuable comments. We respond below to your questions and concerns.

> **Claims 1:** Define "comparable" more rigorously

To quantify our comparison: eMIGM achieves an FID of 1.57 on ImageNet 256×256 generation versus REPA's 1.40 - a 12% gap. We consider this competitive since eMIGM requires fewer function evaluations (128×1.4 NFEs vs. REPA's 250×2 NFEs). We will clarify this comparison more precisely in our revised paper.

> **Experiment 1:** Include additional baselines (e.g., GANs)

Our Tables 2 and 3 already include comparisons with GAN baselines (BigGAN and StyleGAN-XL), which we will emphasize more clearly in our revision.

> **References:** Missing Meissonic

We appreciate the reference suggestion and will include it in our final paper.

> **Weakness 1:** Limited discussion of failure cases: What types of images does eMIGM struggle with?

While we have not observed distinct failure patterns, eMIGM does experience occasional generation failures, as do other methods like MAR and VAR. We will include representative failure cases in our revision.

> **Weakness 2:** Qualitative comparisons beyond FID

We provide qualitative visual comparisons with diffusion models and VAR at [link](https://anonymous.4open.science/r/icml-rebuttal-6388/icml_link.pdf) and quantitative evaluations using sFID, IS, Precision, and Recall (see the table below), both of which will be included in our revision.

Method|NFE|FID↓|sFID↓|IS↑|Precision↑|Recall↑
-|-|-|-|-|-|-
VAR-d30|10×2|1.92|-|323.1|0.82|0.59
REPA|250×2|1.42|4.70|305.7|0.80|0.65
eMIGM-XS|16×1.2|4.23|5.74|218.63|0.79|0.50
eMIGM-S|16×1.2|3.44|5.31|244.16|0.80|0.53
eMIGM-B|16×1.2|2.79|5.20|284.62|0.82|0.54
eMIGM-L|16×1.2|2.22|4.80|291.62|0.80|0.59
eMIGM-H|16×1.2|2.02|4.66|299.36|0.80|0.60
eMIGM-XS|128×1.4|3.62|5.47|224.91|0.80|0.51
eMIGM-S|128×1.4|2.87|5.53|254.48|0.80|0.54
eMIGM-B|128×1.35|2.32|4.63|278.97|0.81|0.57
eMIGM-L|128×1.4|1.72|4.63|304.16|0.80|0.60
eMIGM-H|128×1.4|1.57|4.68|305.99|0.80|0.63

> **Q1:** How does eMIGM perform on out-of-distribution datasets and different image distributions?

eMIGM demonstrates strong adaptability across data representations, performing effectively with both VAE (256×256) and DC-AE (512×512) tokenizers despite their different latent spaces, showing that our efficiency improvements generalize across resolutions and distributions. Regarding out-of-distribution data: while important, our research focuses primarily on improving the efficiency and quality of the generative models themselves, following the established research direction of prior works like MAR and VAR. These foundational works concentrate on in-distribution generation, which remains our focus for high-quality image synthesis.

> **Q2:** What limitations does eMIGM have compared to continuous diffusion models? When do diffusion models outperform eMIGM?

Compared to continuous diffusion models, eMIGM has limitations in zero-shot classification [a], ultimate generation quality with sufficient NFEs [b], and applications in video/audio synthesis. However, we believe masked generative models can narrow this gap as they evolve. We will discuss these limitations comprehensively in our revision.

[a] "Your Diffusion Model is Secretly a Zero-Shot Classifier."
[b] "Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps."

> **Q3:** Hybrid approach with diffusion models and modifications?

A hybrid approach combining eMIGM with diffusion models could enhance performance. Recent works [c, d] show that integrating autoregressive and diffusion models achieves superior generation capabilities compared to pure continuous diffusion models. Additionally, simple modifications could bridge remaining performance gaps, such as adopting rectified flow [e] for the diffusion head's training objective or following REPA [f] by aligning eMIGM's intermediate Transformer outputs with pretrained DINOv2 encoder features. We will incorporate this discussion in our revised paper.

[c] "Efficient Visual Generation with Hybrid Autoregressive Transformer"
[d] "Diffusion Transformer Autoregressive Modeling for Speech Generation"
[e] "Learning to Generate and Transfer Data with Rectified Flow"
[f] "Training Diffusion Transformers Is Easier Than You Think"

> **Q4:** How does eMIGM's robustness to adversarial perturbations compare to diffusion models?

We appreciate your insightful question regarding adversarial robustness. While we acknowledge the critical importance of this aspect in the broader context of generative modeling, our current research primarily focuses on enhancing the efficiency and quality of the generative models themselves. We acknowledge that our expertise in adversarial perturbations is limited, and we have not yet performed an assessment of eMIGM's robustness against such perturbations. We will discuss this important aspect in our revised paper.
Summary:
This paper provides a comprehensive study of masked diffusion models for visual generation, covering training, sampling, and architectural designs with extensive experiments. In other words, this paper investigates how to make a good MDM with regard to training and sampling settings through empirical evidence.

Claims And Evidence:
The claims made in the submission are supported by clear and convincing empirical evidence.

Methods And Evaluation Criteria:
The proposed methods and evaluation criteria make sense for the problem.

Theoretical Claims:
There are almost no theoretical claims in this submission except for the equivalence of MaskGIT and MDM masking strategies. I checked the correctness of the proof in Appendix A.

Experimental Designs Or Analyses:
I checked the soundness and validity of the experimental designs and analyses. There are some concerns.
1. I don't quite understand the paragraph "CFG with Mask" in line 246, and the difference between Mask CFG and fake-class CFG is hard to distinguish. The authors should give more details on what the special mask token is here.
2. The network architectural details come too late in the experiments. I was not aware of the design of continuous targets in the eMIGM framework before Section 5.2, as conventional mask-based generative models typically use discrete targets to perform classification. This raises the question: what is the baseline performance of a discrete variant of eMIGM?
3. I don't think some of the analysis makes sense, especially when compared to diffusion model steps. Since eMIGM uses a diffusion loss, there are multiple diffusion steps within a single mask step.

Supplementary Material:
I reviewed the supplementary material.

Relation To Broader Scientific Literature:
The key contributions of this paper are highly related to MAR and MaskGIT, especially in terms of the masking mechanism and the diffusion loss head.

Essential References Not Discussed:
There is no highly related work missing in this paper in my view.

Other Strengths And Weaknesses:
Overall, this is a borderline paper with both strengths and weaknesses. It includes extensive experiments that thoroughly explore the training and sampling space of masked diffusion models, which should be highly appreciated. On the other hand, this paper lacks a clear goal or motivation for unifying masked diffusion models beyond performance. Additionally, despite extensive trial and error across many experiments, the largest eMIGM variant performs only on par with MAR-H, which raises concerns about the effectiveness of the proposed method, or the necessity of the unified masked diffusion modeling framework.

Other Comments Or Suggestions:
See Questions For Authors.

Questions For Authors:
1. What makes the abbreviation "eMIGM"? Is it something like "EDM" [1]?
2. When discussing mask-based generative models, the authors should first clarify whether they are using discrete or continuous models. Also, the authors should justify why they are using continuous models like MAR instead of the MaskGIT pipeline.

[1] Elucidating the Design Space of Diffusion-Based Generative Models

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal:
> **Experimental Design 1:** The difference between "CFG with Mask" and "fake class CFG"

Thank you for your question. In standard classifier-free guidance (CFG) as used in diffusion models and MAR, training involves occasionally replacing the class label with a dedicated "fake class" token, which is distinct from the "mask" token used for image patches. Our "CFG with Mask" approach instead uses the existing "mask" token to replace class labels during training, eliminating the need for a separate "fake class" token. We will clarify this distinction in the final paper.

> **Experimental Design 2:** Network architectural details come too late; what is the baseline performance of a discrete variant of eMIGM?

Thank you for your valuable suggestion. We appreciate your feedback and will incorporate more detailed network architectural information in an earlier section of our paper to enhance clarity. As briefly mentioned at the beginning of Section 4, our experiments exclusively focus on continuous masked-based generative models to mitigate the information loss associated with discrete tokenizers. Consequently, we did not develop a discrete variant of eMIGM, particularly since prior work has demonstrated that discrete variants of MAR perform significantly worse than their continuous counterparts. We will ensure all experimental details are presented more clearly in the final version of our paper.

> **Experimental Design 3:** The model steps of eMIGM

We sincerely appreciate your valuable feedback. In response, we conducted an experiment on 512×512 image generation using eMIGM-L with NFE = 64×1.25 (as presented in Table 3 of our paper). In this configuration, the diffusion model requires 14 sampling steps. To determine the additional computational cost of the diffusion model beyond the main transformer's NFEs, we measured the sampling speed with different diffusion steps.
Our measurements on a single A100 GPU with batch size 256 show that the diffusion model introduces approximately 14% additional computational overhead beyond the main transformer's NFE requirements. Thank you for raising this concern about NFE comparisons between eMIGM and diffusion models. While eMIGM does include diffusion steps within each masked step, since transformer forward passes remain the primary bottleneck, NFE continues to be a valid efficiency metric. We will address this point more thoroughly in the revised paper, ensuring clarity and precision in our presentation. > **Weakness1:** The effectiveness of the proposed method, or the necessity of the unified masked diffusion modeling framework. We appreciate your valuable feedback. Regarding the necessity of the unified masked diffusion modeling framework, it serves two primary purposes. First, it enhances the theoretical understanding of masked generation and simplifies comparisons between different methods (e.g., MAR vs. MaskGIT). Second, it facilitates a systematic exploration of the design space and allows for the integration of techniques from related areas. For example, advancements in diffusion models can be readily incorporated into our diffusion head training, helping to advance the state-of-the-art in masked image generation. Regarding the effectiveness of eMIGM, our comprehensive experimental analysis demonstrates key advantages: 1. Through systematic exploration of the training and sampling design space, we identified critical factors influencing model performance and efficiency. These findings provide actionable insights for developing future masked generative models. 2. The proposed time-interval strategy for classifier-free guidance (CFG) empirically improves sampling efficiency while maintaining generation quality. These improvements are substantiated by direct comparisons with MAR baselines. 
For ImageNet 512×512 generation, eMIGM achieves superior computational efficiency, requiring only 64×1.25 NFEs compared to MAR's 256×2 NFEs while maintaining competitive FID scores.

> **Q1:** What makes the abbreviation "eMIGM"?

Thank you for your question. The abbreviation "eMIGM" stands for "Effective and Efficient Masked Image Generation Models", where the prefix "e" denotes both effectiveness and efficiency.

> **Q2:** Why use continuous models like MAR instead of the MaskGIT pipeline?

Thank you for this insightful question. We will clarify that our work focuses exclusively on continuous masked-based generative models. As briefly noted in Section 4, this choice stems from the need to avoid the information loss inherent in discrete tokenizers. Previous research has consistently shown that discrete variants of MAR yield substantially inferior performance compared to continuous implementations. We will emphasize this choice more clearly in the final version of our paper and apologize for any previous lack of clarity.
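The "CFG with Mask" label handling described at the start of this rebuttal (replacing the class label with the existing mask token at a fixed probability, instead of introducing a dedicated "fake class" token) can be sketched as follows. The token id and drop probability are placeholder values, not the paper's actual settings.

```python
import random

# Sketch of the label handling behind "CFG with Mask": the existing
# mask token occasionally replaces the class label during training,
# so no separate "fake class" token is needed. MASK_TOKEN_ID and
# LABEL_DROP_PROB are placeholder values assumed for illustration.

MASK_TOKEN_ID = 1000   # assumed id shared with masked image patches
LABEL_DROP_PROB = 0.1  # assumed fixed replacement probability

def prepare_label(class_label: int, rng: random.Random) -> int:
    """Return the conditioning token for one training example."""
    if rng.random() < LABEL_DROP_PROB:
        return MASK_TOKEN_ID  # reuse the mask token as the "null" class
    return class_label

rng = random.Random(0)
labels = [prepare_label(7, rng) for _ in range(10_000)]
drop_rate = labels.count(MASK_TOKEN_ID) / len(labels)
assert 0.05 < drop_rate < 0.15  # roughly the configured probability
```

At sampling time, running the model once with the true label and once with the mask token then yields the conditional and unconditional predictions needed for guidance.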
Summary:
The paper presents eMIGM, a novel model for effective and efficient masked image generation. It unifies masked image generation models and masked diffusion models within a single framework, exploring the design space of training and sampling to identify key factors impacting performance and efficiency. The model demonstrates strong performance on ImageNet generation, outperforming seminal models like VAR and achieving results comparable to state-of-the-art continuous diffusion models with significantly fewer NFEs. Key contributions include a unified formulation for exploring the design space, a time interval strategy for classifier-free guidance, surpassing state-of-the-art diffusion models on ImageNet 512×512 with fewer NFEs, and demonstrating scalability benefits.

Claims And Evidence:
The claims made in the submission are generally supported by clear and convincing evidence. The authors provide extensive experimental results on ImageNet 256×256 and 512×512, comparing eMIGM with state-of-the-art generative models. They demonstrate that eMIGM achieves lower FID scores with fewer NFEs and parameters compared to models like VAR and even outperforms some diffusion models. The ablation studies on different components (mask schedule, weighting function, model architecture, time truncation, CFG with mask) provide strong evidence for the design choices made. However, some claims could be further strengthened by additional analyses. For instance, while the authors claim that larger models are more training- and sampling-efficient, a more detailed analysis of the computational resources required for different model sizes and the trade-offs between model size and performance would provide more comprehensive support.

Methods And Evaluation Criteria:
The proposed methods and evaluation criteria are appropriate for the problem of masked image generation. The unified framework integrating masked image modeling and masked diffusion models makes sense, as it allows for a systematic exploration of the design space. The choice of Fréchet Inception Distance (FID) as the evaluation metric is standard in the field and suitable for comparing image generation quality. The experimental designs, including comparisons with various state-of-the-art models and ablation studies, are well-structured to validate the effectiveness and efficiency of eMIGM.

Theoretical Claims:
The conceptual framework and derivations related to unifying masked image generation and diffusion models seem logically consistent based on the explanations provided.

Experimental Designs Or Analyses:
The experimental designs and analyses are sound and valid. The authors conduct extensive experiments on ImageNet datasets, comparing eMIGM with a wide range of generative models, including diffusion models, autoregressive models, GANs, and other masked models.

Supplementary Material: N/A

Relation To Broader Scientific Literature:
The key contributions of the paper are well-related to the broader scientific literature. The work builds upon and advances previous research in masked image modeling (e.g., MaskGIT, MAR) and masked diffusion models, integrating their strengths within a unified framework. It also connects to the extensive literature on diffusion models, autoregressive models, GANs, and other generative models, positioning eMIGM as a competitive alternative.

Essential References Not Discussed:
[1] HART: Efficient Visual Generation with Hybrid Autoregressive Transformer, ICLR 2025.
[2] Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis, ICLR 2025.
[3] Bag of Design Choices for Inference of High-Resolution Masked Generative Transformer.
[4] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation, ECCV 2024.

Other Strengths And Weaknesses:
Strengths:
- The paper presents a novel unified framework that integrates masked image generation and diffusion models, offering a systematic approach to exploring the design space.
- Extensive experiments demonstrate the effectiveness and efficiency of eMIGM, showing strong performance on ImageNet with fewer computational resources compared to state-of-the-art models.
- The introduction of the time interval strategy for classifier-free guidance is a valuable contribution that improves sampling efficiency.
- The ablation studies provide valuable insights into the impact of different design choices, helping to guide future research in this area.

Weaknesses:
- While the paper shows strong results on ImageNet, the generalizability to other datasets and domains could be further explored.
- The paper could benefit from a more detailed discussion of the practical implications of using eMIGM.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. How does eMIGM handle different image resolutions beyond those tested in the paper (256×256 and 512×512)? Are there any limitations or adjustments needed when applying the model to higher or lower resolution images?
2. What are the main computational bottlenecks when scaling up eMIGM models, and are there any plans to explore more efficient architectural designs?
3. Could you discuss the potential applications of eMIGM beyond image generation?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal:
> **Claims:** Some claims could be further strengthened by additional analyses.

We appreciate your feedback regarding our efficiency claims. Our analysis of training efficiency examined the relationship between training FLOPs and FID scores. As shown in Fig. 4(b), larger eMIGM models achieve better FID scores at equivalent FLOP budgets - for example, eMIGM-L outperforms eMIGM-B when both consume approximately 10^20 FLOPs, confirming the superior training efficiency of larger models. Regarding sampling efficiency, we evaluated the inference speed-FID trade-off (Fig. 4(c)), with measurements conducted on an A100 GPU using batch size 256. Results demonstrate that larger eMIGM models consistently deliver better FID scores than smaller models at comparable inference speeds. We will follow your suggestion to enhance the clarity of these findings in our revised paper.

> **References:** Some references are not discussed.

Thank you for suggesting these references. We will incorporate a discussion of them into the revised paper.

> **Weak 1:** The generalizability to other datasets and domains could be further explored.

We appreciate your feedback regarding the generalizability of eMIGM. eMIGM performs effectively with both VAE (ImageNet 256×256) and DC-AE (ImageNet 512×512) tokenizers despite their distinct latent spaces. This adaptability highlights the transferability of our design principles across different data representations. We plan to explore eMIGM's application across diverse datasets and domains in future work.

> **Weak 2:** The paper could benefit from a more detailed discussion of the practical implications of using eMIGM.

We appreciate this valuable suggestion. The practical implications of eMIGM are significant and multifaceted. Our model achieves higher-quality sample generation with fewer sampling steps (NFEs), offering substantial advantages in computational resource utilization. Based on these efficiency gains, we believe eMIGM has strong potential for effective application across diverse domains currently served by diffusion models, including text-to-image and video generation. Besides, we systematically identify better design choices in masked generative modeling, offering practical guidance for developing efficient models. Our framework demonstrates how empirical analysis can effectively guide architectural decisions in this domain.

> **Q1:** Are there any limitations or adjustments needed when applying the model to higher or lower resolution images?

Thank you for this question. When applying eMIGM to different resolutions, we find that the model architecture is inherently resolution-agnostic. The transformer backbone can process different sequence lengths without architectural modifications, allowing for flexible adaptation to various image resolutions. Our experiments on both 256×256 and 512×512 resolutions demonstrate this capability. The primary adjustment needed is simply retraining the model on the target-resolution data, as is standard practice with most generative models.

> **Q2:** What are the main computational bottlenecks when scaling up eMIGM models?

Thank you for this important question. In our current implementation and experiments with eMIGM, we have not yet encountered significant computational bottlenecks that would require specialized optimization techniques. The model scales effectively with our available computational resources.

> **Q3:** Are there any plans to explore more efficient architectural designs?

We appreciate this valuable question. For future efficient architectural designs, we are exploring two promising directions: (1) the U-ViT architecture, which achieves comparable performance to DiT on ImageNet 256×256 while using only ~30% of the training FLOPs through its U-shaped design; and (2) REPA-inspired alignment between eMIGM's intermediate Transformer outputs and pretrained DINOv2 encoder features. Our preliminary experiments with the latter approach showed slight improvements when aligning earlier layers (4th), while later layers (18th) degraded performance. Though the current benefits are marginal, more systematic investigation could yield substantial efficiency gains in future work.

> **Q4:** Could you discuss the potential applications of eMIGM beyond image generation?

We appreciate this important question about eMIGM's broader applications. The model's core mechanism of predicting masked content from surrounding context naturally extends to various image manipulation tasks, including inpainting (treating missing regions as masks), conditional editing (regenerating masked objects based on specified conditions), and outpainting (predicting exterior content around a central image). Beyond images, we believe eMIGM could be adapted for text-to-image, video, and audio generation, though this would require carefully designed input formats and potential architectural modifications to accommodate these diverse data modalities.
Summary:
This work explores masked diffusion and image modeling through a unified framework, systematically analyzing several key design choices in this domain. Within this framework, the authors ablate masking schedules, loss weighting, and sampling strategies, leading to improvements over existing standards in each area. Building on these insights, they propose eMIGM, an improved masked diffusion modeling method that rivals state-of-the-art continuous diffusion models like EDM2. Experimental results demonstrate that eMIGM achieves competitive FID scores on ImageNet at 256$\times$256 and 512$\times$512 resolutions while requiring fewer NFEs than most diffusion models.

Claims And Evidence:
The paper's main claims regarding performance and efficiency are well-supported by the experimental results. However, some claims should be rephrased to better align with common practices in prior works. Specifically, using a weight schedule for guidance is a well-established technique for enhancing the diversity of CFG, as proposed in several recent works [1, 2, 3]. Additionally, the unsupervised guidance method was introduced in [4] and is closely related to [5]. To ensure clarity and accuracy, the authors should revise certain sections to avoid presenting these established methods as novel contributions of this work.

[1] Sadat S, Buhmann J, Bradley D, Hilliges O, Weber RM. CADS: Unleashing the Diversity of Diffusion Models through Condition-Annealed Sampling. arXiv preprint arXiv:2310.17347, 2023.
[2] Kynkäänniemi T, Aittala M, Karras T, Laine S, Aila T, Lehtinen J. Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models. arXiv preprint arXiv:2404.07724, 2024.
[3] Wang X, Dufour N, Andreou N, Cani MP, Abrevaya VF, Picard D, Kalogeiton V. Analysis of Classifier-Free Guidance Weight Schedulers. arXiv preprint arXiv:2404.13040, 2024.
[4] Nie S, Zhu F, Du C, Pang T, Liu Q, Zeng G, Lin M, Li C. Scaling up Masked Diffusion Models on Text. arXiv preprint arXiv:2410.18514, 2024.
[5] Karras T, Aittala M, Kynkäänniemi T, Lehtinen J, Aila T, Laine S. Guiding a Diffusion Model with a Bad Version of Itself. Advances in Neural Information Processing Systems, 2024;37:52996-53021.

Methods And Evaluation Criteria:
The authors use well-established benchmarks to evaluate the performance of eMIGM.

Theoretical Claims:
There is no major theoretical claim in the paper.

Experimental Designs Or Analyses:
The experiments are well-designed and clearly show how different components affect the performance of eMIGM.

Supplementary Material:
I have reviewed all sections in the supplementary material.

Relation To Broader Scientific Literature:
The authors discuss most of the relevant prior work in the main text. Masked diffusion modeling is a novel technique with significant potential for generative modeling, yet it remains underexplored. Since most existing methods in this domain have been developed for language modeling, the authors' contribution to advancing performant MDMs for image generation is noteworthy. As such, the paper is well-positioned within the current literature on masked diffusion models. Moreover, the proposed unified framework can facilitate more systematic research into the performance of these models. That said, some claims presented as new contributions require greater clarity. Please refer to the Claims And Evidence section for further details.

Essential References Not Discussed:
Please refer to the Claims And Evidence section for further details.

Other Strengths And Weaknesses:
### **Strengths**
- The paper is well-written, with clear and easy-to-follow explanations.
- The experiments are thorough, with a structured, step-by-step introduction that enhances readability and understanding.
- Improving the performance of discrete diffusion models for image generation is an important and timely research direction.

### **Weaknesses**
- Some prior works are not adequately discussed, which affects the accuracy of certain novelty claims.
- The number of NFEs used by the model remains relatively high (e.g., exceeding 64), which slightly contradicts the paper's efficiency claims, although the method still performs well with lower NFEs.
- The comparisons in Tables 2 and 3 are not entirely fair, as most reported FIDs are calculated using constant CFG. For instance, EDM2 with Guidance Interval achieves an FID of 1.40, whereas the reported value is 1.81. This discrepancy should be addressed for a more accurate comparison.
- The NFE comparison with EDM2 is not entirely representative, as EDM2's guidance network is relatively small. Consequently, the NFEs for the conditional and unconditional parts do not have the same computational cost. A more appropriate metric for comparing the sampling speed of eMIGM with EDM2 would be generation throughput (images/sec).

Other Comments Or Suggestions:
- The discussion on the mask schedule suggests that the cosine schedule is optimal, yet the authors ultimately use the exponential schedule. This should be clarified - one way to address this is to discuss the weighting strategy and masking schedule together to explain why the exponential schedule with $w(t) = 1$ is the final choice.
- If the diffusion model is used for clean-data prediction similar to MAR, it is unclear how much additional sampling cost is introduced by the diffusion model beyond the NFEs required for the main transformer.
- Figure 4 is misplaced.

Questions For Authors:
1) Have you experimented with other weighting and masking functions, such as sigmoid masking or the log-normal weighting from EDM?
2) How does the method perform when using fewer than 16 sampling steps?
3) Could the diffusion component for data prediction benefit from classifier-free guidance instead of temperature sampling?
4) Does the model still receive the class condition as input when using the CFG mask token for training?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer 7cuF for the interest and acknowledgement of our contributions and the valuable comments. We respond below to your questions and concerns. > **Claim1:** using a weight schedule for guidance Compared to existing work, our approach is motivated by MDMs' unique irreversible token generation constraint. We found that early strong guidance restricts generation diversity and increases FID scores. We will revise our paper to properly acknowledge prior work and clarify our specific contributions to guidance strategies for MDMs. > **Claim2 and Weak1**: unsupervised guidance method We will clarify that we adapt the unsupervised CFG from text generation [4] to image generation. For clarity, we rename it to "CFG with Mask" to better reflect our focus on masked image generation. Besides, we will add a discussion of [5] in the final version. > **Weak2:** number of NFEs remains relatively high Our work focuses on improving masked image modeling efficiency. For example, in 512x512 generation, eMIGM-L matches MAR's performance with fewer NFEs (64x1.25 vs 256x2). We believe future work applying distillation could further improve efficiency. > **Weak3:** Tables 2 and 3 are not entirely fair While we previously cited results from the original EDM2 paper, we will update our comparison to include EDM2 with Guidance Interval [2]. > **Weak4:** NFE comparison with EDM2 We conducted additional experiments comparing sampling speeds on a single A100 GPU (batch size 256). As shown below, eMIGM achieves faster sampling than EDM2 while maintaining competitive FID. Model|Avg sec per image↓|FID↓ -|-|- eMIGM-L|0.165|1.77 EDM2-XXL|0.552|1.81 EDM2-XXL with interval|0.481|1.40 > **Sugg1:** More discuss about weighting strategy and masking schedule Our experiments showed that weighting functions significantly affect noise schedule performance. Using $w(t)=\frac{\gamma_t'}{\gamma_t}$ led to unstable training, especially with the exp schedule. 
Switching to $w(t)=1$ stabilized training across all schedules and improved performance, with the exp schedule yielding the best results. We adopted this combination as our default and will clarify this relationship.

> **Sugg2:** About additional sampling cost

In our 512x512 image generation experiments with eMIGM-L (NFE=64x1.25), the diffusion model needs 14 sampling steps using DPM-Solver, adding approximately 14% to the sampling time on a single A100 GPU (batch size 256). This is more efficient than MAR, which requires 100 diffusion steps. As shown in the response to W4, eMIGM achieves faster sampling (0.165s vs 0.552s per image) while maintaining competitive FID scores compared to EDM2. We will include an efficiency comparison table in our final version.

> **Sugg3:** Fig. 4 is misplaced

We will relocate Fig. 4 to be positioned near Section 6.1.

> **Q1:** Experimented with other weighting and masking functions

Regarding the mask schedules, we explored three mask schedules (linear, cosine, and exponential) constrained by $\gamma_0 \approx 0$ and $\gamma_1 \approx 1$. In response to your question, we introduce a log-exp schedule that balances mask ratios by reducing the extreme cases of both high and low masking in the exp schedule. The schedule is defined as: $\gamma_t=\frac{\log\left(1 + (e^5 - 1) \cdot t\right)}{5}$. Following the experimental setup in Figure 2(b) with $w(t)=1$, we present comparative FID in the table below.

Epoch|Linear|Cosine|Exp|Log-Exp
-|-|-|-|-
100|38.66|24.99|28.63|25.38
200|30.55|16.70|17.97|11.81
300|24.55|15.00|11.57|12.48
400|24.96|12.39|11.90|9.91

The log-exp schedule shows better convergence and performance, validating the benefits of exploring new masking schedules. Regarding weighting functions, we explored two primary options: $w(t) = \frac{\gamma_t^\prime}{\gamma_t}$ and $w(t) = 1$. The log-normal weighting from EDM isn't directly transferable to our framework.
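As a quick numerical check of the log-exp schedule stated in the Q1 answer above (a minimal sketch with our own helper names, not code from the paper), one can verify that it satisfies the boundary constraints $\gamma_0 \approx 0$ and $\gamma_1 \approx 1$ and lies above the linear schedule at intermediate $t$:

```python
import math

def gamma_log_exp(t: float) -> float:
    """Log-exp mask schedule from the Q1 answer: gamma_t = log(1 + (e^5 - 1) t) / 5."""
    return math.log(1.0 + (math.e**5 - 1.0) * t) / 5.0

def gamma_linear(t: float) -> float:
    return t

# Boundary constraints gamma_0 ~= 0 and gamma_1 ~= 1 hold (exactly, up to float error).
assert gamma_log_exp(0.0) == 0.0
assert abs(gamma_log_exp(1.0) - 1.0) < 1e-9

# The schedule stays above the linear one at intermediate t.
for t in (0.25, 0.5, 0.75):
    assert gamma_log_exp(t) > gamma_linear(t)
```

The check at $t=1$ follows because $\log(1 + (e^5-1)) / 5 = \log(e^5)/5 = 1$.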
Exploring alternative weighting functions remains promising for future work.

> **Q2:** Performance with <16 sampling steps

We evaluated eMIGM's performance with fewer sampling steps on ImageNet 256×256, with results shown in the table below:

Method|NFE|FID
-|-|-
eMIGM-XS|8x1.2|5.19
eMIGM-S|8x1.2|4.69
eMIGM-B|8x1.2|3.77
eMIGM-L|8x1.2|3.30
eMIGM-H|8x1.2|3.07

> **Q3:** Could the diffusion component benefit from CFG

Our diffusion component already implements classifier-free guidance. During sampling, our model processes both class and mask token inputs to generate $z_c$ and $z_u$ outputs. CFG guides generation toward class-conditioned results using the formula $\epsilon=\epsilon_\theta(x_t|t,z_u)+\omega\cdot(\epsilon_\theta(x_t|t,z_c)-\epsilon_\theta(x_t|t,z_u))$, where $\omega$ is the guidance scale. We will clarify this in our final paper.

> **Q4:** Does the model receive the class condition as input

The model still receives class conditions during training, but we randomly replace the true label with a mask token at a fixed probability. We will clarify this in our revised paper.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my questions in the rebuttal. I also have the following remaining questions:

1. If the model uses a mask token as input, how is this different from learning an unconditional model in classifier-free guidance (CFG) by introducing an additional empty class?
2. Do you have comparisons of eMIGM-H using 8 sampling steps with other diffusion-based models operating at similarly low step counts? While this is not strictly necessary for the rebuttal, given the strong performance of eMIGM-H in such settings, it could further enhance the contributions of the paper.
3. Similarly, since the Log-Exp schedule appears to outperform other scheduling methods, I'm curious what the final performance would be when combined with this masking strategy.

Overall, I believe the paper is well-written, and the core contributions are interesting.
Although some components build on prior work, the unification of different approaches could be valuable for future research in masked image modeling, especially if the authors release the code. Therefore, I would like to increase my score to Accept.

**Minor comment:** The fact that early strong guidance negatively impacts diversity has been previously noted in [1, 2]. Please ensure that related prior work is appropriately discussed and cited in the final version of the paper.

[1] Sadat, S., Buhmann, J., Bradley, D., Hilliges, O., and Weber, R. M. CADS: Unleashing the diversity of diffusion models through condition-annealed sampling. arXiv preprint arXiv:2310.17347, 2023.
[2] Kynkäänniemi, T., Aittala, M., Karras, T., Laine, S., Aila, T., and Lehtinen, J. Applying guidance in a limited interval improves sample and distribution quality in diffusion models. arXiv preprint arXiv:2404.07724, 2024.

---

Reply to Comment 1.1.1: Comment: We thank Reviewer 7cuF for acknowledging our contributions since the beginning. We are glad that the vast majority of the concerns have been addressed. We respond below to your remaining questions.

> **Q1:** If the model uses a mask token as input, how is this different from learning an unconditional model in classifier-free guidance (CFG) by introducing an additional empty class?

When mask tokens are used as input, training of the unconditional model is fundamentally equivalent, since mask tokens introduce no additional information. However, implementation-wise, using an additional empty class for unconditional model training optimizes both the mask token and the empty class as trainable tensors, whereas using mask tokens as input only optimizes the mask token itself.

> **Q2:** Do you have comparisons of eMIGM-H using 8 sampling steps with other diffusion-based models operating at similarly low step counts?
We conducted a comparative analysis of eMIGM-H against other diffusion-based models at low step counts on ImageNet 256×256, with results shown below:

Method|NFE|FID|IS
-|-|-|-
eMIGM-H|8x1.2|3.07|299.4
eMIGM-H|16x1.2|2.02|299.4
DiT-XL/2|8x2|63.9|53.8
DiT-XL/2|16x2|19.5|136.9
REPA|8x1.7|100.19|22.7
REPA|16x1.7|15.29|170.6

Using official implementations of DiT and REPA with modified sampling steps, we observe that eMIGM-H demonstrates superior performance at low NFEs. Specifically, with 8x1.2 steps, eMIGM-H achieves an FID of 3.07 and an IS of 299.4, significantly outperforming both DiT-XL/2 (FID: 63.9, IS: 53.8) and REPA (FID: 100.19, IS: 22.7) under comparable settings. These results highlight eMIGM-H's exceptional efficiency and generation quality at low step counts. We will follow your suggestion and incorporate these findings in our revised paper.

> **Q3:** What would the final performance be when combined with the Log-Exp mask schedule?

We sincerely appreciate your valuable question. In response, we are currently conducting this experiment. However, due to time constraints, the experiment requires approximately two more days to complete, while the reviewer-author rebuttal period concludes in just a few hours. Therefore, we will include these results in our revised paper.

> **Minor Comment:** Please ensure that related prior work [1, 2] is appropriately discussed and cited in the final version of the paper.

We sincerely appreciate you bringing these related prior works to our attention. We commit to thoroughly discussing these works and properly citing them in the final version of our paper. Thank you again for your valuable feedback.

> **Acknowledgment of Positive Feedback**

We sincerely thank the reviewer for their positive feedback and for recognizing the value of our work. We are particularly grateful for the decision to increase the score to Accept, which is a strong endorsement of our research contributions.
We appreciate the reviewer's thoughtful consideration throughout the review process and their constructive suggestions that have helped improve our paper. We look forward to incorporating all feedback in our final version.
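The CFG combination described in the Q3 answer of this thread can be sketched numerically. This is a minimal illustration of the formula $\epsilon=\epsilon_u+\omega\cdot(\epsilon_c-\epsilon_u)$ using scalar placeholders (in practice the combination is applied element-wise to the predicted noise tensors); the function name is ours, not the authors' implementation:

```python
def classifier_free_guidance(eps_cond: float, eps_uncond: float, w: float) -> float:
    """Combine conditional and unconditional predictions:
    eps = eps_u + w * (eps_c - eps_u), with w the guidance scale."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_c, eps_u = 0.8, 0.2  # placeholder scalar predictions for eps_theta(.|z_c), eps_theta(.|z_u)

# w = 1 recovers the conditional prediction; w = 0 the unconditional one.
assert classifier_free_guidance(eps_c, eps_u, 1.0) == 0.8
assert classifier_free_guidance(eps_c, eps_u, 0.0) == 0.2

# w > 1 extrapolates past the conditional prediction, strengthening guidance.
assert abs(classifier_free_guidance(eps_c, eps_u, 2.0) - 1.4) < 1e-12
```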
Efficient Heterogeneity-Aware Federated Active Data Selection
Accept (poster)
Summary: This paper considers active linear regression in the federated learning setting. It adapts leverage score sampling to federated learning for active learning. To make the method work in federated learning, it requires two components: data selection, which estimates the leverage scores, and model learning, which solves the linear regression problem once the data is selected and labels are obtained. For both components, it applies the idea of FedSVD, which assumes a trusted authority that generates random orthogonal matrices P and Q; each client sends its masked local data $PX_iQ$ to the server; the server performs the global computation and sends the results back to each client. It claims that the method is privacy-preserving. It provides a theoretical analysis showing that the method achieves a label complexity similar to the centralized learning setting. It also provides some empirical evaluation.

Claims And Evidence: One of my major concerns is that I'm not convinced that the proposed method is privacy-preserving. The only support for this seems to be the idea that "there are infinitely many possible $\bar{X}$ can be masked into X' ($PX_iQ$)" (line 182, left column), but this only guarantees that one cannot recover $X_i$ from $PX_iQ$; recovering it up to some linear transformation already leaks a significant amount of information in my opinion. A more precise theoretical characterization of what kind of "privacy" is preserved would be helpful.

Methods And Evaluation Criteria: N/A

Theoretical Claims: I'm not very convinced that the main theoretical contribution, Theorem 4.1, is correct. It looks to me that in the proposed method, the leverage score $\tau_i$ is computed locally. Can you provide a formal proof showing that the locally computed $\tau_i$ is the same as, or close to, the global leverage score defined as in equation (1)?
Experimental Designs Or Analyses: N/A

Supplementary Material: N/A

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 1
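To make the masking concern raised in this review concrete, here is a small pure-Python sketch (our own construction, not code from the paper): multiplying client data by orthogonal matrices, $X' = PXQ$, changes every entry of the matrix but preserves rotation-invariant quantities such as the singular values, which is exactly the kind of structural information the reviewer worries about. For a 2×2 example, the trace and determinant of $X^\top X$ determine the singular values, so it suffices to check those:

```python
import math

def rot(theta: float) -> list:
    """2x2 rotation matrix (an orthogonal matrix)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A: list, B: list) -> list:
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A: list) -> list:
    return [list(r) for r in zip(*A)]

def gram_invariants(A: list) -> tuple:
    """Trace and determinant of A^T A; in 2-D these determine the singular values."""
    G = matmul(transpose(A), A)
    return (G[0][0] + G[1][1], G[0][0] * G[1][1] - G[0][1] * G[1][0])

# Client data X, masked as X' = P X Q with orthogonal P, Q (angles chosen arbitrarily).
X = [[3.0, 1.0], [0.0, 2.0]]
P, Q = rot(0.7), rot(-1.3)
Xm = matmul(matmul(P, X), Q)

t1, d1 = gram_invariants(X)
t2, d2 = gram_invariants(Xm)
# Singular values (all rotation-invariant structure) survive the masking...
assert abs(t1 - t2) < 1e-9 and abs(d1 - d2) < 1e-9
# ...while the individual entries of the masked matrix differ from the raw data.
assert any(abs(X[i][j] - Xm[i][j]) > 1e-6 for i in range(2) for j in range(2))
```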
Rebuttal 1: Rebuttal: Thank you for your careful review of our manuscript. Below we respectfully and explicitly address your main concerns point-by-point.

> Q1-*Claims And Evidence*: Concerns about privacy preservation and lack of a precise theoretical characterization of what kind of "privacy" is preserved

Thank you for raising this concern. We acknowledge that FALE utilizes masking techniques based on the FedSVD algorithm, which leads to partial leakage of structural information: global distributional properties could be indirectly inferred from leverage scores, although direct identification of local client data remains infeasible. However, such limited structural leakage is often accepted in practical FL scenarios; please see [1-4]. Following the established FedSVD, **our method maintains the same level of data confidentiality against direct reconstruction**. To clarify this issue further, we have explicitly stated these limitations and the level of privacy protection provided in the revised manuscript.

Specifically, **FALE adopts a one-pass querying mechanism and avoids persistent metadata tracking**. Once selected data indices are returned to clients, these mappings are discarded, reducing the risk of leakage. Furthermore, **FedSVD's privacy-preserving masking mechanism ensures that raw client data remains inaccessible**. Although global distributional properties might be indirectly inferred from leverage scores, we wish to point out that this degree of privacy risk is **generally acceptable** in the FL literature (see [1-4]). We hope for your understanding, and we will follow up on and engage with more related privacy work in the future.

> Q2-*Theoretical Claims*: Concerns regarding the correctness and validity of Theorem 4.1, specifically the proof that locally computed leverage scores match the global leverage scores

We clarify that the validity of Theorem 4.1 relies fundamentally on the correctness of the leverage scores computed via FedSVD.
FedSVD has been rigorously proven to securely compute exact global singular vectors from distributed, non-i.i.d. data without exposing raw client data. **Since leverage scores are directly computed from these securely aggregated global singular vectors, the leverage scores computed locally by each client exactly match the global leverage scores computed centrally**. Consequently, Theorem 4.1 remains rigorously correct in our federated setting. In the revised manuscript, we have rewritten Theorem 4.1 as follows.

**Theorem 4.1** *Consider FL with non-i.i.d. data. Let $k$ be the number of clients, $\epsilon \in (0, 1]$ be an error parameter, $X_i \in \mathbb{R}^{n_i \times d}$ be the corresponding data matrices, and $\mathbf{y}_i \in \mathbb{R}^{n_i}$ be the initially unknown target vectors. Denote by $X = [X_1^\top, \ldots, X_k^\top]^\top$, $\mathbf{y} = [\mathbf{y}_1^\top, \dots, \mathbf{y}_k^\top]^\top$, and $\mathbf{\theta}^\ast = \arg \min_{\mathbf{\theta}} \| X \mathbf{\theta} - \mathbf{y} \|_2^2$. **Algorithm 1 computes the global leverage scores for the data in each client**. Moreover, if Algorithm 1 queries $\mathcal{O}(d \log d)$ data points and outputs the model $\mathbf{\theta}^g$, then with probability at least 0.99:*

$\| X \mathbf{\theta}^g - \mathbf{y} \|_2^2 \leq \alpha \| X \mathbf{\theta}^\ast - \mathbf{y} \|_2^2,$

*for some constant $\alpha$. In addition, if it queries $\Omega(d \log d + d / \epsilon)$ data points and outputs $\mathbf{\theta}^g$, then with probability at least $1/3$:*

$\| X \mathbf{\theta}^g - \mathbf{y} \|_2^2 \leq (1 + \epsilon) \| X \mathbf{\theta}^\ast - \mathbf{y} \|_2^2$.

Finally, we would like to emphasize that FALE, to the best of our knowledge, introduces the first FAL approach with a global data selection strategy, **substantially different from existing local-selection methods and more applicable to practical heterogeneous FL settings**.
Moreover, our theoretical analysis of query complexity represents an important and novel contribution to the FAL literature.

### References

[1] Rothchild, Daniel, et al. "FetchSGD: Communication-efficient federated learning with sketching." International Conference on Machine Learning. PMLR, 2020.
[2] Gu, Hang, et al. "FedAux: An efficient framework for hybrid federated learning." ICC 2022 - IEEE International Conference on Communications. IEEE, 2022.
[3] Li, Chengxi, Gang Li, and Pramod K. Varshney. "Decentralized federated learning via mutual knowledge transfer." IEEE Internet of Things Journal 9.2 (2021): 1136-1147.
[4] Chai, D., Wang, L., Zhang, J., Yang, L., Cai, S., Chen, K., and Yang, Q. "Practical lossless federated singular vector decomposition over billion-scale data." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 46-55, 2022.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. My question on Theorem 4.1 is addressed. However, I would like to keep my score since:

- As pointed out by Reviewer 2DKU, the main technique of this paper is a direct application of FedSVD and leverage score sampling, and there are not many new insights or new techniques. The novelty may not meet the bar for ICML.
- I still think passing a linear transform of the data to the server leaks too much information. Even in the references [1-3] mentioned in the rebuttal ([4] seems to be a simple variant of FedSVD), it looks like they are not directly passing such raw data after linear transforms.

---

Reply to Comment 1.1.1: Comment: Thank you for your second-round comments and suggestions. We would like to respectfully provide clarifications on your remaining concerns.

1) Regarding novelty

We respectfully argue that our work **is not merely a direct application of FedSVD and leverage score sampling to the FAL setting**.
Instead, FedSVD is invoked only as a necessary subroutine in our proposed method, enabling the secure computation of the global leverage scores required for data selection. It is important to emphasize that leverage score sampling cannot be directly applied in the FL setting due to privacy concerns, nor is it the only possible method suitable for FAL; FedSVD is what makes it possible to employ leverage scores securely in this setting.

Moreover, **the primary contribution of our work lies in defining and establishing an entirely new global FAL framework**, rather than simply applying existing techniques to FAL. Different from previous works, our proposed framework, including global data evaluation, global data selection, data indexing, labeling, and privacy-preserving global model aggregation, is systematically established and thoroughly analyzed **for the first time** in our work. We believe that this contribution is significant enough to meet the novelty bar of ICML, as it clearly advances the state-of-the-art in FAL. Additionally, we kindly point out comparable levels of novelty and contribution in recent papers accepted at top conferences such as ICML and NeurIPS (see [1, 2]). We sincerely hope you will reconsider our contributions during your final evaluation.

2) Regarding information leakage

We respectfully argue that transmitting a linearly transformed version of the data reveals only rotation-invariant structural properties (e.g., singular values and eigenvalues). However, **such rotation-invariant information has already been explicitly revealed to the server at the FedSVD step**. Thus, passing linearly transformed data does not further degrade the privacy already established by FedSVD.

Furthermore, we would like to emphasize that our proposed method strictly adopts the secure aggregation and masking mechanisms of FedSVD [3], as we implement our method on top of their code.
Therefore, **the data transmission protocol and privacy guarantees in our method exactly match those proven secure in FedSVD**. In particular, since FedSVD does not directly pass raw data after linear transformation, neither does our proposed FALE method. Thus, we believe our privacy-preserving approach remains consistent with widely accepted practices in the FL literature. Thank you once again for your feedback and for kindly reconsidering our work.

[1] Zhu, Muzhi, et al. "Generative Active Learning for Long-tailed Instance Segmentation." ICML, 2024.
[2] Huang, Lingxiao, et al. "Coresets for Vertical Federated Learning: Regularized Linear Regression and K-Means Clustering." Advances in Neural Information Processing Systems 35 (2022): 29566-29581.
[3] Chai, D., Wang, L., Zhang, J., Yang, L., Cai, S., Chen, K., and Yang, Q. "Practical lossless federated singular vector decomposition over billion-scale data." Proceedings of the 28th ACM SIGKDD, pp. 46-55, 2022.
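As background for the leverage-score discussion in this thread, here is a minimal pure-Python sketch (our own variable names and example, not the authors' implementation) of the standard definition of statistical leverage scores: for a tall matrix $X$ with $d$ columns, $\tau_j = \mathbf{x}_j^\top (X^\top X)^{-1} \mathbf{x}_j$, i.e., the $j$-th diagonal entry of the hat matrix; the scores always sum to $d$. The 2×2 inverse is written out by hand to keep the sketch dependency-free:

```python
def leverage_scores(X: list) -> list:
    """Leverage scores tau_j = x_j^T (X^T X)^{-1} x_j for a tall matrix X
    with d = 2 columns (the 2x2 inverse is written out explicitly)."""
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    c = sum(r[1] * r[1] for r in X)
    det = a * c - b * b
    inv = [[c / det, -b / det], [-b / det, a / det]]  # (X^T X)^{-1}

    def tau(r):
        return sum(r[i] * inv[i][j] * r[j] for i in range(2) for j in range(2))

    return [tau(r) for r in X]

# A symmetric example where every row is equally influential.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
taus = leverage_scores(X)
assert all(abs(t - 2.0 / 3.0) < 1e-12 for t in taus)  # each tau_j = 2/3
assert abs(sum(taus) - 2.0) < 1e-12  # leverage scores sum to d = 2
```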
Summary: This paper proposes the FALE algorithm to select informative data points for non-i.i.d. federated regression tasks. The query strategy performs global data selection using leverage score sampling, where a FedSVD technique is employed to obtain the leverage scores of all data points in federated learning. Furthermore, a global model learning scheme is proposed to fully exploit the labeled data without privacy leakage. Both theoretical and empirical studies are conducted to validate the effectiveness of the proposed method.

Claims And Evidence: The paper claims that FALE enables efficient and effective cross-client data selection in federated regression. This claim is well supported by detailed analyses and a range of experiments.

Methods And Evaluation Criteria: The authors evaluate the approach using 11 benchmark regression datasets. The empirical FL settings align well with existing practices in the literature. The evaluation criteria include the mean MSE loss across different random data splits and the learning curves of the AL approaches. They are correct and appropriate.

Theoretical Claims: Yes, the proofs follow the techniques in leverage score sampling. They appear to be rigorous and correct.

Experimental Designs Or Analyses: I have checked the details of the experimental designs; the setup is both sound and appropriate.

Supplementary Material: Yes, the supplementary material includes the proofs of the theory and presents the algorithm steps of a variant of the proposed method. This additional content enhances the clarity of the work.

Relation To Broader Scientific Literature: The contributions of this paper relate to the active regression problem with $\ell_p$ loss functions.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: Strengths: (1) The proposed method implements global active data selection, mitigating the issue of knowledge overlap among clients in federated learning.
The data selection is conducted in a one-pass way, which is novel and well motivated for FAL scenarios. (2) A global model learning paradigm is introduced and analyzed. It exploits the encrypted labeled data on the server and learns a model that is equivalent to the centralized setting, making it suitable for privacy-preserving learning settings. (3) The effectiveness of the proposed method is validated through comprehensive theoretical and empirical studies. The evaluation is robust. The experiments incorporate recent state-of-the-art methods and a diverse set of benchmark datasets, and the results are statistically significant.

Weaknesses: (1) The method requires clients to encrypt and upload their local data, which might raise concerns regarding potential data leakage and increased communication overhead. (2) The paper employs an encryption technique related to homomorphic encryption; however, it would benefit from a more thorough review of related work in this area. (3) Experimental validation is limited to a federated learning scenario involving 10 clients. Further studies with a larger number of clients are needed to better assess scalability.

Other Comments Or Suggestions: There are some inconsistent symbols in the paper, e.g., in Algorithm 1. The authors should revise the paper again carefully.

Questions For Authors: How does the communication overhead of FALE scale with an increasing number of clients and larger datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review of our paper. Below we respectfully address each of your comments point-by-point.

> Q1-*Other Strengths And Weaknesses*: Concerns regarding potential data leakage and increased communication overhead

Regarding the data leakage concern, please see our response to Reviewer JwJU for more details. Regarding communication overhead, we clarify that the method involves minimal overhead due to the one-pass querying strategy and efficient masking mechanisms. Specifically, the communication cost scales linearly with the number of clients and selected samples, and thus remains practical for large-scale federated scenarios.

> Q2-*Other Strengths And Weaknesses*: Lack of a thorough review of related homomorphic encryption works

Following Reviewer nFHH's suggestion, we have explicitly included and reviewed related works employing homomorphic encryption in the revised manuscript.

> Q3-*Other Comments Or Suggestions*: Some inconsistent symbols

We have carefully proofread the whole paper and eliminated all grammatical, spelling, and punctuation errors. This revision enhances readability and ensures the information is accessible without compromising the depth and accuracy of the content.

> Q4-*Questions For Authors*: Clarification on how the communication overhead of FALE scales with increasing numbers of clients and larger datasets

We wish to clarify further and provide detailed insights regarding communication overhead scalability as follows. The communication overhead of FALE involves three main components: FedSVD, data selection transmission, and global model training.

1. FedSVD has been rigorously validated as highly scalable to large datasets, incurring communication costs proportional to data dimensionality rather than total data size. Thus, this overhead remains manageable even for extremely large-scale datasets.

2.
The cost of transmitting the leverage scores and the indices of the selected data scales linearly with the total number of data instances; each client transmits only a vector. Thus, even with an increasing number of clients or larger datasets, the incremental communication overhead remains relatively modest.

3. The global model training phase has a communication complexity similar to FedSVD and shares the same scalability to large-scale datasets.

Therefore, considering these three components together, **FALE scales efficiently and practically with increasing numbers of clients and larger datasets**. In the revised manuscript, we have explicitly provided quantitative analyses illustrating the communication overhead in various scenarios, thus thoroughly demonstrating its scalability and practical applicability in realistic FL settings.
Summary: This paper presents FALE (Federated Active data selection by LEverage score sampling), a novel Federated Active Learning (FAL) method for regression tasks with non-i.i.d. client data. FALE leverages FedSVD to gather global data information without exposing individual client data. For global model learning, it trains a global model on the server using masked feature matrices and label vectors of the queried data, which are then unmasked on the client side. Theoretical results are provided to validate the superiority of FALE. Experiments on 9 benchmark datasets demonstrate that FALE significantly outperforms existing SOTA methods in terms of mean squared error, validating its effectiveness.

Claims And Evidence: The theoretical results are not convincing; refer to **Theoretical Claims**. The experiments are extensive.

Methods And Evaluation Criteria: Please refer to **Experimental Designs Or Analyses**.

Theoretical Claims:
* The authors provide theoretical guarantees, demonstrating both a constant-factor approximation and a relative-error approximation. However, Theorem 4.1 does not seem to have any relation to the proposed FALE. How can its result demonstrate the effectiveness of FALE?
* Eq. 4 seems to be meaningless: Eq. 4 holds as long as $\alpha$ is large enough.
* The correctness of the proofs for the theoretical claims was not checked.

Experimental Designs Or Analyses: The authors include enough baseline methods and consider sufficient benchmark datasets in the experiments. The proposed FALE method outperforms the other baselines on the benchmark datasets.

Supplementary Material: The supplementary material was not reviewed.

Relation To Broader Scientific Literature: The key contributions are not related to the broader scientific literature.

Essential References Not Discussed: There are no obvious essential references that were not discussed based on the information provided.
Other Strengths And Weaknesses:
* The problem setting of federated active regression is under-explored and meaningful, mitigating issues like client drift and imbalance that are common in federated scenarios.
* The one-pass selection mechanism and privacy-preserving model training are highly relevant for real-world federated settings.
* Limited novelty: the proposed method is a simple application of FedSVD to the FAL field.
* Additional clarification on the computational overhead of the masking and aggregation steps would further strengthen the practical insights.

Other Comments Or Suggestions:
* Much of the text in Fig. 1 is cut off by the frames; please fix this.
* Line 111: the formulation lacks an ending period.
* In Fig. 2, the same label is repeated 9 times; please simplify it.

Questions For Authors: The authors claim that the proposed method does not require an initially labeled dataset. However, the experiments use a labeled dataset to initialize the model. What would happen if no labels were available? Also, how does the size of the labeled set influence the performance of the proposed method?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your time in reviewing our paper. Below we respectfully address your concerns point-by-point.

> Q1-*Theoretical Claims*: The relationship and effectiveness of Thm. 4.1 for FALE

We have improved and rewritten Thm. 4.1 to explicitly clarify its direct connection to the FALE algorithm. Specifically, Thm. 4.1 provides a formal guarantee that the output of Algorithm 1 achieves an approximation to the optimal solution within a constant factor or relative error. This directly verifies the effectiveness of FALE, as it quantifies how close the performance of FALE is to the best achievable performance. Also, please refer to our response to Reviewer JwJU in Q2 for details. Regarding why these results apply in the FL context, please refer to our detailed response to Reviewer 2DKU in Q4.

> Q2-*Theoretical Claims*: Meaningfulness of Eq. (4)

Constant-factor approximations are standard and meaningful theoretical results widely recognized in the learning theory literature (see also [1-2]). **The key point is that the factor $\alpha$ is a fixed, bounded, and problem-independent constant rather than arbitrarily large.** This theoretical guarantee confirms that FALE's selected samples provide sufficient global information. We have also explicitly highlighted and clarified this point in our revised manuscript.

> Q3-*Other Strengths And Weaknesses*: Limited novelty

We wish to clarify our contributions explicitly. To the best of our knowledge, **our proposed FALE method is the first federated active learning approach with a global data selection strategy, substantially different from existing local-selection methods.** Moreover, our theoretical analysis of query complexity represents an important contribution to FAL, demonstrating the theoretical advantages of FALE. We sincerely appreciate your feedback; in the revision, we have explicitly emphasized these novel contributions.
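To illustrate the kind of guarantee discussed in Q1 and Q2 above, namely that leverage-score sampling with reweighting approximates the full least-squares solution, here is a toy one-dimensional sketch of our own construction (not the authors' algorithm; the sample sizes and tolerances are arbitrary choices for illustration). With $d=1$, the leverage score of point $j$ is $\tau_j = x_j^2 / \sum_i x_i^2$; we sample labels with probability proportional to $\tau_j$, reweight by $1/\tau_j$, and solve the subsampled problem:

```python
import random

random.seed(0)

# Synthetic regression data with ground-truth slope 2.5 and small noise.
n = 2000
xs = [random.uniform(-1, 1) for _ in range(n)]
ys = [2.5 * x + random.gauss(0, 0.1) for x in xs]

# 1-D leverage scores: tau_j = x_j^2 / sum_i x_i^2.
sx2 = sum(x * x for x in xs)
taus = [x * x / sx2 for x in xs]

# Full least-squares solution (uses every label).
theta_full = sum(x * y for x, y in zip(xs, ys)) / sx2

# Query only m labels, sampled with probability proportional to tau,
# and solve the 1/tau-reweighted least-squares problem.
m = 200
idx = random.choices(range(n), weights=taus, k=m)
num = sum(ys[j] * xs[j] / taus[j] for j in idx)
den = sum(xs[j] * xs[j] / taus[j] for j in idx)
theta_sub = num / den

# The subsampled estimate is an unbiased, low-variance proxy for the full solution.
assert abs(theta_full - 2.5) < 0.05
assert abs(theta_sub - theta_full) < 0.1
```

The reweighting makes the subsampled normal equations unbiased estimates of the full ones, which is the mechanism behind constant-factor and relative-error bounds of the kind stated in Thm. 4.1.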
> Q4-*Other Strengths And Weaknesses*: Clarification on computational overhead

We have added the following detailed analysis in the revised paper: "**The client-side masking computation scales quadratically with the number of labeled samples per client ($n^L_i$) and linearly with the data dimension $d$, both of which are typically modest in size.** The server-side aggregation step involves solving a regression problem, which modern computing systems can handle efficiently. Therefore, the computational overhead introduced by the masking and aggregation steps is not significant, making our method practical and scalable for realistic federated scenarios."

> Q5-*Other Comments Or Suggestions*: Presentation and readability issues

We have carefully revised all figures and formulations (especially Figs. 1 and 2) in the paper. Specifically, text will be repositioned to avoid being cut off by figure borders, repetitive labels will be simplified, and missing punctuation will be corrected. We appreciate your suggestions for improving clarity.

> Q6-*Questions For Authors*: Influence of labeled set size on FALE performance

We have conducted a new experiment to explore the effect of the size of the initially labeled dataset on the performance of FALE. The results of the proposed methods with different initial labeled data rates (0.01, 0.03, 0.05, 0.1), after querying 1000 data points, are shown below.
| Dataset | 0.01 FALE | 0.01 FALE-local | 0.03 FALE | 0.03 FALE-local | 0.05 FALE | 0.05 FALE-local | 0.1 FALE | 0.1 FALE-local |
|-|-|-|-|-|-|-|-|-|
| ct | 0.15±0.00 | 0.18±0.02 | 0.15±0.00 | 0.25±0.09 | 0.15±0.00 | 0.21±0.06 | 0.14±0.00 | 0.20±0.05 |
| diamonds | 0.11±0.01 | 0.55±0.06 | 0.15±0.01 | 0.55±0.04 | 0.15±0.01 | 0.48±0.03 | 0.13±0.00 | 0.42±0.03 |
| kegg_undir_uci | 0.71±0.01 | 0.74±0.01 | 0.72±0.01 | 0.73±0.01 | 0.72±0.01 | 0.73±0.01 | 0.72±0.00 | 0.72±0.01 |
| mlr_knn_rng | 0.51±0.01 | 0.61±0.03 | 0.51±0.00 | 0.58±0.03 | 0.50±0.00 | 0.57±0.03 | 0.50±0.00 | 0.55±0.03 |
| online_video | 0.49±0.03 | 0.48±0.01 | 0.48±0.01 | 0.48±0.00 | 0.48±0.02 | 0.48±0.00 | 0.48±0.02 | 0.48±0.00 |
| protein | 0.76±0.00 | 0.85±0.01 | 0.77±0.01 | 0.84±0.01 | 0.77±0.01 | 0.83±0.01 | 0.77±0.00 | 0.82±0.01 |
| sarcos | 0.16±0.01 | 0.56±0.03 | 0.18±0.01 | 0.55±0.03 | 0.17±0.01 | 0.54±0.03 | 0.17±0.01 | 0.51±0.03 |
| stock | 0.57±0.00 | 0.59±0.02 | 0.56±0.00 | 0.59±0.02 | 0.56±0.00 | 0.57±0.01 | 0.56±0.00 | 0.55±0.01 |
| wecs | 0.58±0.01 | 0.66±0.05 | 0.57±0.01 | 0.66±0.03 | 0.58±0.01 | 0.65±0.03 | 0.57±0.01 | 0.61±0.03 |

As shown, our proposed method is minimally affected by the size of the initially labeled dataset, demonstrating its robustness across a variety of applications.

### References

[1] Woodruff, David P. "Sketching as a tool for numerical linear algebra." Found. and Trends in Theo. Comp. Sci. 10.1-2 (2014): 1-157.
[2] Mahoney, Michael W. "Randomized algorithms for matrices and data." Found. and Trends in ML 3.2 (2011): 123-224.
Summary: This paper investigates the data selection problem in Federated Active Learning (FAL) and introduces FALE, a score-based sampling method. FALE leverages FedSVD to extract cross-client query information, enabling a leverage score-based sampling strategy for data selection and re-weighting. Theoretical analysis shows that FALE requires O(d log d) queries for a constant-factor approximation and Ω(d log d + d/ε) queries for a relative error approximation with high probability. Experimental results on regression benchmarks validate the effectiveness of the proposed approach. Claims And Evidence: The main contribution of this paper is a privacy-preserving data selection algorithm for regression tasks with non-i.i.d. client data in federated learning. The authors claim that the proposed method operates without requiring an initial labeled set and can select instances in a single pass, thereby reducing communication costs. Additionally, they provide a theoretical analysis of the query complexity of their approach. A key strength of the proposed method is its inherent support for unsupervised learning, as it leverages FedSVD decomposition to extract cross-client query information. However, several concerns arise regarding the claims: 1. Efficiency and Privacy: The paper mainly references analysis of FedSVD to support its efficiency and privacy claims but does not provide an independent communication complexity or privacy analysis for FALE itself. This omission is significant because FALE introduces additional challenges beyond standard SVD tasks. A key concern is metadata tracking, where the server must maintain mappings between sample indices and client identities to return selection results. This process may introduce extra communication and storage overhead, which is not explicitly analyzed. Additionally, it poses potential privacy risks, as it could expose certain distributional properties of local datasets, thereby compromising privacy. 2. 
Handling of Heterogeneous Client Data: The paper claims to address federated learning with heterogeneous client data; however, its approach primarily relies on FedSVD, where masked client data is aggregated on the server into a matrix representation, effectively forming a pooled dataset. There is no explicit mechanism or dedicated analysis in the paper that directly addresses the challenges of heterogeneity in federated settings. Without specific design considerations and analysis for non-i.i.d. data, the claim that the proposed method effectively handles heterogeneous client distributions remains unsubstantiated. Overall, while the proposed method offers an interesting approach to privacy-preserving data selection, the lack of independent analyses on communication efficiency, privacy risks, and heterogeneous data handling weakens the validity of its claims. A more thorough evaluation of these aspects would strengthen the impact and reliability of the work. Methods And Evaluation Criteria: The methods and evaluation criteria presented in the paper are partially reasonable but require further refinement and justification. Federated Active Learning and Data Heterogeneity Considerations: The approach of extracting cross-client information using FedSVD followed by leverage scoring is a reasonable choice. However, the paper does not sufficiently address the specific challenges of FAL and data heterogeneity (as discussed in the previous section). A more thorough analysis of how the method adapts to varying client distributions and the selection dynamics in an active learning scenario would strengthen its contributions. Empirical Evaluation Scope: The experimental evaluation primarily focuses on regression performance using Mean Squared Error (MSE) as the evaluation metric. However, the paper lacks ablation studies to analyze the contribution of each algorithmic component and case studies to examine the characteristics of the selected data instances. 
Without these, it remains unclear whether each step of the proposed method is essential and effective, and what types of data are preferentially selected. A more comprehensive evaluation, including component-wise validation and qualitative insights, would improve the reliability and interpretability of the results. Overall, while the proposed methodology is reasonable, a deeper investigation into federated active learning constraints and a more comprehensive empirical analysis are necessary to substantiate its claims and enhance its applicability. Theoretical Claims: The proof for Theorem 4.1 leverages FedSVD to transform distributed client data into a pooled matrix representation, applying several lemmas originally proposed for centralized matrix approximation. However, it is unclear whether these results remain valid in the non-i.i.d. and distributed setting of FL. In FL, client data distributions are typically non-i.i.d., unbalanced, and locally biased, which may violate the assumptions underlying these lemmas, many of which rely on random or structured sampling. Experimental Designs Or Analyses: The experimental design is reasonable but lacks soundness in several key aspects: 1. Scalability to Complex Models: All experiments are conducted on a single-layer model and mainly low-dimensional feature spaces. It remains unclear whether the proposed method can scale effectively to deeper architectures or more complex models in real-world applications. Testing on more diverse model architectures would provide stronger validation. 2. Federated Scalability: The experiments are limited to only 10 clients with a query budget of 5 instances per client. This small-scale setting does not reflect realistic federated learning scenarios. 3. 
Computational and Communication Costs: The paper claims reduced cost due to FedSVD-based selection, but there are no experiments analyzing server-side computational overhead (SVD computation) or communication cost (data collection and transmission). Supplementary Material: Yes, A.1. Proof of Theorem 4.1 and A.2. Algorithm of FALE-local. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Federated Active Learning (FAL) is a timely and practical research topic, addressing key challenges in federated learning by reducing labeling costs. The application of leverage score sampling on unlabeled data is promising, as it provides a principled way to select informative instances while preserving privacy. Weaknesses: The paper lacks originality, as its core methodology and claims, particularly regarding privacy and communication efficiency, heavily rely on FedSVD without substantial novel extensions. While leveraging FedSVD for data selection is reasonable, the work does not introduce significant new contributions to the specific challenges of FAL, such as adaptive query strategies, uncertainty estimation, or handling of heterogeneous client data. Other Comments Or Suggestions: N/A Questions For Authors: Could you clarify key terminology such as "one-pass" (one communication between client and server?), "query budget" (number of instances queried?), and "query complexity" to ensure a precise understanding of their definitions and implications in the context of your method? Based on the experiments, the query budget is set to 5 per client with 10 query rounds. Does this mean that the server alone handles the selection process, selecting and labeling 50 instances per client before returning them for training?
Additionally, for trials with 15,000 queries, does the sampling algorithm converge in a way that leads to repeated selection of the same instances, potentially collapsing diversity in the selected samples? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your constructive suggestions. Below, we address each of your comments in detail.

> Q1-*Claims And Evidence*: Independent communication complexity and privacy analysis of FALE

Per Reviewer 2DKU's suggestion, we have included a dedicated analysis of FALE's communication and storage complexity. Specifically, in our proposed FALE method, each selected data instance is associated with two identifiers, i.e., the index of the data point and the index of its client. Therefore, the communication complexity and storage overhead of transmitting leverage scores and selected data indices are **$\mathcal{O}(n)$ and $\mathcal{O}(n_s)$**, respectively. Thus, the communication overhead from maintaining and transmitting the metadata is **minimal and unlikely to be a practical bottleneck**, especially compared to transmission of the model gradient, which is usually much larger. Note that FALE conducts one-pass data selection, so the server **does not need to store the mappings** between sample indices and client identities. Regarding privacy, please refer to our detailed response to Reviewer JwJU in Q1.

> Q2-*Claims And Evidence*: Explicit mechanism to address heterogeneity in FL settings

We wish to clarify that FALE inherently addresses data heterogeneity by leveraging FedSVD, which securely computes global singular vectors on distributed and non-i.i.d. client data. Consequently, **the obtained global leverage scores accurately reflect global data structures** rather than being biased towards any specific local data distribution. This global perspective naturally addresses the challenge of non-i.i.d. and heterogeneous client data distributions.

> Q3-*Methods And Evaluation Criteria*: Lack of ablation studies and analyses of selected instances

We wish to explain that the paper includes the **variant FALE-local, which can be regarded as one of the ablations of FALE**.
It is a degenerate version of the proposed FALE method that only performs global data selection. To further enhance clarity, in the revision we have explicitly added more detailed ablation studies and analyses exploring individual components. We have also included visual and statistical analyses of the selected data instances to better illustrate their global informativeness.

> Q4-*Theoretical Claims*: Validity of Theorem 4.1 in the non-i.i.d. distributed setting

We wish to clarify that Theorem 4.1 remains valid under FL settings with non-i.i.d. data distributions. Theorem 4.1 depends only on the leverage score sampling derived from the globally computed singular vectors via FedSVD. Since FedSVD has been rigorously validated to accurately compute global singular vectors securely, the leverage scores obtained exactly match those in a centralized setting. We refer to our response to Q2 of Reviewer JwJU for more details and an updated version of Theorem 4.1.

> Q5-*Other Strengths And Weaknesses*: Contributions on specific challenges of FAL

We respectfully emphasize that, to the best of our knowledge, FALE is the first FAL method to explicitly propose global data selection, departing from the local query strategies common in prior work. Additionally, we provide a theoretical analysis of query complexity to give formal guarantees, which is largely absent in the existing FAL literature. **These innovations address both heterogeneity and communication challenges** and have been highlighted in the revised manuscript.

> Q6-*Questions For Authors*: Terminology clarification

In the revised manuscript, we have given clear definitions. **One-pass** refers to conducting the active querying only once, without multiple iterative rounds. **Query budget** indicates the total number of instances allowed to be selected for labeling in active learning.
**Query complexity** denotes theoretical analyses quantifying how many queried instances are necessary to achieve certain performance guarantees with high probability.

> Q7-*Questions*: Experimental setup clarification and concern about distribution shift

We clarify explicitly that there is a budget of labeling 50 data points (5 queried per client) per round for the methods that select data iteratively. FALE conducts one-pass selection on the server side. **Note that the server only handles the data selection part**; labeling always occurs locally on the client side. Regarding your second concern, active selection naturally tends to focus on the most informative instances, potentially introducing bias. However, empirical studies indicate that such bias does not necessarily degrade performance [1]. Regarding scalability and computational and communication costs, please refer to our detailed responses to Reviewer nFHH in Q4 and Reviewer gkkn in Q4.

### References

[1] Lowell, D., Lipton, Z. C., and Wallace, B. C. "Practical Obstacles to Deploying Active Learning." EMNLP-IJCNLP, 2019.
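For readers unfamiliar with the mechanism this rebuttal leans on: statistical leverage scores are the squared row norms of the left singular vectors of the data matrix, and sampling rows with probability proportional to them is the standard centralized recipe. The sketch below is illustrative only — it pools the data and uses a plain SVD, whereas FALE obtains the singular vectors securely across clients via FedSVD; the toy matrix and budget are placeholders.

```python
import numpy as np

def leverage_scores(X):
    """Leverage score of each row: squared row norm of the left singular vectors."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def sample_by_leverage(X, budget, seed=None):
    """Sample `budget` distinct row indices with probability proportional to leverage."""
    rng = np.random.default_rng(seed)
    scores = leverage_scores(X)
    probs = scores / scores.sum()
    return rng.choice(len(X), size=budget, replace=False, p=probs)

# Toy pooled matrix standing in for the (virtually) aggregated client data:
# 20 ordinary rows plus three high-leverage outlier rows.
X = np.vstack([np.random.default_rng(0).normal(size=(20, 3)),
               10 * np.eye(3)])
picked = sample_by_leverage(X, budget=5, seed=0)
```

The leverage scores of a full-column-rank matrix always lie in [0, 1] and sum to the column dimension, which is why a budget of roughly d log d proportional samples suffices for the approximation guarantees cited in the rebuttal.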
UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning
Accept (poster)
Summary: This paper proposes UDora, an iterative method based on GCG. The approach first collects responses z from the target victim model, then introduces a score function to modify the original response z into an attack-desired response z* (a response that achieves the attacker's goal). It then uses z* with the GCG algorithm to optimize the attack prefix. This process iterates until the model's output no longer requires modification and directly produces the attack-desired response z*. The authors conducted experiments on different datasets.

## Update after rebuttal

Dear authors,

The following is my further response to your rebuttal. I apologize for not noticing that the official comments were not visible to you.

******

Thank you to the authors for their efforts.

## C1: Unrealistic threat models

> We assume that an adversary can use the target LLM agent like any regular user.

I agree with this assumption. **However, this differs from what UDora assumes, which presumes that agents, environments, and user tasks are all accessible to attackers. UDora can directly access the agent responses, which are directly related to the environment (e.g., the tool call contents), the user tasks, and the agent itself.** In reality, attackers can interact with the target LLM agent, but they typically cannot access the actual environment the agent is interacting with. For example, in the case of a personal assistant agent, attackers can only attempt to simulate the user's computer environment. **The attackers can guess possible tool data or common user tasks, but cannot directly access the complete system.**

## C2: Difference with GCG

Thank you for the clarification. However, I still believe the main distinction between UDora and GCG is that UDora optimizes the suffix based on surrogate responses, which appears to be your primary motivation as well.
## C4: Regarding applying UDora to Agentdojo The explanation that Agentdojo only supports API-based models seems insufficient to justify why UDora cannot be applied to it. Given that you successfully attacked GPT-4o on the WebShop dataset, it should be easy to conduct a small experiment applying UDora to Agentdojo using GPT-4o. ## Q1: Baselines Regarding the false positive rates of GPT-4o and Claude 3.7, could you provide examples demonstrating why the injected prompts optimized by UDora are more stealthy? Additionally, I would appreciate seeing results using perplexity-based detection methods, as they have proven effective according to existing literature on jailbreaking. ****** Given these concerns, I maintain my original score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: Yes. Please see the weaknesses part. Supplementary Material: Yes, all the sections. Relation To Broader Scientific Literature: This paper is related to prompt injection attacks and jailbreak attacks. Essential References Not Discussed: The prompt injection attacks and defenses are largely not discussed. Other Strengths And Weaknesses: # Strengths The paper is clearly written. # Weaknesses ## W1: Unrealistic threat model The major scenario considered in this paper is essentially indirect prompt injection. In this field, attackers can only modify the environment—for example, as the authors mention, inserting certain text on a webpage—while having no knowledge of the user's agent (including the agent system architecture and backend LLM). However, UDORA requires knowledge of the user agent's background information and needs to access and modify the responses from the user's agent. If one can directly modify the user agent's response to obtain z*, why is it necessary to use GCG for iterative optimization? Why not simply replace z with z*? 
## W2: Minimal differentiation from GCG

UDora is fundamentally similar to GCG, with the only difference being that UDora obtains the optimization target for GCG by accessing agent responses. Therefore, it is unsurprising that UDora performs slightly better than GCG. For this same reason, UDora requires gradient computation and thus cannot attack black-box models.

## W3: High computational complexity

The UDora algorithm requires iterative sampling of z followed by GCG optimization of the adversarial suffix, resulting in extremely high computational complexity.

## W4: Insufficient experimentation

The evaluation only considers two 8B models, whereas in practical applications agents typically use Claude or GPT-4, which UDora cannot attack. Additionally, mainstream prompt injection datasets such as AgentDojo [1] were not evaluated. The InjecAgent dataset used in this paper contains only one execution step per task; I am not sure whether UDora can be applied to tasks that require multiple tool calls, such as those in AgentDojo.

[1] Debenedetti, Edoardo, et al. "AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents." NeurIPS 2024.

Other Comments Or Suggestions: The spacing appears compressed, for example in the Section 3 heading. I am unsure whether this violates submission requirements.

Questions For Authors: Please refer to the weaknesses part.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the clarity of our writing and the relevance of our approach to prompt injection and jailbreak attacks. We appreciate your solid suggestions, which motivate us to further refine our work, and we are happy to address the concerns raised! > **Q1: Unrealistic threat model** Sorry for the confusion regarding the threat model. Different from GCG, for LLM agentic scenarios, we cannot simply optimize for an affirmative response to trigger a specific malicious action. Therefore, we propose leveraging the original reasoning of the LLM agent (for which we cannot directly modify it, but we can query it) based on the current adversarial string. We then craft a surrogate modified response, $z^*$, as the optimization objective—similar to the “Sure” objective in GCG. After training, we generate a response based on the optimized string to see if it indeed triggers the target malicious action. Thus, $z^*$ serves as an optimization objective similar to "Sure" in GCG, and we do not have direct access to modify the real LLM agent's response or replace $z$ with $z^*$. We hope this clarification resolves any confusion. If you have further questions, we are happy to provide additional details. > **Q2: Minimal differentiation from GCG** Thank you for raising this point about differentiating our work from GCG! GCG optimizes a fixed, affirmative prefix (e.g., "Sure"), which is static and suitable only for harmful requests. In contrast, UDora addresses the challenge that LLM agents perform extensive intermediate reasoning before executing their final action. Directly optimizing for the final action in this scenario with GCG is impractical. Instead, UDora uses a dynamic three-step optimization process: 1. **Gather the Initial Response**: We first query the initial response from the LLM. 2. 
**Constructing a Surrogate Optimization Target**: Leveraging the original response, we insert adversarial noise at optimal positions determined through a weighted interval algorithm guided by our positional scoring function (Equation 1). This modified surrogate response is hypothetical, not the actual response from the LLM. 3. **Optimizing the Adversarial String**: We then optimize the adversarial input string by maximizing our positional score rather than directly maximizing the probability as done in GCG. Note that, unlike GCG, we only leverage the gradient on the multiple noise insertions, rather than on the entire $z^*$. This adaptive and dynamic process enables UDora to exploit the LLM agent’s reasoning in a way that a static, fixed objective like GCG cannot. Furthermore, UDora also significantly outperforms GCG in empirical results. For instance, with the LLaMA-3.1 model, UDora achieves: - 99% ASR versus GCG’s 77% on the InjecAgent dataset, - 61.75% ASR versus GCG’s 15.00% on the WebShop dataset, - 97.7% ASR versus GCG’s 60.8% on the AgentHarm dataset. > **Q3: High computational complexity** We leverage KV caching for generating the $z$ in our implementation, and due to the word limit, we kindly encourage you to review the discussion about the computation cost provided in Q4 for Reviewer c28e. > **Q4: Insufficient experimentation** Thank you for this insightful suggestion! We indeed included a practical example (Fig 11) of attacking the AutoGen Email Agent from Microsoft, which employs GPT-4o as the base LLM (GPT-4 can also be applied), relying only on the returned log probabilities for the attack. Regarding models like Claude, where neither log probabilities nor tokenizers are available, the attack scenario becomes inherently more challenging and remains a direction for future exploration. 
Concerning the AgentDojo dataset: it currently only supports API-based models and lacks direct compatibility with HuggingFace models, which does not allow the attack optimization here. In addition, we have also communicated directly with the authors of AgentDojo, exploring possibilities for data extraction or integration with HuggingFace models. Unfortunately, both the authors of AgentDojo and we have agreed that certain technical difficulties (primarily related to prompt injections within pre-designed functions) have made this quite challenging. We appreciate your understanding of this limitation.

Regarding the application of UDora to tasks involving multiple tool calls, our experiments with the AgentHarm dataset indeed encompass multi-step scenarios. We observed that once UDora successfully triggers the initial malicious tool call, subsequent steps often follow naturally. Thus, UDora can still be effectively applied to multi-step tool-call scenarios, as long as the first step is triggered. Moreover, adapting UDora explicitly for optimization across multiple steps is also feasible with minor code adjustments. Exploring and refining this multi-step optimization approach will be a focus of our next step, and your suggestion is greatly appreciated!

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed rebuttal. While I appreciate the clarifications provided, I still have several concerns.

## C1: Unrealistic threat models

First, in real-world applications, it is impractical to assume the attacker can access the agents' responses. In fact, attackers would never know when and where their injected instructions would be retrieved and processed by the agents. Reviewers VRpZ and c28e also acknowledge this unrealistic assumption. A more realistic threat model would be that attackers can only modify contents retrieved by tools (e.g., websites, emails, calendar events) without visibility into how the agent processes this information or what responses it generates.
However, this is the most significant assumption that UDora makes. Without access to agent responses, attackers cannot perform the optimization loop that UDora depends on.

## C2: The difference between GCG and UDora

As stated by the authors, the only difference between UDora and GCG is that UDora optimizes the suffix based on surrogate responses. This is not a fundamental advancement. Also, it is not surprising that UDora outperforms GCG, since UDora is optimized specifically on surrogate responses that are designed to mimic the targeted attack instructions.

## C3: Computational complexity

Thanks for providing the detailed computation costs. They confirm that for each injection and each user task, the attacker must search for a specific suffix to attack the agent. This requirement makes UDora impractical for real-world applications, where attackers would need to generate customized attacks for countless potential user tasks without knowing which ones will be executed.

## C4: Regarding applying UDora to AgentDojo

I remain unconvinced about why UDora cannot be applied to AgentDojo. If there exist technical challenges preventing this application, it suggests that UDora cannot be easily deployed in real-world scenarios.

Also, I have a question regarding the defense:

## Q1: A simple baseline: applying an LLM to detect the tool-retrieved contents

Regarding defense methods, I thank the authors for conducting experiments using LLaMA 3.1 and Mistral models to detect potential prompt injections. I am curious whether more advanced models such as GPT-4o or Claude 3.7 might achieve better detection results with lower false positives. Additionally, have you explored perplexity-based detection methods?

---

Reply to Comment 1.1.1: Comment: Thank you so much for your thoughtful suggestions. We truly appreciate the opportunity to discuss and clarify our setting and contributions with you!

> **C1: Threat Model**

Thank you for the follow-up questions!
First, reviewers VRpZ and c28e raised the possibility of extending UDora to a black-box setting—where only the query (i.e., the agent’s response) is available—as a future research direction. **They specifically inquired about how our attack might be implemented with access only to the response, without access to corresponding gradients or log probabilities**. As discussed above and supported by our evaluation results, it is possible to extend our attack to these additional black-box scenarios. We are glad this point has been clarified, and we will incorporate this discussion into our revision. Additionally, we emphasize that access to the agent's response is both necessary and realistic in agent attacks, as noted by reviewers VRpZ and c28e. Specifically: 1. **Malicious Instruction Scenario:** In benchmarks such as AgentHarm, we assume the LLM agent’s user is malicious and intends to trigger harmful behavior. In such cases, it is natural and realistic for the adversary to access the agent’s responses like regular users. Under this threat model, we achieved a 97.7% attack success rate (ASR), compared to GCG’s 60.8%. 2. **Malicious Environment Scenario:** An adversary could access agent responses similarly to other users. For instance, APIs or SDKs for most LLM agents (e.g., OpenAI agent function calling) are publicly accessible. An adversary can perform routine tasks, gather responses, and subsequently optimize adversarial inputs based on these responses. Our evaluation against the real-world agent AI Email agents from AutoGen further validates the practicality of our attack. Moreover, our evaluation on black-box transferability-based attacks shown in Fig 10 (Perplexity AI) demonstrates that UDora successfully attacks realistic agent frameworks even under black-box conditions. We will add the suggested discussion to clarify our threat model in the final version. > **C2: Differences from GCG** Thanks for the question. 
As acknowledged by other reviewers, our work offers several key contributions beyond GCG: 1. **Adaptive Optimization:** We depart from GCG’s fixed-prefix optimization, which is challenging to adapt across different base LLMs and scenarios, requiring manually defined prefixes tailored to specific reasoning styles. 2. **Leveraging LLM’s Own Reasoning:** We propose a novel algorithm that exploits the LLM agent’s reasoning process to hijack itself. However, defining an appropriate optimization objective based on this reasoning presents a significant challenge, which we address comprehensively below. 3. **Optimal Multi-Position Noise Insertion:** To overcome this challenge, we introduce a systematic method for inserting noise at multiple positions within the reasoning process. Our algorithm optimally places $k$ instances of noise into the agent’s reasoning steps to create a robust surrogate optimization objective. Our experiments also demonstrate that both the number and the specific positions of inserted noises significantly affect attack effectiveness—contrary to the common assumption that a single targeted noise is sufficient. 4. **Targeted Gradient Utilization:** Unlike GCG, which uses gradients from the entire affirmative response for optimization, we exclusively utilize gradients from targeted noise and employ a positional scoring function to select optimal candidate strings. This modification significantly enhances the efficacy of adversarial attacks on LLM agents. 5. **Superior Performance:** We achieve considerably higher ASRs compared to GCG, surpassing it by at least 30% on average. All these contributions underscore UDora's substantial advancements over GCG. > **C3: computation cost** We appreciate the reviewer's acknowledgment of our detailed presentation on computational costs! For other experimental concerns, please refer to our response in C1. 
> **C4: Agentdojo** After consulting with AgentDojo's authors, the issue lies not in using UDora with AgentDojo, but in a compatibility limitation: AgentDojo currently only supports API-based models, not Huggingface models. This reflects an incompatibility between AgentDojo and Huggingface, not a limitation of UDora. Additionally, we have successfully applied UDora to three widely-adopted datasets—InjecAgent, WebShop, and AgentHarm—proving its effective real-world deployment. > **Q1: Baseline** Thanks for your insightful suggestion! Here are the results for GPT-4o and Claude 3.7 on the WebShop: true positive rates are 76.5% and 94.1%, respectively, but the false positive rates are still high at 61.5% and 80.8%. It is as challenging to detect hallucinations in an LLM agent. Perplexity-based methods will work, and our next step will be to refine our focus on more semantic strings. Any further suggestions would be appreciated!
Summary: The paper presents UDora, a unified red teaming framework designed to attack LLM agents by dynamically leveraging their reasoning processes. The core idea involves inserting adversarial perturbations into the agent's reasoning traces to steer it toward malicious actions. The framework operates in three steps: gathering initial responses, identifying optimal noise insertion positions, and iteratively optimizing adversarial strings. Experiments on three datasets (InjecAgent, WebShop, AgentHarm) demonstrate high attack success rates (ASR) across different scenarios (malicious environments/instructions) and LLMs (Llama 3.1, Ministral). UDora outperforms baselines like GCG and prompt injection, achieving up to 99% ASR. A real-world attack on an AutoGen email agent further validates its practicality. Claims And Evidence: The claims are well-supported by empirical evidence: Effectiveness: Tables 1–3 show UDora’s superiority over baselines. Generalization: Success across diverse scenarios (malicious environments/instructions) and models (open-source/closed-source) is demonstrated. Real-World Applicability: The AutoGen email agent case study (Figure 11) provides concrete evidence of practical impact. However, the theoretical justification for the positional scoring function (Equation 1) is underdeveloped. While empirically effective, its design choices (e.g., averaging token probabilities) lack formal analysis. Methods And Evaluation Criteria: Methods: UDora’s dynamic optimization strategy is novel, particularly its use of sequential/joint noise insertion to exploit reasoning paths. The integration of token-level probability distributions enhances adaptability. Evaluation: ASR is a clear metric, but additional metrics (e.g., robustness to defenses, transferability) would strengthen the analysis. Theoretical Claims: The paper lacks theoretical guarantees.
While the attack algorithm is empirically validated, formal analysis of how noise insertion impacts reasoning fidelity is missing.

Experimental Designs Or Analyses: Strengths: extensive experiments cover multiple models, datasets, and scenarios; ablation studies (Tables 4–5) explore optimization modes and noise locations. Weaknesses: limited discussion of computational costs (e.g., iteration steps, token sampling overhead).

Supplementary Material: Appendices include optimization process examples (Figures 4–9) and real-world attack logs (Figures 10–11), which clarify the methodology. However, code and reproducibility details are absent.

Relation To Broader Scientific Literature: The work builds on prior adversarial attacks (GCG, AutoPrompt) and LLM agent benchmarks (WebShop, AgentHarm). It extends red teaming to dynamic reasoning exploitation, addressing gaps in fixed-prefix optimization.

Essential References Not Discussed: Defensive methods: works like adversarial training (Madry et al., 2018) or detection mechanisms (Jones et al., 2023) are not discussed. Multi-agent reasoning: techniques from cooperative AI (e.g., debate frameworks) could contextualize UDora's adversarial focus.

Other Strengths And Weaknesses: Strengths: novel framework addressing underexplored LLM agent vulnerabilities; high practical impact with real-world case studies. Weaknesses: the paper lacks theoretical guarantees; limited exploration of ethical implications (e.g., misuse risks); overemphasis on ASR without analyzing failure modes or attack detectability.

Other Comments Or Suggestions: Reproducibility: while the appendices provide examples, releasing code or pseudocode for the optimization process (Algorithm 1) would enhance reproducibility. Details on hyperparameter tuning are sparse.
Ethical Considerations: The paper briefly acknowledges risks in the Impact Statement but lacks actionable mitigation strategies (e.g., controlled release of adversarial strings, collaboration with red-teaming communities). Including a discussion of potential defenses (e.g., adversarial detection, reasoning path validation) would balance the focus on attack efficacy.

Suggestions for Improvement:
1. Defense Discussion: Expand Section 7 to include preliminary experiments on defending against UDora-style attacks.
2. Code Release: Provide a minimal implementation or pseudocode for key components (e.g., positional scoring).
3. Failure Analysis: Include examples of unsuccessful attacks to identify UDora's limitations (e.g., cases where reasoning paths resist perturbation).

Questions For Authors:
1. Theoretical Basis: Can you formalize the conditions under which UDora's noise insertion guarantees successful attacks?
2. Defense Evaluation: How does UDora perform against LLM agents with adversarial training or detection mechanisms?
3. Ethical Safeguards: What measures are proposed to prevent misuse of UDora?

Ethical Review Flag: Flag this paper for an ethics review.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for recognizing UDora's innovative approach in utilizing reasoning steps and for highlighting the robustness of our empirical results across diverse benchmarks, particularly in the real-world AutoGen email agent attack scenario. We deeply appreciate your thorough and thoughtful review, which has significantly strengthened the clarity and impact of our contributions!

> **Q1: While the positional scoring function is empirically effective, its design choices lack formal analysis** Thank you for highlighting this point; we will include a more formal analysis to clarify our design choices in the final revision. Briefly, our positional scoring function (Equation 1) is guided by two primary principles: (1) we prioritize positions where some tokens already match the target; (2) if multiple positions have the same number of matched tokens, we prefer the one with the highest average token probability rather than the product, as the product would undesirably bias selections based on token length. Equation 1 precisely reflects these principles. We will provide additional intuition and a formal rationale for this approach in our final version.

> **Q2: Limited discussion of computational costs** Thank you for raising this important point! Due to the word limit, we kindly encourage you to review the statistics provided in Q4 for Reviewer c28e.

> **Q3: Essential References Not Discussed** Thank you for suggesting these essential references for contextualizing UDora's adversarial focus! We have added all of these in our current version.

> **Q4: Code Release: Provide a minimal implementation or pseudocode for key components (e.g., positional scoring).** Thank you for this insightful suggestion. We will release our code upon acceptance and also include pseudocode for the positional scoring function in the appendix of our final version.
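The two selection principles described in the Q1 response can be made concrete with a short sketch. This is our illustrative reconstruction, not the authors' pseudocode; the `Candidate` fields are hypothetical stand-ins for quantities computed from the model's token probabilities.

```python
# Illustrative sketch of the positional scoring rule: rank candidate insertion
# positions first by the number of target tokens already matched, then break
# ties by the *average* (not product) probability of the target tokens, which
# avoids biasing the choice by target length.
from dataclasses import dataclass

@dataclass
class Candidate:
    position: int          # token index in the reasoning trace (hypothetical)
    matched: int           # target tokens already matched at this position
    token_probs: list      # model probabilities of the target tokens here

def score(c: Candidate) -> tuple:
    avg_prob = sum(c.token_probs) / len(c.token_probs)
    return (c.matched, avg_prob)   # lexicographic: matches first, then avg

def best_position(candidates) -> int:
    return max(candidates, key=score).position

cands = [
    Candidate(position=12, matched=1, token_probs=[0.2, 0.1, 0.3]),
    Candidate(position=40, matched=2, token_probs=[0.05, 0.05, 0.05]),
    Candidate(position=77, matched=2, token_probs=[0.4, 0.1, 0.1]),
]
print(best_position(cands))  # prints 77: ties on matched=2, higher avg prob
```

Position 77 wins over 40 despite both matching two tokens, because its average token probability is higher, mirroring principle (2).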
> **Q5: Failure Analysis: Include examples of unsuccessful attacks to identify UDora’s limitations** Thank you for the valuable suggestion! While our current analysis primarily focuses on successful attacks, we agree that examining unsuccessful cases is equally important for understanding UDora’s limitations. In line with our current detailed successful-case analyses (Fig 5–9), we will include representative failure examples from each dataset in our final version. > **Q6: Theoretical Basis: Can you formalize the conditions under which UDora’s noise insertion guarantees successful attacks?** Thank you for this insightful question! Empirically, we've observed that attacks typically succeed when the overall positional score across all noise insertion locations exceeds a threshold (e.g., greater than 3). Intuitively, this corresponds to cases where a specific target is mentioned multiple times during the LLM agent's reasoning, significantly increasing the likelihood that the agent adopts the associated malicious action. We hypothesize that this phenomenon arises because such repetitive mentions rarely occur in the LLM agent’s training data without the agent subsequently performing the mentioned action. > **Q7: Defense Evaluation** Thank you for the insightful question! Regarding adversarial training, it could mitigate UDora-style attacks, but it typically involves a trade-off, potentially reducing the overall utility or generalization capability of the LLM agent. For detection mechanisms, following your suggestion and reviewer c28e’s recommendation, we will expand Section 7 to include preliminary experiments on defense mechanisms against UDora-style attacks. Specifically, we plan to explore reasoning validation techniques, such as detecting hallucinated or erroneous steps introduced by UDora to achieve the target malicious action. 
For instance, we can prompt the LLM to self-assess the consistency between its reasoning steps and the original instruction or the observation, thereby evaluating its ability to recognize and mitigate UDora-style adversarial manipulations. We have included some initial results in our response to Q2 under Reviewer c28e. Due to the constraints of the word limit, we kindly invite you to examine the details there.

> **Q9: Ethical Considerations and Safeguards** Thank you for raising this critical concern. We will carefully control the release of adversarial strings derived from real-world examples, such as those involving AutoGen demonstrated in our paper, to prevent potential misuse. Additionally, we will discuss potential mitigation strategies, including reasoning validation (as previously mentioned) and methods like obscuring or summarizing intermediate reasoning steps presented to users (as OpenAI o1 does) during interactions with LLM agents. Furthermore, we agree that collaboration with red-teaming communities and implementing adversarial detection mechanisms would significantly enhance security. We will include a new paragraph addressing these ethical considerations and proposed safeguards in our final revision.
Summary: This paper introduces UDora, a novel red teaming framework designed to systematically attack LLM agents by leveraging their own reasoning processes. Unlike traditional adversarial attacks that rely on static prompt injections or optimized adversarial suffixes, UDora dynamically identifies and perturbs reasoning traces within LLM agents to optimize adversarial strings. The paper formulates the attack strategy as a multi-step iterative optimization process, showing superior attack success rates across multiple datasets (InjecAgent, WebShop, and AgentHarm). It further demonstrates real-world security implications by successfully misleading deployed AI agents like email assistants.

Claims And Evidence: The claims in the paper are well-supported by experimental results. The evaluation on InjecAgent, WebShop, and AgentHarm benchmarks provides strong evidence for the framework's effectiveness. However, while the attack success rates are high, it would be useful to include more ablations analyzing the robustness of different attack configurations (e.g., varying the number of noise insertion points in different attack settings).

Methods And Evaluation Criteria: The methodology is well-motivated, and the proposed noise insertion and adversarial string optimization strategies are well-explained. However, additional comparisons with alternative red teaming approaches (e.g., self-reflection-based attacks) would provide a broader context.

Theoretical Claims: The paper does not rely on formal theoretical claims, but the optimization-based attack procedure is sound. If possible, providing a theoretical justification for why adversarial noise placement at specific reasoning points is more effective than simple prompt injections would be valuable.

Experimental Designs Or Analyses: The experiments are generally well-designed, covering multiple attack scenarios. The dataset selection is appropriate, and the baselines are reasonable.
However, it would be beneficial to explore how different LLM architectures (beyond Llama and Mistral) respond to UDora.

Supplementary Material: I reviewed some experiment details and some specific cases of attacks on AutoGen.

Relation To Broader Scientific Literature: The paper builds upon adversarial attack techniques in NLP and extends them into the LLM agent domain. While it cites relevant prior work, additional discussion of adversarial reasoning attacks (e.g., those targeting CoT reasoning or multi-step planning in LLMs) could strengthen the contextualization.

Essential References Not Discussed: There are no other essential references not discussed.

Other Strengths And Weaknesses: The paper introduces a new framework that leverages LLM agents' own reasoning processes for red-teaming, which is a creative combination of adversarial attack techniques and reasoning-based optimization strategies. However, a notable weakness is the lack of comparative analysis with other red-teaming frameworks specifically designed for LLM agents, which limits the contextual understanding of UDora's performance advantages. Additionally, the proposed method relies on access to token-level probability distributions, which may not always be available in black-box settings. The feasibility of attacking models that do not expose these probabilities remains unclear. The iterative attack strategy, while effective, could introduce significant computational overhead. While the authors report the average number of optimization iterations in Figure 3, it would be useful to quantify the trade-off between attack efficiency and computational cost more explicitly, for example by analyzing the exact average running time.

Other Comments Or Suggestions: I have no other comments or suggestions.

Questions For Authors:
1. How would UDora perform in a black-box setting where token probabilities are not available?
2. Have you considered countermeasures such as reasoning validation to detect adversarial manipulations?
3. How does UDora compare against self-reflective adversarial attacks that use an LLM's own safety filters to generate bypassing strategies?
4. What are the computational requirements for running UDora at scale? Could it be optimized for faster attack execution?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are deeply grateful to the reviewer for their thorough and insightful feedback. Your contribution of time and expertise has significantly enriched the development of our research!

> **Q1: Black-box setting** Thank you for the insightful question! In a pure black-box environment—where only the final response is observable—the attack indeed becomes more challenging. Still, UDora could be extended in a few ways: 1. Surrogate Model Training: Train a surrogate LLM to mimic the target LLM agent's behavior based on observed responses, then apply UDora on the surrogate LLM. The resulting adversarial prompt may transfer to the real target model. 2. Cross-Model Transferability: Leverage the fact that adversarial prompts sometimes transfer between models, as illustrated by our Perplexity AI example using open-source models (e.g., LLaMA 3.1 or even the currently widely adopted DeepSeek-R1). Extending UDora to a purely query-based, black-box attack is a natural and important next step, and we are actively exploring methods in that direction. We welcome any additional ideas you might have on this front!

> **Q2: Reasoning validation to detect adversarial manipulations** Thank you for raising the idea of using reasoning validation as a countermeasure! Our experiments (Figs. 5–9) show that UDora's manipulations distort the agent's reasoning—effectively causing "hallucinated/wrong" reasoning paths that steer the agent toward harmful actions. For instance, in the WebShop scenario, the agent is tricked into believing an incorrect item meets all specified requirements (e.g., price, attributes), resulting in a malicious action. Therefore, we conducted the following reasoning-validation experiment: we used a separate LLM to independently determine whether the reasoning from the LLM agent aligns with the instructions or the observation from the environment.
Specifically, we found the detection rates to be 97.0% and 75.0% on the WebShop dataset using the LLaMA3.1 and Ministral models, respectively. However, we also observed a high false positive rate: 69.2% and 89.2% of the cases with normal reasoning were incorrectly flagged as incorrect reasoning, indicating that reasoning validation remains challenging in practice, roughly as difficult as detecting hallucinations in LLM agents.

> **Q3: Compare against self-reflective adversarial attacks** Self-reflective adversarial attacks focus primarily on bypassing an LLM's safety filters to extract harmful responses. By contrast, UDora targets a malicious tool or functionality rather than just overcoming safety barriers. 1. **Malicious Environment:** In contexts like WebShop or InjecAgent, UDora aims to manipulate the LLM into taking a malicious action (e.g., buying something undesirable) without producing obviously harmful or unethical text. Because the output may look innocuous, standard safety filters that rely on detecting overtly harmful or unethical content often fail to catch this subtle manipulation. 2. **Malicious Instruction:** Here, self-reflective attacks stop at bypassing the filter, but UDora must also ensure the malicious function is genuinely activated. As shown in Fig. 8 (4), even if the request bypasses safety checks, it might not trigger the intended malicious action, requiring further optimization of UDora. Hence, UDora is more specialized than self-reflective methods: it not only defeats safety filters but also reliably orchestrates a specific malicious action.

> **Q4: Computational requirements and faster attack execution** In UDora, the computational overhead at each iteration primarily stems from two parts: 1. **Query time:** Generating the model's reasoning process given the current adversarial string. 2. **Optimization time:** Finding the optimal position, computing gradients, and updating the adversarial string.
When running UDora with LLaMA 3.1: - On the AgentHarm dataset, the average times per iteration for these two parts are **3.48s** (query) and **11.22s** (optimization). - On the InjecAgent dataset, they are **5.57s** (query) and **5.97s** (optimization). With the Ministral model: - On AgentHarm, the corresponding times per iteration are **2.49s** (query) and **6.04s** (optimization). - On InjecAgent, they are **6.86s** (query) and **8.18s** (optimization). These results show that, similar to GCG, **the main bottleneck is in the optimization phase**. However, UDora typically only requires around 20 iterations for a successful attack (see Fig. 3). UDora can also be optimized for faster attack execution: 1. **Less frequent updates:** Instead of updating the reasoning process after every iteration, update it at a specified interval (e.g., every 10 steps). 2. **Partial reasoning generation:** Generating only the first 100 tokens of reasoning during query can still yield high attack success rates. For instance, on AgentHarm with LLaMA 3.1, generating just the first 100 tokens still achieves a 97% ASR.
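As a quick sanity check on these figures, a back-of-the-envelope estimate of end-to-end attack time (assuming the roughly 20 iterations per successful attack cited from Fig. 3; actual counts vary per example):

```python
# Rough per-attack wall-clock estimate from the per-iteration timings above.
# Assumes ~20 iterations per successful attack (Fig. 3); real counts vary.
timings = {  # (query_s, optimization_s) per iteration
    ("LLaMA 3.1", "AgentHarm"):  (3.48, 11.22),
    ("LLaMA 3.1", "InjecAgent"): (5.57, 5.97),
    ("Ministral", "AgentHarm"):  (2.49, 6.04),
    ("Ministral", "InjecAgent"): (6.86, 8.18),
}
ITERS = 20
for (model, dataset), (q, o) in timings.items():
    total_min = ITERS * (q + o) / 60
    print(f"{model:10s} {dataset:10s} ~{total_min:.1f} min per attack")
```

Under this assumption every configuration lands in the range of roughly 3–5 minutes per attacked example, which supports the rebuttal's claim that the cost is manageable.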
Summary: This paper introduces UDora, a unified framework for testing security vulnerabilities in LLM agents. It focuses on two scenarios: malicious environments and malicious instructions. UDora works by analyzing an agent's reasoning process, identifying optimal positions to insert misleading information, and optimizing attacking text through multiple iterations. UDora significantly outperformed existing methods across three datasets.

Claims And Evidence: The claims made in this submission are supported by the experiments. This paper conducted experiments on three datasets (InjecAgent, WebShop, and AgentHarm) with different LLMs, demonstrating that UDora's attack success rate (ASR) is significantly higher than GCG and Prompt Injection attack methods.

Methods And Evaluation Criteria: The methods and the evaluation criteria (ASR) make sense.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental design is in general sound.

Supplementary Material: No.

Relation To Broader Scientific Literature: UDora extends prior research on prompt injection by dynamically leveraging LLM agents' reasoning.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
- UDora assumes the attacker can access either the entire model or its token probability distribution during reasoning, which is a strong assumption for real-world deployment.
- UDora lacks discussions on defenses.

Other Comments Or Suggestions: N/A

Questions For Authors: Would the ASB benchmark provide a more comprehensive evaluation framework for assessing the effectiveness of UDora? *Zhang, Hanrong, et al. "Agent security bench (ASB): Formalizing and benchmarking attacks and defenses in LLM-based agents." arXiv preprint arXiv:2410.02644 (2024)*

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks to the reviewer for the thoughtful and detailed feedback. The expertise and time invested in this work have been instrumental in enhancing its quality!

> **Q1: UDora assumes the attacker can access either the entire model or its token probability distribution during reasoning, which is a strong assumption for real-world deployment.** We appreciate your insightful question regarding our assumption that attackers can access either the entire model or at least the token probability distribution! As discussed in the paper, this requirement is indeed a limitation in certain real-world settings—particularly for closed-source models (e.g., Claude) that do not expose log probabilities or even a tokenizer. However, we note that some model APIs (including those of GPT-series models) do provide token-level probabilities, which enables our proposed attack in practice. For instance, in our experiments, the GPT-4o-based AutoGen email agent made these probabilities accessible, facilitating UDora's successful attack strategies. Meanwhile, we observe a broader trend of rapidly improving open-source LLMs (e.g., DeepSeek-R1) being integrated into real-world applications, even in partnership with major industry players (e.g., [Perplexity AI](https://www.perplexity.ai/page/deepseek-s-new-open-source-ai-YwAwjp_IQKiAJ2l1qFhN9g?login-source=oneTapPage&login-new=false), [NVIDIA](https://build.nvidia.com/deepseek-ai/deepseek-r1), [Microsoft](https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/), etc.) to reduce costs. As this trend continues, full or partial access to model internals (including token probabilities) may become increasingly common in LLM agentic systems. In such cases, UDora's approach remains directly applicable. Extending UDora to a purely query-based attack is a natural and important direction for our future work, and we are excited to explore this in our next step!
> **Q2: UDora lacks discussions on defenses.** Thank you for raising this important point about defenses! We will add a new paragraph in our final version to discuss how to mitigate potential misuse of UDora. One straightforward and practical strategy is for the agent provider to share only a condensed or sanitized summary of the reasoning steps—as the OpenAI o1 reasoning model does—rather than the full chain-of-thought. By limiting visibility into the underlying reasoning process, attackers lose critical information for inserting malicious noise into the reasoning steps. In our final version, we will also discuss concurrent work on guardrails for LLM agents, such as [1, 2]. Besides, following the suggestions from Reviewers c28e and vE1i, we have also explored defenses based on reasoning validation, i.e., using a separate LLM to determine whether the attacked reasoning aligns with the instructions or environment. Specifically, we found the detection rates to be 97.0% and 75.0% on the WebShop dataset using the LLaMA3.1 and Ministral models, respectively. However, we also observed a high false positive rate: 69.2% and 89.2% of the cases with normal reasoning were incorrectly flagged as incorrect reasoning, indicating that reasoning validation remains challenging in practice, roughly as difficult as detecting hallucinations in LLM agents.

[1] Xiang, Z., Zheng, L., Li, Y., Hong, J., Li, Q., Xie, H., ... & Li, B. (2024). GuardAgent: Safeguard LLM agents by a guard agent via knowledge-enabled reasoning. arXiv preprint arXiv:2406.09187.

[2] Luo, W., Dai, S., Liu, X., Banerjee, S., Sun, H., Chen, M., & Xiao, C. (2025). AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection. arXiv preprint arXiv:2502.11448.

> **Q3: Would the ASB benchmark provide a more comprehensive evaluation framework for assessing the effectiveness of UDora?** Thank you very much for suggesting this reference!
We agree that the ASB benchmark would indeed provide a comprehensive evaluation framework for UDora, and we plan to include experiments using it in our final version. Due to the limited rebuttal period, fully familiarizing ourselves with the framework's code and integrating our attack will require some additional time. In the current submission, we have evaluated UDora across three diverse datasets, including InjecAgent, whose data characteristics are similar to those of the ASB benchmark. Thus, the performance results obtained on InjecAgent can serve as a preliminary indicator of the expected performance on the ASB benchmark. If you have any further questions or suggestions, please feel free to let us know. Your feedback is greatly appreciated and will certainly help us improve our work!
SPEX: Scaling Feature Interaction Explanations for LLMs
Accept (poster)
Summary: This paper introduces SPEX, a model-agnostic interaction attribution algorithm designed to scale feature interaction explanations to large input spaces, e.g., LLMs. The key contribution of SPEX is leveraging a sparse Fourier transform with channel decoding to efficiently identify and reconstruct important feature interactions. Since the existing methods for interaction explanations do not scale beyond small input sizes (e.g., Faith-Shap, SHAP-IQ), SPEX is the only method that can scale to inputs of up to ~1000 while improving faithfulness in reconstructing LLM outputs. Experiments are conducted on three long-context datasets: Sentiment Analysis, HotpotQA, and DROP, showing that SPEX outperforms baselines.

## update after rebuttal
The authors' response to my question regarding mechanistic interpretability is still relatively vague and high-level, but I think it is reasonable given that this is a feature explanation paper. I would encourage the authors to discuss different paradigms of explanation in the paper, as it is not quite clear what differences and advantages the newly proposed method has compared to methods from a different perspective. I have raised my score to 4.

Claims And Evidence: Yes.
- The claim of SPEX being efficient and scalable to large input sizes is supported both theoretically, by the time complexity of O(sdn), and empirically, by runtime comparisons across different input sizes (Figure 5).
- There are also experiments supporting SPEX's faithfulness on the Sentiment and QA datasets and on human-labeled annotations.

Methods And Evaluation Criteria: Yes. The method is evaluated in terms of faithfulness, feature removal, Recovery@10, and some case studies.

Theoretical Claims: Yes, theoretical claims regarding the Fourier transform are discussed in App. A. I scanned through App. A; it seems to be correct.
Experimental Designs Or Analyses: Yes, the experimental design is reasonable, with comparisons against a broad set of baselines, including marginal attributions (LIME, SHAP, Banzhaf) and interaction indices (Faith-Shap, Faith-Banzhaf, Shapley-Taylor) on three datasets for different tasks.

Supplementary Material: Yes, I checked App. A for the algorithms and App. B for the experiment setting.

Relation To Broader Scientific Literature: The paper is related to the XAI literature. It builds upon marginal feature attribution methods and feature interaction attributions. It also makes connections to sparse Fourier transforms, which are relatively underexplored in XAI, making this a novel contribution.

Essential References Not Discussed: The coverage of related work is reasonable; the XAI literature is too large to cover comprehensively. More discussion of the relationship to the recent mechanistic interpretability domain would be better, given that this domain mainly focuses on LLM explanation.

Other Strengths And Weaknesses:
Strengths
- SPEX addresses the key scalability limitation of prior interaction attribution methods.
- The case studies are another strength, demonstrating real-world relevance by debugging LLM reasoning failures and VQA tasks.

Weaknesses
- Limited model diversity. The experiments focus primarily on LLaMA-3 and GPT-4o Mini. Further evaluations on other LLMs would strengthen generalizability.
- While SPEX is efficient, its robustness under noisy or adversarial perturbations remains unclear. A discussion of this perspective would be better.

Other Comments Or Suggestions: No.

Questions For Authors: In the era of LLMs, how useful are feature attribution methods, especially compared to directly asking an LLM to provide a reason or explanation for its output?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: **This rebuttal contains (anonymized) links to figures. We also built a web app to help explore SPEX: https://anon858023.github.io/spex-webapp/** Thank you for the review. We hope that with the proposed additions based on your comments, your concerns are addressed and we can convince you that this manuscript is not a borderline case.

**Discussion of Mechanistic Interpretability** We will add a discussion of mechanistic interpretability, which is a very timely topic in the space of LLMs. Specifically, much work in mechanistic interpretability focuses on discovering sparse circuits. Alternatively, this can be seen as finding interactions between attention heads (i.e., features in the context of our paper). Interesting future work can explore the application of SPEX to sparse feature discovery and other topics in mechanistic interpretability.

**Importance of feature attribution with LLMs** Our approach offers several key advantages over asking an LLM to explain itself. 1. Many transformer-based models are encoder-only, and these are unable to generate such explanations. Our experiments on sentiment analysis demonstrate the strong performance of SPEX on these models. Furthermore, generative models such as protein language models are also unable to generate such explanations. 2. Recent work has shown that LLMs often cannot explain themselves (https://arxiv.org/pdf/2405.04382), and self-explanations do not accurately reflect the underlying reasoning process of the model. On the other hand, our approach is grounded in the model output and can be systematically verified via our faithfulness and top-$r$ removal experiments, while LLM explanations cannot.

**More models** We have considered 4 different models in this work: DistilBERT, Llama-3.2, GPT-4o mini and LLaVA-NeXT-Mistral-7B.
For additional diversity, we repeat the faithfulness experiments with two additional models: Qwen2-7B-Instruct and Mistral-7B on the DROP dataset, and provide the results in the table below. Due to limited time, we only re-ran SPEX and the Faith-Banzhaf methods, and consider $5$ examples for each regime of $n$. We chose Faith-Banzhaf methods since they displayed the most competitive performance with SPEX in our previous experiments. | Model | n | SPEX | Banzhaf | Faith-Banzhaf 2nd | |----------------------|---------|------:|--------:|----------------:| | *Qwen2-7B-Instruct* | 32–63 | **0.850** | 0.483 | 0.727 | | | 64–127 | **0.559** | 0.441 | — | | | 128–255 | **0.670** | 0.513 | — | | | 256–511 | **0.549** | 0.445 | — | | *Mistral-7B* | 32–63 | **0.931** | 0.864 | 0.904 | | | 64–127 | **0.822** | 0.691 | — | | | 128–255 | **0.834** | 0.574 | — | | | 256–511 | **0.853** | 0.542 | — | *Table*: Faithfulness on DROP dataset across input lengths (*n*) for Mistral-7B and Qwen2-7B-Instruct. We continue to achieve significantly higher faithfulness across models. We will include these results, and run them for the other interpretability methods as well as for HotpotQA in the camera-ready version. **Noise Robustness** We find that the use of the BCH code makes SPEX impressively robust. We've conducted an experiment where we repeat our Sentiment experiment with additional observation noise. Our results demonstrate the robustness of SPEX to noise, as it mostly retains its $R^2$ performance even under very high noise levels. (https://imgur.com/a/qAf0y2J). These robustness results will be included in the camera-ready version of the paper. --- Rebuttal Comment 1.1: Comment: Thank the authors for their response. To follow up, I would like to find out the authors' view on mechanistic interpretability vs. feature attribution. Can the authors extend the discussion about mechanistic interpretability a little bit? For example, are they fundamentally the same and unified? 
Or are they more distinct? Can they complement each other and be combined together to achieve more coherent and convincing explanations of language models? If so, any more concrete ideas? --- Reply to Comment 1.1.1: Comment: Mechanistic Interpretability is an exciting and active research direction, with many different relevant sub-areas. There has also been a recent effort to unify approaches between mechanistic interpretability and feature attribution. [i] provides a good summary of these efforts. In particular, SPEX belongs to the class of *model agnostic* approaches. These model agnostic approaches don't require the *inputs* to correspond to the natural inputs of the model. When we have access to model internals, we could apply SPEX to identify important model components, where the *inputs* to the value function correspond to different model components and *masking patterns* represent the ablation of subsets of model components. Similar approaches (using marginal attribution methods) were used in [ii] and [iii]. **Here are some of our concrete thoughts on how we might combine and connect some ideas (below) from SPEX to Mechanistic Interpretability:** 1. *Structural observation of a sparse, low-degree structure in feature space.* 2. *The mathematical construction SPEX uses to exploit this structure.* **Component/Neuron Attribution/Localization:** This is perhaps the most straightforward connection to (**2**). If sufficient sparsity exists in the space of neurons or components, we can potentially use algebraic tools to carefully perturb these components and find a sparse set of them most relevant to a specific task. **Sparse Semantic Linear Representation of Activations (SAEs, Transcoders, Crosscoders, etc.):** This branch of the literature is also primarily concerned with sparsity, not in feature interaction space, i.e., (**1**), but in **activation space**. We think the sparsity in feature interaction space and activation space can be *complementary*.
For example, features with strong interactions according to SPEX may result in activations with correlated encodings. One might even think of observing a matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$ where $d$ is activation dimension and $n$ is the context length, and jointly applying dictionary learning ideas (like SAEs) and Fourier transforms to exploit the additional structure. There are a whole host of interesting things you can do by combining these notions of sparsity: - More sparsity $\implies$ more structure $\implies$ potential for greater efficiency in training things like encoders. - Joint analysis of sparsity. If we can jointly grasp semantic representations and interactions in feature space, we might be able to understand not just the presence of interactions but also, which semantics are involved in these interactions. *We think this is just scratching the surface of potential applications, and hope that the publication of this work can also have impact on the literature of Mechanistic Interpretability.* ***We thank you again for your time. We hope you will consider this discussion and our thorough answers to your other questions in your final review score.*** **References:** [i] Zhang, Shichang, et al. "Building Bridges, Not Walls--Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution." [ii] Ghorbani, Amirata, et al. "Neuron shapley: Discovering the responsible neurons." [iii] Shah, Harshay, et al. "Decomposing and editing predictions by modeling model computation."
Summary: This paper proposes a new algorithm to efficiently compute the sparse Fourier transform and identify salient interactions by leveraging the underlying sparse structure of interactions. The proposed methods outperform previous attribution methods and interaction indices in the faithfulness measure while costing much less computation time.

Claims And Evidence: The claims are supported by corresponding evidence.

Methods And Evaluation Criteria: It is not clear how the method in this paper differs from that of Kang et al., 2024, which is a main reference in this paper. Some techniques seem very similar. I suggest the authors clarify what is inherited from Kang et al., 2024, and what the new contributions are, in the Related Work or Algorithm section.

1. The notion of *aliasing* and *subsampling* also appears in Kang et al., 2024. This seems to be a core technique to leverage the sparse structure of interactions and boost sample efficiency.
2. The philosophy of the **Designing shifts** part is similar to that of the **singleton detection** algorithm in Kang et al., 2024.
3. The **message passing** algorithm and the use of the bipartite graph are similar to the **graph peeling** algorithm in Kang et al., 2024.

Moreover, although this paper shares a similar framework with Kang et al., 2024, it is able to deal with much longer inputs (up to $n=1000$ input variables) than Kang et al., 2024 (only up to $n=20$ input variables). It is not clear which part contributes most to this improvement. Is it the shift design? Or the message passing algorithm?

Regarding the evaluation criteria, the *faithfulness* metric defined in this paper is slightly different from that in Kang et al., 2024. Specifically, this paper subtracts the mean of value function outputs across all possible masks in the denominator. I would suggest the authors explain the intuition behind this change.

Kang et al. Learning to understand: Identifying interactions via the Möbius transform.
arXiv preprint arXiv:2401.13371, 2024.

Theoretical Claims: The theoretical claim about the sample complexity of the proposed algorithm is not carefully checked.

Experimental Designs Or Analyses:
1. There is no validation or ablation study on how the hyperparameters are set (e.g., $C=3$, $t=5$, $b=8$). I suggest the authors clarify the reason for choosing the specific set of hyperparameters or conduct extra experiments to show the effects of different hyperparameters.
2. In Figure 5, as the number of input variables $n$ increases, the SPEX method exhibits decreasing faithfulness but almost identical compute time. Is it possible to obtain higher faithfulness with a slight degradation in compute efficiency (e.g., by tuning some of the hyperparameters in SPEX)?
3. The faithfulness is measured on 10000 randomly sampled masks. However, the number 10000 is still small compared to the total number of possible masks $2^n$, which can be huge when $n>20$. Given the sparsity structure of interactions, it is possible that none of the sampled masks cover significant interactions. As a result, the output of value functions on these masks will be close to zero, and the evaluation metric will not make sense. I suggest the authors provide a guarantee that such a scenario will not occur, or clarify that such a scenario is rare in real experiments.

Supplementary Material: I have checked Appendix A.1 and experimental details in Appendix B.

Relation To Broader Scientific Literature: This paper is related to the broader literature of explainable AI, especially feature attribution and interaction-based explanation methods.

Essential References Not Discussed: No essential references not discussed.

Other Strengths And Weaknesses: Strengths:
1. The paper focuses on an important problem in deploying interaction-based explanations in LLMs, i.e., their scalability to a massive number of input tokens.
Other Comments Or Suggestions: In Line 133, the paper writes “we replace with the [MASK] token.” However, in the Appendix, it shows that the [UNK] token is used. Which one is true?

Questions For Authors:
1. Among the interactions computed by the SPEX method, how many of them are significant? How many of them are positive and how many are negative? What is the proportion of positive-negative cancellation? It would help if the authors could provide such statistics under different values of $n$.
2. This paper focuses on interactions derived through the Fourier transform (I will term it *Fourier interaction* for simplicity), which is different from widely-used interaction metrics such as the Möbius transform and the Faith-Shap interaction index. I wonder how we can interpret the intuitive meaning of a specific Fourier interaction w.r.t. the set $S$? According to Kang et al., 2024, the Möbius transform $I^M(S)$ measures the additional effect on the network output due to the formation of a coalition $S$, and this effect cannot be achieved by any subset of the coalition $S$. Does the Fourier interaction have similar intuitive meanings?

Also, I would like the authors to respond to my comments on **Methods And Evaluation Criteria** and **Experimental Designs Or Analyses**.

---

Post rebuttal: The authors addressed my concerns. I have raised my score accordingly.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: **This rebuttal contains (anonymized) links to figures. Web app to explore SPEX: https://anon858023.github.io/spex-webapp/** Thank you for the thorough and helpful review. We hope our proposed additions and clarifications convince you that this paper is not a borderline case, but rather an impactful and important contribution. **Kang et al.** This is the main theoretical inspiration behind this work, showing that sparse transforms and signal processing ideas can be used to compute feature interactions. As the reviewer notes, however, there is a large gap between the practical performance of SPEX and the Sparse Möbius Transform (SMT). We will add a paragraph specifically to highlight the significant effort that enabled this. 1. SPEX uses the Fourier transform, which is orthonormal. This improves robustness by avoiding noise amplification of SMT in the transform domain. 2. The sampling procedure uses different code structures (1) random linear codes and (2) BCH codes, both contributing to robustness. 3. The BCH code is used as a source-channel code, leveraging samples to both compressively sense each $\mathbf{k}$ and build robustness. 4. We add a soft decoder that exploits the real-valued nature of logits, unlike SMT, which quantizes output ratios. Switching between soft and hard decoding allows trading compute for superior performance (see KaFj rebuttal). 5. The SPEX message passing procedure enables superior detection of interaction effects, as we can decode multitons (cases where non-zero coefficients collide) if there is enough difference in interaction magnitude. SMT does not decode multitons. In exchange for superior practical performance, SPEX has a more involved implementation and lacks SMT’s clean theoretical results. **Faithfulness** Kang et al. defines $f$ as zero mean, so our evaluation is identical. 
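To make the zero-mean convention concrete, here is a minimal sketch of an $R^2$-style faithfulness score whose denominator is centered by the mean of the value-function outputs; `faithfulness_r2` and its arguments are our hypothetical names, not the authors' exact implementation.

```python
import numpy as np

def faithfulness_r2(f_true, f_hat):
    # R^2-style faithfulness over a set of sampled masks:
    # 1 - ||f - f_hat||^2 / ||f - mean(f)||^2.
    # Subtracting the mean in the denominator matches evaluating a zero-mean f.
    f_true = np.asarray(f_true, dtype=float)
    f_hat = np.asarray(f_hat, dtype=float)
    resid = np.sum((f_true - f_hat) ** 2)
    total = np.sum((f_true - f_true.mean()) ** 2)
    return 1.0 - resid / total
```

Under this definition, a perfect surrogate scores 1, while a surrogate that only predicts the constant mean scores 0.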
### **Experimental Design**

**Hyperparameters** All hyper-parameters $C$, $t$ and $b$ control the sample complexity (i.e., the number of sampled masks). As discussed in Section 5.1, the number of training masks is $2^bCt\log_2(n)$. Faithfulness is monotonic in all three parameters. Since $b$ has the largest effect, we measured faithfulness for $b\in \\{4,6,8 \\}$ for all three datasets in Fig. 9. We outperform marginal approaches and are competitive with enumerative interaction indices. We perform additional hyper-parameter sweeps over $C$, $b$, and $t$ for the _sentiment analysis_ task over all possible regimes of $n$. Results in https://imgur.com/G4k4Q91 show that $C=3$ and $t=5$ achieve a favorable trade-off between faithfulness and sample complexity. We will include these experiments in the camera-ready version.

**"Missed Interactions"** This question hits at a deep point! Every Fourier interaction is present in _every test mask_ (see eq. 1), and influences the output in proportion to its magnitude. The direction (+/-) of influence changes depending on the mask. This is _not true_ of the Möbius transform, for which interactions may or may not be involved in the output depending on the mask. This property of the Fourier transform (deeply connected to orthonormality) is the reason why this approach works so well. Our revised paper will reflect this discussion. Additional experimental evidence supporting this:

1. **Variance of test responses** High variance in model outputs is observed across all datasets. For instance, in https://imgur.com/a/pagr97R, we show test response distributions for varying-length examples from _Sentiment_. Moreover, the test response distribution demonstrates clear bias, further indicating that variance in test responses is not an artifact of random noise.
2.
**Difference in performance across methods** If this hypothesis were true, all feature attribution methods would have the same faithfulness since they would be evaluated on test samples with $0$ variance. 3. **Removal Results** Our top-$r$ removal results indicate we learn meaningful information. **[MASK] vs [UNK]** Thank you, we will update this. [UNK] is used. **SPEX Interaction Statistics** We've conducted a series of experiments to evaluate this. See figure caption for details: 1. Significant interactions: Across datasets and input sizes, _most of the faithfulness is achieved through a small number of interactions_ (https://imgur.com/a/8sKgFrO). 2. Interaction signs: We explore the signs of the interactions for different sized inputs from _sentiment analysis_.(https://imgur.com/a/cyON8I2) **Fourier Intuition** Table 1 provides equations converting Fourier coefficients to Möbius transform, Shapley indices, and Banzhaf values. Fourier coefficients themselves are also interpretable as they are central to influence functions which are widely used for feature and data attribution. In particular, for subset $S$, with Fourier coefficient $F(S)$, the term $F(S)^2/|S|$ corresponds to the contribution of the interaction $S$ to the influence function of $i\in S$. We will add a paragraph in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Overall speaking, the rebuttal is clear and addresses my concerns. The comparison to Kang et al. (2024) and the connection between Fourier transform and influence function are insightful and I believe they should be included in the main text, if accepted. Some further comments: 1. Could you elaborate a bit more on “noise amplification of SMT in the transform domain”? 
In my view, it works as follows: the Möbius transform is $F(S)=\sum_{T\subseteq S} (-1)^{|T|-|S|} f(T)$, so for an $m$-th order transform, there are $2^m$ compositional terms, and this leads to an approximately exponential amplification of the noise in the original output $f(T)$. Is this understanding correct?

2. The rebuttal clarifies that the paper mainly focuses on the Fourier transform rather than the Möbius transform (which is more widely used in the previous game-theoretic XAI literature). When validating the sparsity assumption, the paper refers to Kang et al. (2024) and Ren et al. (2024). However, the two papers demonstrate the sparsity of the Möbius transform (and an extension named AND-OR interactions) rather than the Fourier transform, thus making this validation unconvincing. Could you provide more evidence on the underlying sparse structure of the Fourier transform on deep neural networks?
3. (Continued from previous question) How would you comment on the sparsity of the Fourier transform and the Möbius transform? Is the sparsity of one transform typically greater than the other?
4. The paper shows the connection between the Möbius transform and the Fourier transform in Table 1. Then, given the efficient algorithm for computing the Fourier transform up to $n=1000$ input units in this paper, is it possible to scale up the computation of the Möbius transform to $n=1000$? If not, what is the main obstacle?
5. This is good work to scale up interaction-based methods, which are typically computationally costly. I believe that open-sourcing the code will lead to a more practical contribution.

---

Reply to Comment 1.1.1:
Comment: **Q1: Noise Amplification:** Your interpretation is a valid way of thinking about what we meant by noise amplification. This can be made a bit more rigorous as follows: Let's say $f(T)$ is a combination of a "true" sparse function $f'(T)$ and some noise $w(T) \sim \mathcal{N}(0,\sigma^2)$, i.i.d.
for each $T$: $$f(T) = f'(T) + w(T), \quad w(T) \sim \mathcal{N}(0,\sigma^2) \text{ i.i.d.}$$ then the Fourier transform is: $$F(S) = F'(S) + W(S), \quad W(S) \sim \mathcal{N}(0,\sigma^2) \text{ i.i.d.}$$ In contrast, the Möbius transform of $f$ denoted as $M$ has the following form: $$M(S) = M'(S) + W(S),\quad W(S)\sim \mathcal{N}(0,\sigma^22^{|S|}),\quad \mathbb{E}[W(S_1)W(S_2)] = \sigma^2 2^{|S_1\cap S_2|}$$ This is of course, just an example, but shows that the Möbius transform can introduce correlated high-variance noise if the function deviates from an exactly sparse representation, and we do observe this type of behavior in practice. Real noise, however, is probably not i.i.d. gaussian (for example, it might depend on $|T|$). We think an interesting direction for future research in this space to further improve scaling is to study what type of noise is actually exhibited and build more tailored algorithms. Note this discussion is related to the concept of *whitening filters* in signal processing (the Fourier transform is a whitening filter for gaussian noise) https://en.wikipedia.org/wiki/Whitening_transformation. **Q2/Q3: Fourier vs. Möbius Sparsity:** We agree that we should have additional discussion beyond referencing Kang et al. (2024) and Ren et al. (2024) when discussing and justifying sparsity. We will develop this discussion with both a theoretical and empirical justification. *Theoretical Relationship (Möbius sparsity $\iff$ Fourier sparsity):* One argument we can use is that evidence of Möbius Sparsity implies Fourier sparsity. The sparsity of the Möbius transform ($s_M$) and Fourier transform $s_F$ are deeply connected. For example, a Fourier coefficient $F(S)$ results in at most $2^{|S|}$ Möbius coefficients. Since we are working with models that are also *low-degree* this means $|S|<d$ for some small $d$, (and typically most important $S$ have even lower $|S|$). 
Thus, we can upper bound the number of Möbius coefficients as $s_M \leq 2^{d} \cdot s_F$. Conversely, it is also true that $s_F \leq 2^{d} \cdot s_M$. **In practice, $s_M$ and $s_F$ are even closer together than the bound suggests** since (1) most of the important coefficients have $|S| < d$ and (2) the overlap between two Fourier interactions $F(S_1)$ and $F(S_2)$ means they actually create only $2^{|S_1|} + 2^{|S_2|} - 2^{|S_1 \cap S_2|}$ coefficients when converted to Möbius. **Thus, Kang et al. (2024) and [1], which justify Möbius sparsity, strongly support the hypothesis of Fourier sparsity.**

*Empirical Evidence:* For empirical evidence of Fourier sparsity, we can refer to this figure (https://imgur.com/a/8sKgFrO), which we provided in our rebuttal and will include in our camera-ready version. Furthermore, Fig. 5 also serves as a justification, since achieving high faithfulness with SPEX shows that these deep-neural-network-based models are well represented by a sparse Fourier transform. Comparing the full Fourier vs. Möbius transforms on the first 9 movie reviews from *sentiment*, we find that the Fourier transform produces a much sharper decay in the sorted magnitudes of coefficients (https://imgur.com/a/UDkAdsf); however, this observation may be task- or model-dependent, so a more thorough comparison of the sparsity of the two transforms is another interesting future research direction.

**Using SPEX to Compute Möbius and Other Interactions** Yes. Since it is very easy to convert from the Fourier to the Möbius transform, SPEX solves the problem of computing the Möbius transform at very large scales efficiently. Furthermore, since the Möbius transform is so deeply related to many of the other interaction indices (Shapley-Taylor, Faith-Shapley, etc.), it is also very easy to compute these other interaction indices once we have learned a Fourier transform.

**Public and Easy-to-Use Code** Our code will certainly be made available upon publication.
We have put significant effort into making it easy to use for practitioners and researchers. Our goal is to push the literature on feature attribution forward to be in line with the scale of the best models available today. Recently, Llama-4-Scout was released with a 10M-token context limit, so this is a very timely problem!

***We thank you again for your time. We hope you will consider this discussion and our thorough answers to your other questions in your final review.***

[1] Ren, Qihan, et al. "Where we have arrived in proving the emergence of sparse symbolic concepts in AI models." (Our understanding is that this work's proof is about Möbius sparsity, and not the AND-OR extension, which is studied in other works by overlapping authors.)
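To make the Fourier-to-Möbius conversion discussed in this reply concrete, here is a small brute-force sketch under one common convention (masks $x_i \in \{0,1\}$, parity $\chi_S(x) = \prod_{i \in S}(1 - 2x_i)$); the helper name `fourier_to_mobius` is ours, not from the paper. Each Fourier coefficient $F(S)$ spreads over the $2^{|S|}$ subsets $T \subseteq S$ via $M(T) = (-2)^{|T|}\sum_{S \supseteq T} F(S)$, matching the counting argument above.

```python
from itertools import combinations

def fourier_to_mobius(fourier):
    # fourier: {frozenset S: F(S)} for a Boolean function written as
    # f(x) = sum_S F(S) * prod_{i in S} (1 - 2*x[i]), with x[i] in {0, 1}.
    # Returns Möbius coefficients M with f(x) = sum_{T subset of ones(x)} M(T).
    mobius = {}
    for S, coef in fourier.items():
        # Each F(S) contributes (-2)^|T| * F(S) to every subset T of S.
        for r in range(len(S) + 1):
            for T in combinations(sorted(S), r):
                T = frozenset(T)
                mobius[T] = mobius.get(T, 0.0) + ((-2.0) ** len(T)) * coef
    return mobius
```

Because only subsets of each support set $S$ are touched, a degree-$d$, $s_F$-sparse Fourier representation yields at most $2^d \cdot s_F$ Möbius coefficients, as stated above.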
Summary: The paper introduces **SPEX**, a scalable method for explaining LLM predictions by recovering feature interactions using **structured feature masking (BCH codes) and sparse Fourier recovery**. SPEX efficiently identifies the most important feature interactions without evaluating all \( 2^n \) subsets, making it the first method to scale to long-text inputs (1000+ features). It outperforms SHAP and Faith-Banzhaf in scalability while maintaining high faithfulness \( R^2 \). Claims And Evidence: The paper claims SPEX provides **efficient, scalable feature interaction recovery** and **highly faithful explanations**. While experiments support efficiency claims, faithfulness could be inflated by **memorization bias**, where LLMs correctly predict masked inputs due to seen training data. An evaluation on **unseen data** would better validate this claim. Methods And Evaluation Criteria: SPEX’s structured masking and sparse Fourier transform effectively reduce computational complexity, making it suitable for LLMs. However, its reliance on **faithfulness metrics** may not fully capture causal attributions. Adversarial masking techniques (e.g., synonym swaps) could strengthen evaluation. Theoretical Claims: The Fourier-based interaction recovery assumes **feature independence**, which may not hold for LLMs that rely on **context-dependent reasoning**. Experimental Designs Or Analyses: The experiments demonstrate SPEX’s scalability and faithfulness but lack **tests on out-of-distribution (OOD) inputs**. Additionally, the complexity of solving for \( c_T \) in BCH decoding is not fully analyzed, potentially hiding computational overhead. Supplementary Material: No Relation To Broader Scientific Literature: The paper relates well to work on **SHAP, Faith-Banzhaf, and interaction-based feature attributions**. Essential References Not Discussed: No Other Strengths And Weaknesses: _+_ Efficient and scalable feature interaction explanations. 
_-_ Faithfulness may be inflated due to memorization bias. _-_ Does not analyze decoding complexity in detail. Other Comments Or Suggestions: No Questions For Authors: 1. How does SPEX perform on unseen (OOD) data? Would faithfulness metrics hold without memorization effects? 2. Would synonym swaps help detect spurious feature attributions? 3. What is the computational complexity of BCH decoding? Are there efficiency trade-offs at scale? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:
Rebuttal: **This rebuttal contains (anonymized) links to figures. We also built a web app to help explore SPEX: https://anon858023.github.io/spex-webapp/**

Thank you for your constructive and positive review! We appreciate your insightful comments and have addressed them with additional experiments and clarifications below. We believe that SPEX's unique scalability for computing feature interactions in long-text inputs represents a significant advancement beyond the state of the art, and we hope this will be reflected in your final evaluation.

**Memorization** This is an interesting hypothesis. Memorization could lead to the phenomenon you suggest, where only a few tokens suffice to determine the full sequence. Note that all baseline methods are masking-based, and thus our SOTA performance claim is not invalidated even if this hypothesis is valid. In practice, however, we find evidence that this hypothesis does not hold in our datasets:

1. **Variance**: We find that the variance on the test dataset we use to evaluate faithfulness is not small. That is, depending on the masks, the model output changes significantly. We visualize these model outputs for examples from the *Sentiment* dataset (https://imgur.com/a/pagr97R). This is further validated by Fig. 4, which shows that marginal attributions, which perform a linear fit, are unable to capture the complex nature of the model output.
2. **Explicitly avoid memorization bias** An evaluation on unseen data is difficult for the models used in our paper since we do not know the pre-training data that was used. However, to address this concern, we repeat our faithfulness experiments on the TriviaQA dataset for OLMo2-7B-Instruct, an open-source/open-data language model. TriviaQA was not included in the pre-training corpus and was also not used at all during model development for OLMo2-7B-Instruct.
Due to time constraints, we do not consider every regime of $n$, and only compare to Faith-Banzhaf since it was strongest in our previous experiments.

| Model | n | SPEX | Banzhaf | Faith-Banzhaf 2nd |
| :-------------------- | :-------- | :----- | :------ | :---------------- |
| *OLMo2-7B-Instruct* | 32--63 | **0.761** | 0.410 | 0.632 |
| | 64--127 | **0.651** | 0.378 | --- |
| | 128--255 | **0.589** | 0.340 | --- |

*Table: Faithfulness on the TriviaQA dataset (held out for OLMo2-7B-Instruct) across input lengths ($n$)*

**Theoretical Claims** (*We tried our best to answer based on what you wrote, but please feel free to clarify if we did not answer the question here, as there are many ways to interpret your point about feature independence.*) The Boolean Fourier transform is defined over a Hilbert space of functions with inner product $\langle f,g \rangle = \mathbb{E}_{\mathbf{x} \sim B^n} f(\mathbf{x})g(\mathbf{x})$. The induced distance metric essentially measures the average ($\ell_2$) distance in output between two functions when features are i.i.d. Bernoulli(1/2). However, the transform itself remains well-defined and interpretable even with feature dependencies. Our faithfulness metric (only one of our evaluation criteria) is related to the $\ell_2$ distance in the output space, and provides a valuable measure of how well our recovered interactions approximate the original model's behavior. While other metrics might be more relevant for correlated features, the optimal metric for interpretability remains an open question. Importantly, SPEX can effectively capture learned correlations within the LLM.

*Example*: When the LLM has already learned some of the correlations in the data, SPEX can extract this.
For example, when two words ($w_1$ and $w_2$) always appear together, the model will not be impacted when either $w_1$ or $w_2$ are masked (since seeing either $w_1$ or $w_2$ is enough to deduce the presence of the other), and will only change when both $w_1$ and $w_2$ are masked simultaneously. SPEX will find a $2^{nd}$ order interaction here that indicates this correlation. **Synonym Swap** We appreciate this suggestion. While valuable, introducing synonym swaps adds potential confounding factors and poses scalability challenges for comprehensive evaluation. We believe this direction warrants further investigation in future work. **BCH Complexity** The complexity of decoding a $BCH(n_c, k_c, t)$ code is in Appendix A.5. It is $O(n_ct + t^2)$, with $n_c \approx n + t\log(n)$. Soft-decoding offers a way to trade off complexity for superior performance. We implement a 2-chase decoder which has an additional $2^{d_{chase}}$ factor, where $d_{chase} \leq t$ can be designed. We emphasize that this complexity is small compared to current interaction indices which have complexity that scales as $O(n^d)$, where $d$ is the highest degree of interaction considered. We will include a discussion in the final revision.
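The co-occurring-words example above can be checked with a tiny brute-force Boolean Fourier transform (feasible only for toy $n$; SPEX exists precisely to avoid this $2^n$ enumeration). The value function and helper below are our illustrative constructions, with $m_i = 1$ meaning token $i$ is masked:

```python
import itertools

def boolean_fourier(f, n):
    # Brute-force Fourier coefficients F(S) = E_m[f(m) * chi_S(m)] over all
    # 2^n masks m in {0,1}^n, with parity chi_S(m) = prod_{i in S} (1 - 2*m[i]).
    masks = list(itertools.product([0, 1], repeat=n))
    coeffs = {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            total = 0.0
            for m in masks:
                chi = 1
                for i in S:
                    chi *= 1 - 2 * m[i]
                total += f(m) * chi
            coeffs[S] = total / len(masks)
    return coeffs

# Toy value function: the output drops only when BOTH redundant tokens are
# masked, mimicking two words that always co-occur.
f = lambda m: 0.0 if m[0] == 1 and m[1] == 1 else 1.0
F = boolean_fourier(f, 2)
```

The transform yields $F(\emptyset)=0.75$, $F(\{0\})=F(\{1\})=0.25$, and a nonzero pairwise coefficient $F(\{0,1\})=-0.25$, i.e., exactly the second-order interaction that flags the redundancy described above.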
Reflect-then-Plan: Offline Model-Based Planning through a Doubly Bayesian Lens
Accept (poster)
Summary: This paper proposes learning an approximate Bayesian model for offline RL and using planning, guided by a policy prior learned with offline RL, for action selection. Standard practices of ensemble architectures and variance penalties are used for planning. Experiments show improved performance over the offline-RL-only policy and comparable or better performance than LOOP, which is a similar method but does not use a VAE-based model.

Claims And Evidence: Yes, the experiments demonstrate improved performance across different environments and offline RL algorithms.

Methods And Evaluation Criteria: It's not entirely clear to me exactly what problem the authors are trying to tackle, particularly the two phases the authors refer to. It's not offline-to-online RL, or is it?

**On data sampling and evaluation** As far as I understand:
* In phase 1: train the policy prior using any offline RL algorithm, and train the encoder and decoder using the VAE objective.
* In phase 2: fine-tune the decoder on new data, with the encoder frozen. Section 5.1 mentions the new data are states from R. Is it using R to set the initial state of the agent, with the agent interacting with the environment from there on?

**On planning objective**
* The planning objective used by the authors is more of a posterior sampling approach than a Bayes-adaptive approach, which would require the agent to plan by rolling out $m_{t:t+h}$? The authors mentioned on line 143 that the evaluation target is single-episode performance. Why not use the Bayes-adaptive objective to actively reduce uncertainty and thus enhance performance in a single episode?
* The single-episode objective also seems to conflict somewhat with fine-tuning the decoder, where fine-tuning is usually done between episodes in model-based RL.

Theoretical Claims: The paper is largely empirical. The theoretical motivations from the Bayesian perspective make sense.

Experimental Designs Or Analyses: The experiment designs make sense.
The chosen tasks and environments largely follow standard practices. Supplementary Material: Yes, I mainly reviewed additional results and implementation detail. Relation To Broader Scientific Literature: The paper sits within the broad offline RL literature. It takes a Bayesian perspective on offline RL which has been considered before by e.g., [Chen et al.](https://proceedings.neurips.cc/paper/2021/hash/470e7a4f017a5476afb7eeb3f8b96f9b-Abstract.html). Introducing online planning in this context seems to be the main novelty. Essential References Not Discussed: I am not aware of missed references. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: * On planner implementation: most MPPI style planner can optimize action sequences for multiple iterations. Algorithm 2 shows that you only use a single iteration. Is that correct? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer BPa5 for dedicating time to review our paper and for the feedback. We are glad you found the experiments demonstrate improved performance and the theoretical motivations sensible, and we appreciate the opportunity to clarify aspects of our problem setting and methodology that seemed unclear. 1. __Problem setting (offline vs. offline-to-online):__ Thank you for this clarifying question. Our setting is distinct from typical _"offline-to-online" RL (e.g., [1]) which often involves substantial online training iterations_. We focus on the scenario where an agent is trained entirely offline and then deployed online for evaluation, using planning to adapt its behavior based on the offline-learned model and uncertainty estimate. **No further training or model updates occur during this online deployment phase.** This "offline pre-train, deploy with planning" setting is important for applications where online interaction for training is limited, costly, or unsafe, yet adapting behavior based on learned uncertainties during deployment is crucial for robustness. 2. __Training phases (phase 1 / phase 2):__ To clarify the phases mentioned: both phase 1 (training the policy prior $\pi_{\mathrm{p}}$ and the VAE encoder/decoder) and phase 2 (fine-tuning the decoder) are performed entirely offline using the static dataset before any online interaction. The subsequent online phase is purely for evaluation without any training updates. 3. __Section 5.1 initialization:__ Your understanding is correct. For evaluation, the agent is initialized in a state sampled from the random (R) D4RL dataset and then interacts with the environment online. 4. __Planning objective (vs. Bayes-adaptive):__ RefPlan performs planning at each step using the learned model ensemble. And regarding the objective, as also clarified for Reviewer CFgz, we view the problem through the lens of an epistemic POMDP, which is an instance of a BAMDP. 
Therefore, indeed the underlying objective is Bayes-optimal behavior. RefPlan approximates this by sampling multiple latent model hypotheses (m) from the VAE's posterior approximation and planning within each sampled MDP using the prior-guided planner. The posterior over the MDP is continuously updated through the encoder during this evaluation time. By effectively marginalizing plans across these model samples, RefPlan seeks actions robust to the epistemic uncertainty captured by the VAE, thereby aiming to maximize performance in the evaluation episode, implicitly accounting for model uncertainty. 5. __Decoder fine-tuning.__ To clarify further based on your comment, the decoder fine-tuning mentioned (phase 2) occurs offline during pre-training, not during the online test/evaluation episode. 6. __Planner implementation (iterations):__ While Algorithm 2 presents a simplified view, our planner implementation does support multiple optimization iterations, similar to standard MPPI-style planners (e.g., resampling trajectories based on importance weights). We kept the pseudocode concise for clarity but will release our code upon acceptance to provide full implementation details. However, our earlier experiments showed no significant performance improvements from multiple sampling iterations, leading us to not pursue this approach. [1] Lee et al. (2022). “Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble” --- Rebuttal Comment 1.1: Comment: Thank the authors for the clarification. I think the problem setting is very clear now. A few additional comments: * On decoder fine tuning: what you mean is you first do joint training of encoder and decoder, then you freeze encoder and train decoder for some additional number of steps, all on the same offline data, is this correct? The word fine tuning is somewhat unusual. I consider this as an implementation detail. 
Also, could you explain why you need to do this as opposed to just stopping at joint encoder decoder training? * I agree with reviewer CFgz that the work would be more elegant if uncertainty penalty was not needed for online planning. An ablation would be helpful. * I still think calling the method Bayes adaptive is a bit misleading, given the planner does not plan to gather information. I think labeling this as epistemic POMDP is appropriate. --- Reply to Comment 1.1.1: Comment: Thank you for the further engagement and for confirming the clarity of the problem setting. We appreciate the chance to address your additional comments: 1. __Decoder training steps:__ Your understanding is correct. We first train the encoder and decoder jointly, then freeze the encoder and continue training the decoder for a few extra epochs, all using the same offline dataset $\mathcal{D}$. We agree that calling the latter part "fine-tuning" might be confusing given the same data source and will revise the phrasing for clarity. Thank you for pointing this out. 2. __Rationale for additional decoder training:__ Our rationale for these extra steps relates to a key difference from VariBAD: RefPlan directly uses the learned decoder model for planning at test time. Therefore, maximizing its predictive accuracy is crucial to minimize the impact of model error on planning. Empirically, these extra decoder steps reduced validation loss (on held-out offline data), suggesting better predictive accuracy for planning. Since the decoder plays a key role during planning in RefPlan, we believed it was relevant to note this aspect in the main text. 3. __Uncertainty penalty ablation:__ Thank you for echoing Reviewer CFgz's point on the uncertainty penalty. We agree that demonstrating performance without relying on conservatism is important. As requested, we performed this ablation for our initial response to Reviewer CFgz. 
For self-containedness, we include the results also here for your reference: _[Table comparing RefPlan with and without the penalty, identical to the one provided in our response to Reviewer CFgz]_ | Env | Config | RefPlan| RefPlan w/o penalty | |----|----|----|----| | Hopper | MR | 98.1 ± 0.5 | 98.26 ± 0.5 | | Hopper | FR | 107.6 ± 0.5 | 107.71 ± 0.5 | | Walker2d | MR | 93.6 ± 0.3 | 93.71 ± 0.2 | | Walker2d | FR | 101.6 ± 1.1 | 100.35 ± 1.4 | | Halfcheetah | MR| 54.1 ± 0.6 | 54.34 ± 0.3 | | Halfcheetah | FR | 86.7 ± 0.7 | 87.42 ± 0.9 | As the results show, strong performance is maintained without the penalty under the considered tasks, indicating the primary gains stem from effectively utilizing epistemic uncertainty via Bayes-adaptive planning, rather than the penalty term. 4. __Bayes-adaptiveness:__ We appreciate you raising the insightful point about the connection to posterior sampling and the nature of planning in RefPlan; this is a valuable perspective that we may have overlooked initially. Indeed, our approach can be viewed as approximately optimizing the Bayes-adaptive objective via _posterior sampling_: at each step, we sample MDP hypotheses (m) based on the current belief (from the encoder) and plan within them. While explicitly planning to gather information (i.e., updating the latent belief during the planning rollout) is another way to approach BAMDPs, our preliminary experiments suggested this could be detrimental. _Actions deemed informative within an imperfect learned model might not translate to effective information gathering actions in the true environment due to model errors._ Consequently, we adopted the posterior sampling approach which still leverages the belief over MDPs to make robust decisions under uncertainty, but avoids potentially misleading information-seeking based on flawed model rollouts. 
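For concreteness, the posterior-sampling procedure described above can be sketched as follows. This is illustrative toy code, not our implementation: the belief is a diagonal Gaussian, and `plan_in_sampled_mdp` is a placeholder for the prior-guided MPPI-style optimizer rolling out the learned decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent_models(belief_mean, belief_std, n_bar):
    # Draw n_bar latent MDP hypotheses m from a diagonal-Gaussian posterior.
    return belief_mean + belief_std * rng.standard_normal((n_bar, belief_mean.size))

def plan_in_sampled_mdp(m, state):
    # Placeholder planner: the real method rolls out the learned decoder
    # conditioned on m with the prior-guided MPPI-style optimizer.
    action = -0.1 * state + 0.01 * m.sum()
    est_return = -np.abs(state).sum() + 0.001 * m.sum()
    return action, est_return

def posterior_sampling_action(belief_mean, belief_std, state, n_bar=16):
    # Plan in each sampled MDP, then marginalize the plans across hypotheses
    # via return-weighted (softmax) averaging of the first actions.
    models = sample_latent_models(belief_mean, belief_std, n_bar)
    actions, returns = zip(*(plan_in_sampled_mdp(m, state) for m in models))
    returns = np.asarray(returns)
    weights = np.exp(returns - returns.max())
    weights /= weights.sum()
    return sum(w * a for w, a in zip(weights, actions))

action = posterior_sampling_action(np.zeros(4), np.ones(4), np.array([1.0, -0.5]))
```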
We appreciate you highlighting this connection, and we will add discussion clarifying the relationship between our method and posterior sampling approaches for BAMDPs in the revision. Finally, we thank you again for the valuable discussion and insightful comments, which have helped improve the paper. We hope our clarifications are helpful and lead you to positively reassess your evaluation.
Summary: The authors introduce RefPlan, a doubly Bayesian method for offline model-based RL. RefPlan combines two existing methods: (1) the probabilistic control-as-inference formulation of MB planning (using a policy prior) with (2) the variational representation of epistemic uncertainty. At inference time, RefPlan marginalizes over the latent variable representing the environment, to consider several possible MDPs and improve inference-time planning. The authors show that on D4RL, RefPlan is robust to OOD states and to environment changes, and performs well with limited data. Claims And Evidence: The experiments ask interesting questions and study whether RefPlan - mitigates performance drop due to OOD states (5.1) - enhances performance and outperforms other methods (conservative policies) (5.2) - performs well with subsampled datasets (5.3) - is robust to changing dynamics (5.4) However, I have two comments about the experiments: (1) I wish the authors had used more than 3 seeds, which is very little and insufficient to claim statistical significance. Also, their error bars are missing in Table 1, Table 2 and Figure 3. (2) In the current setup, RefPlan increases the inference budget by a factor $\bar{n}$ (the number of times the latent variable m is sampled). Consequently, the authors should show that the performance gains reported do not simply come from this increase, but from combining the strengths of their two Bayesian frameworks. See my questions (2), (3), (4) below. Methods And Evaluation Criteria: The proposed framework, which combines (1) probabilistic inference for MB planning, using a learned policy as prior, and (2) a latent variable for modeling the underlying MDP, makes sense. Theoretical Claims: The paper does not contain theoretical claims. Experimental Designs Or Analyses: See "Claims And Evidence". Also, the authors only report normalized scores (100 being online SAC). They should report the SAC scores, so that future work can compare to them. 
Supplementary Material: I went through the supplementary material. Relation To Broader Scientific Literature: Increasing run-time compute for improving the performance of pre-trained agents is a critical problem for developing general agents. The paper proposes an interesting approach in this direction. Essential References Not Discussed: Since the authors discuss MB offline policy learning (also known as background planning) they should cite the original Dyna work - Sutton, Richard S. "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming." Machine learning proceedings 1990. Morgan Kaufmann, 1990. 216-224. and the Dreamer line of work: - Hafner, Danijar, et al. "Mastering diverse domains through world models." arXiv preprint arXiv:2301.04104 (2023). In the context of planning as inference, the authors should cite the sequential Monte Carlo work, which estimates the probability of following an optimal trajectory from the current timestep and adds a resampling step: - Piché, Alexandre, et al. "Probabilistic planning with sequential monte carlo methods." International Conference on Learning Representations. 2018. Other Strengths And Weaknesses: I enjoyed reading the paper, which was well written and easy to follow. However, I think that the paper would benefit from highlighting the authors' contribution more. Here is one option: - As far as I understand, Sections 4.1 and 4.2 are summarizing the MBOP and VariBAD work, and the main novelty of the paper is 4.3. I find the current 4.1 and 4.2 to be misleading: I think the authors should consider moving these sections to the preliminaries Section 3. They should also consider compressing Sections 2 and 3, so that their contribution starts before the 6th page, which is currently not the case. - In this new organization, Section 4 would be focused on RefPlan. The authors may also consider moving Algorithm 2 to Section 4. - The authors should keep more space for their experiments. 
For instance I am not sure that the results for applying RefPlan to MAPLE and COMBO should be in the appendices (there could be a table in the main text). Other Comments Or Suggestions: (1) Equation (1): $\hat{r}_{\psi}$ has not been defined. I assume it is a learned reward model. (2) The optimism bias that occurs with the RL as inference framework should probably be mentioned. Questions For Authors: (1) Is there any technical novelty in Sections 4.1 and 4.2? As far as I understand, 4.1 is similar to MBOP which also uses a BC policy as action prior, and 4.2 is summarizing VariBAD? (2) What is the value of $\bar{n}$ and $\bar{N}$ used in the experiments? How does it compare to LOOP? (3) Could the authors show that introducing the latent variable $m$ is useful? One option would be to compare: RefPlan vs. planning as inference without epistemic uncertainty (Equation 7) using $\bar{n} \times \bar{N}$ samples (4) Similarly, could the authors compare RefPlan to other methods for $\textbf{the same}$ inference budget? Two options to do that could be: - using $\bar{n} \times \bar{N}$ samples for LOOP (same as RefPlan) - only replanning every $\bar{n}$ steps when using RefPlan I am willing to increase my score if the authors can provide stronger evidence that their gains come from their proposed RefPlan and not from simply increasing the inference budget of existing methods. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed, constructive review. We appreciate the opportunity to address your feedback, particularly on statistical significance and budget comparisons. 1. Statistical significance: While runs use 3 seeds, Figure 5 uses RLiable [1] for robust aggregate analysis (Appx B.1). Figure 5 demonstrates that RefPlan consistently outperforms LOOP across aggregate metrics with non-overlapping confidence intervals, indicating statistically meaningful improvements. We will add error bars to Figure 3 on revision. 2. Reporting raw scores: Normalized scores are from the D4RL codebase; we will clarify this on revision. 3. Essential references: We will add discussion on Dyna/Dreamer. Re: SMC, we cited Piche et al. (lines 114, 166) but will expand discussion on its relation to RefPlan in revision. 4. Paper structure & highlighting contributions: Thank you for the thoughtful suggestions on restructuring. We agree that improving the flow to present novel aspects earlier could strengthen the paper. While major restructuring is challenging, we commit to revising the presentation for the camera-ready version to improve clarity and better foreground RefPlan's core contributions. 5. Answers to specific questions: 1) Novelty in Sec 4.1 & 4.2: While algorithmically related to MBOP/VariBAD, our primary contribution is the conceptual framing and synthesis. Deriving MB planning via control-as-inference justifies the prior and enables the Bayesian uncertainty treatment (Sec 4.3). The novelty lies in integrating these for offline RL in RefPlan. We will clarify this. We will refine the writing to make this relationship clearer. 2) $\bar{n}$ and $\bar{N}$ values: For the main results (RQ2), we used $\bar{n}=16$ and the same $\bar{N}$ as LOOP. Full hyperparameter details are in Appendix D.2, Table 7. 3) Usefulness of latent variable: Fig 8 shows benefits up to $\bar{n}=16$. 
New results with $\bar{n}=32,64$ (in our response to Reviewer CFgz, Tables 2-3), show further gains, particularly under high uncertainty due to limited-data (RQ3), suggesting the value of marginalizing over $m$. 4) Comparison with same inference budget: This is a crucial point. We conducted new experiments to directly address this: * LOOP with increased budget: We compared RefPlan against LOOP using 16x its original sampling budget (matching RefPlan's max $\bar{n}$). Table 4 shows RefPlan generally maintains an advantage, though LOOP benefits from compute (esp. w/o penalty in HalfCheetah-FR). Table 4: RefPlan vs. LOOP-16x budget, RQ2 |Env|Config|LOOP|LOOP w/o penalty|RefPlan| |---|---|---|---|---| |Hopper|MR|97.8 ± 1.1|97.8 ± 0.8|98.1 ± 0.5| | |FR|107.5 ± 0.6|107.5 ± 0.6|107.6 ± 0.5| |Walker2d|MR|83.2 ± 8.8|78.4 ± 7.1| 93.6 ± 1.1 | | | FR | 99.9 ± 1.5 | 99.9 ± 1.5 | 101.3 ± 0.3 | | Halfcheetah | MR | 53.2 ± 0.1 | 55.1 ± 0.4 | 54.1 ± 0.6 | | | FR | 83.1 ± 0.8 | 89.4 ± 0.7 | 86.7 ± 0.7 | * LOOP with increased budget (limited data): We repeated this comparison in the RQ3 setting (Table 5). Here, RefPlan's advantage over the high-budget LOOP was often more pronounced, especially with smaller datasets, suggesting RefPlan better handles the higher epistemic uncertainty. Table 5: RefPlan vs. 
LOOP-16x budget, RQ3 Hopper FR: | Data Size | LOOP| RefPlan| |----|----|----| | 50k | 101.7 ± 4.9| 99.5 ± 12.9| | 100k| 106.9 ± 0.3| 107.0 ± 0.7| | 250k | 107.2 ± 0.3| 106.8 ± 0.6| | 500k | 104.5 ± 3.6| **107.7 ± 0.4**| Walker2d FR: | Data Size | LOOP| RefPlan| |----|----|----| | 50k| 70.6 ± 16.4 | **82.1 ± 9.5**| | 100k| 95.9 ± 1.4 | 96.4 ± 2.1| | 250k| 96.1 ± 0.8 | 96.8 ± 1.2| | 500k| 98.6 ± 0.9 | **100.8 ± 0.8**| Halfcheetah FR: | Data Size | LOOP | RefPlan | |----|----|----| | 50k| 63.8 ± 0.2| **68.5 ± 1.6** | | 100k| 71.4 ± 0.4| **75.7 ± 1.0**| | 250k| 77.9 ± 0.3 | **81.8 ± 0.3**| | 500k| 82.4 ± 0.7 | **83.9 ± 1.4**| * RefPlan with reduced budget: We matched RefPlan's budget to LOOP's by increasing its replanning interval. Table 6 shows RefPlan still performed competitively, often better than LOOP (except HalfCheetah-FR). Table 6: |Env|Config|LOOP|RefPlan-ReducedFreq| |----|----|----|----| | Hopper| MR| 97.5 ± 0.5| **98.7 ± 0.6**| | | FR | 106.2 ± 0.7| **107.8 ± 0.5**| | Walker2d| MR| 81.9 ± 3.0| **89.3 ± 2.3**| | | FR | 99.4 ± 0.3| **100.1 ± 0.5** | | Halfcheetah | MR| 52.1 ± 0.0| 52.1 ± 0.2 | | | FR | **81.8 ± 1.0**| 79.4 ± 0.7| These results suggest that RefPlan's performance gains are not merely due to increased computation but stem from the way it handles epistemic uncertainty via the doubly Bayesian approach, leading to more effective planning. We hope these responses, new experiments directly addressing the budget comparison, and our commitments to revision clarify the contributions and robustness of RefPlan. We are grateful for your constructive feedback and hope we have provided the evidence needed to reconsider the initial evaluation. [1] Agarwal et al. (2021). “Deep Reinforcement Learning at the Edge of the Statistical Precipice” --- Rebuttal Comment 1.1: Comment: I thank the authors for carefully addressing my comments and running these additional experiments. They indeed show that RefPlan can lead to some performance gain w.r.t. 
LOOP, when using the same inference budget. Part of these should be included in the revised paper. Although the camera ready version of the paper will require significant restructuring + will need to include many of the additional experiments run in the rebuttals, I am happy to increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for carefully considering our response and for the positive reassessment. We acknowledge the need for significant revisions for the camera-ready version, including restructuring and incorporating the new experimental results. We are fully committed to making these improvements should the paper be accepted.
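As background on the score reporting discussed in this thread: the D4RL codebase normalizes a raw episode return so that 0 corresponds to a random policy and 100 to the reference policy (online SAC, per the review). A minimal sketch; the reference scores below are placeholders, not the official D4RL values.

```python
def d4rl_normalized_score(raw_return, random_score, reference_score):
    # D4RL convention: 0 = random policy, 100 = reference policy
    # (online SAC, per the review above).
    return 100.0 * (raw_return - random_score) / (reference_score - random_score)

# Placeholder reference scores for illustration only (not official D4RL values):
score = d4rl_normalized_score(raw_return=2000.0, random_score=-300.0,
                              reference_score=3200.0)
print(round(score, 1))  # 65.7
```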
Summary: This paper combines ideas from _adaptive_ and _online planning_ Offline RL to achieve a conceptually nice framework and reasonable performance improvements. They are able to improve upon (a) epistemically adaptive methods with no online computation, and (b) online computation methods with no explicit epistemic adaptability. Claims And Evidence: The claims made are broadly sensible and supported, other than: 1. The claim that "existing methods rely on fixed conservative policies" (abstract lines 15-17) is an oversimplification and is contradicted by later discussion of several adaptive ORL methods. The authors should clarify this point so the claims are not overstated. 1. I don't agree with the way Epistemic POMDPs are compared to BAMDPs (lines 126-155, 185-191). - There is no real conceptual difference between the Epistemic POMDP and BAMDP formulation, or the methods used to solve them. The difference is purely perspective/assumption-based, where epistemic POMDPs were pitched with the offline->online deployment setting in mind. The description in the text ("Epistemic POMDPs prioritize performance during a single evaluation episode") makes it seem as if there is a fundamental difference in their representative capabilities, which is not the case. This is even said later in the paper -- "an epistemic POMDP is an instance of a BAMDP" (lines 185-186). - Along the same lines: line 202-203 says "we can leverage the BAMDP reformulation of epistemic POMDPs". I don't think there's any reformulation needed. - I don't think it’s accurate to say a “BAMDP can be reformulated as a belief MDP” (line 130); a BAMDP is itself a special case of a belief MDP [Zintgraf et al. VariBAD: A very good method for bayes-adaptive deep rl via meta-learning, ICLR 2020.] - Ultimately I think this confusion arises from the original Epistemic POMDP paper [Ghosh et al. 2021], where I do wish they'd originally posed the problem as a BAMDP. 
But there's no need to perpetuate that confusion here. Methods And Evaluation Criteria: The chosen combination of methods is sensible, and the evaluation domains and baselines are standard for offline RL. Theoretical Claims: I checked the derivations in the text, which are relatively simple (the result of minor changes to existing methods). These derivations seem correct to me. Experimental Designs Or Analyses: The basic experimental approach seems sound. ### Ablations I would have liked to see ablation studies or some kind of qualitative comparison to justify the design choices and components of the method: 1. "Additionally, following Sikchi et al. (2021), we apply an uncertainty penalty based on the variance of the returns predicted by the learned model ensemble" (316-320), would be nice to see an ablation of this. It would be more elegant if all behaviour was Bayes-adaptive rather than some conservatism. 1. Ablating prior policy with random policy instead ### Statistical significance Although 3 seeds is not very many for the experiments, experiments are repeated several times over different prior policy methods, so the results in Figure 5 are based on a reasonable number of samples. It would be nicer if this plot was in the main body of the paper. ### Experiment improvements - I would like to see more latent sample values being tried (Figure 8), rather than 16 being the maximum number. - The "normalised score" hasn't obviously plateaued in the range of latent sample values shown in Figure 8. - MCTS-based Bayes-adaptive planning methods such as BAMCP [Guez et al., "Scalable and efficient Bayes-adaptive reinforcement learning based on Monte-Carlo tree search". JAIR 2013.] effectively sample a new MDP (equivalent to latent sample) for each MCTS trial, which would be >> 16 samples. - Given everything seems to run relatively fast (Table 13) I don't see why this couldn't be done with 32 or 64 samples for example. 
- It would be nice if stochastic MDPs were evaluated, as well as the D4RL deterministic continuous control ones. Evaluating only on deterministic environments is common for offline RL papers however. Supplementary Material: I read through the supplementary material. - The additional experiments in the appendix are useful and thorough. It is also nice to see a discussion of runtime and scaling. - The BAMDP appendix is useful as a quick reference, but focuses on the general dirichlet/multinomial case which isn't that relevant to the "set of possible MDPs" uncertainty setting of this paper. Relation To Broader Scientific Literature: The work is a combination of ideas from two main areas in offline RL: adaptive behaviour and online planning. The contribution of this paper is to combine these ideas in a way that is conceptually nice. Essential References Not Discussed: I did not know of or find any related work that was essential to be cited in this paper. Other Strengths And Weaknesses: 1. The flow and clarity of the paper is good. Given that this paper combines ideas/algorithms from several fields, the authors achieved the task of succinctly covering the important concepts and how they are combined into their method. Other Comments Or Suggestions: ## Minor / clarity 1. Figures 3/6/7 seem like they could be combined somehow, with a more information-dense presentation than bar charts. It could also do with more discussion in the text, as the performance improvement over LOOP seems weakest in this experiment setting. 1. I think clarity would be aided by always showing subscripts on $\mathcal{O}$ in (2) and (3) 1. The list of evaluation configurations (lines 351-355) could make it clearer that it is the behaviour policy that is changing between configurations. It would be helpful to have a brief description of how these configurations vary, or a pointer to somewhere where the variants are more fully described, especially for a reader without familiarity with D4RL. 
### Typos - Line 43: “have severe implications” should be “has severe implications” - Line 164: “model of the environment refers to the transition and reward functions”. The difference between the ground-truth T and r, and the estimated models of them, should be made clearer here. - Line 973 “moel” -> “model” Questions For Authors: 1. Did you carry out ablations or comparisons to justify your design choices discussed in my ablation comments? As a combination-of-methods paper I think it is important to discriminate between the contributions of the individual components, and ablation studies or more discussion would help with this. 1. Do you agree with my comments on the Epistemic POMDP/BAMDP confusion? I could be convinced otherwise if you have a good argument. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer CFgz for taking the time to thoroughly analyze our paper and provide constructive and insightful feedback. We address the main points raised below: 1. Epistemic POMDP vs. BAMDP: Thank you for this important point. We agree there's no fundamental conceptual difference and "reformulation" was imprecise. Our use of the Epistemic POMDP perspective aimed to highlight our specific offline pre-training -> single online evaluation setting, following [1]. As noted (lines 185-186), an Epistemic POMDP is an instance of a BAMDP; the distinction is mainly perspective. We apologize for the confusion and commit to revising the text for the camera-ready version to clarify this relationship and correct related statements (e.g., line 130). 2. Ablation on uncertainty penalty: This is an excellent suggestion. To disentangle the benefits of Bayes-adaptivity from conservatism, we conducted new ablation experiments removing the uncertainty penalty, focusing on the CQL prior with medium-replay (MR) and full-replay (FR) datasets. [Table 1] | Env | Config | RefPlan| RefPlan w/o penalty | |----|----|----|----| | Hopper | MR | 98.1 ± 0.5 | 98.26 ± 0.5 | | Hopper | FR | 107.6 ± 0.5 | 107.71 ± 0.5 | | Walker2d | MR | 93.6 ± 0.3 | 93.71 ± 0.2 | | Walker2d | FR | 101.6 ± 1.1 | 100.35 ± 1.4 | | Halfcheetah | MR| 54.1 ± 0.6 | 54.34 ± 0.3 | | Halfcheetah | FR | 86.7 ± 0.7 | 87.42 ± 0.9 | The table shows strong performance is maintained without the penalty, indicating gains primarily stem from Bayes-adaptive planning utilizing epistemic uncertainty, not conservatism. 3. Ablation with random prior policy: We appreciate the suggestion. Early experiments showed random priors performed very poorly due to distribution shift. Given that even stronger priors like BC policies (e.g., in MBOP) were outperformed by LOOP, we decided a full sweep with a random prior would be less informative than other comparisons. 4. 
Number of latent samples: Thank you for this keen observation regarding Figure 8 and the potential benefits of using more latent samples. We ran new RQ2 experiments with 32 and 64 latent samples (CQL prior, MR/FR datasets). Table 2 compares these with $\bar{n}=16$. [Table 2] |Env|Config|16|32|64| |---|---|---|---|---| |Hopper|MR|98.3±0.5|96.5±0.3|98.0±0.3| | |FR|107.6±0.5|107.6±0.5|107.6±0.6| |Walker2d|MR|92.8±0.7|88.1±2.0|87.1±3.8| | |FR|100.1±0.8|100.3±0.9|100.3±1.3| |Halfcheetah|MR|54.3±0.3|54.1±0.3|54.4±0.1| | |FR|87.4±0.9|87.1±0.9|87.2±0.7| In this standard setting, increasing $\bar{n}$ beyond 16 yielded no significant gains, possibly due to lower epistemic uncertainty. We hypothesized the effect would be clearer with higher uncertainty. To test this, we ran additional RQ3 experiments using models trained on subsampled datasets. [Table 3] |Env|Data Size|16|32|64| |---|---|---|---|---| |Hopper|50k|99.5±12.9|102.7±4.0|**106.2±3.3**| |Hopper|100k|107.0±0.7|106.9±0.4|106.9±0.4| |Hopper|250k|106.8±0.6|103.2±3.3|106.7±0.2| |Hopper|500k|107.7±0.4|107.7±0.5|107.5±0.6| |Walker2d|50k|82.1±9.5|85.3±10.9|**93.9±1.0**| |Walker2d|100k|96.4±2.1|89.5±6.2|84.4±5.9| |Walker2d|250k|96.8±1.2|96.8±0.6|96.4±0.6| |Walker2d|500k|100.8±0.8|99.2±0.3|99.9±0.4| |Halfcheetah|50k|68.5±1.6|69.1±1.0|**69.3±1.0**| |Halfcheetah|100k|75.7±1.0|76.2±0.3|76.1±0.6| |Halfcheetah|250k|81.8±0.3|82.0±0.2|82.1±0.5| |Halfcheetah|500k|83.9±1.4|83.8±1.1|84.0±1.0| As shown in the table, with higher uncertainty (limited data), increasing $\bar{n}$ to 64 does indeed lead to more noticeable performance gains, particularly with the 50k dataset. This confirms the value of more samples in these settings. 5. Stochastic environments: We agree that evaluating on stochastic environments would be a valuable extension. We acknowledge this limitation and plan to explore this direction in future work. 6. Minor points & typos: Thank you for pointing out the areas for clarification and the typos. 
We will address all these points in the camera-ready version. We hope these clarifications, new experimental results, and our commitment to revise the paper address your concerns. We are grateful for the constructive feedback, which has immensely helped us strengthen the paper. We believe the results, particularly the new ablations and latent sample experiments, further validate the effectiveness of our approach. [1] Ghosh et al. (2021) “Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability” --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and additional experiment results. I will leave my recommendation unchanged. Table 3 is interesting and answers my question on behaviour with respect to the number of latent samples. I think the paper or appendices should include this discussion on the link between small training data and the benefit of using more latent samples. The small data / high model uncertainty regime is key to demonstrating the benefits of Bayes adaptive methods. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response and for the additional feedback. We are glad the results on the number of latent samples were informative. We agree that discussing the link between limited data/high uncertainty and the benefit of more latent samples is important. We will certainly incorporate this discussion into the camera-ready version, either in the main text or appendices, if the paper is accepted.
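For context on the penalty ablated in this thread: per the review, it follows Sikchi et al. (2021) and penalizes the variance of returns predicted by the learned model ensemble. A minimal sketch; the weight beta and the exact spread statistic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def penalized_objective(ensemble_returns, beta=1.0):
    # Mean predicted return across the model ensemble, discounted by its
    # spread; beta = 0 recovers the "w/o penalty" variant from the ablation.
    r = np.asarray(ensemble_returns, dtype=float)
    return r.mean() - beta * r.std()

print(penalized_objective([10.0, 12.0, 8.0], beta=0.0))  # 10.0
```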
Learning Time-Varying Multi-Region Brain Communications via Scalable Markovian Gaussian Processes
Accept (oral)
Summary: This paper proposes a statistical model for estimating the time-varying delay of communication between multiple brain regions. This is achieved through low-dimensional latent variable modeling and incorporating multi-output Gaussian Process models. In addition to the modeling contribution, the paper exploits a connection between state-space models (SSMs) and Gaussian Processes (GPs) with arbitrary temporally stationary kernels, which allows the authors to develop fast and scalable inference models that scale logarithmically with the number of time points. Results are shown on a toy dataset as well as two neuroscience datasets with recordings from 2 and 5 regions respectively. The model discovers interesting interplay between feedforward and feedback flow of information across regions. Claims And Evidence: The claims regarding the speed of inference and the accuracy of the estimation for ground truth experiments are indeed supported by the results. For results on neuroscience datasets, the biological relevance of the estimated delays is still an open question. While the method provides valuable insights into the dynamic communication across regions, further validation experiments are required to support whether the delays have biological or causal relevance. Addressing the causality aspect of the estimated delays is beyond the scope of this paper and requires further careful investigation. Methods And Evaluation Criteria: Methods that jointly model the low-dimensional latent evolution of neural dynamics along with the dynamic communication across regions did not exist before. Indeed this paper addresses an important and open question in neuroscience data analysis and the modeling framework used here is sensible and appropriate. In addition, the authors discover a deeper connection between SSMs and GPs with temporally stationary kernels, which opens up new research directions for the broader ICML community. 
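To unpack the SSM-GP connection for readers: a GP with a stationary Matern-type kernel admits an exact linear-Gaussian state-space form, so inference reduces to Kalman filtering, O(T) serially and O(log T) depth with a parallel scan. Below is a minimal Matern-3/2 sketch of my own (not the paper's code); all hyperparameters are illustrative.

```python
import numpy as np

def matern32_ssm(lengthscale, variance, dt):
    # Exact discretization of the Matern-3/2 SDE (standard result; see
    # Hartikainen & Sarkka, 2010). The system is critically damped with
    # repeated eigenvalue -lam, so expm(F * dt) has the closed form below.
    lam = np.sqrt(3.0) / lengthscale
    e = np.exp(-lam * dt)
    A = e * np.array([[1.0 + lam * dt, dt],
                      [-lam**2 * dt, 1.0 - lam * dt]])
    Pinf = variance * np.diag([1.0, lam**2])   # stationary state covariance
    Q = Pinf - A @ Pinf @ A.T                  # exact discrete process noise
    return A, Q, Pinf

def kalman_filter(y, dt, lengthscale=1.0, variance=1.0, noise_var=0.1):
    # O(T) sequential filter; an associative (parallel) scan evaluates the
    # same recursion in O(log T) depth given enough parallel processors.
    A, Q, Pinf = matern32_ssm(lengthscale, variance, dt)
    H = np.array([[1.0, 0.0]])
    m, P, means = np.zeros(2), Pinf.copy(), []
    for yt in y:
        m, P = A @ m, A @ P @ A.T + Q          # predict
        S = H @ P @ H.T + noise_var            # innovation variance
        K = P @ H.T / S                        # Kalman gain
        m = m + (K * (yt - H @ m)).ravel()     # update mean
        P = P - K @ H @ P                      # update covariance
        means.append(m[0])
    return np.array(means)

t = np.linspace(0.0, 10.0, 200)
y = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
f_hat = kalman_filter(y, dt=t[1] - t[0])
```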
Theoretical Claims: The derivations seem correct to me. Apart from those the paper does not include theoretical claims. A few approximations are proposed in the paper. Addressing the quality of those approximations can be further investigated; I will explain this below in more detail. Experimental Designs Or Analyses: The datasets considered and the application of the model to those datasets are sound and appropriate. Supplementary Material: Yes, the code package is included in the supplementary and follows standard code development practices. I appreciate the authors for including it with the submission. Relation To Broader Scientific Literature: The connection between SSMs and GPs can have potential applications that go beyond the results shown in the paper. In addition, the methodology is relevant to the computational and experimental neuroscience community and enables them to gain new insight into multi-region datasets. Essential References Not Discussed: Nothing off the top of my mind. Other Strengths And Weaknesses: **Strengths** - The paper is very well-formatted and well-written. I had a relatively easy job following the arguments and derivations. - The accompanying code package follows standard code development practices. - The connection between SSMs and GPs with temporally stationary kernels is of potential interest to the broader ML community. - The method opens up new interesting analyses for the neuroscience community, enabling new insights about multi-region communication. **Areas of improvement** - In some figures your description of the results has a causal flavor to it (feedforward vs. feedback direction of information). Imagine a scenario in which you have two regions exhibiting sine and cosine waves that are phase shifted; essentially it's impossible to tell which one is sending/receiving information to the other, unless one region is manipulated and the effect is seen in the other region (but not in the opposite direction). 
Therefore it sounds like it's impossible to claim anything causal from the estimated delay values. While this is a fundamental challenge associated with all the modeling approaches that do not consider causal manipulations, it'd be great if the authors could discuss this in the limitations section of their work. That said, I think there's still a lot of value in the development of methods like the one proposed in the paper, and it can help build insights and hypotheses that can then be causally tested. - Both datasets presented in the paper use drifting grating stimuli which induce synchronous activity across brain regions; it'd be nice if the authors could show results on datasets with more complex sensory stimuli or datasets recorded from freely behaving animals. Other Comments Or Suggestions: **Minor issues** - Please reword the following, there are too many "communications" in one sentence: [Understanding and constructing brain communications that capture dynamic communications across …]. - Please fix [Dicovering] -> discovering. - Please fix [Dimmention] -> dimension. Questions For Authors: - Parallel scan $O(\log T)$: that's assuming you have access to $O(T)$ GPUs, right? The arguments in the paper give the impression that $O(\log T)$ comes for free. It'd be great if the authors could discuss this in the limitations section. - Can you improve the diagonal covariance model and incorporate a full covariance model (or low-rank + diagonal)? - I didn't quite understand the inference; where are you computing the complete log likelihood? If you're running gradient descent, isn't your algorithm essentially performing marginal likelihood optimization? - Samples from a GP are nonlinear functions, but everything in your modeling framework looks linear; can you characterize the approximation gap you incur by treating a GP as an SSM? - How do you deal with Poisson data? Can you expand the model to account for Poisson observations? 
In the current settings, do you apply any transformations to your spike counts to make them consumable by the model? - Why not compare against SLDS? Can you include these comparisons? - What enforces orthogonality of the within vs. between variables? Also, among “between” variables, what makes them orthogonal? Is it not possible for all of the between variables to reflect the same temporal delay characteristics? - You perform cross-validation on the delay; do you also perform cross-validation on the number of latents? If not, how do you set it? Can you show cross-validation results to get an overall sense of the sensitivity of the method to these parameters? Also, can you include experiments where you set the number of latents to larger or smaller than the true value and see the effect? Specifically, if you set it to larger, this should result in some non-identifiability and inconsistency across runs or random initializations; this would specifically be an issue since you’re using the same number of between-area latents, which is a practical assumption but might not be the right one. - How are the shaded error bars computed? If you run the method multiple times with different initializations, do you get the same result (same latents, same estimated delays)? How well defined is the problem? There seem to be many degrees of freedom in the model, which suggests that multiple solutions can possibly co-exist (which is perhaps exacerbated by the existence of both positive and negative delays). - Can you include state space visualizations too (e.g. PCA, or projections to the communication subspace)? - Can you expand the simulation results to study the performance of the model (in terms of the recovery of the true delay parameter) as a function of the number of regions, latents, delays, trials, smoothness of the kernel, etc.? - What's the interplay between the choice of the kernel and the estimated delay? 
The estimated delay parameter seems to be dependent on the choice of kernel. In other words, if we change the kernel, the estimated delay parameter will be different too. Given this non-identifiability, how should we interpret the estimated delay values from a physical or biological standpoint? Code Of Conduct: Affirmed. Overall Recommendation: 4
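On the reviewer's question about treating a GP as an SSM: for some stationary kernels the correspondence is exact rather than approximate. A minimal sketch with the Matérn-1/2 (Ornstein-Uhlenbeck) kernel, chosen here purely for illustration and not necessarily the kernel used in the paper: the scalar SSM $x_{t+1} = a x_t + w_t$ with $a = e^{-\Delta/l}$ and $\mathrm{Var}(w_t) = \sigma^2(1 - a^2)$ reproduces the GP covariance $\sigma^2 e^{-|t_i - t_j|/l}$ exactly; smoother kernels require a higher state order.

```python
import numpy as np

def ou_ssm_cov(T, dt, length_scale, sigma2):
    """Joint covariance of the stationary scalar SSM
    x_{t+1} = a x_t + w_t, built by propagating the recursion."""
    a = np.exp(-dt / length_scale)
    q = sigma2 * (1.0 - a * a)           # process-noise variance
    C = np.zeros((T, T))
    C[0, 0] = sigma2                     # stationary initial variance
    for t in range(1, T):
        C[t, :t] = a * C[t - 1, :t]      # Cov(x_t, x_s) = a Cov(x_{t-1}, x_s)
        C[:t, t] = C[t, :t]
        C[t, t] = a * a * C[t - 1, t - 1] + q
    return C

def ou_gp_cov(T, dt, length_scale, sigma2):
    """Matern-1/2 (OU) GP kernel evaluated on the same time grid."""
    t = np.arange(T) * dt
    return sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / length_scale)
```

For this kernel the two covariances agree to machine precision; for smoother kernels the SSM covariance only approximates the GP one, which is the approximation gap the reviewer asks about.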
Rebuttal 1: Rebuttal: Dear Reviewer sX24, Thank you for the suggestions! We hope these improvements clarified your concerns, and that they can be taken into account when deciding the final score. The additional results are provided here: (https://anonymous.4open.science/r/rebuttal-figures-for-ICML-2025-42E6/rebuttal_figures.pdf), including Rebuttal-Figures 1&4. > Claim anything causal from the delay? - We thank the reviewer for this thoughtful comment. We agree that delay or directional estimates from observational data alone do not prove causality. We do not make causal claims here; instead, we use these estimates to form hypotheses about across-region information flow. We acknowledge that only direct manipulations can establish true causal effects. Our model’s value is in highlighting putative, time-varying interactions, which can help pinpoint the circuits and time points most relevant for further causal testing. We will elaborate on this limitation in future revisions. > More complex sensory stimulus - Thanks for this suggestion. We agree that more complex stimuli offer exciting possibilities, and we see this as an important direction for future work. > Parallel scan access to O(T) GPUs. - The parallel scan algorithm relies on having enough parallel threads rather than literally requiring $T$ separate GPUs. We will clarify this in the limitations section. > Improve the diagonal covariance model to a full covariance model? - In the FA model, the latents approximate a full-kernel GP, so the marginal covariance $\mathbf{C}\,\mathrm{Cov}(\boldsymbol{x})\,\mathbf{C}^\top+\mathbf{R}$ is full. > Where are you computing the complete LL? - In the E-step, we use a Kalman filter/smoother to compute the posterior of the latents and compute the expected complete LL. In the M-step, we maximize this expectation with respect to model parameters using gradient descent, as part of the EM framework rather than directly optimizing the marginal LL. We will provide a pseudo-algorithm in future revisions. 
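Pending the pseudo-algorithm the authors promise above, here is a minimal one-dimensional sketch of the E-step/M-step structure they describe: a Kalman filter/smoother E-step followed by an M-step parameter update. All names are hypothetical (not the authors' code), and for brevity the M-step updates only the observation noise in closed form rather than all parameters by gradient descent.

```python
import numpy as np

def kalman_em(y, a, c, q, r, m0, p0, iters=15):
    """Toy EM for the 1-D LGSSM  x_t = a x_{t-1} + w_t,  y_t = c x_t + v_t.
    E-step: Kalman filter + RTS smoother; M-step: closed-form update of r."""
    T = len(y)
    lls = []
    for _ in range(iters):
        mp = np.empty(T); pp = np.empty(T)   # one-step predictions
        mf = np.empty(T); pf = np.empty(T)   # filtered moments
        ll = 0.0
        for t in range(T):                   # forward Kalman filter
            mp[t] = m0 if t == 0 else a * mf[t - 1]
            pp[t] = p0 if t == 0 else a * a * pf[t - 1] + q
            s = c * c * pp[t] + r            # innovation variance
            k = pp[t] * c / s                # Kalman gain
            mf[t] = mp[t] + k * (y[t] - c * mp[t])
            pf[t] = (1.0 - k * c) * pp[t]
            ll += -0.5 * (np.log(2 * np.pi * s) + (y[t] - c * mp[t]) ** 2 / s)
        ms = mf.copy(); ps = pf.copy()       # backward RTS smoother
        for t in range(T - 2, -1, -1):
            j = pf[t] * a / pp[t + 1]
            ms[t] = mf[t] + j * (ms[t + 1] - mp[t + 1])
            ps[t] = pf[t] + j * j * (ps[t + 1] - pp[t + 1])
        # M-step: argmax of the expected complete-data LL w.r.t. r
        r = np.mean((y - c * ms) ** 2 + c * c * ps)
        lls.append(ll)
    return r, np.array(lls)
```

Because the E-step is exact and the M-step exactly maximizes over r, the marginal log-likelihood is non-decreasing across iterations, which is the monotonicity property that distinguishes EM from direct marginal-likelihood gradient ascent.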
> Approximation gap between SSM and GP. - Although our model is linear in state evolution, it can capture the nonlinear covariance structure of a GP with a sufficient order $P$ in Eq. 5. In practice, choosing an appropriate $P$ results in performance that closely matches that of a GP. > How do you deal with Poisson data? - We preprocess using Gaussian smoothing, following the setting of prior work (Li et al., 2024). We acknowledge this is not ideal, and developing a Poisson Kalman parallel scan is a key direction for future work. > Why not compare against SLDS? - SLDS is not designed to capture temporal delays between regions. However, we also show log-likelihood comparisons with multi-region SLDS in Rebuttal-Figure 4. > What enforces orthogonality of the within vs. across variables? - We do not claim strict orthogonality; instead, the FA model encourages decorrelation between variables through a block-diagonal observation matrix that assigns latents exclusively to shared or independent activity. Across-region variables use separate kernels, encouraging different delay characteristics; however, if the data indicates similar delays, the latents can reflect that similarity. > Do you also perform cross-validation on the number of latents? - Yes, we perform cross-validation on all hyperparameters, including the number of latents, in Figure 3(C). Also, in Rebuttal Figure 1(D), we test cases with larger and smaller latent numbers than the true value. Incorrect latent settings result in larger variance across runs and lower test log-likelihood. We will address this issue in future revisions. > How are the shaded error bars computed? Degrees of freedom? - The shaded error bars represent the variance of the learned delay across runs. We initialize the projection matrix $\mathbf{C}$ using CCA, which yields stable fits across random seeds. 
Our model incorporates a shared kernel length scale over time, which reduces the model’s degrees of freedom by determining the temporal dynamics and constraining the evolution of delays. > Can you include state space visualization? - We visualized our model’s communication subspace on both neural datasets in Figure 2(A) and Appendix E. > Performance vs number of regions, latents, and length? - We thank the reviewer for this insightful suggestion. As shown in Rebuttal Figure 1(A-C), the model demonstrates stable performance across different conditions. > The interplay between the kernel and the delay? - The estimated delay is an effective parameter that captures the time shift between signals, as defined by the chosen kernel. While different kernels may yield varying delay values, the directional information they convey remains consistent. The sign and relative ordering of delays are stable across reasonable kernel choices, as shown in prior work (Li et al., 2024), where a different kernel produced the same directional insights on the two regions data. Again, we thank the reviewer for insightful suggestions, which improved the quality of our paper! --- Rebuttal Comment 1.1: Comment: Thank you for addressing the comments and including the new results, I have adjusted my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you very much for updating the score! We sincerely appreciate your insightful feedback.
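To make the parallel-scan exchange above concrete: the linear recurrence $x_t = A_t x_{t-1} + b_t$ is associative under composition of affine maps, so all prefixes can be computed in $O(\log T)$ depth given enough parallel threads; as the authors note, parallelism rather than $T$ separate GPUs is what is required. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def combine(e1, e2):
    """Compose two affine maps x -> A x + b: apply e1 first, then e2."""
    A1, b1 = e1
    A2, b2 = e2
    return A2 @ A1, A2 @ b1 + b2

def parallel_scan(elems):
    """Inclusive prefix scan via divide and conquer.  The two halves are
    independent, so with enough parallel threads the depth is O(log T);
    it is evaluated sequentially here for clarity."""
    if len(elems) == 1:
        return list(elems)
    mid = len(elems) // 2
    left = parallel_scan(elems[:mid])
    right = parallel_scan(elems[mid:])
    total = left[-1]                      # composition of the whole left half
    return left + [combine(total, e) for e in right]
```

The same pattern underlies library primitives such as `jax.lax.associative_scan`, which is one way such a filter can be parallelized in practice.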
Summary: This paper extends GP-based methods for modeling multi-region neural recordings to the case where temporal delays in communication can dynamically shift. This is achieved by a novel combination of GPs with state-space models. The authors evaluate their work on both synthetic and real multi-region neural recording datasets. Claims And Evidence: Overall, the paper did a good job supporting its claims with a thorough and extensive evaluation. They are able to reconstruct the true time-varying delays in the setting of synthetic data. And they show that they can fit real-world data, and the results make sense in the context of known details about the dynamics of the studied regions. Methods And Evaluation Criteria: The proposed methods and evaluation criteria (both benchmarks and metrics) make sense for this problem setting. The real-world data analysis is more of a proof-of-concept, which is appropriate for the venue. Theoretical Claims: I looked over the derivations in Appendix A and nothing popped out to me as incorrect. Experimental Designs Or Analyses: I looked over the experimental design of the main results and found them to be sound. The synthetic data with known time-varying temporal delays is a great place to start as a sanity check. And the real-world datasets show the scalability of the method to more than two regions. It also demonstrates that it fits real data, and illustrates the type of hypotheses it can help generate for further testing. It would be interesting to see how well the model reconstructs true delays in the setting where there are more than two regions though. This could be tested with a synthetic model similar to the proposed one, but with more than two regions. Additionally, the results show that the model is fairly consistent across seeds, which is a nice property for a modeling framework. Supplementary Material: I looked over the derivations in Appendix A. 
Relation To Broader Scientific Literature: This paper extends state-of-the-art GP-based multi-region brain modeling techniques which focus on modeling inter-region delays in communication. Prior work primarily focuses on static delays, but this method considers the case where they are influenced by cognitive states. Their developed method also combines GPs with state-space models in an interesting way which may be of independent interest outside of neuroscience. Essential References Not Discussed: There are no essential references missing from this paper as far as I'm aware of the literature. Other Strengths And Weaknesses: While not the primary concern of this paper, it is unclear how well this framework can account for the ill-posedness of inferring time-varying messages with static delays versus time-varying delays. It's possible that some of what appears to be time-varying in the delay of the data is due more to the time-varying nature of the messages than to something about the actual communication channel. But this comes down to the interpretation of the model, which is an important but different problem than the focus of the paper. Other Comments Or Suggestions: No additional comments. Questions For Authors: - How well does the model reconstruct true time-varying delays in data generated by a synthetic model with more than two regions? - To what degree is the inference of delays vs. communications an ill-posed problem? Is there a degeneracy where the true message could have static delays in terms of the transmission channel but it looks like a time-varying delay due to the time-varying nature of the communications? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer MsHr, Thank you for the encouraging feedback and suggestions! We have made several clarifications in response to your questions and comments. We hope these sufficiently clarified your concerns, and that they can be taken into account when deciding the final review score. The additional results are provided (https://anonymous.4open.science/r/rebuttal-figures-for-ICML-2025-42E6/rebuttal_figures.pdf), including: - Delay estimation on the synthetic dataset with five regions (Rebuttal-Figure 2). > It would be interesting to see how well the model reconstructs true delays in the setting where there are more than two regions though. - We thank the reviewer for this insightful suggestion. In Rebuttal-Figure 2, we present the estimated time-varying delays from synthetic data with five regions. Rebuttal-Figure 1(A-C) further shows model performance in terms of MSE and Pearson’s correlation coefficient (CC) between the estimated and ground truth delays as the number of regions increases. While MSE rises due to greater amplitude variability, CC remains stable, indicating reliable recovery of the temporal delay patterns. As shown in Rebuttal-Figure 2, this variability in amplitude is not a practical concern, as the temporal patterns, which are crucial for understanding brain communication, are well recovered. We will add these results in the revised version. > To what degree is the inference of delays vs. communications an ill-posed problem? Is there a degeneracy where the true message could have static delays in terms of the transmission channel but it looks like a time-varying delay due to the time-varying nature of the communications? - We thank the reviewer for raising this question. Separating the effects of time-varying delays from changes in the message content is inherently challenging. 
In principle, the inference problem is ill-posed because the same observed data could be explained either by static delays combined with time-varying messages or by genuinely time-varying delays. In other words, there is a potential degeneracy: even if the true communication channel has static delays, the variability in the messages may cause the inference process to interpret these as time-varying delays. - To mitigate this issue, our model incorporates a regularization strategy by using a shared length scale parameter $l$ across all time points. As described in Sec. 2.2, the state transition matrix $\mathbf{\hat{A}}_t$ is uniquely determined by the delay $\theta_t$ and the length scale $l$. By sharing $l$ across time, we constrain the temporal evolution of the delay parameters, enforcing similar temporal dynamics and reducing the risk of misattributing variability in the messages to changes in delays. Furthermore, our synthetic experiments with known ground truth demonstrate that the model reliably recovers the true dynamic delays, indicating that the inferred time-varying delays reflect genuine changes in the communication pathways rather than artifacts arising from the message content. - However, we acknowledge that further research could explore additional constraints or independent measures to better address this problem, but our current results suggest that the degeneracy is minimal in the context of our application. Again, we thank the reviewer for the comments, constructive feedback, and insightful suggestions, which improved the quality of our paper. If you have further questions, we are happy to discuss them! --- Rebuttal Comment 1.1: Comment: I thank the authors for providing additional experiments showing reconstruction of known true delays in synthetic data! And for considering my concerns about the degeneracy of true messages with static versus time-varying delays. 
--- Reply to Comment 1.1.1: Comment: We again thank the reviewer for your encouraging feedback and for appreciating our additional experiments!
Summary: Modeling neural activity across networks of populations of neurons across multiple brain regions is critical to understand neural computation and how information is processed. While recent recording technology advances have made it possible to acquire the data, modeling tools are limited in their ability to capture all the variability present in it. To address some of these limitations, the authors present a new model that extends latent space models to capture time-varying variability across brain regions while allowing for temporal delays between them. The authors also introduce an inference procedure to reduce computational complexity. They test the model on a synthetic dataset and two neural datasets, showing the feasibility of the approach. Claims And Evidence: The authors clearly frame their work and provide theoretical proofs and empirical results to back them. The authors present results for one synthetic dataset and two neural datasets, showing the feasibility of their model. However, the authors motivate their work by stating that existing methods fail to capture neural representations that could provide additional insight into neural processing, but they fail to present novel neuroscience discoveries beyond corroboration of results surfaced by existing models. Comparatively showing how this model can improve interpretability or drive new insight is critical to understand the impact of the contributed work. Methods And Evaluation Criteria: The methods are well motivated and described. The authors compared their model to alternative existing models that explicitly model temporal dynamics and delays, but do not provide systematic comparisons to other static methods such as vanilla Procrustes alignment (Williams et al. NeurIPS 2021, Safaie et al. Nature 2023). Including such comparisons would be critical to evaluate the tradeoffs between computational cost, data demands, and explanation capabilities across models. 
They minimally evaluate the proposed model on a synthetic dataset, but the authors fail to show comparative performance with other models on this data. Moreover, they could have further used this setting to assess the applicability and limitations of the model (e.g. how does the model behave with respect to the latent dimensionality, or number of areas, or length of recording?). The authors present results on two sets of neural data, where they show that their method is more computationally efficient, provides accurate reconstruction of the neural responses, and captures the temporal delays between regions. It would be relevant to assess not only the delays and directionality of the connections between regions, but also i) the neural representations in the communication space and ii) the task-relevant information present in the communication subspaces. This would further highlight the potential of the model to drive neuroscientific discovery. Theoretical Claims: The theoretical claims are correct, and full proofs are shown in the supplementary material. Experimental Designs Or Analyses: As mentioned previously, the design and analyses address the feasibility of the approach but fail to fully illustrate the advantages of the model with respect to alternative models or the ability to drive new scientific insight. Including comparisons to simpler methods, which are expected to fail to capture the neural variability, and comparisons beyond log-likelihood estimates would further strengthen the presented results. Possible extensions would be to include comparisons of the latent representations across models, or decoding performance with respect to relevant neural computation variables, such as visual stimulus class. Supplementary Material: The supplementary material includes the full proofs for the method and example synthetic data. Relation To Broader Scientific Literature: The authors adequately motivate their work and present alternative methods. 
However, they could include references to other simpler neural alignment methods (Williams et al. NeurIPS 2021, Safaie et al. Nature 2023). They could also add references to methods that capture communication spaces between brain regions and also model the task variability, such as Balzani et al. ICLR 2023. Essential References Not Discussed: The aforementioned references should be added to illustrate alternative alignment methods for neural recordings without explicit modeling of temporal dynamics (Williams et al. NeurIPS 2021, Safaie et al. Nature 2023) or that capture task and behavioral information (Balzani et al. ICLR 2023). Other Strengths And Weaknesses: The method uses vanilla FA to estimate the latent dimensionality, and it is not clearly listed as a limitation or design choice. Why not use likelihood estimates from the model itself? Other Comments Or Suggestions: Axis ticks and labels are missing in multiple figures: both x- and y-axes in Fig. 1A, 4, and 5; the x-axis in Fig. 1C, 2A, 2D, 3A, and 6. Questions For Authors: Does the model support different dimensionality across different latent spaces? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 9Ksf, Thank you for the constructive comments! We have made several improvements in response to your questions and comments. We hope these sufficiently clarified your concerns, and that they can be taken into account when deciding the final review score. The additional results are provided (https://anonymous.4open.science/r/rebuttal-figures-for-ICML-2025-42E6/rebuttal_figures.pdf), including: - A comparison of our model with existing methods on the synthetic dataset (Rebuttal-Figure 4(C)). - The model performance vs. the number of regions, latents, and length on the synthetic dataset (Rebuttal-Figure 1(A-C)). - Decoding visual stimulus-related information from the learned latents (Rebuttal-Figure 3). > The aforementioned references should be added (Williams et al. NeurIPS 2021, Safaie et al. Nature 2023, Balzani et al. ICLR 2023). - Williams et al. (NeurIPS 2021) introduce a metric framework for comparing static neural representations, Safaie et al. (Nature 2023) show preserved latent dynamics across subjects using alignment, and Balzani et al. (ICLR 2023) propose a task-aligned model capturing within- and across-region neural variability. In contrast, our work focuses on modeling delay-based brain communication across regions, aiming to capture dynamic, time-varying delays that mediate across-region interactions. This enables insights into the directionality and strength of functional coupling. - Therefore, we respectfully disagree that our work should be directly compared to alignment-based approaches such as Williams et al. (NeurIPS 2021) and Safaie et al. (Nature 2023), as our modeling goals differ. On the other hand, we greatly appreciate the novel perspective introduced by Balzani et al. (ICLR 2023) on task alignment within communication subspaces. We will include a discussion of their work in our revised manuscript to clarify how it complements and contrasts with our approach. 
> They minimally evaluate the proposed model in a synthetic dataset, but the authors fail to show comparative performance with other models in this data. - We thank the reviewer for raising this concern. In Rebuttal-Figure 4(C), we present the test observation log-likelihoods on the time-varying synthetic dataset used in Sec. 4.1. Our model outperforms existing multi-region modeling methods, demonstrating its effectiveness in capturing time-varying multi-region communications. > How does the model behave with respect to the latent dimensionality, or number of areas, or length of recording? - We thank the reviewer for this valuable suggestion. In Rebuttal-Figure 1(A-C), we evaluate our model using MSE and Pearson's correlation coefficient (CC) between estimated and ground truth delays under varying numbers of regions, latent dimensions, and lengths. The model shows stable performance across conditions. While MSE increases due to amplitude variability with more regions, CC remains stable, indicating reliable recovery of temporal delay patterns. As shown in Rebuttal-Figure 2, this amplitude variability is not a practical concern because the temporal patterns, which are crucial for understanding brain communication, are well preserved. > The method uses vanilla FA to estimate the latent dimensionality, ..., why not use likelihood estimates from the model itself? - We follow the strategy from prior work (Gokcen et al., 2022), see Sec. 4.3, using FA to estimate total latent dimensionality and then performing a grid search with model log-likelihood to determine across- and within-region dimensions. This approach balances accuracy with computational efficiency, as a full grid search of latent dimensionality is time-consuming. > Axes ticks and labels are missing in multiple figures. - We apologize for the oversight and thank the reviewer for pointing it out. We will add axis ticks and labels in the revised version. 
> Possible extensions would be to include comparisons of the latent representations across models, or decoding performance with respect to relevant neural computation variables, such as visual stimulus class. - Thank you for the suggestion. We have visualized our model’s latent representations on both neural datasets in Figure 2(A) and Appendix E. In the revised version, we will include latent representations from other models for comparison. Additionally, Rebuttal-Figure 3 demonstrates that decoding from our latent variables shows stronger task-related information (e.g., grating orientations) in the communication subspaces compared to the within-region subspaces. We also view the discovery of further task-relevant information in these subspaces as a promising direction for future work. > Different dimensionality across different latent spaces? - Yes, we allow the number of within-region latents to vary over regions. Again, we thank the reviewer for the insightful suggestions, which improved the quality of our paper. If you have further questions, we are happy to discuss them! --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed comments and additional experiments. I adjusted the score accordingly. Please add the new results and discussed reference points to the final manuscript. --- Reply to Comment 1.1.1: Comment: Thanks very much for your thoughtful feedback and for reconsidering your score. We sincerely appreciate your time and effort in reviewing our work. We will make sure to incorporate the new results and discussed reference points into the final paper.
Summary: This submission describes an approach for inferring latent factors underlying shared neural responses across brain areas. Notably, the approach enables inferring a continuous and time-varying delay factor that captures the temporal delay between two brain areas. The submission formulates this as both a Gaussian Process model and state-space model (SSM) and leverages the SSM setup to enable faster inference via parallel scans. The method is validated in a simulated dataset and demonstrated on two experimental applications for the analysis of visual data. Importantly, the method found time-varying delay responses and was applied to more than two brain areas. Claims And Evidence: The submission presents convincing evidence that the approach enables inference of time-varying delays in brain communication. Methods And Evaluation Criteria: Yes, the datasets and methods make sense. The submission evaluates the method using standard approaches on both simulated datasets and two relevant neuroscience datasets. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I checked the details for the simulated and two experimental data analyses. Supplementary Material: I reviewed the supplemental derivations, additional figure, and GP regression results. Relation To Broader Scientific Literature: This submission is related to a line of important work developing methods for the analysis of neural recordings that span multiple brain areas. In particular, this submission builds on previous work on Gaussian process models for capturing shared and private variability across brain areas, potentially with time delays. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: The submission nicely demonstrated the generality of their proposed approach, showing how under various GP kernels they could relatively accurately approximate a GP regression using their SSM approximation. Weakness: Figure 3(B) is challenging to understand. 
I suggest presenting these results in a different format, potentially via a matrix where each element represents a signed pairwise connection. Other Comments Or Suggestions: N/A Questions For Authors: Can the authors provide additional details on the formulation and fitting of the time-varying delay? It appears to be one of the crucial innovations. I'm curious about how flexible this model is and whether it presents any issues in fitting. The paper states that the resulting $\hat{A}_t$ are still constrained at each time point via a shared length scale. However, it is not obvious to me how strongly that constrains these parameters. Additionally, Appendix B does not appear to describe the time-varying dynamics, as $A$ is not indexed by $t$ in the EM equations. Is there a different objective for the time-varying formulation? If so, it would be very helpful and important to put that in the appendix as well. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer Wfpo, Thank you for the encouraging feedback and practical suggestions! We have made several clarifications in response to your questions and comments. We hope these resolve most of your concerns and can be taken into account when deciding the final review score. > Figure 3(B) is challenging to understand. I suggest presenting these results in a different format, potentially via a matrix where each element represents a signed pairwise connection. - We thank the reviewer for pointing this out. Figure 3(B) is intended to highlight the time-varying nature of temporal delays across five brain regions. It visualizes the learned temporal delays from Figure 3(A) at time points $t=3$ and $t=50$. In the figure, the orange arrows indicate the direction of communication: a positive delay denotes forward communication, while a negative delay indicates feedback communication. Additionally, the length of each edge represents the absolute value of the corresponding temporal delay. - To provide a clearer and more quantitative view, we will add a matrix alongside the network diagram to explicitly show both the magnitude and sign of the learned temporal delays. > Can the authors provide additional details on the formulation and fitting of the time-varying delay? It appears to be one of the crucial innovations. I'm curious about how flexible this model is and whether it presents any issues in fitting. The paper states that the resulting $\mathbf{\hat{A}}_t$ are still constrained at each time point via a shared length scale. However, it is not obvious to me how strongly that constrains these parameters. - At each time step $t$, we construct a time-specific Markovian GP conditioned on the MOSE kernel corresponding to that time step. The state transition matrix $\mathbf{\hat{A}}_t$ is uniquely determined by the delay $\theta_t$ and length scale $l$ as mentioned in Sec. 2.2. 
While the delay $\theta_t$ is allowed to change over time, the length scale $l$, which is shared across all time steps, limits the freedom of $\mathbf{\hat{A}}_t$. This shared length scale fully determines the temporal dynamics of the latent variables and thus constrains the dynamics induced by $\mathbf{\hat{A}}_t$ to share the same smoothness. In practice, this allows the model to flexibly capture time-varying delays while maintaining similar temporal dynamics across time. - The blue and red latent variables shown in Figure 1(A) illustrate this behavior: they share overall temporal dynamics but exhibit different delays at each time point. Empirical results in Sec. 4.1 further confirm that this shared length scale leads to robust and smooth delay estimates without fitting issues if a reasonable initialization is used, e.g., setting a relatively large length scale to encourage initial smoothness. This initialization aligns with standard practice in GP regression, where initializing with a larger length scale helps guide optimization during early iterations. > Additionally, Appendix B does not appear to describe the time-varying dynamics, as $\mathbf{A}$ is not indexed by $t$ in the EM equations. Is there a different objective for the time-varying formulation? If so, it would be very helpful and important to put that in the appendix as well. - We apologize for the confusion regarding Appendix B. In our time-varying extension, the underlying EM objective remains the same: maximizing the expected complete-data log-likelihood, but the state-space model is augmented to include time-specific transition matrices $\mathbf{\hat{A}}_t$ and process noise covariances $\mathbf{\hat{Q}}_t$. The derivations for the EM updates are analogous to those in Appendix B, with the key difference that the E-step now involves running a vectorized Kalman filter and smoother over the entire time sequence. 
As a result, the shapes of $\mathbf{\hat{A}}$ and $\mathbf{\hat{Q}}$ become $(T, NP, NP)$. We will revise Appendix B to explicitly incorporate these modifications and clearly describe the time-varying dynamics. Again, we thank the reviewer for the comments, constructive feedback, and insightful suggestions, which significantly improved the quality of our paper. If you have further questions, we are happy to discuss them! --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. I have increased my score to Accept in light of these improvements. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the encouraging feedback and updating the score!
Preconditioned Riemannian Gradient Descent Algorithm for Low-Multilinear-Rank Tensor Completion
Accept (poster)
Summary: The authors propose a preconditioned Riemannian gradient descent algorithm for low-rank tensor completion. They provide analysis of the computational cost and convergence guarantees. Claims And Evidence: The claims are mostly clear and have theorem support. Question: - Is $G_{t,i}$ the optimal choice, or are there alternatives or tradeoffs to be made? Methods And Evaluation Criteria: The incorporation of preconditioners into RGD makes sense and evidently improves convergence speed. Comparisons with state-of-the-art algorithms like RGD and ScaledGD are appropriate. Theoretical Claims: The theorems are well structured and concise. In Lemma 4.2: Can you comment on the practicality of the spectral initialization? What happens if the singular values decay slowly (or at a slower-than-expected rate)? Experimental Designs Or Analyses: I think the experimental design is reasonable. What is the influence of the learning rate on the method's performance? Is the preconditioned Riemannian update sensitive to the curvature of the low-rank manifold? Supplementary Material: I have looked at the proofs, which seem reasonable. Relation To Broader Scientific Literature: The paper seems to be well situated within the literature of low-rank tensor completion, and refers to recent works. Essential References Not Discussed: I'm not aware of any missing key references. Other Strengths And Weaknesses: Strengths: - The paper presents a novel approach to tensor completion with clear theoretical and empirical support. - The algorithm shows significant improvements in convergence speed, which is a critical aspect of optimization algorithms. Weaknesses: - The paper could benefit from a discussion on the scalability of the PRGD algorithm for very large-scale tensors. Other Comments Or Suggestions: see above Questions For Authors: - The paper focuses on Tucker decomposition. Can PRGD be extended to Tensor Train (TT) or Tensor Ring decompositions?
- Can the method be generalized to arbitrary high-order tensors? - How sensitive is PRGD to misspecified rank selection? - Riemannian optimization methods such as Gauss-Newton achieve faster convergence. How does PRGD compare in terms of convergence rates when tested against second-order methods? - How does the PRGD algorithm handle tensors with very high dimensionality or sparsity? Are there any specific challenges or adjustments needed in such cases? - Could you provide more insights into the computational complexity of PRGD compared to RGD, especially for large-scale problems? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > **Q1:** The paper focuses on Tucker decomposition. Can PRGD be extended to Tensor Train (TT) or Tensor Ring decompositions? **A1:** Thank you for raising this important question. **Please refer to our response to Q2 of Reviewer eeW6.** > **Q2:** Can the method be generalized to arbitrary high-order tensors? **A2:** Thank you for your insightful question. **Please refer to our response to Q1 of Reviewer eeW6.** > **Q3:** How sensitive is PRGD to misspecified rank selection? **A3:** Thank you for your question. To evaluate the sensitivity of PRGD to rank selection, we conducted additional experiments comparing PRGD with other SOTA algorithms under varying ranks. We fix $n=150,\operatorname{OS}=15$ and choose rank $r$ from $3,4,5,6$. For each algorithm, we conduct $3$ random trials and record the average iteration number and CPU time (seconds) required to achieve a relative error tolerance of $10^{-4}$. **Iteration number:** ||$r=3$|$r=4$|$r=5$|$r=6$| |-|-|-|-|-| |**PRGD**|71.0|131.3|106.7|53.7| |**ScaledGD**|269.0|476.7|296.3|205.0| |**RGD**|623.7|2293.0|1697.3|722.0| **CPU time (seconds):** ||$r=3$|$r=4$|$r=5$|$r=6$| |-|-|-|-|-| |**PRGD**|1.67|3.16|3.01|1.50| |**ScaledGD**|6.45|11.75|7.93|4.92| |**RGD**|9.34|35.65|28.45|13.67| **Key Observation:** PRGD consistently outperforms other algorithms across all tested ranks and is less sensitive to rank selection. > **Q4:** Riemannian optimization methods such as Gauss-Newton achieve faster convergence. How does PRGD compare in terms of convergence rates when tested against second-order methods? **A4:** Thank you for your insightful question. Indeed, the Riemannian Gauss-Newton (RGN) algorithm proposed in [1] achieves second-order convergence, while PRGD guarantees only linear convergence. However, RGN **requires solving an RGN equation at each iteration, which is computationally expensive, particularly for large-scale problems**.
In contrast, the PRGD algorithm maintains a similar computational cost to RGD. Overall, PRGD might be more computationally efficient than RGN. [1] Luo, Y., and Zhang, A. R. "Low-rank tensor estimation via Riemannian Gauss-Newton: Statistical optimality and second-order convergence." JMLR 24.381 (2023): 1-48. > **Q5:** How does the PRGD algorithm handle tensors with very high dimensionality or sparsity? Are there any specific challenges or adjustments needed in such cases? **A5:** Thank you for your insightful question. **The design of PRGD's preconditioners $G_{t, i}$ inherently addresses these concerns**. Specifically, the $G_{t,i}$ in (3) utilize the outer product of the gradient unfoldings. To ensure computational efficiency in high-dimensionality scenarios, we preserve only the diagonal entries, which **reduces the $\mathscr{W}_t^{-1}$ operation to element-wise scaling** of the input tensor. Additionally, those diagonal entries are the norms of the mode-$i$ slices of the gradient tensor, thus **computing $G_{t, i}$ takes only $O(|\Omega|)$ operations**. This makes PRGD computationally efficient in the case of high sparsity. > **Q6:** Could you provide more insights into the computational complexity of PRGD compared to RGD, especially for large-scale problems? **A6:** As analysed in lines 237-248, the additional computational cost of PRGD is $O(|\Omega| + n r)$, which is negligible compared to the RGD algorithm, whose complexity scales as $O(n^3 r)$. Therefore, PRGD maintains the same computational complexity as RGD for large-scale problems. > **Q7:** Is $G_{t, i}$ the optimal choice, or are there alternatives or tradeoffs to be made? **A7:** The Hessian matrix would be the optimal choice without considering the computational costs. Our $G_{t,i}$'s approximate the Hessian using the diagonal entries of the outer product of the gradient unfoldings, with $\epsilon_t$ ensuring positive definiteness.
This design balances efficiency, leveraging the advantages of tensor products (as in Shampoo [2]) and diagonal approximations (as in AdaGrad [3]), while remaining practical for large-scale problems. [2] Gupta, V., Koren, T., & Singer, Y. Shampoo: Preconditioned stochastic tensor optimization. In ICML (pp. 1842-1850). PMLR, 2018. [3] Duchi, J., Hazan, E., & Singer, Y. "Adaptive subgradient methods for online learning and stochastic optimization." JMLR 12.7 (2011). > **Q8:** What is the influence of the learning rate on the method's performance? Is the preconditioned Riemannian update sensitive to the curvature of the low-rank manifold? **A8:** The learning rate is important for PRGD's performance, as it directly impacts the convergence speed and stability of the algorithm. In our experiments, we tuned the constant step sizes for all tested algorithms to ensure optimal performance, enabling a fair comparison. PRGD is not sensitive to the curvature of the low-rank manifold. As demonstrated in Theorem 4.1, the contraction factor is invariant to the condition number of $\mathcal{X}\_*$, which can represent the curvature of the manifold.
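To make the preconditioner discussion in A5 and A7 concrete, here is a minimal NumPy sketch. The function names are illustrative, and the use of squared slice norms (the exact diagonal of the outer product of the unfoldings) is our assumption; the paper's formula (3) may normalize differently.

```python
import numpy as np

def diagonal_preconditioners(grad, eps):
    # Diagonal of the outer product of each mode-i unfolding of `grad`:
    # this equals the squared norm of every mode-i slice, so no dense
    # n_i x n_i matrix is ever formed.  eps keeps G_{t,i} positive definite.
    diags = []
    for i in range(grad.ndim):
        other_axes = tuple(j for j in range(grad.ndim) if j != i)
        diags.append(np.sum(grad ** 2, axis=other_axes) + eps)
    return diags

def apply_w_inverse(tensor, diags):
    # With diagonal G_{t,i}, W_t^{-1} is plain element-wise scaling.
    d1, d2, d3 = diags
    return tensor / (d1[:, None, None] * d2[None, :, None] * d3[None, None, :])
```

For a sparse gradient, only the observed entries need to be touched, which is where the $O(|\Omega|)$ cost claimed in A5 comes from.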
Summary: This paper introduces the Preconditioned Riemannian Gradient Descent algorithm for low-multilinear-rank tensor completion, leveraging the manifold structure to achieve faster convergence than standard Riemannian Gradient Descent while maintaining the same per-iteration complexity. Claims And Evidence: The proposed method appears somewhat unconventional. What is the motivation for introducing (4) and (5)? In line 219, \( W_t \) is not defined. Additionally, if the inverse of \( W \) needs to be computed, it may not be easy or computationally efficient. According to the paper, the proposed method merely adds a complex residual to constrain the gradient’s step size. However, computing this residual is inefficient, and the motivation for using such a residual is unclear. Why should one transition from classical and simpler methods to the proposed approach, which is more complex but offers only marginal improvements? There should be compelling reasons to persuade researchers to adopt this method. Methods And Evaluation Criteria: See claims and evidence. Theoretical Claims: See claims and evidence. Experimental Designs Or Analyses: As shown in Figure 5, the noisy data reconstruction results at different noise levels indicate that the proposed method only outperforms SOTA methods in low-noise scenarios (less than \(10^{-3}\)), which are hardly observable in real-world applications due to the minimal noise presence. However, at higher noise levels (greater than \(10^{-2}\)), which are still relatively moderate compared to real-world cases, the performance advantage is not evident. Providing an explanation for this behavior is essential to substantiate the claimed advantages of the proposed method. In Fig. 2 and Fig. 3, the text at the top of the figures overlaps with the figures. Supplementary Material: NO Relation To Broader Scientific Literature: Tensors are widely used in the computer vision field.
Essential References Not Discussed: No Other Strengths And Weaknesses: Some of the equations are numbered (e.g., line 23), while others are not (line 45). It is better to number all of them to ensure each equation is trackable. Additionally, all notations should be introduced before they are used, or at the very least, immediately after their first appearance. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **Q1:** The proposed method appears somewhat unconventional. What is the motivation for introducing (4) and (5)? **A1:** Thank you for your question. The derivation of PRGD involves **two essential steps:** (1) endowing the tangent space of the iterate on the manifold with the preconditioned metric; (2) projecting the Euclidean gradient onto the tangent space under the preconditioned metric to obtain the preconditioned Riemannian gradient. Specifically, **(4) is to define the preconditioned Riemannian metric and (5) is to reparameterize the tangent space**. This reparameterization ensures that the four terms in (5) are mutually orthogonal under the preconditioned metric. Consequently, we can derive the explicit formula for the tangent space projection operator and the preconditioned Riemannian gradient according to Proposition 3.1. > **Q2:** In line 219, $\mathscr{W}_t$ is not defined. Additionally, if the inverse of $\mathscr{W}_t$ needs to be computed, it may not be easy or computationally efficient. **A2:** We have defined $\mathscr{W}\_t$ in (4) (line 195) as $\mathscr{W}\_t(\mathcal{Y})=\mathcal{Y}\times\_{i=1}^3G_{t,i}$ for $\mathcal{Y}\in\mathbb{R}^{n_1\times n_2\times n_3}$, where $G_{t,i}\in\mathbb{R}^{n_i\times n_i}$ ($i=1,2,3$) are preconditioning matrices. Regarding the inverse of $\mathscr{W}\_t$, since the $G_{t,i}$ are diagonal matrices, computing $\mathscr{W}\_t^{-1}$ reduces to **element-wise scaling** of the input tensor. In PRGD, we only apply $\mathscr{W}\_t^{-1}$ to the sparse gradient tensor $\mathcal{G}^t=\mathscr{P}\_{\Omega}(\mathcal{X}^t-\mathcal{X}\_*)$, which needs only $O(|\Omega|)$ operations. **This cost is negligible compared to RGD's computations**. Thus, $\mathscr{W}_t^{-1}$ is computationally efficient. > **Q3:** According to the paper, the proposed method merely adds a complex residual to constrain the gradient’s step size.
However, computing this residual is inefficient, and the motivation for using such a residual is unclear. **A3:** Thank you for your comment. We are pleased to clarify the motivation and computational efficiency of our preconditioners in (3) (line 187). **Computational Efficiency:** Constructing $G_{t,i}$ requires the diagonal entries of the outer product of the gradient unfoldings, which correspond to the norms of the mode-$i$ slices of the gradient tensor (as explained in lines 202-206). Since the gradient tensor is sparse, **this computation takes only $O(|\Omega|)$ operations, making it computationally efficient**. **Motivation:** The outer product of the gradient unfolding is used to **approximate the Hessian matrix** of the objective function, as demonstrated in conventional optimization techniques such as AdaGrad [1] and Adam. To ensure computational efficiency, we preserve only the diagonal entries, while the parameter $\epsilon_t$ ensures $G_{t,i}$ remains positive definite. Intuitively, this metric flattens the landscape of the objective function, enabling PRGD to take more effective descent directions. [1] Duchi, J., Hazan, E., & Singer, Y. "Adaptive subgradient methods for online learning and stochastic optimization." JMLR 12.7 (2011). > **Q4:** Why should one transition from classical and simpler methods to the proposed approach, ..., There should be compelling reasons to persuade researchers to adopt this method. **A4:** Thank you for your question. The PRGD algorithm we proposed is **computationally efficient while offering substantial improvements** over RGD. **As analyzed in lines 237-248**, the additional cost of preconditioning in PRGD is negligible compared with RGD. **Thus, PRGD maintains the same per-iteration computational complexity as RGD**. More importantly, our numerical experiments (such as Fig. 2 and Fig. 3) demonstrate that PRGD **achieves approximately $10\times$ acceleration** compared to standard RGD.
We believe this improvement is substantial and provides a compelling reason to adopt our method. > **Q5:** As shown in Fig. 5, the noisy data ... substantiate the claimed advantages of the proposed method. **A5:** Thank you for your question. The noisy data reconstruction experiment is designed to **evaluate the robustness of the algorithm**. As shown in the optimization plots (Page 24, Lines 1285-1319), **our PRGD algorithm consistently converges faster than the other two algorithms** across all noise levels. Consequently, when the stopping criterion ($||\mathcal{X}^{t+1}-\mathcal{X}^t||_F/||\mathcal{X}^t||_F\leq 10^{-4}$) is met, PRGD achieves a lower relative error. Furthermore, Fig. 5 demonstrates PRGD's robustness: the relative error remains consistently below the noise level $\sigma$. This highlights the effectiveness of PRGD and substantiates the claimed advantages. > **Q6:** In Fig. 2 and Fig. 3, the text ... Some of the equations are numbered ... their first appearance. **A6:** We sincerely appreciate your valuable feedback. We have corrected all identified typos and performed a full proofreading of the final version.
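The $O(|\Omega|)$ claim in A3 rests on a simple identity: the diagonal of the outer product of a mode-$i$ unfolding equals the squared norms of the mode-$i$ slices. A hedged numerical check (we cannot confirm whether the paper's (3) uses norms or squared norms, so this verifies only the underlying identity):

```python
import numpy as np

# The diagonal of G_(1) G_(1)^T, where G_(1) is the mode-1 unfolding of the
# gradient tensor, equals the squared norms of the mode-1 slices.  Hence the
# dense n x n outer product never needs to be formed.
G = np.arange(24, dtype=float).reshape(2, 3, 4)   # stand-in gradient tensor
U = G.reshape(2, -1)                              # mode-1 unfolding (row-major)
dense_diag = np.diag(U @ U.T)                     # expensive dense route
cheap_diag = np.sum(G ** 2, axis=(1, 2))          # cheap slice-norm route
assert np.allclose(dense_diag, cheap_diag)
```

For a sparse gradient, the cheap route touches only the nonzero entries, matching the stated per-iteration cost.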
Summary: In this paper, the author introduces a Preconditioned Riemannian Gradient Descent (PRGD) algorithm for low-rank tensor completion based on the Tucker decomposition model. A data-driven Riemannian metric is proposed to accelerate convergence. Theoretical analysis is given to guarantee the recovery performance. Experimental results verify the desired performance of the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I roughly checked the proofs. No big issues. Experimental Designs Or Analyses: I've checked the experimental analysis. No big issues. Supplementary Material: I roughly went through all the parts of the supplementary material. Relation To Broader Scientific Literature: The author proposes a novel data-driven Riemannian metric construction method with theoretical guarantees, contributing to efficiency in tensor completion. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Data-driven method with theoretical guarantees and experimental verification. Weaknesses: 1. The tensor order is limited to 3. 2. Limited comparison in experiments. Other Comments Or Suggestions: N/A Questions For Authors: 1. Could the proposed method be applied to higher-order data (4th-order tensors or higher)? 2. Could the proposed method be extended to other types of tensor decomposition models (CP, TT, or TR)? 3. Please add comparisons on real data, as tensor completion is a task on which many tensor completion methods (not only Tucker format) can be compared. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **Q1:** Could the proposed method be applied to higher-order data (4th-order tensors or higher)? **A1:** Thank you for raising this important question. Indeed, our PRGD algorithm can be extended to the higher-order tensor case and handle higher-order data. From the algorithmic perspective, for the general $d$-order tensor case, we first need to compute preconditioners, e.g., the diagonal matrices $G_{t, i}, i=1, \dots, d$, in the same manner as in (3). The data-driven metric in (4) will be $\langle\mathcal{X}\times_{i=1}^d G_{t, i}, \mathcal{Y}\rangle$ where $\mathcal{X}, \mathcal{Y}\in \mathbb{R}^{n_1\times \cdots\times n_d}$ are two arbitrary tensors. Next, we can **extend the tangent space parameterization in Lemma 3.2 to the $d$-order case**, which involves $d$ gauge matrices and $d+1$ components as in (5). Based on this tangent space parameterization, we can derive the tangent space projection formula under $\langle\cdot,\cdot \rangle_{\mathscr{W}_t}$ and the preconditioned Riemannian gradient. Regarding the convergence theory, we first need to extend our key tools, the concentration inequalities in Lemma B.2 and Lemma B.3, to the $d$-order case. Once this is done, **the contraction analysis of the distances between iterates and ground truth can be conducted within the same framework** as presented in the current work, thereby establishing the linear convergence of the algorithm. > **Q2:** Could the proposed method be extended to other types of tensor decomposition models (CP, TT, or TR)? **A2:** Thank you for your insightful observation. Indeed, the proposed method can be extended to the TT format, as the set of **fixed-TT-rank tensors forms a smooth manifold** [1]. By endowing the tangent space of this manifold with our preconditioned metric, one can derive a TT-PRGD method. Regarding the CP and TR formats, the situation is less straightforward.
To the best of our knowledge, the set of fixed-CP-rank tensors does not form a smooth manifold, and it remains unverified whether the fixed-TR-rank set forms a smooth manifold. However, it is still worth exploring other preconditioning strategies for these formats, which could be an interesting direction for future research. [1] Holtz, Sebastian, Thorsten Rohwedder, and Reinhold Schneider. "On manifolds of tensors of fixed TT-rank." *Numerische Mathematik* 120.4 (2012): 701-731. >**Q3:** Please add comparisons on real data, as tensor completion is a task that many tensor completion methods (not only Tucker format) can be compared. **A3:** Thank you for your insightful suggestions. We have expanded our experiments to include comparisons with other format tensor completion methods, specifically the Tensor-Train format RGD (TT-RGD) algorithm [2], on both color image and video datasets. The code of TT-RGD is obtained from the Manopt toolbox. The results demonstrate that **TT-RGD lags in both performance and speed**, highlighting the advantage of the Tucker-rank approach. **Detailed comparison results and analysis are included in our response to Q1 of Reviewer NJ9K.** [2] Cai, Jian-Feng, Jingyang Li, and Dong Xia. "Provable tensor-train format tensor completion by Riemannian optimization." *Journal of Machine Learning Research* 23.123 (2022): 1-77.
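A small sketch of the order-$d$ metric $\langle\mathcal{X}\times_{i=1}^d G_{t,i}, \mathcal{Y}\rangle$ described in A1, assuming diagonal preconditioners so that each mode product is a broadcasted scaling; the function name and the `diags` representation are illustrative, not from the paper:

```python
import numpy as np

def preconditioned_inner_product(X, Y, diags):
    # <X x_{i=1}^d G_{t,i}, Y> with diagonal G_{t,i}: scale X by each
    # diagonal along its mode, then take the Euclidean inner product with Y.
    Z = X
    for i, d in enumerate(diags):
        shape = [1] * X.ndim
        shape[i] = -1
        Z = Z * d.reshape(shape)
    return float(np.sum(Z * Y))
```

Since each $G_{t,i}$ is diagonal (hence symmetric, and positive definite for $\epsilon_t > 0$), the bilinear form is symmetric and reduces to the standard inner product when every diagonal is all ones.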
Summary: This paper introduces the Preconditioned Riemannian Gradient Descent (PRGD) algorithm for low-multilinear-rank tensor completion. By designing a data-driven Riemannian metric and an efficient diagonal preconditioner derived from gradient statistics, PRGD achieves 10× faster convergence than standard Riemannian Gradient Descent (RGD) while maintaining comparable computational complexity. Theoretically, PRGD guarantees linear convergence under near-optimal sampling complexity \(O(n^{3/2})\), validated by synthetic experiments (e.g., 10× speedup for \(n=100\) tensors) and real-world video inpainting tasks (e.g., 0.75 dB higher PSNR than RGD on the Tomato dataset at rank 70). The method addresses tensor incoherence/spikiness via gradient-based preconditioning and tangent space parameterization, providing a computationally efficient solution for high-dimensional tensor optimization. Claims And Evidence: Please refer to below Methods And Evaluation Criteria: Please refer to below Theoretical Claims: Please refer to below Experimental Designs Or Analyses: Please refer to below Supplementary Material: no supplementary material Relation To Broader Scientific Literature: Please refer to below Essential References Not Discussed: Please refer to below Other Strengths And Weaknesses: Advantages: This manuscript combines preprocessing techniques with Riemannian optimization and proposes a data-driven preprocessing Riemannian metric. In terms of theory, the author provides rigorous proofs demonstrating the linear convergence of the method and that the sampling complexity is close to the theoretical lower bound. Disadvantages: 1. In the experimental section, the author presents too few experimental results, which do not provide sufficient evidence to convincingly demonstrate the advantages of the proposed algorithm. The author should refer to related articles and increase the number of experiments.
The experiments should include color images and videos, and performance metrics such as PSNR, SSIM, and TIME should be provided. The datasets can be referenced from the following sources: https://sipi.usc.edu/database/database.php and http://trace.eas.asu.edu/yuv/. 2. The author should provide ablation experiments. 3. Preprocessing and Riemannian optimization both appear to be existing technologies. The innovation of PRGD lies more in the combination of these methods than in a disruptive breakthrough. Other Comments Or Suggestions: There are some typos, e.g., "Riemannain optimization" → "Riemannian optimization", "ap plied" → "applied", and the sampling complexity formula uses \(\overline{n}\) without prior definition (defined later in 1-36), among others. The author should check for typos carefully. Questions For Authors: The author mentioned data-driven metrics. I'm not sure if data-driven here means being able to automatically learn according to the characteristics of the data. What features does it have? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **Q1:** In the experimental section, the author presents too few...should be provided. **A1:** Thank you for your constructive feedback. We have conducted video inpainting on the videos from your recommended source (see section 5.3 and page 25). In response to your suggestion, we expanded the experimental evaluation to include additional **color image and video inpainting tasks,** incorporating the **suggested PSNR, SSIM, and Runtime (seconds) metrics**. The datasets used are *Tomato* (T), *Akiyo* (A), *Hall-Monitor* (H) videos, as well as the *Airplane (F-16)* color image. Additionally, as suggested by Reviewer eeW6, we compare our algorithm with TT-RGD (Tensor-Train RGD) on both image and video tasks. Below, we summarize the results: **Video Results:** Sampling ratio $\rho=0.1$, Tucker rank $(70,70,70)$. For TT-RGD, we set TT rank $(33,33)$ to match parameter dimensionality. | |PSNR(T)|SSIM(T)|TIME(T)|PSNR(A)|SSIM(A)|TIME(A)|PSNR(H)|SSIM(H)|TIME(H)| |-|-|-|-|-|-|-|-|-|-| |**PRGD**|**28.25**|**0.754**|12.80|**30.45**|0.869|23.91|**28.83**|**0.824**|39.83| |**RGD**|27.50|0.711|9.59|30.42|**0.870**|16.58|28.75|0.820|24.61| |**ScaledGD**|27.15|0.678|**9.27**|28.62|0.812|**12.57**|28.52|0.813|**23.98**| |**TT-RGD**|26.65|0.673|158.05|28.27|0.821|143.49|25.71|0.736|180.57| **Image Results:** *Airplane(F-16)* image with size $(512,512,3)$, $\rho=0.3$. The Tucker rank is set to $(r,r,3)$ and varies $r=20,30,40$. The TT rank $(r_t,3)$ is set accordingly to match parameter dimensionality. 
|$r=20,r_t=10$|PSNR|SSIM|TIME| |-|-|-|-| |**PRGD**|**23.16**|**0.659**|0.41| |**RGD**|22.93|0.648|0.35| |**ScaledGD**|22.80|0.647|**0.33**| |**TT-RGD**|20.74|0.591|0.83| |$r=30,r_t=16$|PSNR|SSIM|TIME| |-|-|-|-| |**PRGD**|**24.80**|**0.695**|0.76| |**RGD**|24.71|0.692|0.73| |**ScaledGD**|24.70|0.695|**0.69**| |**TT-RGD**|22.16|0.609|1.05| |$r=40,r_t=21$|PSNR|SSIM|TIME| |-|-|-|-| |**PRGD**|**25.77**|**0.716**|0.83| |**RGD**|25.46|0.704|**0.61**| |**ScaledGD**|25.47|0.703|**0.61**| |**TT-RGD**|22.76|0.612|1.35| **Key observations:** PRGD consistently outperforms baselines in PSNR/SSIM with comparable runtime needed. Also, TT-RGD lags in both performance and speed, highlighting the advantage of the Tucker-rank approach. > **Q2:** The author should provide ablation experiments **A2:** Thank you for your suggestion. Actually, in our current experiments, we have already included key ablation analyses. The primary innovation of PRGD is the preconditioned metric. **By removing this component, PRGD reduces to standard RGD**, which we compare against in all experiments. For real-world data, we further validate PRGD by **testing different rank parameters** and giving detailed comparisons. Additionally, we are happy to conduct additional ablation studies if you could specify particular components. > **Q3:** Preprocessing and Riemannian optimization both appear to be existing technologies. The innovation of PRGD lies more in the combination of these methods rather than a disruptive breakthrough. **A3:** Thank you for your comment. To the best of our knowledge, existing Riemannian optimization methods on the fixed-multilinear-rank manifold use the canonical metric. We design a computationally efficient preconditioner tailored to the fixed-multilinear-rank manifold and provide rigorous proofs of convergence theory, which distinguishes PRGD from prior works. 
**Computational efficiency:** Methods like Shampoo [1] rely on dense preconditioners requiring outer products of historical gradients, which are expensive for large-scale problems. **PRGD’s preconditioners are several lightweight diagonal matrices**, preserving a per-iteration cost similar to RGD's while achieving more than $10\times$ faster speed (as empirically validated). [1] Gupta, V., Koren, T., & Singer, Y. Shampoo: Preconditioned stochastic tensor optimization. In International Conference on Machine Learning (pp. 1842-1850). PMLR, 2018. > **Q4:** The author mentioned data-driven metrics. I'm not sure if data-driven here means being able to automatically learn according to the characteristics of the data. What features does it have? **A4:** Thank you for raising this important question. Our data-driven metric is designed to automatically adapt to the local geometry of the optimization landscape by utilizing gradient information at each iteration. The preconditioners $G_{t, i}$ in (3) are constructed from **the iteration data**, more specifically, the diagonal entries of the outer product of the gradient unfoldings, which implicitly approximate the Hessian matrix. This ensures the metric flattens the objective function, enabling PRGD to take more effective descent directions. > **Q5:** There are some typos ... carefully. **A5:** We sincerely appreciate your careful reading and valuable feedback. We have corrected all identified typos and performed a full proofreading of the final version.
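As a toy illustration of the intuition in A4 (a diagonal, Hessian-like preconditioner "flattens" an ill-conditioned landscape), consider a simple quadratic; we use the exact Hessian diagonal as a stand-in for the gradient-derived approximation, and the step sizes and problem are illustrative only, not the paper's tensor objective:

```python
import numpy as np

H = np.diag([100.0, 1.0])  # ill-conditioned quadratic f(x) = 0.5 x^T H x

def run(precond, steps=100):
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        g = H @ x
        if precond:
            g = g / np.diag(H)  # diagonal preconditioning (near-Newton here)
            lr = 0.5
        else:
            lr = 0.009          # must stay below 2/100 for stability
        x = x - lr * g
    return np.linalg.norm(x)
```

Preconditioned descent drives the iterate to numerical zero while plain gradient descent, forced to use a tiny step by the large curvature direction, is still far from the optimum; the speedup mirrors (but does not reproduce) the ~$10\times$ iteration savings reported for PRGD over RGD.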