Dataset schema (column: type and value statistics):

title: string (lengths 15 to 163)
paper_decision: string (4 classes)
review_1: string (lengths 853 to 32.6k)
rebuttals_1: string (lengths 0 to 15.1k)
review_2: string (lengths 1.03k to 35.6k)
rebuttals_2: string (lengths 0 to 15.1k)
review_3: string (lengths 807 to 27.4k)
rebuttals_3: string (lengths 0 to 15k)
review_4: string (lengths 780 to 22.2k)
rebuttals_4: string (lengths 0 to 15.1k)
review_5: string (171 classes)
rebuttals_5: string (166 classes)
review_6: string (25 classes)
rebuttals_6: string (24 classes)
review_7: string (4 classes)
rebuttals_7: string (4 classes)
Auditing $f$-differential privacy in one run
Accept (oral)
Summary: This paper considers auditing the claimed $f$-DP guarantees of a model training procedure with a single training run; prior work of Steinke et al. (NeurIPS 2023) considered the same problem but in the setting of $(\epsilon, \delta)$-DP. $f$-DP provides a more fine-grained view of the DP guarantees of a mech...
Summary: This paper presents an accurate and efficient auditing procedure to assess the privacy level of mechanisms within the framework of $f$-DP. The authors utilize a more generalized and refined privacy notion, $f$-DP, and effectively solve the guessing game, a generalized framework for reconstruction and membershi...
Summary: This paper looks at auditing f-DP in one run (adding to prior works like auditing DP in one run by Steinke et al) as opposed to one $\epsilon,\delta$-pair, providing tighter privacy leakage bounds and characterizing the privacy bounds of approximate DP mechanisms like the Gaussian mechanism better when the fail...
Summary: The paper studies auditing of DP parameters $\varepsilon$ and $\delta$ using one training run. The goal is to find high-confidence lower bounds for the DP parameters via membership inference guessing game, similarly to the baseline method by [Steinke et al., 2023](https://proceedings.neurips.cc/paper_files/pap...
SAH-Drive: A Scenario-Aware Hybrid Planner for Closed-Loop Vehicle Trajectory Generation
Accept (poster)
Summary: This paper presents a hybrid planner, SAH-Drive, for closed-loop vehicle trajectory generation. SAH-Drive uses the fast-slow hybrid planner paradigm. It has a rule-based fast planning path using PDM and a learned slow planning path using a diffusion model. The main innovation of SAH-Drive is that it uses a sc...
Rebuttal 1: Rebuttal: We really appreciate the reviewer’s constructive comments and positive feedback, which have helped us better articulate our contributions and clarify the novelty of our approach. Regarding the concerns of the reviewer 2ndh, we provide the following responses. > Q1: Even though SAH-Drive has a sce...
Summary: This work introduces a hybrid planning framework that combines diffusion-based and rule-based proposal generation, evaluated through PDM simulation, with asynchronous updates and spiking neurons guiding final trajectory selection. To optimize efficiency and performance, an adaptive proposal regulator dynamical...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s constructive feedback. In response to the concerns raised by Reviewer Z9oQ, we provide the following responses. > Q1: Detailed explanation on the proposal regulator. The number of diffusion trajectories $N'$ is dynamically adjusted based on the highest diff...
Summary: This paper performs post-ensemble on two SOTA methods on the nuPlan Val14 dataset. Drawing inspiration from Spike-Timing Dependent Plasticity (STDP) in neuroscience, it proposes a novel scoring method called Score-based STDP. Additionally, it incorporates both a Scenario-based switching rule and a Score-based sw...
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Regarding the concerns of the reviewer NL4n, we provide the following responses. > Q1: Achieving SOTA only on the interPlan may not be enough, especially given that it fails to outperform the ensemble methods this paper used on Val14. Thanks f...
Smooth Interpolation for Improved Discrete Graph Generative Models
Accept (poster)
Summary: This work proposes a new graph generative model called GraphBFN by extending BFN to graph generation. This paper further proposes an advanced sampling strategy and new time-scheduling techniques to boost the performance of BFNs. Empirical evaluation shows that GraphBFN consistently achieves strong performance an...
Rebuttal 1: Rebuttal: > **W1: Towards the non-trivialness of our work** Thank you for raising this concern. We would like to further clarify the technical contribution of our approach. While our method builds on the BFN framework, *naively applying it fails to perform well*. Our work bridges the gap between the theore...
Summary: The paper introduces Graph Bayesian Flow Networks (GraphBFN), a novel framework for discrete graph generation that enables smooth interpolation between graph states. Unlike discrete diffusion models, GraphBFN leverages probabilistic adjacency matrices and Bayesian flow updates to improve stability and efficien...
Rebuttal 1: Rebuttal: > **Q1: Towards the choice of Generative Framework** Thank you for your thoughtful and insightful comments. We first clarified our motivation for choosing the BFN as our model framework. As you mentioned, we use discrete diffusions as a case study to reveal a specific challenge in the discrete gr...
Summary: Graph Bayesian Flow Networks (GraphBFN) introduce a new way to generate graphs using continuous latent variables. Unlike traditional models, GraphBFN smoothly transitions from a starting state to the desired graph by mapping these variables to a probability matrix. This approach efficiently models complex grap...
Rebuttal 1: Rebuttal: > **W1: Lack of error bar in results** Thank you for the catch. We apologize for not making clear the inconsistent appearance of error terms in the paper. Actually, all reported results are averages of 3 runs. We omitted error terms for readability and because baseline results in prior work did n...
Summary: The paper introduces Graph Bayesian Flow Networks (GraphBFN), a generative model for graphs based on Bayesian Flow Networks. The key idea is to represent the graph structure in a continuous latent space to generate discrete graphs more smoothly. In other words, instead of working directly with the adjacency ma...
Rebuttal 1: Rebuttal: > **W2: Towards how spectral features are used and how they improve the model** We apologize for the confusion. Our use of extra features—including both spectral and structural—follows the setup in prior work [1]. The key difference is that GraphBFN computes these features over the output distrib...
OW-VAP: Visual Attribute Parsing for Open World Object Detection
Accept (poster)
Summary: This paper proposes OW-VAP for OWOD, which does not rely on guidance from LLMs. It introduces the visual attribute parser (VAP) to parse the visual attributes corresponding to the current region. The proposed OW-VAP surpasses the state-of-the-art (SOTA) methods with an advantage of over 13 U-Recall and 8 U-AP i...
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments and suggestions on our manuscript. Below, we provide detailed responses to your feedback: **Q1**: Similarity to Prior Work While the referenced works also utilize textual attributes, our OW-VAP differs significantly from these methods in several key...
Summary: The paper proposes OW-VAP, a novel framework for Open World Object Detection (OWOD) that eliminates reliance on Large Language Models (LLMs) for attribute descriptions. OW-VAP employs a Visual Attribute Parser (VAP) to extract generic attributes (e.g., shape, color) from visual regions and uses Probabilistic S...
Rebuttal 1: Rebuttal: We sincerely appreciate you taking the time to carefully review our manuscript. Your expert opinions and constructive suggestions are invaluable to our research. Below, we provide our responses to the concerns you raised: **Weak 1**: Why is VAP effective? Our OW-VAP is built upon the standard Y...
Summary: This paper proposes a novel OWOD framework, termed OW-VAP, which operates independently of LLMs and requires only minimal object descriptions to detect unknown objects. Claims And Evidence: Yes, the author provides extensive experimental results to demonstrate the claims. Methods And Evaluation Criteria: Yes....
Rebuttal 1: Rebuttal: We sincerely appreciate you taking the time to carefully review our manuscript. Your professional insights and constructive suggestions are invaluable to our research work. Before addressing the questions, we present a complete description of the overall OW-VAP process. **All Pipeline** OW-VAP a...
Summary: This paper tackles open world object detection in a fresh way, moving beyond heavy reliance on language models. Their key idea is a Visual Attribute Parser, learning directly from images, combined with smart pseudo-labeling using probabilistic soft labels. Experiments show their approach really pushes the stat...
Rebuttal 1: Rebuttal: We greatly appreciate your careful and thoughtful review of our manuscript. Your meticulous approach to the review process contributes significantly to the advancement of the research community. Below, we provide our responses to the concerns raised in your comments: $\textbf{Q1}$: Regarding the ...
AdaSplash: Adaptive Sparse Flash Attention
Accept (oral)
Summary: This paper aims to improve the efficiency of alpha-entmax attention, which is a kind of adaptive sparse attention. Its contributions are twofold: 1. Alpha-entmax attention requires a threshold to determine which tokens are masked. This paper proposes an algorithm that achieves faster estimation of this thre...
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We are glad that you found our paper provides clear and convincing evidence about the computational improvements of our implementations, and that our proposed methods and evaluation criteria make sense. We address your main concerns below. **“They should ...
Summary: The paper introduces ADASPLASH, an efficient implementation of α-entmax attention that leverages sparsity to improve computational performance while maintaining model quality. The key contributions are: 1. A hybrid Halley-bisection algorithm that reduces the number of iterations needed to compute α-entmax tra...
Rebuttal 1: Rebuttal: Thank you for your positive comments. We address your points below. **“Can the proposed method reuse the memory of KV with zero attention scores? If not, can the method actually save memory?”** Similarly to FlashAttention, AdaSplash is primarily optimized for efficient training. For inference s...
Summary: The paper introduces ADASPLASH, a novel method that combines the computational efficiency of GPU-optimized algorithms with the sparsity benefits of the α-entmax attention family. The goal is to enable efficient and scalable training of transformers for long-context tasks while maintaining or improving task per...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. **“The experiments are primarily conducted on BERT and GPT-2 level models, without testing on larger-scale models such as 7B-scale LLMs. Additionally, evaluations on long-text benchmarks, such as NIAH and RULER, are missing...”** Please no...
Weak-to-Strong Generalization Even in Random Feature Networks, Provably
Accept (poster)
Summary: This paper theoretically proves that, even in the random features model (a special case of two-layer neural networks in which only the second layer is trained), we can still obtain weak-to-strong generalization if we choose a suitable early stopping time for the student model. The contributions include the followi...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. Below we address the main comments: -The reviewer asks “Why at all introduce the teacher”? Indeed, in the weak-to-strong setup as introduced by Burns, and as we study here, training directly on the target $f^*$ would give better performance than...
Summary: The paper studies weak-to-strong generalization in random feature models trained via gradient flow with early stopping. Contrary to the prior belief that well-pretrained models (like GPT-4) are necessary for weak-to-strong, this paper shows that weak-to-strong can occur even in much simpler models without pret...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. Below we address the main comments: -Regarding $M_{TE}\to\infty$: First, it is important to emphasize that our results are not asymptotic and we show a gap for finite teacher width $M_{TE}$, not only as $M_{TE}\to\infty$. Indeed, we also discus...
Summary: The paper investigates weak to strong generalization via studying random feature networks. A set of experiments and a series of proofs are shown which give evidence that weak to strong generalization occurs in random feature networks with student models having significantly smaller error than the teacher model...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. Full details on the data distributions used in the simulations are provided in the figure captions. These are synthetic data distributions that match Theorems 3.1 and 3.2. More explicitly: In Figure 2, we use a ReLU networks with the input dime...
Summary: The paper analyzes how weak-to-strong generalization can emerge in random feature models when the student model is trained using gradient descent with early stopping. It demonstrates that the student model can significantly outperform the teacher model, with the student’s error being much smaller, even to the ...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. Below we address the main comments: -Regarding linear networks achieving optimal exponent: With the linear network model, we choose the covariance directly and so can impose an extreme spectral structure, namely a huge eigengap. This makes our l...
Collaborative Mean Estimation Among Heterogeneous Strategic Agents: Individual Rationality, Fairness, and Truthful Contribution
Accept (poster)
Summary: The paper studies a mechanism design problem for agents gathering and sharing data. The authors formalize a model for agents that collect data, with some cost, and distribute it to others, potentially in a strategic manner. They demonstrate a mechanism which satisfies Nash incentive-compatibility, individual ra...
Rebuttal 1: Rebuttal: Thank you for your response and questions. **Experimental Designs Or Analyses:** - *Creation of Figure 1:* The plot was created using a small amount of Mathematica code to solve each of the 5 optimization problems and a minimal amount of Python code to plot the results. We will update the paper ...
Summary: This paper studies a scenario of data sharing among players interested in estimating the mean of a vector $\mu \in \mathbb{R}^d$ through samples from Gaussians with means $\mu_k, k \in [d]$. The fact that the players can exchange data (which could happen in practice if some players have specific advantages to...
Rebuttal 1: Rebuttal: Thank you for your response and questions. **Relation To Broader Scientific Literature:** - *New challenges from heterogeneity:* We apologize for not highlighting the new challenges that heterogeneity introduces well enough due to space constraints. Please see "comparisons with Chen et al. 2023"...
Summary: This paper proposes a novel collaborative learning mechanism to solve the problem of individual rationality, fairness, and strategic behavior among heterogeneous strategic agents. Its contributions include mechanism design, approximate ratio analysis, hardness results, and fairness comparison. Claims And Evid...
Rebuttal 1: Rebuttal: Thank you for your response and questions. **Other Strengths And Weaknesses:** - *Unknown costs, privacy, communication costs:* These are important considerations in real-world systems but because our current model is already technically challenging, we leave these for future work. - *Fairness...
Summary: This paper studies collaborative learning among multiple heterogeneous strategic agents. There is a $d$ dimensional isotropic Gaussian with an unknown mean. Each agent has a cost to derive a sample for each coordinate of the Gaussian, but also wants to reduce the square error of mean estimation. The paper w...
Rebuttal 1: Rebuttal: Thank you for your response and questions. **Essential References Not Discussed:** Thank you for the suggestions. We will update the paper to include this reference. **Other Strengths And Weaknesses:** - *Comparisons with Chen et al. 2023:* We would like to emphasize the two (orthogonal) challe...
AMPO: Active Multi Preference Optimization for Self-play Preference Selection
Accept (poster)
Summary: The paper introduces AMPO, a method for aligning LLM using multi-response preference selection instead of traditional pairwise comparisons. It combines on-policy data generation, a group-contrastive loss, and active subset selection to train models. Various selection strategies, called bottom-k, coreset cluste...
Rebuttal 1: Rebuttal: We thank Reviewer P2gm for their insightful and helpful feedback. We appreciate the opportunity to address the concerns raised. We have prepared a detailed point-by-point response with supporting experiments in the full rebuttal document (provided separately) and offer a summary here. We hope thes...
Summary: The paper introduces the AMPO (Active Multi-Preference Optimization) method, which aims to improve the alignment performance of large language models (LLMs) through active negative sample selection in multi-preference optimization. The main contributions are: proposing several active selection strategies to ch...
Rebuttal 1: Rebuttal: Thank you for your insightful and helpful review. We have conducted several additional experiments, detailed in the full rebuttal document (provided separately), and summarize the key findings here. We hope these responses adequately address your points and would be grateful if you would consider ...
Summary: This paper studies the multi-preference optimization problem in which two sets of helpful and undesired responses are contrasted during self-play alignment. The authors propose Active Multi-Preference Optimization (AMPO), a framework comprising on-policy generation, a multi-preference group-contrastive loss, a...
Rebuttal 1: Rebuttal: Thank you for your insightful and helpful review, and for acknowledging the strengths of our work. We appreciate the opportunity to address your concerns point-by-point. We have prepared a detailed response with additional experiments in the full rebuttal document (provided separately) and summari...
QT-DoG: Quantization-Aware Training for Domain Generalization
Accept (poster)
Summary: Domain generalization is a research field that pursues performance improvement against unseen domains of data. Widely used optimizers such as SGD (Stochastic Gradient Descent) tend to push an optimized point towards sharp and narrow minima. Thus, neural networks trained with those optimizers show low generaliza...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive feedback and recognition of the novelty of our approach, as well as the various visual and theoretical analyses presented. We also value the reviewer’s constructive comments and thoughtful questions, which we address in detail below. **Quantization vs. Noise ...
Summary: The paper studies the use of quantization-aware training (QAT) for domain generalization and comes with the finding that QAT can be a valuable tool for improving generalization to out of domain settings. Strong results are obtained, showing clear differences compared to basic ERM. Combination with ensembling i...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful feedback and recognition of our work’s contributions, especially in terms of its novelty and the various analyses provided in the appendix. Below, we provide detailed responses to each question and concern. **Performance Without Ensemble:** Thank you for p...
Summary: The paper introduces QT-DoG, a quantization-aware training (QAT) method for domain generalization (DG), and is the first (it says) to demonstrate that QAT, traditionally used for model compression, can serve as an implicit regularizer, with quantization noise enhancing generalization. Theoretical and empirical...
Rebuttal 1: Rebuttal: We appreciate your recognition of our results' competitiveness and your acknowledgment of the broader relevance of our work to the scientific community. We address your major concerns below. **Theoretical proofs combined with the theory of model compression and domain adaptation should be offered...
Summary: This paper proposes QT-DoG (Quantization-aware Training for Domain Generalization), which introduces weight quantization as an implicit regularizer by injecting noise during training, guiding the optimization toward flatter minima in the loss landscape to enhance generalization on unseen target domains. Quanti...
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We greatly appreciate your recognition of our key contributions, including the theoretical insights connecting quantization and flat minima, and the practical benefits of model compression. We address your major questions and concerns below...
Light as Deception: GPT-driven Natural Relighting Against Vision-Language Pre-training Models
Reject
Summary: In this work, the authors propose a GPT-driven adversarial relighting framework called LightD to deceive VLP models by adjusting the lighting of clean images. LightD first adopts ChatGPT to select the lighting parameter, then optimizes the parameters of relighting model IC-Light to relight the input image. Cl...
Rebuttal 1: Rebuttal: ### 1. Difference between color attack classification vs. VLP Thank you for the important question. Attacking VLP models differs significantly from attacking classifiers due to the following: **a)Cross-modal structure.** VLPs generate captions or answers, requiring joint reasoning over vision and...
Summary: This paper introduces LightD, a novel GPT-driven adversarial relighting technique against VLP models. It uses ChatGPT to generate lighting scenarios and uses SGA to optimize adversarial effects. They also propose a general optimization framework for adapting existing natural adversarial attacks for image class...
Rebuttal 1: Rebuttal: ### 1. NIQE result analysis on BLIP model Thank you for pointing this out. NIQE is a widely adopted no-reference metric in the field and is also used by the baselines for fair comparison. However, it is not perceptually linear and may not be highly sensitive to subtle semantic-preserving perturbat...
Summary: This paper proposed a framework for generating natural adversarial samples for VLP models via semantically guided relighting. LightD leverages ChatGPT for context-aware lighting parameters and integrates a pretrained relighting model (IC-light). A gradient-based refinement further enhances adversarial impact w...
Rebuttal 1: Rebuttal: ### 1. Contribution We respectfully disagree with the assessment that our work lacks significant contribution. While our approach does utilize IC-Light, we would like to clarify several important points. First, scientific advancement often comes from novel combinations and applications of existi...
Summary: The submission proposes a novel, state-of-the-art relighting adversarial attack (LightD) on VLP models using a 4 stage approach: 1) prompt ChatGPT to provide a lighting in the form of lighting parameters (start color, end color, light direction) that could confuse the objects in a given input image 2) gener...
Rebuttal 1: Rebuttal: ### 1. Impact of lighting model Thank you for your comment. While existing tools help generate reference lighting, directly applying them in adversarial relighting for VLPs presents challenges: **1) Optimization vs. semantic integrity.** Complex lighting models can introduce artifacts or distorti...
Summary: This paper presented a relighting-based adversarial attack method against pre-train vision-language models. Given an image, the attack first consult ChatGPT for initial attacking lighting parameters. Based on the parameters, a lighting image is generated using Comfyui-ic-light. Next, IC-Light is used to religh...
Rebuttal 1: Rebuttal: ### 1. Baseline performance on clean images Thank you for this comment. The baseline performance on clean images for both image captioning and VQA tasks (shown in Table 1 and Table 2 of the original submission) can be found at: https://imgur.com/a/Kk9VYDG. These results confirm that our method ind...
Aligned Multi Objective Optimization
Accept (poster)
Summary: This paper focuses on multi-objective optimization (MOO). Unlike typical MOO studies that focus on dealing with conflicts among objectives, this paper considers a so-called aligned MOO setting, where objectives align well with each other and no conflict occurs. This paper wants to develop method...
Rebuttal 1: Rebuttal: Thank you for the feedback and recognizing the positive aspects of our work. We appreciate the detailed and useful feedback. We will address the questions in their chronological order. 1. In Section 6 – where we generalize AMOO to approximate AMOO – we are not assuming a unique minimizer for all ...
Summary: The paper introduces Aligned Multi-Objective Optimization (AMOO), addressing scenarios where multiple objectives share a common solution. Traditional MOO focuses on conflicting objectives, but here, aligned objectives enable simultaneous optimization. The authors propose CAMOO (curvature-aware weighting) and P...
Rebuttal 1: Rebuttal: Thank you for your interest in this work and the questions you raised! We will address them in their chronological order. 1. We will make the local curvature example figure clearer – thank you for pointing this out! 2. We provided results on inaccuracy of $f_i(x_{\star})$ in Section 6, where we ...
Summary: This paper studies an interesting problem, i.e., aligned multi-objective optimization (MOO) where different objectives could share the same optimal solution. The paper introduces new algorithms for this setting, and provides theoretical convergence guarantees of the new algorithm under convexity assumptions of...
Rebuttal 1: Rebuttal: Thank you for the feedback and recognizing the positive aspects of our work. We appreciate the detailed and useful feedback. We wish to mention additional contributions of this work, beyond those that were mentioned in the review: 1. In Section 6 we study the approximate AMOO framework. There w...
Summary: This paper studies a setting called aligned multi objective optimization (AMOO), which they define as a setting where objectives share a common solution. It studies how aligned multi-objective feedback can improve gradient convergence. The authors propose a framework for AMOO (aligned multi objective optimisatio...
Rebuttal 1: Rebuttal: Thank you for your positive feedback and recognizing the positive aspects of our work! We will address the questions that were raised in their chronological order. 1. Since prior work did not directly investigate the Aligned Multi-Objective Optimization (AMOO) setting, we focused on comparing our...
Unified Breakdown Analysis for Byzantine Robust Gossip
Accept (poster)
Summary: This paper addresses the problem of designing robust decentralized algorithms in the face of so-called Byzantine adversaries, i.e. adversaries likely to send arbitrary (and potentially equivocal) information to other participants during protocol execution. It introduces a general framework, F-RG, for robust de...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and their precious suggestions. We examine the questions raised below. # Additional Experiments Following the reviewer's suggestion, we performed new experiments on Erdős-Rényi graphs and additionally compared ourselves with the IOS algorithm [Wu et al., ...
Summary: Decentralized training often encounters adversaries that may degrade the trained model without proper defenses. This paper considers the problem of robust decentralized training against Byzantine adversaries. This paper revisits the previous robust gossip schemes and proposes a generic framework which recovers these ...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, which we discuss below. The reviewer is perfectly right to point out that the choice of edge weights influences the spectral properties of the graph, which impact the robustness criterion. It is thus an interesting research direction to develop a method t...
Summary: This paper presents a general framework for robust decentralized averaging over sparse communication graphs, providing tight convergence guarantees for various robust summation rules. The authors then investigate the so-called theoretical breakdown: the maximum number of Byzantine nodes an algorithm can tolera...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, which we discuss below. # Experiments As requested by Reviewers CF8P, UczM, and EynP, we performed additional experiments, which are available here https://anonymous.4open.science/r/rebutal_files-342B/. We provide 2 additional experiments on Erdős-Rényi...
Summary: This paper studies Byzantine robust decentralized optimization with a focus on breakdown point. The authors propose a unified method F-RG, and a new algorithm $CS_{ours}-RG$ adapted for sparse communication networks, both of which have near-optimal breakdown point. Under the proposed $(b, \rho)$-robustness co...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments, which we discuss below. # Theoretical Improvements We respectfully but firmly disagree on the main weakness stated, i.e. our framework does not demonstrate theoretical improvement. The major theoretical improvements of our framework rely on the t...
On the Interplay between Graph Structure and Learning Algorithms in Graph Neural Networks
Accept (poster)
Summary: This paper explores the relationship between graph structure and learning algorithms in Graph Neural Networks (GNNs), particularly in the context of generalization performance. Unlike prior work that primarily focuses on convergence rates in noise-free settings, this study extends those analyses to scenarios w...
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for your positive comments and thoughtful questions! These greatly help us in making our paper better, and we appreciate the opportunity to address your questions here. ## Explanation and Realism of Assumption Our assumptions are mild and broadly applicable in both theor...
Summary: The paper studies how graph structure influences the learning dynamics and generalization performance of GNNs, going beyond traditional noise-free settings to analyze SGD and Ridge regression under more realistic conditions. Using spectral graph theory, it establishes connections between excess risk and the gr...
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for the positive comments and thoughtful questions. These greatly help us in making our paper better, and we appreciate the opportunity to address your concerns and questions here. ## **Suggested Reference** Thank you for highlighting this relevant work. While both paper...
Summary: The authors analyze the excess risk of stochastic gradient descent (SGD) and ridge regression for certain GNNs. Their theoretical analysis, grounded in graph spectral theory and theoretical results from Benign overfitting, demonstrates that SGD can outperform ridge regression on power-law graphs, while ridge r...
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for thoughtful comments and questions. These greatly help us in making our paper better, and we appreciate the opportunity to address your concerns and questions here. (Please note that the order of our responses may not exactly follow the sequence of your comments.) ## G...
Summary: This paper explores the interplay between graph structure and learning algorithms in Graph Neural Networks (GNNs), focusing on generalization performance (excess risk) in the presence of noise. Extending learning theory to GNNs, it derives excess risk profiles for Stochastic Gradient Descent (SGD) and Ri...
Rebuttal 1: Rebuttal: Dear Reviewer, thank you for the positive comments and thoughtful questions! These greatly help us in making our paper better, and we appreciate the opportunity to address your concerns and questions here. ## Related Works in Over-smoothing Thank you for recommending relevant literature. We will...
null
null
null
null
null
null
Learning In-context $n$-grams with Transformers: Sub-$n$-grams Are Near-Stationary Points
Accept (poster)
Summary: In this work, the authors study the loss landscape of transformers in the task of next-token prediction. To that end, they consider a simplified two-layer transformer and focus on learning n-gram language models in context with the cross-entropy loss. They rely on a constructive approach of transformer that mi...
Rebuttal 1: Rebuttal: We sincerely appreciate the detailed feedback and thoughtful suggestions provided by the reviewer. **References:** We thank the reviewer for bringing concurrent works Zekri et al. (2024) and Nguyen (2024) to our attention. We acknowledge their relevance to our work, especially in motivating stud...
Summary: This paper explores the loss landscape of next-token prediction in a synthetic setup, where the model is trained to learn the transition probabilities of a Markov chain of order $n$ in-context. The authors use a simplified two-layer transformer (disentangled transformer) and theoretically analyze the populatio...
Rebuttal 1: Rebuttal: We appreciate the reviewer for bringing Chen et al. (2024) to our attention, which we unfortunately overlooked in our discussion of related work. **Comparison with Chen et. al. 2024.** The paper examines the same task using a disentangled transformer, with the primary differences being a **three-...
Summary: This paper contributes the following. 1. A sufficient condition for the population cross entropy loss to vanish in the setting of $n^{\text{th}}$-order Markovian sequence modelling. The condition is based on expressing the derivative of the model's logits independently of $n-1$ tokens in the input s...
Rebuttal 1: Rebuttal: Thank you for your detailed and careful review of our paper. We sincerely appreciate the time and effort you have put into reviewing our work and providing constructive feedback. Your comments have helped us identify areas where we can improve the clarity and precision of our presentation. Below, ...
null
null
null
null
null
null
null
null
Demystifying Long Chain-of-Thought Reasoning
Accept (poster)
Summary: This paper systematically investigates the underlying mechanics of long CoT reasoning. It conducts experiments on Qwen-7B and Llama3-8B, and evaluates on diverse math benchmarks. Based on the experimental results, the paper draws several conclusions. ## Update after rebuttal: Thank you for the authors’ efforts i...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and for recognizing the timeliness and scope of our work. > Add experiments on different sized LLMs (i.e., 1.5b-32b-70b) We fully agree that evaluating across a wider range of model sizes is important. **Experiments with both smaller (1.5B) and...
Summary: This paper conducts comprehensive experiments on long chains of thought. The authors adopt a variety of methods related to long CoTs, including SFT, RL, the source of the SFT data, the impact of reward design, and noisy rewards. Claims And Evidence: The paper provides many orthogonal directions for long CoTs. The style is more ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the **very positive assessment** and thoughtful comments. We're glad the key takeaways around long CoT data, reward shaping, and SFT+RL initialization were found to be useful and convincing. > Q1: On Takeaway 5.1 and the impact of WebInstruct (WebIT) data We a...
Summary: This paper investigates the mechanics of long chain-of-thought (CoT) reasoning in large language models (LLMs), focusing on how supervised fine-tuning (SFT) and reinforcement learning (RL) can enhance reasoning capabilities. Key findings include: 1) SFT with long CoTs significantly improves model performance a...
Rebuttal 1: Rebuttal: We thank the reviewers for their positive and thoughtful feedback! Below, we address the key points raised: > Q1: “Can the proposed reward shaping techniques be adapted for other types of reasoning tasks beyond mathematics and STEM?” Yes, the reward shaping techniques we propose, particularly th...
null
null
null
null
null
null
null
null
Nested Expectations with Kernel Quadrature
Accept (poster)
Summary: The authors study the problem of estimating nested expectations, i.e., expectations over two entities where we can condition on one of the entities: $E[g(X, \Theta)] = E[E[g(X, \theta) \mid \Theta = \theta]]$. These expectations are common in Bayesian statistics. While there are existing techniques to estima...
Rebuttal 1: Rebuttal: Thank you for your consideration of the paper. We are very glad to see that you think the idea of using NKQ to estimate these expectations is interesting and the paper is well-written and easy to follow. ## 1. The pushforward map for change of variable trick Thank you for your question about thi...
Summary: This paper proposes a novel kernel quadrature estimator for nested integrals involving two integration levels: the inner integral $g(X, \theta)$, which computes the conditional expectation over $X$, and the outer integral, which evaluates the expectation of this conditional expectation over $\theta$. We establ...
Rebuttal 1: Rebuttal: Thank you for your consideration of the paper. We are very glad to see that you think the paper is very clearly written and appreciate our effort to include real-world applications. We have carefully addressed all your comments below, but please do let us know if there is anything we can do to hel...
Summary: This paper presents a method to estimate nested expectations using (essentially) bilevel application of kernel quadrature. The central result is that this gives a convergence rate that depends on the smoothness constants for each of the two functions in the expectations. This recovers the best known rates in w...
Rebuttal 1: Rebuttal: Thank you very much for your consideration and strong support of our paper (including the very kind comment about the effort we put in the writing). You mention leaning towards a ‘strong accept (5)’ and we remain at your disposal in case there is anything we can do to help in this respect. ## 1....
Summary: The paper introduces Nested Kernel Quadrature (NKQ), which a novel method for estimating nested expectations (i.e., integrals where the outer expectation involves a function of an inner expectation). Compared to existing methods such as Nested Monte Carlo (NMC) and Multilevel Monte Carlo (MLMC), which are suit...
Rebuttal 1: Rebuttal: Thank you very much for your careful consideration of our paper and for checking the proof of the main theorem. We are very happy that you agree that our convergence rate results (Theorem 1) are technically sound and that our experiments are well-designed. Thank you for also bringing up the issu...
null
null
null
null
null
null
Boosting Virtual Agent Learning and Reasoning: A Step-Wise, Multi-Dimensional, and Generalist Reward Model with Benchmark
Accept (poster)
Summary: The paper proposes Similar, a Step-wise, Multi-dimensional Generalist Reward Model, designed to improve the training and inference of Generalist Virtual Agents (GVAs). Similar addresses limitations in outcome-based reward models by introducing a process-based system that provides fine-grained supervision signa...
Rebuttal 1: Rebuttal: **Q1: Compared with more process reward models (PRMs) and outcome reward models (ORMs).** **A1:** Thank you for the suggestion to compare with more RMs. To our knowledge, our **Similar** is the first step-wise, multi-dimensional, cross-platform PRM for *virtual agents (VA)*. **PAVs[1]** is...
Summary: Previous multimodal LLM-based virtual agents usually require human annotations, multi-dimensional fine-grained process supervision, and scaling inference time. The paper proposes a new step-wise, multi-dimensional generalist reward model to offer fine-grained signals for agent training and can choose better ac...
Rebuttal 1: Rebuttal: **Q1: Benefit of *SRMEval* as a benchmark.** **A1:** Thank you for the suggestion to compare with other benchmarks. Section 1 of our paper proposes that ***SRMEval*** is **the first multi-step, multi-dimensional, and multi-platform benchmark** specifically for **evaluating reward models (RM...
Summary: Traditional training methods for virtual agents depend on outcome supervision and costly human annotations, limiting their scalability. The authors propose Similar, a step-wise multi-dimensional reward model that refines agent training and improves inference-time decision-making. They define five key dimension...
Rebuttal 1: Rebuttal: **Q1: Claim on no manual annotations.** **A1:** We would like to **clarify a misunderstanding**: our claim of "no manual annotations" means that, while utilizing the benchmarks' evaluation scripts (for dimension calculations), our method **annotates step-wise multi-dimensional data without any additio...
null
null
null
null
null
null
null
null
Physics-informed Temporal Alignment for Auto-regressive PDE Foundation Models
Accept (poster)
Summary: The authors study autoregressive models which are used to make predictions about time series governed by a PDE system. To solve the shortcut problem for autoregressive models, the authors propose to use the training sequence to learn the governing equation, then train the autoregressive model so that they fit...
Rebuttal 1: Rebuttal: 1. Regarding the ablation studies on more datasets. Reply 1: We conducted additional experiments on three datasets that span different PDE domains and exhibit varied physical characteristics, with an average step length of 84. The results (https://anonymous.4open.science/r/PITA_1/Table2.pdf) consi...
Summary: The paper *"Physics-informed Temporal Alignment for Auto-regressive PDE Foundation Models"* introduces **Physics-Informed Temporal Alignment (PITA)**, a self-supervised learning framework designed to address the error accumulation problem in auto-regressive PDE foundation models. Instead of relying on predefin...
Rebuttal 1: Rebuttal: 1. Regarding Long-Horizon Stability Reply 1: To rigorously evaluate PITA’s capacity for modeling chaotic dynamical systems under extended temporal extrapolation, we conducted controlled experiments on synthetic 2D Kolmogorov turbulence flows, which is a canonical benchmark for chaotic PDE systems e...
Summary: This paper focuses on the shortcut bias that can occur in PDE autoregressive models. The proposed method enhances predictions using physical knowledge extracted from the data via sparse regression. The results show some improvement in rollout prediction. ## Update after rebuttal After the re...
Rebuttal 1: Rebuttal: 1. Regarding the comparison of the shortcut problem. Reply 1: Thanks for your insightful comments. We followed the error visualization methodology in [1], plotting rolling-step MSE for each long-term dataset, as shown in the Shortcut folder of https://anonymous.4open.science/r/PITA_1/ (see 'Shortc...
Summary: The paper introduces Physics-informed Temporal Alignment (PITA), a new self-supervised learning framework aimed at improving autoregressive PDE foundation models. The authors identify a "shortcut" issue common in autoregressive models, where the model takes easy solutions, leading to accumulated prediction err...
Rebuttal 1: Rebuttal: 1. Regarding the additional cost of PDE discovery and alignment. Reply 1: We appreciate the reviewer highlighting computational scaling. For fixed batch sizes, the additional time from PDE discovery and alignment remains constant, independent of dataset scale or trajectory length, since it depends...
null
null
null
null
null
null
Fundamental limits of learning in sequence multi-index models and deep attention networks: high-dimensional asymptotics and sharp thresholds
Accept (poster)
Summary: This paper uses GAMP to quantify the Bayes-optimal performance of deep attention networks. They show that such a deep attention network can be mapped into a sequence multi-index model, which is a variant of the basic multi-index model but applied to a sequence of data of fixed length M. -- update after rebutt...
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. Their remarks will allow us to improve the clarity of our manuscript greatly. **Claims And Evidence** Clarifying the limit: By "deep," we mean constant ($O(1)$) depth. Our results hold in the large $D$, proportionally large $N$ limit. St...
Summary: This paper extends the theoretical framework of multi-index models to sequence models and derives a sharp asymptotic characterization of the optimal performance of the generalized approximate message-passing (GAMP) algorithm. This paper also characterizes sharp thresholds on the minimal sample complexity requi...
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. Numerically, we observe that in the setting of Fig. 2, GAMP and Gradient Descent display similar layer-wise learning dynamics. To put these observation on completely rigorous theoretical grounds, one would need to adapt the dynamical-mean-f...
Summary: This paper studies the fundamental limits of learning in deep attention networks by establishing a connection to sequence multi-index models. The key contributions include the mapping from deep attention networks to SMI models and theoretical characterization of statistical and computational limits of the mode...
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. The focus of this work is indeed theoretical, and lies in making several theoretical findings on learning in attention models, extending previous related works. We discussed how the uncovered phenomena also hold with Gradient Descent (Figure 2...
Summary: This paper studied the problem of learning sequential multi-index models. The authors showed a deep connection between the SMI model and deep attention networks, in that the deep attention network can be formulated in the form of an SMI function. The authors studied the limit thresholds of the sample complexity...
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. **Claims And Evidence:** Fig. 3 numerically supports the conjecture, while Sec. 3.2 discusses its infinite-depth limit where inner layers ($\ell \leq L - 1$) become interchangeable. **Relation To ... Literature:** The reviewer is right th...
null
null
null
null
null
null
Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator
Accept (spotlight poster)
Summary: The paper proposes fine-tuning a likelihood-based generative model to steer the generated distribution towards the real distribution from a new perspective. Specifically, the authors parameterize pretrained likelihood-based generative models as the discriminator within the GAN framework. Using a reference gene...
Rebuttal 1: Rebuttal: Thank you for your positive comments. Below, we provide detailed responses to your concerns. > The motivation for introducing $\alpha$ and $\beta$ to address gradient-vanishing/numerical issues is not well-supported. It would be helpful if the authors could provide experimental evidence to show t...
Summary: This paper proposes a new optimization method for likelihood-based models. The idea comes from GANs, but the method does not introduce a new discriminator, which makes it more efficient and easier to apply. Theoretical analysis and experiments demonstrate its effectiveness. Claims And Evidence: The authors claim that their ...
Rebuttal 1: Rebuttal: Thank you for your positive comments. Though you do not have notable concerns, we would like to provide some new results on higher-resolution datasets. After submission, we further use DDO to finetune EDM2-L on ImageNet 512x512, successfully advancing the FID from 2.11 to **1.36**, without any gu...
Summary: This paper tackles the issue of the predisposition of likelihood-based generative models to cover modes of the data distribution, yielding unrealistic or blurry results. This paper introduces a solution to this problem that is complementary to the usual but impractical guidance methods via a fine-tuning method...
Rebuttal 1: Rebuttal: Thank you for your positive comments. Below, we provide detailed responses to your concerns. > Can the few debatable claims be reformulated along with the title? - I disagree with the title...it is not a discriminator per se, but can be used to parameterize a discriminator. We agree with "it is...
Summary: This paper introduces Direct Discriminative Optimization (DDO), a novel finetuning framework designed to enhance the generation quality of likelihood-based generative models, such as diffusion and autoregressive models. Likelihood-based generative models are inherently limited by the mode-covering tendency of ...
Rebuttal 1: Rebuttal: Thank you for your positive comments. Below, we provide detailed responses to your concerns. > One potential weakness of the paper is the lack of discussion on diversity metrics Recall during the DDO finetuning process. Analyzing how recall evolves during the DDO distillation process could provid...
null
null
null
null
null
null
Compositional Causal Reasoning Evaluation in Language Models
Accept (poster)
Summary: This paper applies the concept of compositional reasoning to causal inference, thereby introducing the notion of compositional causal reasoning (CCR). It further analyzes the relationships among different tiers of causal measures. Building on these findings, the authors propose an evaluation framework for comp...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the clarity and value of our framework, as well as the importance of validity and consistency. We appreciate the suggestion that this framework has potential utility for AI interpretability and safety, areas where we see increasing crossover with causal theo...
Summary: In their paper, the authors formalize ways to measure the ability of reasoning systems to consistently reason over compositional causal quantities. The authors propose the general task of compositional causal reasoning (CCR) which is then defined in terms of external validity (- the adherence to ground truth q...
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough feedback and for acknowledging the soundness of our approach. We hope that the comments below address all concerns. 1. **“Outliers” for GPT-4o (CoT).** - Results for $PNS_{XY}$ and $PNS_{DY}$ are not extreme outliers, though Fig 7 might imply such (we u...
Summary: Regarding the combination of causal and compositional reasoning in generative AI, this paper presents a unified perspective called compositional causal reasoning (CCR), which is the ability to infer how causal measures are combined and how causal quantities are propagated in a graph. A framework for systematic...
Rebuttal 1: Rebuttal: We thank the reviewer for their attention to the correctness of our proofs, the powerful abstraction provided by CCTs, and the gap in the literature that we attempt to close: the explicit and systematic evaluation of compositional consistency in causal reasoning. We address questions below. 1. **...
Summary: The paper proposed a unified framework that evaluates compositional causal reasoning ability in large language models by measuring the average treatment effect and the probability of necessity and sufficiency. Claims And Evidence: The paper claims to introduce a framework for a comprehensive assessment of com...
Rebuttal 1: Rebuttal: We thank the reviewer for their useful feedback. We hope that the following addresses your concerns. 1. **Limited results / no dataset.** - **Revision 1:** CandyParty experiments now include **results for o1 with and without CoT.** For o1, CCR reasoning was “complete,” enriching our discussion...
null
null
null
null
null
null
A Versatile Influence Function for Data Attribution with Non-Decomposable Loss
Accept (poster)
Summary: The paper extends influence functions to handle non-decomposable loss functions, thus broadening their application in machine learning models. Unlike conventional approaches limited to decomposable losses, VIF can be directly applied to any model trained with complex losses like contrastive or ranking losses, without needing retraining. Utilizing aut...
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and further suggestions. We address your comments in detail below. > Although VIF performs excellently in many aspects, its assumption that the loss function is convex may limit its effectiveness in certain practical applications. Therefore, future ...
Summary: ## Update after author discussion period Thanks a lot for all the discussion below, both for correcting my mistake with the asymptotics and for addressing some of my concerns. I've updated my score to vote for an accept, since I think the extra experiments now address the accuracy of VIF more clearly and I thi...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and feedback. We have prepared a thorough response to each comment, but have to omit some of them due to the strict character limit. However, we could further provide the omitted response once the reviewer replies to us. # Claims and Evidence > The...
Summary: The paper proposes a method called Versatile Influence Function (VIF) for data attribution in machine learning models. Traditional influence functions require the loss function to be separable (decomposable) into individual data points, limiting their application. The authors extend the influence function to h...
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive feedback and further suggestions. We will revise our abstract to make it more succinct. Following the reviewer's suggestion, we have also included more experiments on a larger-scale survival analysis dataset, RR-NL-NHP (Kvamme et al. 2019), with 16,00...
null
null
null
null
null
null
null
null
WyckoffDiff -- A Generative Diffusion Model for Crystal Symmetry
Accept (poster)
Summary: The authors introduced WyckoffDiff, a discrete diffusion model for generating crystalline materials with explicit symmetry constraints. WyckoffDiff encodes crystals via a protostructure, which includes a space group and Wyckoff positions. The authors evaluated their approach on a materials benchmark (the WBM dataset)....
Rebuttal 1: Rebuttal: We are happy to see that the reviewer thinks our approach is novel and that generating protostructures is an approach that can enable efficient material generation. We address the concern that we do not quantify how structures can be realized from protostructures, and elaborate on differences betw...
Summary: In this paper, the authors propose a symmetry-aware generative model for crystal generation. The proposed Wyckoff diffusion model generates a prototype based on elements taking Wyckoff positions instead of 3D positions defined in a unit cell, as a few methods in the literature do. They show that using this with a disc...
Rebuttal 1: Rebuttal: We greatly appreciate the comments from the reviewer that have helped improving the clarity of the paper and improved the numerical evaluation. We are happy that the reviewer agrees with us that using discrete diffusion for generation of materials based on Wyckoff positions is a reasonable and use...
Summary: The paper introduces WYCKOFFDIFF, a novel framework for generating crystal structures using a discrete diffusion process that inherently preserves symmetry. By representing crystal protostructures based on Wyckoff positions, the method partitions these positions into constrained (fixed) and unconstrained (flex...
Rebuttal 1: Rebuttal: We are very happy to see that the reviewer thinks our method is both original and interesting. We address the concerns related to the comparison to CDVAE and the proposed ablation study below. ## While generating many novel structures, CDVAE does not generate symmetrical materials, which are the...
Summary: This paper proposes a novel generative model, WyckoffDiff, for generating Wyckoff representations of crystal materials. By applying discrete diffusion models directly to Wyckoff representations, WyckoffDiff can generate diverse protostructures for crystal materials. Additionally, WyckoffDiff leverages GNN mode...
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, and we are happy to see that the reviewer agrees that incorporating symmetries is crucial for efficient material generation, and that our approach of applying discrete diffusion for working with the Wyckoff representation is reasonable. Below, we address a...
null
null
null
null
null
null
The dark side of the forces: assessing non-conservative force models for atomistic machine learning
Accept (oral)
Summary: This paper investigates the implications of using machine learning models that predict non-conservative forces for atomistic simulations. Non-conservative models predict interatomic forces directly, rather than computing them as the derivative of a potential energy, which offers computational advantages but vi...
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestion to investigate a solvated peptide, and have done so: we studied alanine dipeptide solvated in water, and additionally a benzene molecule adsorbed on a graphene surface, beyond the homogeneous cases of graphene, amorphous carbon and aluminium that we had alr...
Summary: The paper presents a systematic study of non-conservative force models for atomistic machine learning. Traditionally, forces are computed as the derivatives of potential energies to enforce symmetries and conservation laws. Several recent studies, however, directly predicted forces and learned the conservatio...
Rebuttal 1: Rebuttal: We thank the reviewer for pointing out additional relevant literature, which we have added to the manuscript. We also added the recent related work by Eissler et al. (arXiv:2503.01431), in which a non-conservative and non-equivariant transformer model is trained on a large dataset and used to stud...
Summary: This paper critically evaluates the implications of using non-conservative (NC) force models for learning machine learning interatomic potentials (MLIPs). This paper investigates the general accuracy of the NC models' forces, the effect of using NC force models on measured values of specific properties of inte...
Rebuttal 1: Rebuttal: We thank the Reviewer for their comments and questions. To provide more context, the use of multiple time-stepping to reduce the cost of a simulation while matching the accuracy of a “slow” potential is well-established in the molecular dynamics community. It was originally introduced to reduce th...
null
null
null
null
null
null
null
null
Approximating Nash Equilibria in General-Sum Games via Meta-Learning
Reject
Summary: Finding exact Nash equilibria (NE) in general-sum games is known to be PPAD-complete. In contrast, regret minimization is a well-established method for learning equilibrium strategies, but it only guarantees convergence to coarse correlated equilibria (CCE; a weaker equilibrium notion than Nash equilibrium) ...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewers for their time spent to help improve our work. We appreciate all the comments and will integrate them into a revised version of the paper. Let us address the questions and comments raised. The main theoretical conclusion of our paper is that if one...
Summary: This paper addresses the approximation of NEs in general-sum normal-form and especially extensive-form games. While regret-minimization algorithms only guarantee convergence to a CCE, this paper proposes NPCFR, which uses neural networks to parameterize such an algorithm. The parameterization is hard-coded t...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewers for their time spent to help improve our work. We appreciate all the comments and will integrate them into a revised version of the paper. Let us address the questions and comments raised. Thank you for pointing out the negative CCE Gap example, we...
Summary: This work provides a meta-learning strategy to guarantee convergence to a CCE (with low correlation) of no-regret learners for a repeatedly played n-player general-sum game. Claims And Evidence: Proofs for theoretical results are provided. However, I am not able to verify these proofs. (see questions to the a...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewers for their time spent to help improve our work. We appreciate all the comments and will integrate them into a revised version of the paper. Let us address the questions and comments raised. 1) We make no assumptions about the utility function. The r...
Summary: The paper proposes a new method for finding approximate Nash equilibria of n-player extensive-form games. The method uses regret learners. Regret learners can be proved to converge to coarse-correlated equilibria (CCE). To make the CCE closer to Nash, the paper proposes that we train a regret minimizer. The pa...
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewers for their time spent to help improve our work. We appreciate all the comments and will integrate them into a revised version of the paper. Let us address the questions and comments raised. The reason why we don't compare with algorithms such as Lem...
null
null
null
null
null
null
Local Identifying Causal Relations in the Presence of Latent Variables
Accept (spotlight poster)
Summary: The paper proposes both sufficient and necessary local characterizations for the invariant ancestor, invariant non-ancestor, and possible ancestor relationships, relying solely on local structure rather than the entire graph, even in the presence of latent variables. A novel algorithm, LocICR, leverages these ...
Rebuttal 1: Rebuttal: We are deeply grateful for the time you devoted to reviewing our manuscript. We appreciate that you consider our paper very solid. We hope that the following responses adequately address your concerns. **Q1.** ``Additional details regarding the learning of $\mathcal{L}_{V_i}$ should be...
Summary: The authors propose novel local characterizations that are necessary and sufficient for various types of causal relationships between two variables and bypass the need for global structure learning. Leveraging these local insights, the authors develop efficient and fully localized algorithms that accurately id...
Rebuttal 1: Rebuttal: We sincerely appreciate the time you dedicated to reviewing our paper, as well as your insightful and encouraging comments. Below, we provide our responses to your comments. **Q1.** ``I encourage the authors to compare against some of more recent baselines'' **A1.** Thank you for the suggestion...
Summary: This paper provides a local causal discovery method for inferring causal relations between a pair of variables. Specifically, given any two variables $X$ and $Y$, the proposed algorithm outputs one of the following four results: $X$ is an invariant non-ancestor of $Y$, $X$ is an explicit invariant ancestor of ...
Rebuttal 1: Rebuttal: We sincerely appreciate your thorough review and insightful comments. We hope the following response properly addresses your concerns. **Q1** Regarding ``...the theoretical contribution...'': **A1.** We would like to clarify that one of our paper’s main theoretical contributions is proposing ne...
Summary: The paper addresses the challenge of locally identifying causal relationships between arbitrary pairs of variables in a causal graph, without assuming the absence of hidden confounders. Existing methods typically rely on access to the entire graph or impose strong assumptions about latent variables or on the p...
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and thoughtful feedback, as well as your recognition of the importance, clarity, and empirical rigor of our work in the field of causal inference. We hope the following responses properly address your concerns. **Q1** ``introducing the correct definition...
$DPOT_{L_0}$: Concealing Backdoored model updates in Federated Learning by Data Poisoning with $L_0$-norm-bounded Optimized Triggers
Reject
Summary: The authors proposed a new backdoor attack method named DPOT. DPOT generates triggers by: 1) utilizing the sensitivity of the model's output concerning each pixel of the input to determine which positions should be selected, and 2) optimizing the pixel values of the triggers to maximize their effectiveness. Th...
Rebuttal 1: Rebuttal: 1. I am confused about the authors' threat model. I do not understand why a malicious client cannot manipulate their local training process (Lines 128-133). We found this Wikipedia page very helpful for understanding our threat model: https://en.wikipedia.org/wiki/Trusted_execution_environment. I...
Summary: This paper introduces $DPOT_{L_{0}}$, a new backdoor attack method in FL that dynamically optimizes an L0-norm-bounded trigger to conceal malicious model updates among benign ones. By focusing on data poisoning alone, the attack avoids reliance on model poisoning, which is increasingly impractical under Truste...
Rebuttal 1: Rebuttal: 1. What is the direct relationship between Section 5 and Section 4? There is no clear explanation provided to demonstrate how the design of DPOT is benefited from the theoretical analysis. I would appreciate a brief explanation here to emphasize how Section 5 can help the reader better understand ...
Summary: This paper presents a method to optimize backdoor attack triggers in federated learning systems. The proposed scheme is validated by experiments. The main contributions of this work include: 1. Proposing a simple and effective method for generating triggers, simultaneously optimizing the pixel values and posit...
Rebuttal 1: Rebuttal: 1. As can be seen from Figure 2, the generated trigger is distinguishable by the human eye. Does this lead to smart clients using simple data filtering/clear methods to suppress backdoor attacks? (Simple defenses against poisoning data being filtered and deleted need to be discussed to reveal the ...
Summary: The paper proposes $DPOT_{L_0}$, a backdoor attack strategy for Federated Learning that focuses on concealing malicious model updates. Unlike traditional backdoor attacks that use fixed triggers or obvious model poisoning, $DPOT_{L_0}$ dynamically optimizes an $L_0$-norm-bounded trigger for each round. This tr...
Rebuttal 1: Rebuttal: Thank you for the effort you put into providing feedback and advice! 1. Limited studies of MCR and Data poison rate Due to the space limitation, we discussed ablation studies of MCR and Data poison rate in Appendix L and N. We also discussed ablation studies of Trigger size and Non-iid degree i...
CurvGAD: Leveraging Curvature for Enhanced Graph Anomaly Detection
Accept (poster)
Summary: The paper introduces CurvGAD, a novel graph anomaly detection method that leverages mixed-curvature geometry to detect anomalies overlooked by conventional structural and attribute-based approaches. CurvGAD employs two parallel reconstruction pipelines: a curvature-equivariant pipeline that captures geometric ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback, valuable assessment and constructive suggestions. ### **[Q1] Scalability and Complexity** Thank you for this important point. We provide a detailed time complexity (and scalability) analysis in **Appendix C**. Specifically: 1. **Ap...
Summary: The paper proposes CurvGAD, a novel graph anomaly detection framework that incorporates curvature information through a mixed-curvature graph autoencoder consisting of two parallel pipelines: curvature-equivariant geometry reconstruction and curvature-invariant structure and attribute reconstruction. Claims A...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed and thoughtful comments. We deeply value the opportunity to clarify our contributions and improve our work based on your feedback. ### **[Q1] Why curvature?** The core aim of our work is to learn node representations on a **non-Euclidean manifo...
Summary: The authors propose a method for graph anomaly detection that directly takes into account the curvature of the graph, thus capturing the structure of the graph in an arguably more appropriate way than what can be done with Euclidean representations (i.e., the common approach of embedding nodes/graphs via GNNs)...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful feedback and constructive suggestions. We hereby address the raised concerns. ### **Computational Considerations for ORC** > Computing ORC is computationally expensive We appreciate your concern and apologize for not sufficiently emphasizing ...
Summary: This paper proposed a mixed-curvature graph autoencoder that detects curvature-based geometric anomalies. It combines geometry reconstruction, using a Riemannian encoder and Gaussian kernel-based decoder, with structure and attribute reconstruction, regularizing graph curvature through discrete Ollivier-Ricci ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their positive assessment and thoughtful questions. Please find our detailed responses below: > [Q1] Could the authors provide visualizations or quantitative results showing the curvature of the regularized graph? We thank the reviewer for pointing this out. W...
When Do LLMs Help With Node Classification? A Comprehensive Analysis
Accept (poster)
Summary: The paper systematically analyzes LLM-based node classification, introducing LLMNodeBed, a benchmark with 10 datasets and 8 algorithms for fair comparisons. It finds that LLMs significantly outperform traditional methods in semi-supervised settings but offer marginal gains in supervised learning, with Graph Fo...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your valuable feedback. --- > Definition of Reasoning, LLM-as-Reasoner and LLM-as-Predictor, and Graph Foundation Models (GFM) Sorry for any confusion on the terminologies. The categorization of LLM-as-Enhancer, LLM-as-Predictor, and GFMs is adopted from the...
Summary: This paper establishes a benchmark for the fair evaluation of different categories of LLM-based methods for node classification and uncovers some insights through performance analysis and comparison. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No proofs. Experimental...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful and constructive feedbacks. --- > E1: Experiments on KGs We thank the reviewer for suggesting the inclusion of Knowledge Graphs. However, datasets like FB15k-237 and WN18RR primarily focus on the **Link Prediction** task, whereas most LLM-ba...
Summary: In this paper, the authors conduct a fair and systematic comparison of LLM-based node classification algorithms. They developed LLMNodeBed, a comprehensive codebase and testbed for node classification using LLMs. Then, they conducted extensive experiments, training and evaluating over 2,200 models, to determine the ke...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful and constructive feedbacks. --- > Question 1: Model selection and determination Thank you for the opportunity to clarify the selection process and total number of models evaluated. For each baseline on a specific dataset, hyper-parameters w...
Summary: In this paper, the authors provide guidelines for leveraging LLMs to enhance node classification tasks across diverse real-world applications. The authors introduce LLMNodeBed, a codebase and testbed for systematic comparisons, featuring ten datasets, eight LLM-based algorithms, and three learning paradigms. T...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful and constructive feedbacks. --- > Weakness 1: Missed references Thank you for pointing out these additional related works. We have incorporated **GAugLLM** [Ref 1] and **GRENADE** [Ref 2] as baselines. Due to time constraints, we have comp...
Differentially Private Space-Efficient Algorithms for Counting Distinct Elements in the Turnstile Model
Accept (poster)
Summary: This paper studies the classical problem of counting distinct elements in a streaming (bounded space) setting, under differential privacy constraints in the continual release model (where the requirement is to provide an output at all times along the stream, while retaining differential privacy as usual). The ...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and valuable feedback. **Connections to heavy hitters:** We agree that there is a conceptual connection to the heavy hitters problem, as high-frequency elements can be viewed as heavy hitters, especially in the insertion-only setting. However, in...
Summary: The paper considers the problem of continually releasing private estimates of the number of distinct elements in a turnstile stream (with insertion and deletion of elements). It presents the first such algorithm that uses sublinear space while maintaining error similar to that of previous private algorithms (t...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and valuable feedback. On using Misra Gries to track elements with high occurrency instead of blocklisting, we note that our algorithm’s goal is not to output elements with high occurrency but simply to filter them out at every timestep. While it i...
Summary: This paper introduces the first differentially private algorithm for counting distinct elements in the turnstile model to obtain *sublinear* space-complexity. Their new approach is to introduce the notion of occurency (maximum number of times any element appears in a stream) and are able to achieve sqrt(W) add...
Rebuttal 1: Rebuttal: We thank the reviewer for the comment about the subroutines and other technical details of the main algorithm that currently appear in the appendix. Based on this feedback, we plan to reduce the space dedicated to the technical overview (Section 2) in the final version, and instead use the space t...
Summary: This work gives a new space-efficient algorithm for differentially private counting of distinct elements under the turnstile model of continual release. In this setting, the algorithm sees a length $T$ stream of insertions, deletions, or null updates of elements from a fixed domain. At each timestep of the str...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and valuable feedback. Regarding the comparison against Henzinger, Sricharan, and Steiner (2024), we do, in fact, compare our result to this work in Line 192 of the submitted version; however, we cited an earlier version of the paper, titled "Dif...
$\mathcal{V}ista\mathcal{DPO}$: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models
Accept (poster)
Summary: In this paper, the authors propose a Video Hierarchical Spatial-Temporal DPO (VistaDPO) mechanism, a DPO strategy to optimize the alignment between video and language in LVMs. VistaDPO enhances text-video preference alignment across three hierarchical levels: i) Instance Level, aligning overall video content...
Rebuttal 1: Rebuttal: Thank you all for your thoughtful and constructive comments! We are really encouraged to see that the reviewers appreciate some positive aspects of our paper, such as **strong performance** (Reviewer ```NY3g```, ```CzUw```, ```zu9j```, ```sRt7```), **high robustness** (Reviewer ```NY3g```, ```sRt7...
Summary: The paper tackles the problem of video large language models. The authors claim that existing methods for open-ended video-language understanding often suffer from misalignment with human intuition and video hallucination issues. In order to address these issues, they proposed VistaDPO, a new framework for Vid...
Rebuttal 1: Rebuttal: Thank you to all the reviewers for your thoughtful and constructive comments! We are really encouraged to see that the reviewers appreciate some positive aspects of our paper, such as **strong performance** (Reviewer ```NY3g```, ```CzUw```, ```zu9j```, ```sRt7```), **high robustness** (Reviewer ``...
Summary: This paper introduces VistaDPO, a method designed to enhance video-text preference alignment at three levels: instance, temporal, and perceptive. The authors also contribute VistaDPO-7k, a new dataset for DPO training, and demonstrate great performance improvements on various video benchmarks. Claims And Evi...
Rebuttal 1: Rebuttal: Thank you to all the reviewers for your thoughtful and constructive comments! We are really encouraged to see that the reviewers appreciate some positive aspects of our paper, such as **strong performance** (Reviewer ```NY3g```, ```CzUw```, ```zu9j```, ```sRt7```), **high robustness** (Reviewer ``...
Summary: In this work, a DPO-based framework has been proposed for Video-LLMs for different tasks including video QA, hallucination, and captioning. Inspired by DPO-based methods for LLMs, a spatio-temporal-aware DPO framework is proposed that optimizes the preference of the Video-LLM across three axes: 1) instance l...
Rebuttal 1: Rebuttal: Thank you to all the reviewers for your thoughtful and constructive comments! We are really encouraged to see that the reviewers appreciate some positive aspects of our paper, such as **strong performance** (Reviewer ```NY3g```, ```CzUw```, ```zu9j```, ```sRt7```), **high robustness** (Reviewer ``...
Optimal Transport Barycenter via Nonconvex-Concave Minimax Optimization
Accept (poster)
Summary: This paper investigates the Wasserstein barycenter problem. The approach involves transforming the problem into the dual formulation of Kantorovich's optimal transport problem, which is then reformulated as a nonconvex-strongly concave minimax optimization problem. To solve this, the standard gradient descent-...
Rebuttal 1: Rebuttal: Thanks for your constructive comments and careful reading. We will fix minor errors in the revision. Please find our response to your main questions below. >The author should emphasize that the problem is not merely concave but strongly concave, as this distinction is crucial for ensuring the val...
Summary: The paper proposes a novel Wasserstein Descent H1-Ascent algorithm for the barycenter on W2-space, which is an important problem without an accurate and computationally efficient solution. The proposed method adopts a gradient on the functional in the dual formulation of the W2 optimization and applies it in the min-...
Rebuttal 1: Rebuttal: Thanks for your constructive comments. In the revision, we will cite the paper [Daaloul 2021], which proposed an interesting notion for approximating and sampling from unregularized Wasserstein barycenter. Please find our response to your main questions below. >the authors also need to clarify w...
Summary: This work introduces a coordinate optimization algorithm for the computation of Wasserstein barycentres for probability measures discretized on compact Euclidean domains (such as images). By reformulating the traditional barycenter problem as a nonconvex-concave minimax optimization problem and employing a Gra...
Rebuttal 1: Rebuttal: Thanks for your constructive comments and careful reading. We will fix minor errors in the revision. Please find our response to your main questions below. > As for experiment 3, here it seems that spatial notions are introduced > in the OT problem. Space, understood as the coordinate (x, y) of t...
Summary: This paper introduces the Wasserstein-Descent H-Ascent (WDHA) algorithm, a primal-dual method for computing the exact Wasserstein barycenter with nearly linear time and space complexity. WDHA alternates between Wasserstein and Sobolev optimization geometries for the primal barycenter and dual Kantorovich poten...
Rebuttal 1: Rebuttal: Thanks for your constructive comments. Please find our response to your questions below. > I think \[1\] should be discussed that one can resort to using first > order methods such as subgradient descent on the dual for exact > barycenter calculations. Thanks for pointing out this relevant first...
SafeMap: Robust HD Map Construction from Incomplete Observations
Accept (poster)
Summary: This paper proposes SafeMap, which is designed to address missing camera views in HD map construction. A Gaussian-based perspective view reconstruction module and a distillation-based panoramic BEV feature correction module are presented. Experiments on the nuScenes and Argoverse2 datasets demonstrate the effecti...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your thoughtful and detailed feedback. We appreciate your recognition of our *"design"*, *"formulation"*, *"experimental rigor"*, and *"supplementary material"*. Below, we address each of your comments and suggestions in detail. --- > ***`Q1`: "Investigate whic...
Summary: This paper presents a method that tackles the challenge of learning-based multi-view HD map reconstruction when one or more input views are missing. The proposed approach uses two new modules, G-PVR and D-bEVC, to aggregate information from available views and predict or correct the missing features, enhancing...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed and constructive feedback. We are encouraged that you found *"the experimental results convincing"* and *"the supplementary material informative"*. Below, we address your insightful concerns point by point and will incorporate the clarifications int...
Summary: This paper presents SafeMap, an HD map prediction model for autonomous driving scenarios. Its uniqueness lies in its focus on robustness, particularly in handling incomplete multi-camera input views, which may occur in real-world scenarios due to sensor failure or occlusions. The framework integrates two key m...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive evaluation and thoughtful comments. We appreciate your recognition that our work *"provides new insights into fault-tolerant HD map reconstruction for autonomous driving"*. Below, we address your specific concerns in detail and will incorporate corr...
Summary: The paper introduces SafeMap, a novel framework designed to enhance the robustness of high-definition (HD) map construction for autonomous driving, particularly in scenarios where camera views are incomplete or missing. The key contributions of SafeMap are two innovative modules: the Gaussian-based Perspective...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We greatly appreciate your recognition of our *"technical novelty"*, *"robustness in incomplete scenarios"*, *"experimental design"*, and *"plug-and-play applicability"*. Below, we address your main questions and concer...
Accurate Identification of Communication Between Multiple Interacting Neural Populations
Accept (poster)
Summary: The paper introduces Multi-Region Latent Factor Analysis via Dynamical Systems (MR-LFADS), a novel model for accurately identifying inter-region neural communication in multi-region recordings. MR-LFADS extends the existing LFADS framework by jointly inferring region-specific inputs from unrecorded brain areas...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. We’re encouraged by the reviewer’s comments highlighting MR-LFADS as a novel extension of LFADS, and by the positive assessments on our synthetic dataset. Below, we respond to the reviewer's concerns on the lack of comparison to...
Summary: In the paper "Accurate Identification of Communication Across Multiple Interacting Neural Populations" the authors propose an extension of LFADS to multiple regions to model region-specific inputs and cross-region interactions. The authors test their model on synthetic data with ground truth network connectivi...
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough and constructive feedback. We appreciate the positive assessment of the manuscript’s contribution, clarity, and experimental design. Below, we respond to each of the points raised. Rebuttal figures referenced throughout our response can be accessed at the f...
Summary: This paper addresses the challenge of identifying connectivity patterns between brain regions from neural recordings. It introduces Multi-Region LFADS (MR-LFADS), an extension of the LFADS framework (Latent Factor Analysis via Dynamical Systems) for modeling multiple interacting neural populations. The key inn...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and insightful comments. We are encouraged that our model was recognized as a valuable contribution and that the key claims were seen as well supported. Below, we respond to each point raised. Rebuttal figures referenced throughout our response can be acces...
Summary: The paper presents a modeling approach for inferring dynamics and communication between brain areas in multi-region neuroscience recordings. The approach extends the LFADS model to multiple regions via the introduction of messages between brain regions that are functions of the inferred firing rates. This has ...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. We also appreciate the reviewer’s positive assessment of our effort to address limitations in common architectural choices for communication models. Below, we respond to each of the points raised. Rebuttal figures referenced thr...
Gridded Transformer Neural Processes for Spatio-Temporal Data
Accept (spotlight poster)
Summary: The authors present an approach for spatio-temporal modelling. This involves encoding point observations into grid based “pseudo-tokens” using an attention mechanism and processing these pseudo tokens using a Transformer Neural Process approach. A grid decoder also using an attention mechanism allows predictio...
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns below. **Similarity to Aardvark** - While our method shares similarities with Aardvark, we highlight key differences: - **Methodological advancement, rather than weather-specific**: Aardwark is specifically designed for weather modelling, while ...
Summary: This paper introduces Gridded Transformer Neural Processes (TNPs), a new framework for modeling large-scale spatio-temporal data. Existing approaches, such as Conditional Neural Processes (CNPs), Convolutional CNPs (ConvCNPs), and Transformer Neural Processes (TNPs), struggle with either scalability or handlin...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review and address their concerns below. **Claims and evidence** **Out-of-Distribution (OOD) and Translation Equivariance (TE)** - We perform TE experiments in the station and wind speed experiments. In the former, where we use stations from all regions g...
Summary: The work introduces a novel approach for modeling large-scale spatio-temporal data without being limited to fixed-resolution grids. Instead of traditional methods that constrain inputs, the authors build on transformer neural processes (TNPs) by developing gridded pseudo-token TNPs. These models use specialize...
Rebuttal 1: Rebuttal: Thank you for your review and for describing our experimental design “sound and well-aligned with the goals of large-scale spatio-temporal modelling.” We address your concerns below. **Claims and Evidence** 3. We believe there may be some confusion here because, as mentioned in the caption of Ta...
PipeOffload: Improving Scalability of Pipeline Parallelism with Memory Optimization
Accept (poster)
Summary: In this paper, the authors present memory offload strategies for pipeline-parallel training. The authors suggest that variables with long lifespans should be offloaded, and then introduce a selective offload strategy that decreases peak activation memory in a better-than-linear manner. Moreover, that offlo...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedbacks. See response to individual points below. > Whether the proposed strategy is suitable for other pipeline parallel strategy like DualPipe? The authors can present some analysis theoretically and empirically. TL;DR: Yes, but different devices will...
Summary: This paper proposes PipeOffload, a novel selective offload strategy to decrease peak activation memory in pipeline parallelism training. PipeOffload greatly improves the scalability of pipeline parallelism, and makes pipeline parallelism a stronger alternative than tensor parallelism. Claims And Evidence: Cla...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedbacks. See response to individual points below > The selective offload strategy introduces additional complexity in managing activation lifespans, which could complicate implementation and maintenance. Every new feature requires some implementation, h...
Summary: This paper proposes a method for optimizing large language model (LLM) training by leveraging pipeline parallelism (PP) focusing on selectively offloading activations. It identifies the opportunity to overlap computing and data transfer during the forward and backward passes, and proposes selective offloading ...
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments. We respond to individual points from your review below. > The paper lacks clarity regarding its novel contributions ... the distinction between established and novel concepts, such as Selective Offload and Zero-Bubble Strategy, is not adequately de...
Summary: In this work, the authors introduced PipeOffload, a pipeline parallelism strategy that offloads activations with negligible overhead. The evaluation results demonstrate that PipeOffload uses less memory than the 1F1B method, enabling LLM training with larger model sizes and longer sequence lengths. Claims And...
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedbacks. Please see individual points below. > Including learning curves could enhance the understanding of convergence and correctness. We checked that our methods produces identical loss curves compared to vanilla 1F1B-I. We'll include this in the PDF....
GRAIL: Graph Edit Distance and Node Alignment using LLM-Generated Code
Accept (poster)
Summary: This paper presents an evolutionary algorithm that uses large language models to generate better programs for calculating heuristic scores for graph edit distance. With the generated scores, the LLM-discovered algorithm achieved better heuristic prediction than other GED methods. Claims And Evidence: (+) All clai...
Rebuttal 1: Rebuttal: **W1(a) Take the top-15 performing GED heuristics from \[Blumenthal et al. 2020\] and evaluate the performance of that "ensembled" algorithm** **Ans:** First, we would like to clarify that several non-neural heuristics also consider multiple solutions. Specifically, branch-and-bound methods (e.g....
Summary: In this paper, the authors introduced a new paradigm for computing GED by leveraging LLMs to autonomously generate programs. Unlike traditional methods that rely on neural networks and require computationally expensive, NP-hard ground truth data, the proposed method employs a self-evolutionary strategy to discover ...
Rebuttal 1: Rebuttal: **W1: Is the role of the LLM merely to generate code programs? What are the advantages of this approach compared to manually collecting a series of graph edit distance programs?** **Ans:** Yes, the LLM's role is to generate code as responses to strategically curated prompts. This approach of...
Summary: This paper presents a novel paradigm for computing Graph Edit Distance (GED) by harnessing the capabilities of LLMs to autonomously generate executable programs. Departing from conventional approaches that depend on neural networks and computationally intensive, NP-hard ground truth data, our methodology adopt...
Rebuttal 1: Rebuttal: Thank you for the constructive feedback on our work. Below, we outline the changes made to address the reviewer's concerns. If the reviewer finds our responses satisfactory, we would sincerely appreciate a reconsideration of our paper’s rating. ------------- **W1.** **I am concerned that the eff...
Summary: This paper addresses the problem of computing graph edit distance. In contrast to existing neural and non-neural methods, the authors propose an LLM-based approach, referred as GRAIL, by transforming the graph edit distance computation into two sub-problems - (1) Weight selection in a bipartite graph and (2) b...
When Model Knowledge meets Diffusion Model: Diffusion-assisted Data-free Image Synthesis with Alignment of Domain and Class
Accept (poster)
Summary: Working on the limitations regarding the inefficiency of open-source pre-trained models, which generate samples that significantly deviate from the training data, the authors proposed Diffusion-Assisted Data-Free Image Synthesis (DDIS) in the hope of improving synthetic image quality. The authors first extract...
Rebuttal 1: Rebuttal: Thank you for your detailed comments. We sincerely appreciate your insights and hope to address your concerns. **W1)** A well-trained generator captures rich knowledge about the training data, enabling it to produce desired images through appropriate combinations of learned features. As a result,...
Summary: - This paper proposes DDIS, a data-free image synthesis method that leverages a pre‐trained text-to‐image diffusion model as an image prior. - It introduces two key components: Domain Alignment Guidance (DAG), which uses the classifier’s batch normalization statistics to steer the diffusion sampling toward the...
Rebuttal 1: Rebuttal: Thank you for providing meaningful feedback. We appreciate your thoughtful consideration. We have tried our best to provide responses to address your comments. **Claims and Evidence** Please refer to our response **W1)** to Reviewer #83BW for a detailed explanation. As a result of supporting th...
Summary: This paper focuses on Data-Free Image Synthesis and claims that existing methods usually produce samples that deviate significantly from the training data distribution. To address this problem, the authors aim to leverage a text-to-image diffusion model to extract knowledge about the learned distribution. T...
Rebuttal 1: Rebuttal: We would like to express our gratitude for your thoughtful comments and valuable feedback. We understand your concerns, and hope the following response addresses them. | Method | DeepInversion | PlugInInversion | Ours | |:--------------------------------------...
Summary: This paper proposed a novel Diffusion-assisted Data-free Image Synthesis method designed to improve the quality of images generated without access to training data. Traditional methods struggle to approximate the original data distribution due to the absence of natural image priors. DDIS overcomes this by leve...
Rebuttal 1: Rebuttal: We would like to express our gratitude for your thoughtful comments and valuable feedback. Our response to the weakness is as follows.

| Method (C10) | IS | FID | Precision | Recall | KD Result (\%) |
|:------------------:|:------:|:-------:|:---------:|:-------:|:---------:|
|...
Bridging Fairness and Efficiency in Conformal Inference: A Surrogate-Assisted Group-Clustered Approach
Accept (poster)
Summary:
- This paper presents a novel approach to fair conformal inference.
- Specifically, the proposed method is based on group-conditional conformal inference.
- The method rests on two key ideas: (1) clustering score values and (2) the use of an influence function for prediction sets.
- Experimental resu...
Rebuttal 1: Rebuttal: Thanks for your careful review. We provide responses to your questions: **Claims And Evidence** 1. **Fair coverage is crucial in socially consequential domains**, where prediction intervals can help guide decision-making related to access to resources, opportunities, or fair treatment. For examp...
Summary: The authors introduce a new strategy for achieving equitable group coverage in conformal inference. The approach consists of two components: (1) rather than using raw sensitive groups, they cluster groups with similar conformal score distributions via K-means and (2) they leverage surrogate information correla...
Rebuttal 1: Rebuttal: Thanks for the comprehensive review. We now provide detailed responses to your concerns. **Theoretical Claims** 1. $V_{eff}^r$ and $V_{eff}$ are referred to as the **variances** of the estimators. Therefore, the more predictive the surrogates are, the smaller $V_{eff}$ will be and the larger (mor...
Summary: This work studies how to provide (conformal) prediction intervals for individual outcomes when additional information, in the form of surrogate outcomes, is available for both source and target datasets. The goal is to provide tight (short) prediction intervals that also achieve nominal coverage for all subgro...
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and provide itemized responses to your questions. **Theoretical Claims** & **Questions For Authors** 1. The asymptotic coverage in Theorem 4.6 should be interpreted as $P_{V\sim P_{D_0}}(y \in C(W;r_{\alpha}^k)\mid Z = z) \geq 1-\alpha - o(1)$ as $...
Summary: This paper introduces a conformal inference algorithm (SAGCCI) aimed at producing prediction sets that satisfy group-conditional coverage guarantees. The algorithm is intended to be robust to missing-information settings, where sometimes the primary outcome (around which we are trying to produce prediction sets)...
Rebuttal 1: Rebuttal: Thanks for the careful review and nice comments! We here provide detailed responses to your comments. **Experimental Designs Or Analyses** & **Other Comments Or Suggestions** 1. The **size of the groups was $M=3$**. We have conducted **additional experiments with a larger number of groups (e.g.,...
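As background for the clustered, group-conditional calibration these reviews discuss, here is a minimal NumPy sketch of the standard group-conditional split-conformal baseline that such methods refine: calibrate a separate conformal quantile per group. Function and variable names are ours, not SAGCCI's.

```python
import numpy as np

def group_conformal_quantiles(scores, groups, alpha=0.1):
    """Per-group split-conformal thresholds: for each group, take the
    ceil((n + 1) * (1 - alpha))-th smallest calibration score."""
    qs = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        n = len(s)
        k = min(n - 1, int(np.ceil((n + 1) * (1 - alpha))) - 1)
        qs[g] = s[k]
    return qs

# Toy calibration set: absolute residuals as conformity scores.
rng = np.random.default_rng(1)
scores = np.abs(rng.standard_normal(200))
groups = rng.integers(0, 2, 200)
qs = group_conformal_quantiles(scores, groups, alpha=0.1)
```

A prediction set for a new point in group `g` is then all labels whose score falls below `qs[g]`; clustering groups with similar score distributions, as the paper proposes, pools calibration data before this step.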
TinyMIG: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging
Accept (poster)
Summary: This paper addresses the challenge of single-source domain generalization in medical imaging by proposing a novel framework that transfers generalization knowledge from large vision foundation models to specialized smaller models, thereby eliminating the inference burden of large models during the testing phas...
Rebuttal 1: Rebuttal: Dear Reviewer peNL: **1. discuss the novelties with some strongly related works** We sincerely appreciate your valuable feedback. Below, we clarify the key distinctions and advantages of our method compared to the cited works from two perspectives: motivation and implementation. These points wil...
Summary: This paper proposes to address the challenge of single-source domain generalization in the field of medical imaging. The authors propose to leverage the generalization knowledge embedded in current vision foundation models to enhance the generalization capabilities of smaller-capacity segmentation models. They...
Rebuttal 1: Rebuttal: Dear Reviewer 3XUJ, We sincerely appreciate your comprehensive review and highly constructive feedback on our manuscript. Below we provide point-by-point responses to address all of your concerns: 1. **phase component attention mechanism** We sincerely appreciate your feedback. In the field of d...
Summary: The authors propose a transfer generalization framework, TinyMIG, to address the issue of single-domain generalization caused by the diversity of imaging devices and variabilities among data collection centers. To capture both global feature distribution and local fine-grained details, they developed a global ...
Rebuttal 1: Rebuttal: Dear Reviewer uvPv, **R-W1 KL for Domain-invariant Features** We sincerely appreciate your insightful concerns. The bidirectional KL divergence serves two key purposes in our approach: (1) It ensures stable and consistent semantic outputs before and after feature perturbation. This is because ...
Summary: The authors propose a method to efficiently make use of generalizable features of visual foundation models by distilling learned distributions to a smaller, more efficient model. To this end, they propose four main components:
1. Global Distribution Consistency Learning: Forcing the “specialized model” (i.e. stu...
Rebuttal 1: Rebuttal: Dear Reviewer mFeP, We sincerely appreciate your comprehensive review and highly constructive feedback on our manuscript. Below we provide point-by-point responses to address all of your concerns: **1.Some Typos**: *(1) Revised: Comparative Results of Perturbation Methods vs. UFDM (Lines 368-371...
An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks
Accept (poster)
Summary: The paper proposes an efficient matrix multiplication algorithm for cases where one of the operands is binary or ternary. The binary weight matrices are partitioned into multiple blocks to exploit their intrinsic low-dimensional structure, leading to reduced storage and computation during inference. Within eac...
Rebuttal 1: Rebuttal: **Theoretical Claims:** Thanks for bringing this to our attention. We explicitly include all the proofs in the updated version. The proof of Theorem 4.4 was removed due to its similarity to Theorem 4.3 (with a different value of $k$). **Weaknesses:** *RSR and RSR++ involve permutation of the in...
Summary: This paper describes and complexity-analyzes an efficient algorithm (RSR/RSR++) for multiplication with binary (and by derivation, ternary) matrices. CPU and GPU implementations are provided, and inference with ternary-weight LLMs is demonstrated. Claims And Evidence: Yes. Methods And Evaluation Criteria: Ye...
Rebuttal 1: Rebuttal: Thank you for your time and the careful review of our work. We appreciate it. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal, and I maintain my original evaluation.
Summary: This work proposes an efficient algorithm for binary/ternary matrix multiplication, where the binary/ternary weights are known ahead of time. The algorithm first transforms the ternary matrix into a sum of binary matrices, and then compresses the binary matrices into a set of aggregate values which contribute ...
Rebuttal 1: Rebuttal: **Weaknesses:** *While the GPU implementation attains speedups...* Thank you for the insightful suggestion—this is indeed a valid point. In this work, our primary focus was on the algorithmic design and application-level evaluation of the RSR algorithm. We compared its performance across various...
Summary: The paper introduces two algorithms, RSR and RSR++, designed for accelerating matrix multiplication in binary/ternary LLMs during inference. By preprocessing weight matrices into column blocks, permuting rows lexicographically, and computing segmented sums to create optimized indices, the proposed methods redu...
Rebuttal 1: Rebuttal: **Methods And Evaluation Criteria:** Thank you for the helpful suggestion. We agree that performance evaluation in other settings—such as generating full sequences rather than single tokens—would also provide valuable insights. We will include additional experiments in the updated version to show...
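To make the row-sorting idea in these summaries concrete, here is a generic NumPy sketch that reuses computation between lexicographically adjacent rows of a binary matrix: after sorting, each row's dot product is the previous row's plus corrections on the few differing bit positions. This illustrates the principle only; it is not the paper's RSR/RSR++ construction, and the names are ours.

```python
import numpy as np

def binary_matvec_sorted(W, x):
    """Multiply a 0/1 matrix W by x, reusing work between
    lexicographically adjacent rows."""
    order = np.lexsort(W.T[::-1])  # lexicographic row order (column 0 primary)
    y = np.empty(W.shape[0])
    prev = np.zeros(W.shape[1], dtype=W.dtype)
    acc = 0.0
    for i in order:
        diff = np.nonzero(W[i] != prev)[0]          # positions that changed
        acc += ((W[i, diff] - prev[diff]) * x[diff]).sum()
        y[i] = acc                                   # now acc == W[i] @ x
        prev = W[i]
    return y

rng = np.random.default_rng(0)
W = rng.integers(0, 2, (8, 6))
x = rng.standard_normal(6)
assert np.allclose(binary_matvec_sorted(W, x), W @ x)
```

The savings grow when many sorted rows share long prefixes, which is exactly the low-dimensional block structure the paper exploits with its segmented sums.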
AUTOCIRCUIT-RL: Reinforcement Learning-Driven LLM for Automated Circuit Topology Generation
Accept (poster)
Summary: This paper proposes a reinforcement learning (RL)-based framework, AutoCircuit-RL, for automated analog circuit topology generation. It consists of two main stages: instruction fine-tuning and reinforcement learning optimization. During the instruction fine-tuning stage, supervised learning techniques are used...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s detailed feedback and address the concerns as follows: ## Scalability Concerns: Our experiments primarily focus on 4–5 component circuits, with a few-shot evaluation for 6–10 component designs using only 1,000 training samples. Although success rates drop for larger c...
Summary: This work proposes a framework for automating analog circuit topology synthesis using RL. It has two phases. In the instruction tuning phase, LLM learns to generate initial circuit topology, ensuring feasibility under basic constraints. In the RL phase, LLM iteratively optimizes topologies using reward models,...
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. ## Device Sizing and Evaluation Metrics: Our approach primarily focuses on generating circuit topologies, which includes selecting the devices and their interconnections. The device parameters (e.g., capacitor, inductor, and MOSFET sizes) are set based on s...
Summary: The authors proposed a novel framework, AUTOCIRCUIT-RL (AC-RL), for automating analog circuit topology synthesis using reinforcement learning (RL). The architecture operates in two main phases: instruction fine-tuning and RL refinement. Through extensive experimental validation, the authors demonstrated that t...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. ### Benchmarking Datasets: To the best of the authors' knowledge, there are currently no standard benchmarking datasets in the domain of analog circuit topology generation. All the recent publications in this domain would spend considerable effort in curating...
Summary: The paper introduces AUTOCIRCUIT-RL, an RL-based LLM framework for automating analog circuit synthesis. The framework operates in two phases: instruction tuning and RL refinement. The authors claim that AUTOCIRCUIT-RL outperforms existing baselines, generating ~12% more valid circuits, improving efficiency by ...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s constructive feedback and address each concern below. ## Comparison with Auto-SPICE [1]: Our work fundamentally differs from Auto-SPICE, which focuses on generating SPICE netlists via domain-specific prompt engineering for dataset creation. It aims to generate large-...
Scalable Gaussian Processes with Latent Kronecker Structure
Accept (poster)
Summary: This paper considers Gaussian process (GP) regression with data points on a Cartesian grid and proposes a new way of constructing GP Gram matrices to deal with missing values. The proposed method represents a Gram matrix with missing values as a projection of a latent exact Gram matrix, enabling efficient GP...
Rebuttal 1: Rebuttal: Dear Reviewer 1bD1, Thank you for your time and effort spent on reviewing our paper. In the following, we provide responses to your specific concerns and questions. --- > The relationship to related works regarding the handling of missing values seems not to be clearly presented. For example, h...
Summary: This work proposes an exact latent Kronecker structure for Gaussian Processes, by expressing the covariance matrix of the observed values (not restricted to a grid structure) as the projection of a latent Kronecker product matrix (defined on a grid structure). Namely, it allows defining a Kronecker product-ba...
Rebuttal 1: Rebuttal: Dear Reviewer UJyJ, Thank you for your time and effort spent on reviewing our paper. In the following, we provide answers to your specific questions. --- > Can the authors elaborate on why results in Table 1 showcase improved predictive RMSE performance of SVGP, yet not best negative log-likeli...
Summary: In multitask or spatio-temporal Gaussian process regression, scalability issues lead users to adopt a Kronecker, also described as a factored, structure in the covariance between tasks, space, and time. However, this computational gain requires that observations be available for all tasks and inputs, or across...
Rebuttal 1: Rebuttal: Dear Reviewer F5JD, Thank you for your time and effort spent on reviewing our paper. In the following, we address your suggestions, concerns, and questions. --- > I couldn’t find in the paper whether multiple runs were performed with different selections of missing data for the inverse dynamics...
Summary: The paper improves on the scalability of Kronecker product large scale Gaussian processes (GPs). Specifically, Kronecker product GPs cannot easily deal with missing observation outputs in the gridded data. The paper proposes a trick based on a projection operation that computes the full Kronecker product kern...
Rebuttal 1: Rebuttal: Dear Reviewer Zcew, Thank you for your time and effort spent on reviewing our paper. In the following, we provide answers to your specific questions. --- > (i) It is unclear if the method is still applicable in high-dimensional spaces since we know that for Kronecker-product formulations the nu...
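The projection trick discussed in these reviews, treating the observed covariance as P (K1 ⊗ K2) Pᵀ without ever materializing the full Kronecker product, can be sketched in a few lines of NumPy. This is a simplified illustration; the function names are ours, not the paper's.

```python
import numpy as np

def kron_matvec(A, B, v):
    # (A ⊗ B) v via the reshape identity: with row-major ordering,
    # (A ⊗ B) vec(X) = vec(A X B^T), so no Kronecker product is formed.
    X = v.reshape(A.shape[1], B.shape[1])
    return (A @ X @ B.T).reshape(-1)

def projected_kron_matvec(A, B, obs_idx, v):
    # Matvec with P (A ⊗ B) P^T, where P selects observed grid entries:
    # scatter v into the latent grid (P^T v), multiply, gather back (P ...).
    full = np.zeros(A.shape[1] * B.shape[1])
    full[obs_idx] = v
    return kron_matvec(A, B, full)[obs_idx]

# Sanity check against the dense construction on a tiny 3x4 grid.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A @ A.T   # SPD, like a kernel matrix
B = rng.standard_normal((4, 4)); B = B @ B.T
obs_idx = np.array([0, 2, 5, 7, 11])           # observed positions on the grid
v = rng.standard_normal(obs_idx.size)
dense = np.kron(A, B)[np.ix_(obs_idx, obs_idx)] @ v
assert np.allclose(projected_kron_matvec(A, B, obs_idx, v), dense)
```

Such matvecs can be plugged into iterative solvers (e.g. conjugate gradients), so inference cost scales with the factor sizes rather than the size of the full grid.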
DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning Based on Constant-Overhead Linear Secret Resharing
Accept (poster)
Summary: This paper introduces a distributed matrix mechanism for federated learning. It is a local DP mechanism in which each FL client adds its own noise instead of the server adding it. To achieve this goal, the authors use cryptographic protocols for secret sharing. The proposed framework is ro...
Rebuttal 1: Rebuttal: 1) Why discrete Gaussians?: Since our novel techniques are independent of the noise distribution used, we just chose one distribution that we believe to be the most well-studied/understood: the discrete Gaussian distribution. Indeed, we just need CX + Z to be differentially-private, where Z can b...
Summary: The paper proposes a Linear Secret Sharing Protocol with a Constant-Overhead Linear Secret Resharing method to facilitate the matrix mechanism for preserving DP in Federated Learning. In the case of Distributed FL, in which the clients do not trust the server, the clients will generate the noise and share it with each ot...
Rebuttal 1: Rebuttal: 1) Construction of matrix A: The matrix A (and its factorization A=BC) is a public value that can be constructed before training begins by the central server (which does still exist in our framework, though importantly privacy holds even with respect to this server; see, e.g., right of Figure 1)....
Summary: The paper focuses on differentially private (DP) federated learning (FL), with the main contribution of formulating a distributed version of the well-known matrix mechanism. As the continuous Gaussian mechanism typically used with the centralized matrix mechanism does not work well with cryptographic primitive...
Rebuttal 1: Rebuttal: 1) How baseline is run: We use the exact experiment details specified in the respective papers: (Section 5; Kairouz et al., 2021) for Local DP DDG and (Section 6 and Appendix I; Choquette-Choo et al., 2023a) for the Central DP mechanisms. For local DP DDG, the FEMNIST norm clip is set to 0.03 and...
Summary: This paper presents a protocol for long-running secure aggregation with matrix-mechanism DP. The protocol consists of a sequence of committees/data-providers passing the internal secrets of the algorithm along in secret-shared form. The main theoretical innovation is the novel secret resharing scheme used to p...
Rebuttal 1: Rebuttal: 1) Dropout Resistance: We thank the reviewer for pointing this out. We did not include the details for dropout resistance in the main body to make the (already complex) protocol easier to understand, but we should have included it in the appendix. However, the reviewer is correct; there is a sim...
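For readers unfamiliar with the resharing pattern these reviews mention (committees passing secrets along in secret-shared form), here is a minimal additive-secret-sharing sketch. The paper's constant-overhead linear resharing is more sophisticated than this generic version; the modulus and names here are illustrative.

```python
import random

P = 2**61 - 1  # a prime modulus (illustrative choice)

def share(x, n, rng):
    """Additively secret-share x into n shares summing to x mod P."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reshare(shares, n_new, rng):
    """Each holder re-shares its own share to the new committee;
    the new committee's shares are the column-wise sums."""
    subshares = [share(s, n_new, rng) for s in shares]
    return [sum(col) % P for col in zip(*subshares)]

rng = random.Random(42)
secret = 123456789
old = share(secret, 5, rng)      # old committee of 5
new = reshare(old, 3, rng)       # hand off to a new committee of 3
assert sum(new) % P == secret    # the secret is preserved across the hand-off
```

Because sharing is linear, noise terms held in shared form can be summed and passed forward across rounds, which is the property the matrix mechanism needs.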
Variational Learning Induces Adaptive Label Smoothing
Reject
Summary: The authors find that variational learning naturally induces adaptive label smoothing, where label noise is specialized for each example. As a result, the variational learning method IVON shows performance gains compared with the traditional label smoothing method. ## update after rebuttal My concerns are not well add...
Rebuttal 1: Rebuttal: We thank the reviewer for their review. The main issue seems to be related to the writing of the paper, for which we have given clarifications and promised a few fixes. We hope that the reviewer will consider raising their score. > Q1: I think the conclusions in Section 3.4 lack credibility. Redu...
Summary: The authors demonstrate that variational learning induces adaptive label smoothing and derive the exact form of label noise for various problems. They show that variational learning associates the high noise to atypical samples. Experimental results show that the proposed approach outperforms label smoothing. ...
Rebuttal 1: Rebuttal: We thank the reviewer for their review. As per reviewer's suggestion, we have made the code available at the following [anonymous link](https://drive.google.com/file/d/18l5kuKmmN0H9dud4QPBI3LBeF6XobYC5/view?usp=sharing). We also thank the reviewer for suggestions regarding writing. We will fix the...
Summary: The paper studies the connection between variational inference and adaptive label smoothing. Throughout a thorough derivation and analysis on linear models, the authors show that learning with a variational posterior is equivalent to conventional point estimation with adaptive smooth labels. The paper also ext...
Rebuttal 1: Rebuttal: We thank the reviewer for their review. The reviewer found that “the extension to deep neural network part is unclear and not convincing” and also that the assumption on Gaussian variational posterior to be “slightly over-claim”. We do not agree with the reviewer's assessment and challenge it belo...
Summary: This paper establishes a relationship between variational learning and adaptive label smoothing. It introduces the validation of this relationship across three models: Logistic Regression, Generalized Linear Model, and neural networks optimized using the IVON optimizer. The results show that directly learning ...
Rebuttal 1: Rebuttal: We thank the reviewer for their review. The reviewer finds that the work “may not be sufficient for an ICML-level publication” due to lack of “deeper theoretical insights” and “broader empirical validation”. **We do not agree with the assessment. Our connections between variational learning and la...
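To fix intuition for the "adaptive label smoothing" discussed above: classic label smoothing mixes each one-hot target with the uniform distribution using one global α, while the adaptive view allows a per-example α (more noise on atypical samples). A minimal sketch of the generic notion; this is not the exact noise form derived in the paper.

```python
import numpy as np

def smooth_labels(y_onehot, alpha):
    """Mix one-hot targets with the uniform distribution over K classes.
    `alpha` may be a scalar (classic smoothing) or a per-example vector
    (the adaptive variant)."""
    K = y_onehot.shape[1]
    alpha = np.asarray(alpha, dtype=float).reshape(-1, 1)
    return (1.0 - alpha) * y_onehot + alpha / K

y = np.eye(3)[[0, 2]]                      # two examples, 3 classes
uniform = smooth_labels(y, 0.1)            # same noise for every example
adaptive = smooth_labels(y, [0.05, 0.3])   # more noise on the second example
assert np.allclose(uniform.sum(axis=1), 1.0)
assert np.allclose(adaptive.sum(axis=1), 1.0)
```

Each smoothed row remains a valid distribution; the adaptive version simply lets α vary per row, which is the quantity the paper derives from the variational posterior.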
Accelerating Eigenvalue Dataset Generation via Chebyshev Subspace Filter
Reject
Summary: The paper introduces the Sorting Chebyshev Subspace Filter (SCSF), which is the first method designed to accelerate eigenvalue dataset generation. Instead of solving each eigenproblem independently, SCSF exploits spectral similarities in two key ways. First, it applies a truncated FFT sorting algorithm to reor...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful and constructive feedback. Below we respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, pl...
Summary: The authors propose a method for accelerating the generation of a training dataset for subsequent solution of eigenvalue problems by machine learning methods. The proposed method is based on the idea of using spectral correlations between operators corresponding to different samples (FFT-based approach for sor...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful and constructive feedback. Below we respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, pl...
Summary: In this work, the authors propose a new computational method for accelerating the generation of datasets that involve eigenvalue computations. Such problems often appear in applications of deep learning to scientific computing, especially with differential operators, where it is necessary to solve eigen...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their insightful and constructive feedback. Below we respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, pl...
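For context on the filtering primitive these reviews refer to: a Chebyshev subspace filter applies a polynomial p(A) to a block of vectors via the three-term recurrence, damping eigencomponents inside an unwanted interval [a, b] and amplifying those outside it. Below is a standard sketch in the style of ChebFSI-type methods; it is not the paper's SCSF sorting machinery, and the names are ours.

```python
import numpy as np

def chebyshev_filter(A, V, degree, a, b):
    """Apply T_m((A - cI)/e) to the block V, where [a, b] is the interval
    to damp (|T_m| <= 1 there) and eigenvalues outside it are amplified.
    Assumes degree >= 1."""
    e = (b - a) / 2.0          # half-width of the damped interval
    c = (b + a) / 2.0          # center of the damped interval
    Y = (A @ V - c * V) / e    # T_1 term
    for _ in range(2, degree + 1):
        Ynew = 2.0 * (A @ Y - c * Y) / e - V   # T_j = 2x T_{j-1} - T_{j-2}
        V, Y = Y, Ynew
    return Y

# On a diagonal matrix the filter acts on each eigenvalue independently:
# 0.1 lies outside [1, 10] and is amplified; 5.0 lies inside and is damped.
A = np.diag([0.1, 5.0])
out = chebyshev_filter(A, np.eye(2), degree=8, a=1.0, b=10.0)
assert abs(out[0, 0]) > 1.0 > abs(out[1, 1])
```

Repeatedly filtering a subspace and re-orthogonalizing isolates the low end of the spectrum, which is why reusing a good starting subspace across spectrally similar operators, as the paper proposes, saves iterations.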
Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning
Accept (poster)
Summary: This paper studies how the unlearning quality of many proposed unlearning methods degrades when applied to minority groups. This was observed across evaluations of unlearning with several MI attacks, datasets, and models. Building on this insight, their proposed maximum leakage informed a new ablation of unlearning...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer jBLe for the insightful suggestions and for recognizing the potential impact of our work on the field of unlearning. Below, we address the reviewer’s questions and suggestions. > Major comments & W1. Clarification in Introduction and Related Work. We sincerely thank t...
Summary: The paper argues that minority data points are harder to unlearn than common, typical examples. To show this, the authors construct canaries by replacing PII in two datasets in the forget set with infrequent PIIs. Then the authors show that common unlearning algorithms struggle to unlearn these data points u...
Rebuttal 1: Rebuttal: We greatly thank Reviewer Cub4 for reviewing our paper. Below, we address the questions and comments raised by Reviewer Cub4. > The paper fails to provide a rigorous definition of ‘minority’. The definition should use a mathematical model. We thank the reviewer for this comment. As noted in Foo...
Summary: This paper investigates the underestimated privacy risks faced by minority populations in the context of large language model (LLM) unlearning. The authors argue that current evaluations, which rely on average-case assessments and MIAs, underestimate these risks since minority data is harder to forget. They pr...
Rebuttal 1: Rebuttal: We sincerely thank Reviewer WLjM for acknowledging the novelty and comprehensiveness of our minority-aware framework, as well as for supporting the acceptance of our paper. Below, we provide detailed responses to the questions and concerns raised. > Q3 & W3 & C3: Code for our paper. We appreciat...
Balancing Preservation and Modification: A Region and Semantic Aware Metric for Instruction-Based Image Editing
Accept (poster)
Summary: This paper presents BPM, a new image quality assessment metric designed to address the biases of existing metrics in image editing, particularly regarding the preservation and modification areas, as well as their narrow considerations. BPM introduces a region-based approach by using masks to divide the image i...
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our work that "viewpoints are supported by experimental evidence", "theoretical analysis is detailed", "experiments is reasonable with clear logical progression and evidence" **1.Verification on larger dataset:**(1) As suggested, we conduct experiments ...
Summary: This paper proposes a novel evaluation metric for instruction-based image editing. The paper identifies the weaknesses in commonly used metrics, which can only consider the image as a whole but fail to focus on local and modified parts. It proposes to extract the edited object and intended editing effect from ...
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our work that "claim is solid", "metric is novel". **Limited scope of proposed metric, i.e., global editing and object attribute changing:** 1) Thanks for recognizing that our proposed metric is suitable for local object-oriented editing. 2) As suggested, we app...
Summary: This paper introduces BPM, a metric designed for instruction-based image editing. BPM explicitly separates images into editing-relevant and irrelevant regions, enabling more precise evaluation. It conducts a two-tier assessment from both regional and semantic perspectives to provide a comprehensive measure of ...
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our work that "enabling more precise evaluation", "broad applicability", "paper is well written", "motivation is clear and easy to understand" and "contributes to the broader scientific literature". **1.More concise figure of BPM pipeline:** Thanks for...
Summary: The authors proposed a new metric to balance the evaluation of preservation and modification for instruction-based image editing tasks. It is supposed to address some practical problems of image editing evaluation by disentangling and measuring both the edited and untouched regions. The results show that the...
Rebuttal 1: Rebuttal: We appreciate the reviewer's recognition of our work that "expected to have a high impacts on image editing domains", "claims are well evaluated with sufficient evidences", "human alignment testing is effective". **1.Robustness of BPM on more diverse image editing tasks:** (1)Our BPM can apply ...
MARS: Unleashing the Power of Variance Reduction for Training Large Models
Accept (poster)
Summary: Training large deep neural networks is a challenging endeavor due to their massive scale and non-convexity. This is even more true for large language models, where the current state-of-the-art method is AdamW. The paper proposes incorporating a form of variance reduction, à la STORM, and combining it with a fam...
Rebuttal 1: Rebuttal: Thank you for your careful evaluation of our work. We appreciate your recognition of our contributions and your constructive questions. We have carefully considered your concerns and we provide detailed responses to your questions below. --- **Q1**: First sentence in the abstract: Training deep ...
Summary: The paper presents an algorithm template for variance reduction that includes common optimizers such as AdamW and Lion as a special case. A convergence analysis is provided the smooth case, with an assumption on the pre-conditioning matrix. Extensive experiments are conducted on language modeling and vision ta...
Rebuttal 1: Rebuttal: We sincerely appreciate your careful review and constructive feedback. Your comments on both the theoretical and experimental aspects of our work have helped us further strengthen our contributions. We have carefully considered your suggestions and provide detailed responses below. --- **Q1**: Pa...
Summary: This paper studies variance-reduction algorithm for large language models, and proposes a general framework, MARS, that combines variance-reduction and pre-conditioning. In particular, the authors add a control parameter $\gamma_t$ to the STORM momentum aggregate, i.e., $m_t = \beta m_{t-1} + (1-\beta) c_t$ wh...
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful review. Below, we provide detailed responses to each of your comments. --- **Q1**:The name MARS-Shampoo **A1**: The interpretation of Shampoo as a projection onto the closest semi-orthogonal matrix ($UV^\top$) originates from Proposition 5 of [1], which...
Summary: This paper proposes MARS, which is a framework that applies variance reduction to preconditioning methods for stochastic optimization. Namely, MARS takes a scaled update from STORM (which calculates momentum as an EMA combining the stochastic gradient and a gradient correction term) and applies preconditioning...
Rebuttal 1: Rebuttal: Thank you for your thorough review and constructive feedback. We sincerely appreciate your recognition of the novelty of our proposed framework and the clarity of our writing. Additionally, your detailed questions have been invaluable in helping us refine our work, and we listed our responses to y...
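The momentum recursion quoted in these reviews ($m_t = \beta m_{t-1} + (1-\beta) c_t$, with $c_t$ a gradient-correction term scaled by a control parameter $\gamma_t$) can be sketched as follows. The exact scaling used by MARS may differ, and the names here are ours.

```python
import numpy as np

def variance_reduced_momentum(m_prev, g_t, g_prev, beta=0.95, gamma=0.025):
    # STORM-style corrected gradient: the raw stochastic gradient plus a
    # scaled difference against the previous iterate's gradient. With
    # gamma = 0 this reduces to plain exponential-moving-average momentum.
    c_t = g_t + gamma * (beta / (1.0 - beta)) * (g_t - g_prev)
    return beta * m_prev + (1.0 - beta) * c_t

g_prev = np.array([1.0, -1.0])
g_t = np.array([1.2, -0.8])
m = variance_reduced_momentum(np.zeros(2), g_t, g_prev)
# gamma = 0 recovers the standard EMA update:
plain = variance_reduced_momentum(np.zeros(2), g_t, g_prev, gamma=0.0)
assert np.allclose(plain, (1.0 - 0.95) * g_t)
```

The resulting $m_t$ can then be fed into any preconditioner (AdamW-, Lion-, or Shampoo-style), which is the framework view the paper takes.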
Improving Value Estimation Critically Enhances Vanilla Policy Gradient
Accept (poster)
Summary: This paper demonstrates, through both theoretical analysis and experiments, that vanilla policy gradient (VPG) can achieve performance comparable to PPO by simply increasing the number of value updates per iteration. Contrary to the common belief and existing claims, the authors argue that accurate value estim...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their positive and supportive review! > However, the assertion that trust regions (“trust regions are neither sufficient nor necessary for guaranteed performance improvement”) requires further theoretical backing. We will revise the statements in the paper...
Summary: The authors provide a novel perspective on how the approximation error of the value function determines the performance of the algorithm more than the proximal constraint in TRPO and PPO. They argue this both empirically and theoretically. They further argue that the accuracy of the value function adds to robu...
Rebuttal 1: Rebuttal: Thank you for the important review. Hope that our response adequately addresses your concerns. For claims: 1. This argument directly corresponds to the Theorem 1 from the TRPO paper, which states that $$J(\theta) \geq L^{TRPO}(\theta) - \frac{4 \beta \gamma}{(1 - \gamma)^2} D_{KL}^{max}(\pi_\the...
Summary: This paper studies the performance gap between vanilla policy gradient and PPO/TRPO type of trust region enforcing algorithms. The authors conclude through empirical studies that the core to the performance gap lies in value estimation, as opposed to the common belief of trust region enforcing. Claims And Evi...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and hope that our response can address your concerns. Regarding the weaknesses: We used the PPO implementation from Tianshou with learning rate decay and reward normalization disabled to better compare algorithmic performance without excessive co...
Summary: This paper investigates the role of value estimation accuracy in on-policy policy gradient reinforcement learning. The authors demonstrate that improving the accuracy of value estimation (by performing more value function updates per iteration) dramatically improves the data efficiency of vanilla policy gradie...
Rebuttal 1: Rebuttal: We appreciate the reviewer's thorough and insightful feedback. For claims: 1. In Figure 2(c)(d), we observe that the new loss with 50 epochs effectively enforces the trust region compared to runs with fewer epochs, despite its poorer performance. This coincides with the suggested alternative app...
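The central knob in these reviews, "more value updates per iteration", can be isolated with a toy regression: the same fitting routine, called with more epochs per policy iteration, yields a markedly more accurate value estimate. A minimal sketch with a linear value function (illustrative only; no RL environment involved, and all names are ours):

```python
import numpy as np

def fit_value(w, states, returns, epochs, lr=0.1):
    # Plain gradient descent on the squared value-regression loss;
    # `epochs` plays the role of value updates per policy iteration.
    for _ in range(epochs):
        residual = states @ w - returns
        w = w - lr * states.T @ residual / len(returns)
    return w

rng = np.random.default_rng(0)
S = rng.standard_normal((64, 4))          # visited states (features)
w_true = np.array([1.0, -2.0, 0.5, 3.0])  # "true" value parameters
R = S @ w_true                            # returns (noise-free toy)
w_few = fit_value(np.zeros(4), S, R, epochs=1)
w_many = fit_value(np.zeros(4), S, R, epochs=50)
assert np.linalg.norm(w_many - w_true) < np.linalg.norm(w_few - w_true)
```

In the paper's setting the better value estimate then produces lower-variance advantage estimates for the vanilla policy gradient, without any trust-region machinery.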
Refining Adaptive Zeroth-Order Optimization at Ease
Accept (poster)
Summary: The paper introduces a zero-order optimization technique called R-AdaZO. Specifically, it makes a simple yet seemingly effective modification to ZO-AdaMM. By replacing the second moment estimation term $g_t$ in ZO-AdaMM with $m_t$, R-AdaZO achieves variance reduction compared to ZO-AdaMM and demonstrates a fas...
Rebuttal 1: Rebuttal: We are grateful to Reviewer MyoC for the positive and constructive feedback! We appreciate that the reviewer highly recognizes that our paper is **simple yet effective** and our theoretical analysis is **comprehensive**. We would like to address your comments as follows. --- > ...the authors sho...
Summary: The authors take a closer look at the Adam update rule in the context of Zero-Order Neural Network training and identify a specific change to the second-moment update rule that provides improved convergence theoretically and empirically. Their experimental section contains synthetic experiments as well as ne...
Rebuttal 1: Rebuttal: We are grateful to Reviewer 6KdC for the positive assessment and valuable feedback! We particularly appreciate the recognition that "Many good papers stem from simple ideas, such as this one". We would like to address your concerns below. --- > I find formulations such as "we theoretically show t...
Summary: This paper proposes an improvement over existing adaptive-gradient methods for zeroth-order optimization (ZO). In ZO optimization, to minimize a function $\min_{\theta \in \mathbb{R}^d} F(\theta)$ where $F(\theta) = \mathbb{E} [ f (\theta;\xi)] $, we can only access noisy function values $f(\theta;\xi)$. ZO m...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their diligent, detailed, and constructive feedback, and appreciate the time and effort invested . We found the comments very helpful and address each point below. --- > Incomplete proof Thank you for highlighting the need for clarity regarding Corollary 5.10 ...
Sparse Spectral Training and Inference on Euclidean and Hyperbolic Neural Networks
Accept (poster)
Summary: The authors introduce sparse spectral training (SST), a memory-efficient way to update neural network parameters. Unlike low-rank methods, SST begins by computing the singular-value decomposition of a weight matrix. At each step, SST updates all singular values and a sample of singular vectors (weighted by sin...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. **W1:** There is another work ... **Reply:** Thank you for pointing out this recent work. We have included it in the related work section of our revised manuscript. **W2:** I am interested in ... **Reply:** Low-precision and mixed-precisi...
Summary: Sparse Spectral Training (SST) is a memory-efficient pre-training method for large language models. It updates all singular values, selectively updates singular vectors, and achieves performance comparable to full-rank training. It first incorporates a parameter-efficient pre-training process in hyperbolic space,...
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable insights. **Weakness 1:** Result analysis should provide insights into future work and underlying reasons for the experimental results in the performance tables, not only provide numerical values already shown in the table. **Reply:** Thank you for...
Summary: Training large language models faces challenges due to high GPU memory demands. Existing methods like LoRA and ReLoRA have limitations, such as being constrained by low-rank structure or suffering from saddle point issues. This paper proposes Sparse Spectral Training (SST). It updates all singular values, se...
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. We are grateful for the opportunity to clarify and improve our work. **Weakness 1:** There are not enough baselines and they are not particularly advanced. Consider introducing Dora, HIRA, Vera, FourierFT (at least Dora and Vera). **Reply:*...
MDDM: Practical Message-Driven Generative Image Steganography Based on Diffusion Models
Accept (poster)
Summary: This paper introduces a new message-driven generative steganography based on the diffusion model, MDDM, which is a novel framework that combines the diffusion implicit model (DDIM) with Cardan Grille to dynamically adjust between watermarking and steganography effectively, practically, and securely. MDDM encry...
Rebuttal 1: Rebuttal: We sincerely thank you for your detailed review and valuable comments on our paper. Your in-depth analysis and constructive suggestions on our work not only reflect your rigorous attitude toward research but also provide important guidance for us to further improve the paper. We are very grateful ...
Summary: This paper proposes a practical and robust message-driven generative image steganography framework, called MDDM. MDDM uses a carefully designed encoding strategy to encode the binary message and uses it as the starting point for diffusion, allowing users to generate diverse images through conditional text with...
Rebuttal 1: Rebuttal: We are very grateful for your recognition of the innovativeness of our work and the thoroughness of our experiments. At the same time, we sincerely thank you for your valuable comments, which we think will significantly improve the quality of the paper. The following are the detailed responses to ...
Summary: To address the issues with extraction accuracy, robustness, efficiency, and practicality in GIS, this paper proposes a practical and robust message-driven GIS framework, called MDDM. Although various experiments show the advantages of MDDM in accuracy, controllability, efficiency, and practicality, there still...
Rebuttal 1: Rebuttal: We are very grateful for your recognition of our work and believe that our work has great advantages in accuracy, controllability, efficiency and practicality. At the same time, we sincerely thank you for your valuable comments, which we think will significantly improve the quality of the paper. T...
Pixel-level Certified Explanations via Randomized Smoothing
Accept (poster)
Summary: The paper introduces a pixel-level certification approach for black-box attribution methods using Randomized Smoothing. The authors reformulate the pixel-level attribution problem into a segmentation task and provide theoretical guarantees adapted from Fischer et al. (2021). Extensive experiments demonstra...
Rebuttal 1: Rebuttal: **Note:** We include a general response covering global updates and experiments in our comment to Reviewer **itGy**. Please refer to that comment and the supplementary PDF link there. ### Q: The method is an extension of Fischer et. al. 2021. Therefore, the theoretical novelty is limited. While...
Summary: The authors proposed a method to provide certified explanations at the pixel level using the Randomized Smoothing approach. While many of the techniques the authors use are not novel, the overall pixel-level approach could provide more rigorous insight into assessing attribution methods from the robustness ...
Rebuttal 1: Rebuttal: # Response to all reviewers We thank all reviewers for their thoughtful and constructive feedback. We very much appreciate the diligent and constructive reviews by all reviewers, and believe the additional insights gained in preparing this rebuttal significantly strengthen our work. While we res...
Summary: The article proposes a certification method that reformulates the attribution task as a segmentation problem and uses the randomized smoothing technique to ensure the pixel-level robustness of black-box attribution methods. Meanwhile, two metrics, "percentage of certified pixels" and "Certified Grid Pointing G...
Rebuttal 1: Rebuttal: **Note:** We include a general response covering global updates and experiments in our comment to Reviewer **itGy**. Please refer to that comment and the supplementary PDF link there. ### We thank the reviewer for recognizing the significance of our certification framework, particularly the value...
Summary: The paper’s primary contribution is a pixel-level certification framework for attribution methods on deep classifiers, backed by a robust evaluation paradigm and demonstrated through high-quality visuals and quantitative comparisons. Claims And Evidence: The claims in this paper are well-supported and authors...
Rebuttal 1: Rebuttal: **Note:** We include a general response covering global updates and experiments in our comment to Reviewer **itGy**. Please refer to that comment and the supplementary PDF link there. We thank the reviewer for raising very fundamental questions regarding the motivation of our work. We clarify tha...
Agent-as-a-Judge: Evaluate Agents with Agents
Accept (poster)
Summary: The paper introduces a novel framework where agentic systems are evaluated by other agentic systems. In contrast to evaluating agentic systems with human effort, the proposed method leverages agents to provide feedback throughout the task-solving process. The paper first proposes the DevAI benchmark to generate ta...
Rebuttal 1: Rebuttal: **We sincerely thank you for the valuable time and encouraging words. Below, we address your concerns and provide additional clarifications to strengthen our paper.** --- > W1&W2. While the Agent-as-a-Judge paradigm is novel, the specific agentic system designed for evaluation appears to be clos...
Summary: This paper proposes the Agent-as-a-Judge framework, which leverages agentic systems to evaluate other AI agents, addressing the limitations of current evaluation methods that either ignore intermediate steps or are too labor-intensive. To showcase this approach, the authors introduce DevAI, a new benchmark con...
Rebuttal 1: Rebuttal: **We greatly value your time, insights, and positive feedback which encourage us a lot. Below, we will carefully address your specific questions and suggestions individually.** --- > W1&Q2. The experiments are only conducted on agent coding domain. Could the author specific why you choose this d...
Summary: This paper introduces the Agent-as-a-Judge framework for evaluating agentic systems using other agentic systems, extending the LLM-as-a-Judge paradigm by incorporating feedback on intermediate task-solving steps. The authors present DevAI, a benchmark of 55 realistic AI code generation tasks with 365 hierarchi...
Rebuttal 1: Rebuttal: **Thank you for your valuable feedback and insightful questions. We greatly appreciate your encouraging words for this paper. We will carefully address each concern in detail below.** > Q1: How generalizable is the Agent-as-a-Judge framework beyond DevAI's code generation tasks, and how could it ...
Summary: This paper proposes a new evaluation framework which uses agentic systems to judge a subject agentic system. To validate and demonstrate this framework, the paper also introduces a new benchmark of code generation tasks with manual annotations. The authors benchmark several existing code-generating agentic sys...
Rebuttal 1: Rebuttal: **We sincerely appreciate your time and insightful feedback, as well as your acknowledgment of our work’s novelty.** > Q1. What is stopping DevAI from becoming another measure that becomes a target for the community? DevAI is specifically designed to evaluate the entire intermediate automated AI...
Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent
Accept (poster)
Summary: The authors address the challenge of establishing convergence of the hypergradient descent heuristic in this submission. They provide the first rigorous theoretical foundation for hypergradient descent and introduce a novel online learning perspective that extends to other first-order methods with adaptive ...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper and for acknowledging the contribution of our results! **Essential References Not Discussed** Thank you for pointing out these references. We will thoroughly discuss the superlinear convergence results in the revised version. Here, we give a brie...
Summary: In this paper, the authors study the hypergradient descent method for smooth convex optimization, where the preconditioner is updated using online gradient descent. Building on the online learning framework of (Gao et al., 2024), they prove sublinear and linear convergence rates in the convex and strongly-conv...
Rebuttal 1: Rebuttal: Thank you for your time and insightful comments. Below are our responses to your two major concerns and other comments. If we have addressed your concerns, we kindly request a re-evaluation of the score, and please let us know if you have further questions. **Major concerns ("Questions")** 1. Nu...
Summary: This paper presents the first rigorous convergence analysis of the hypergradient descent method (HDM), a technique for adaptive stepsize selection in optimization that has been used heuristically for 25 years. The authors develop a theoretical framework based on online learning to analyze HDM's convergence pro...
Rebuttal 1: Rebuttal: Thank you for your feedback on our paper. **Questions** 1. Extension to the stochastic setting This is a very good question. One major bottleneck in extending HDM to the stochastic case is that the hypergradient feedback can be corrupted by noise, especially the gradient norm denominator of ...
Summary: The paper analyzes the local and global convergence behavior of the hypergradient descent method using online learning. Underlying the analysis is interpreting hypergradient descent as doing online gradient descent on “hypergradient feedback,” a proxy loss function that measures the quality of a preconditioner...
Rebuttal 1: Rebuttal: Thank you for your valuable feedback on our paper! **Experimental Designs Or Analyses** 1. Unclear how generalizable the trends are to broader tasks Thank you for the comment. Our paper is primarily theoretical, so the experiments are intended to test the theory; hence, we compare HDM-Best w...
Bayesian Parameter Shift Rules in Variational Quantum Eigensolvers
Reject
Summary: This paper studies variational quantum eigensolvers, a task of minimizing the quantum energy with hybrid computation on a quantum computer and a classical computer. The authors propose a Bayesian parameter shift rule, which estimates the gradient of the quantum energy by Gaussian processes. Concretely, noisy f...
Rebuttal 1: Rebuttal: Unfortunately, we found Reviewer gnv9 overlooking most of our main contributions, which were acknowledged by the other reviewers. Reviewer gnv9 gave the following main criticisms for recommending a clear reject: 1. The proposed method is not novel and quite straightforward from the machine learn...
Summary: The authors introduce a Bayesian optimization version of the parameter-shift-rule method used to optimize the hardware parameters of quantum eigensolvers. Specifically, the problem of optimizing the parameters reduces to finding the ground state energy of a quantum Hamiltonian defined by the gate...
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and positive evaluation of our manuscript. > One weakness of the paper is that they do not consider imperfections in quantum hardware. This would be good to discuss in more detail, perhaps in the conclusion. Following the referee’s suggestion, we will incl...
Summary: The paper proposes Bayesian parameter shift rules to estimate the gradients of variational quantum circuits in the presence of measurement shot noise using the minimal number of observations. The optimality of the proposed method is theoretically proven under mild conditions. With the Bayesian parameter s...
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments. > Figure 3 and Figure 4 seem messy to read, and it is hard to fully understand the information in the figures. In Figs. 3 and 4 (as well as the new results in [this anonymized link](https://www.dropbox.com/scl/fi/uawszm00f70iu4dkbig0p/g12924.pn...
Summary: This paper proposes a method, Bayesian PSR, for optimizing Variational Quantum Eigensolvers (VQEs) by introducing a Bayesian variant of the parameter shift rule (PSR). This method integrates Gaussian Processes (GPs) to estimate gradients of the VQE objective with uncertainty information. The authors introduce ...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful questions and comments. > The authors present experiments to show that Bayesian PSR and GradCoRe outperform standard optimization methods. However, the manuscript does not provide clear evidence or details regarding how Bayesian PSR performs on larger o...
Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification
Accept (poster)
Summary: This paper proposes a rewiring technique for graph neural networks called GOKU. The goal of GOKU is to improve the connectivity of the graph (as measured by the effective resistance) while maintaining the structure of the graph (as measured by the Laplacian spectrum). GOKU takes a two step approach: ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time, effort, and constructive comments. Below are our responses. > ### **Comment 1**: Preserve all eigenvalues and the smallest eigenvalues are not increased in theoretical & experimental results Our method indeed increases the smallest eigenvalues. Her...
Summary: This paper addresses the issue of over-squashing in Graph Neural Networks (GNNs) by introducing a novel approach called the **Densification-Sparsification Rewiring** framework, with a practical implementation termed **GOKU**. The main contribution of the paper is the proposal of a two-step rewiring process: ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time, effort, and constructive comments. Here are our responses. ### **Comment 1**: Choices of Spectral Sparsification Algorithms We appreciate the reviewer’s interest in the choice of spectral sparsification algorithms. It is important to note that our ...
Summary: This paper introduces GOKU, a novel graph rewiring method that mitigates over-squashing in Graph Neural Networks through a two-phase densification-sparsification paradigm. The approach first identifies and adds critical missing edges to alleviate bottlenecks via inverse spectral sparsification, then selectivel...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time, effort, and constructive comments. Below are our responses. > ### **Comment 1 (Claims And Evidence 2, Theoretical Claims 2&3)**: Lack of probability bounds, impact of feature similarity in Theorem 4.2 & feature similarity term in ISS (Algorithm2) -...
Summary: This paper proposes a method to mitigate the over-squashing issue of GNNs while preserving the spectrum of the original graph. To this end, the authors propose a two-stage process of densification followed by sparsification, where both steps are discussed in detail. Experimental results demonstrate the competi...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their time, effort, and constructive comments. Below are our responses. > ### **Comment 1**: Why $x^TLx$ as spectral similarity & advantages over degree distribution in SDRF **Summary**: Our approach defines similarity via the Laplacian quadratic form, alignin...
LLM-Augmented Chemical Synthesis and Design Decision Programs
Accept (poster)
Summary: This paper attempts to address the problem of multi-step retrosynthesis planning using large language models (LLMs). The authors first propose a format for representing a synthesis pathway, using a sequential reaction step format to characterize the synthesis pathway. This approach allows large models to avoid...
Rebuttal 1: Rebuttal: We are extremely appreciative of the reviewer’s feedback and suggestions to improve our work. We are glad that the reviewer considers our work to “potentially introduce a new paradigm for future multi-step retrosynthesis planning”. We provide additional experiment results in the following link: h...
Summary: This work attempts to investigate whether LLMs can solve complex chemical sequences and proposes a framework that uses LLMs without tuning for conducting retrosynthesis and molecular design tasks. An effective method for formatting synthesis sequences is introduced. Additionally, a novel approach for searching...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback! We are thrilled the reviewer finds our LLM-based retrosynthesis framework “could be further extended to an autonomous laboratory framework or a molecular synthesis and generation pipeline using LLMs”. Below, we address the reviewer’s questions ...
Summary: This paper explores the potential of LLMs in addressing retrosynthesis planning and synthesizable molecular design tasks. The authors first formulate retrosynthesis planning as a sequential decision-making problem with sequential route format that is suitable for LLMs. The authors introduce an evolutionary se...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback! We are glad that the reviewer considers our method to be “novel” and our paper to be “well-written”. We wholeheartedly agree with the reviewer that this method can “be easily adapted to molecular design problems, unifying LLM-augmented reaction de...
Summary: This paper explores the use of Large Language Models (LLMs) for the challenging task of multi-step retrosynthesis planning and extends this to synthesizable molecular design. The authors introduce a novel approach, LLM-Syn-Planner, which uses an evolutionary search algorithm where the LLM generates and optimiz...
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and thoughtful questions. We are ecstatic that the reviewer finds our method “crucial for tackling the combinatorial complexity of retrosynthesis”. We have updated the new results here: https://drive.google.com/file/d/1Car72zPjMG9__41PfJAGbMkbNObYHjBS/view?...
Triple-Optimistic Learning for Stochastic Contextual Bandits with General Constraints
Accept (poster)
Summary: The paper introduces Optimistic³, a triple-optimistic framework for stochastic contextual bandits with general constraints. The main contribution is that it achieves regret and constraint violation bounds without requiring Slater’s condition. It is applicable not only to general constrained contextual bandi...
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments and want to address your major concerns below. - **Table reference (Guo \& Liu, 2024).** We adopt a row-wise split in referencing their work to properly represent their work, as their unified algorithm delivers distinct theoretical guarantees across two regim...
Summary: The paper proposes a Triple-Optimistic Learning method for contextual bandits with general unknown constraints. The paper further shows regret and violation bounds of $\tilde{O}(\sqrt{T})$, which improve on the previous bound of $\tilde{O}(T^{3/4})$. They also provide the regret bound of CBwK and extend to th...
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's constructive comments and want to address your major concerns below. - **Online regression oracle.** The online regression oracle assumption is standard in contextual bandits (Foster et al., 2018; Slivkins et al., 2023; Han et al., 2023; Guo \& Liu, 2024; C...
Summary: $\newcommand{\vg}{\textrm{\v{$g$}}}$ $\newcommand{\ip}[1]{\langle #1\rangle}$ This paper is concerned with contextual finite-action bandits with unknown aggregate constraints. Concretely, in every round, the learner first observes a context $x_t$ drawn iid from a law which is known to the learner, and selects...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s meticulous review and constructive feedback. The reviewer's concern is correct: there are typos in Lemma $5.6$ (which propagate into Lemma $5.7$). We sincerely apologize for the oversight and any confusion caused. Our proof remains valid when these t...
Summary: The submission studies contextual bandits where the agent aims to minimize the regret (Eqn. (4)) and the constraint violation (Eqn. (5)) simultaneously. The submission shows that the previous bounds can be improved (in the sense of removing the need for Slater’s condition or lowering the upper bounds) by a cle...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback. We would like to address your questions as follows. - **"Tripe-optimism".** The intuition behind our optimistic dual update stems from the Optimistic Online Mirror Descent in [R1], which accelerates convergence through effectively using p...
Understanding Model Reprogramming for CLIP via Decoupling Visual Prompts
Accept (poster)
Summary: This paper introduces a new method for model reprogramming on CLIP by decoupling visual prompts (DVP) to improve performance in downstream classification tasks. Instead of training a single visual prompt for all class descriptions (which may lead to biases and suboptimal learning), the paper proposes: 1) Decou...
Rebuttal 1: Rebuttal: W (Weakness); R (Response) **W1:** Applicability to other VLMs and other tasks **R1:** Thank you for your advice! While we follow previous VR studies on classification, the idea of DVP-PRM can extend to other tasks, because CLIP learns general-purpose visual and text representations, suitable fo...
Summary: The authors studied the visual prompting problem of CLIP and proposed a new method DVP. DVP focuses on introducing different causes to generate attributes on the text side and assigning an optimizable VP to each cause. In addition, the authors also proposed a reweighting matrix strategy to integrate all causes...
Rebuttal 1: Rebuttal: T (Theoretical Claims); E (Experimental Designs Or Analyses); W (Weaknesses); R (Response) **T/W1:** Issues of mathematical symbols **R-T/W1:** Thanks for the feedback on clarity and notation! We acknowledge these concerns, will supplement a notation table and smooth out the presentation in the ...
Summary: This paper proposed the Decoupled Visual Prompts (DVP) to improve the limited learning capacity of the single VP. Explicit causes and unsupervised clusters are grouped with descriptions and decoupled-and-reweighted VPs are trained to improve the final performance. Experiments show better results, compared with...
Rebuttal 1: Rebuttal: M (Methods And Evaluation Criteria); E (Experimental Designs Or Analyses); R (Response) **M1/E1.** Increased computation resource caused by DVP **R-M1/E1:** Thank you for your suggestion! We have included the parameter numbers for various methods in the following table and will incorporate it in...
Summary: This paper proposes a visual reprogramming method for CLIP. The existing methods utilize a single visual prompt for all descriptions, but a single prompt may not be enough to capture diverse aspects of class descriptions. The authors propose to have multiple visual prompts for each description (decouple), and inte...
Rebuttal 1: Rebuttal: F (Feedback); R (Response) **F1.** Pros and cons of visual reprogramming (VR) **R1:** Key difference: VR applies transformations only to input and/or output, while other PEFTs adapt within internal architectures, leading to 3 pros of VR: - VR untouches any parameters and architecture, is applica...
RestoreGrad: Signal Restoration Using Conditional Denoising Diffusion Models with Jointly Learned Prior
Accept (poster)
Summary: The authors present a new type of conditional diffusion model with an application to image restoration. An image $x_0$ is predicted conditioned on a degraded version $y$. The presented method is based on interpreting the diffusion model as a variational autoencoder (VAE). Based on this idea, they allow the prior ...
Rebuttal 1: Rebuttal: We greatly appreciate that the reviewer found our method elegant and sensible, and that our experimental validation makes sense and is convincing. Below, we address the main concerns and questions. --- ## Methods And Evaluation Criteria > **The learned ...** Thank you for carefully checking ou...
Summary: The paper proposes a novel framework to enhance the efficiency of conditional Denoising Diffusion Probabilistic Models (DDPMs) for signal restoration tasks, such as speech enhancement (SE) and image restoration (IR). By jointly learning a data-informed prior distribution using two encoder networks (Prior Net a...
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that our method is a significant contribution to address key limitation in existing works, and is theoretically sound and creative. In the following, we provide our responses to address the main questions and concerns. --- ## Experimental Designs Or Analys...
Summary: This paper proposes RestoreGrad to restore signals with diffusion models. In order to achieve this, the paper proposes to train a prior net and a posterior net jointly with the diffusion model so that the noise can be sampled from the noise distribution output by the prior net. During training, three losses ar...
Rebuttal 1: Rebuttal: We greatly appreciate that the reviewer found our method interesting and the quantitative results competitive. In this rebuttal, we provide our responses to address the main concerns and questions. --- ## Relation To Broader Scientific Literature > **The idea ...** We would like to take this o...
Reinforcement Learning with learned gadgets to tackle hard quantum problems on real hardware
Reject
Summary: This paper presents an algorithm for circuit architecture optimization. They propose using a combination of RL and program synthesis to extract and use high-value gadgets to augment the traditional RL optimization loop. They demonstrate their algorithm on a selection of Ising models. Claims And Evidence: The c...
Rebuttal 1: Rebuttal: **The claims are generally clear, but are not always as supported by the evidence as they should be. One of the most important aspects of any ML for quantum work is the scalability, especially for circuit optimization...** We appreciate this critical feedback regarding scalability evidence, which...
Summary: The authors put forward gadget reinforcement learning (GRL), which combines RL with program synthesis, to learn good parametrized quantum circuits (PQCs) for preparing the ground states of transverse field Ising models. Claims And Evidence: I don’t find the claim that GRL is “a versatile and resource-efficien...
Rebuttal 1: Rebuttal: **I don’t find the claim that GRL is “a versatile and resource-efficient framework for optimizing quantum circuits, with potential applications in hardware-specific optimizations, variational quantum algorithms, and other challenging quantum tasks” to be well-supported. How is GRL resource-efficie...
Summary: This paper proposes the Gadget Reinforcement Learning (GRL) framework that integrates RL with program synthesis to design parametrized quantum circuits (PQC). PQC is an important quantum algorithmic paradigm with many applications, such as combinatorial optimization and ground state preparation. GRL is tested ...
Rebuttal 1: Rebuttal: **The benchmark experiment only involves small systems (2- and 3-qubit). The limited system size may weaken the authors’ claims about the scalability and generalization of their methods.** Thank you for highlighting this weakness in the paper. In the revised manuscript, we address this limitation...
Summary: The paper proposes gadget reinforcement learning, where the action space of the RL agent keeps expanding with effective composites of gates discovered during training. The new framework is evaluated on finding the ground-state energy of transverse field Ising models with different interaction strengths. It...
Rebuttal 1: Rebuttal: **The idea of using composite gates to expand the action space of RL exploration of the variational circuit ansatzes seems incremental to me...** We thank the reviewer for commenting on the significance of GRL for QAS. GRL's key innovation lies in its hierarchical exploration strategy: $\textbf{M...
Action-Constrained Imitation Learning
Accept (poster)
Summary: This work proposes Action-Constrained Imitation Learning (ACIL), a new problem setting in which an imitator learns from a constraint-free expert while operating under action constraints. The key challenge is the infeasible expert actions issue, and the authors solve the mismatch issue by trajectory alignment. ...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful comments and suggestions. We provide our point-by-point response as follows. > The author should make ablation results about the number of expert demos, and also discuss the different levels of action restrictions for the imitator. Based on the r...
Summary: This paper focuses on a problem setting called Action-Constrained Imitation Learning. This setting involves an imitator agent that learns from an expert demonstrator, but the imitator has a more restricted action space than the expert. This difference in action capabilities creates a mismatch in the occupancy m...
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s constructive feedback. >This paper overlooks an important set of baselines—state-based imitation learning [1,2,3,4]. In fact, the problem setting of "Action-Constrained Imitation Learning" is not entirely new and has been extensively discussed in these works. Ho...
Summary: This paper explores a new task, Action-Constrained Imitation Learning, in which the imitator may have a constrained action space compared with the expert. The paper then analyzes the challenge of this task and proposes a method to solve it. Specifically, this paper first employs MPC to build a surrogate dat...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and effort in reviewing our work and raising valuable questions. We provide our point-by-point response as follows. > Analyzing the rationality of constraint design and the strength of constraints is crucial for exploring this new task. Conducting more experiment...
Summary: The manuscript proposes DTWIL, a data pre-processing mechanism for translating state-action trajectories between an expert and a weaker, action-constrained learner. After synthesizing this translated data, using an MPC with a dynamic time-warping (DTW) distance measure and a forward dynamics model to provide f...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s insightful comments and valuable suggestions. We provide our point-by-point response as follows. > L162-164;C1 — Wouldn’t it be more accurate to say that there exist trajectories within \mathcal{T} that cannot be entirely replicated by the learner, since it ...
Online Laplacian-Based Representation Learning in Reinforcement Learning
Accept (poster)
Summary: This paper studies how to learn a Laplacian-based state representation in an **online** manner for reinforcement learning, rather than fixing a policy and then extracting a representation from its induced Markov chain. The authors propose a modified “asymmetric graph drawing” objective (AGDO) that can be joint...
Rebuttal 1: Rebuttal: We appreciate the reviewer's efforts in reviewing our work and providing their detailed feedback. Below we address their questions and concerns. **Zero Eigenvectors** We thank the reviewer for this insightful suggestion. Based on our analysis, we do not expect degenerate partial solutions to be ...
Summary: The paper titled "Online Laplacian-Based Representation Learning in Reinforcement Learning" explores the integration of Laplacian-based representation learning within reinforcement learning (RL) frameworks. The authors address the challenge of learning Laplacian-based representations online and with theoretica...
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and addressed their questions and concerns below. > Bounded drift assumption Regarding the concern about drastic or erratic policy shifts in real-world RL scenarios, our theoretical analysis explicitly assumes bounded policy drift (Assumption 4.2). This is...
Summary: The paper is about representation learning in RL. It analyses an online algorithm designed to learn Laplacian representations of a Markov Decision Process (MDP). Given a fixed policy $\pi$, generally the exploratory policy, the Laplacian operator is defined as $L = I - \frac{P_{\pi} + P_{\pi}^{\ast}}{2}$, wher...
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thoughtful and considerate comments. Below we address their remark about related work. > I don't have a lot of suggestions for the authors. They mention however successor representations in the related work section, so I wondered if maybe their work and meth...
Summary: The authors propose a theoretical analysis of an algorithm for online representation learning based on Laplacian eigenvectors. Current methods tend to learn representations based on a uniform/exploratory dataset, while the authors argue that representations evolve with policy changes and that more efficient le...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to give a thorough review. We address their questions and concerns below. Due to space constraints, we have shortened the quotes from the review. We also appreciate the reviewer’s suggestions in the "Other Comments or Suggestions" section and...
CogReact: A Reinforced Framework to Model Human Cognitive Reaction Modulated by Dynamic Intervention
Accept (poster)
Summary: The paper proposes a novel method to model humans' response time under varying time pressure. The method combines the traditional Drift-Diffusion Model with reinforcement learning. Claims And Evidence: I have a serious concern that the claim is not correct. Mentioned in the paper "Results are de...
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and your recognition of our suitable benchmark datasets and evaluation metrics. We made significant efforts with additional experiments/ analysis to address your concerns following your suggestions. Below, we provide a point-by-point response. **Clai...
Summary: This paper presents a hybrid framework that integrates the Drift-Diffusion Model (DDM) from previous cognitive methods with deep reinforcement learning to simulate the impact of dynamic environmental stimuli on human cognitive processes. Unlike prior work, this paper considers human cognitive processes under t...
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our comprehensive experiments and our contribution to more practical understanding of human cognition across different tasks and real-world datasets to fill the literature gap. We have made significant efforts including additional experiments and new ana...
Summary: This paper tackles a fundamental question: How can we accurately simulate the influence of dynamic environmental stimuli on human cognitive behavior at a fine-grained level? To address this, CogReact is introduced, a hybrid framework that integrates drift-diffusion models (DDMs) with deep reinforcement learnin...
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and your recognition of our contribution in the understudied problem in ICML with suitable benchmarks. We made significant efforts with extra experiments to address your concerns. Below, we provide a point-by-point response. **Claim**: Unlike MSE, ...
Summary: This work introduces a problem in accounting for the effect of dynamic environmental stimuli on human behavior, such as a continuous time limit indicator that induces a time pressure on participants. The CogReact framework has many moving pieces but is quite elegant. An LSTM is trained to solve the task humans...
Rebuttal 1: Rebuttal: We are very grateful and encouraged by the reviewers’ recognition of our work’s contribution, including sound methodology and novel model contribution in ICML community (reviewer kUQA, uxDz, xPom), valuable dataset contribution (reviewer uxDz), extensive literature review (reviewer kUQA), convinci...
Scalable Meta-Learning via Mixed-Mode Differentiation
Accept (poster)
Summary: This paper develops a novel efficient bi-level optimization framework, called MixFlow-MG. Based on the fact that, when computing HVPs, forward-over-reverse mode is usually much more efficient than other modes, the authors first reparameterize the update function, and replace the usual VHP and VMP computations...
Rebuttal 1: Rebuttal: We thank the Reviewer for the thorough feedback and for pointing out the parts of the paper that weren't sufficiently clear. We will incorporate our responses below into the final version to improve clarity. > I was not aware that HVP can be more efficiently computed with the forward-over-reverse...
Summary: The paper introduces MixFlow-MG, a method for scalable bilevel optimization in meta-learning using mixed-mode automatic differentiation. The core idea involves reparameterizing the inner-loop dynamics to replace inefficient reverse-over-reverse differentiation with forward-over-reverse mode. This reduces compu...
Rebuttal 1: Rebuttal: We thank the Reviewer for the actionable feedback. We add clarifications and respond to the questions below. > The submission claims that the reverse-mode differentiation performed by the default autodiff can be suboptimal in practice. However, the underlying reasons for this drawback are not cle...
Summary: The paper introduces MixFlow-MG, an algorithm that uses mixed-mode differentiation for more efficient gradient-based bilevel optimization. It reduces memory consumption and computation time compared to standard methods. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the proposed methods and e...
Rebuttal 1: Rebuttal: We thank the Reviewer for the thorough feedback and address the raised concerns below. > While the experiments demonstrate the practical benefits of the proposed algorithm, the paper could be strengthened by including more experimental performance of the proposed method solving specific problems...
Summary: This paper proposes to smartly combine forward-mode and reverse-mode differentiations to efficiently perform JVP/HVP in bi-level optimization. This is motivated by the observation that different differentiation methods can show massively different performance in practice, despite their potentially same theoret...
Rebuttal 1: Rebuttal: We appreciate the feedback of the Reviewer and address the raised concerns below. > The only issue I see is a lack of the meta learning performance analysis. As far as I know, different meta learning algorithms lead to (sometimes vastly) different final results. We agree with the Reviewer that...
Agent Workflow Memory
Accept (poster)
Summary: This paper proposes the Agent Workflow Memory (AWM) that can summarize and abstract common workflow experiences from previous trajectories. However, it seems too similar to the series of Reflexion works. The author should discuss the significant contributions comparing with these works. Claims And Evidence: I...
Rebuttal 1: Rebuttal: Thank you for the feedback and for pointing out these relevant works. We list the core differences between AWM and these works below, and will add these more detailed discussions in our revised paper version! **Difference from Reflexion** > 1. Content: AWM learned procedural knowledge on how to s...
Summary: The paper introduces Agent Workflow Memory (AWM), a method designed to improve the performance of language model-based agents in long-horizon, complex tasks such as web navigation. AWM enables agents to learn reusable workflows from past task experiences and integrate them into memory for guiding future action...
Rebuttal 1: Rebuttal: **Claims and Evidence 1: AWM Improvement May Come from Structured Nature of Actions** > First, we argue that WebArena and Mind2Web queries are realistic and human-annotated without intentionally injecting superficial structures, so we believe that experimenting on them is well-motivated. Our metho...
Summary: Motivated by the ability of humans to flexibly solve complex tasks by learning reusable workflows from past experiences, this paper introduces Agent Workflow Memory (AWM), a training-free paradigm designed to enhance the performance of language model-based agents in solving long-horizon, complex tasks. The cor...
Rebuttal 1: Rebuttal: Thank you for recognizing the soundness of our method, the general applicability in online and offline settings, and the comprehensiveness of our experimental analysis! **W1. Experiments tailored to web agents** > We fully agree that AWM can be similarly applied to other agentic tasks such as ro...
Summary: The authors introduce Agent Workflow Memory (AWM), a method that enables agents to learn and reuse task workflows, similar to how humans leverage past experiences to solve complex tasks. AWM identifies commonly used routines and selectively provides them to guide future actions in both offline and online setti...
Rebuttal 1: Rebuttal: Thank you for recognizing the effectiveness of our AWM approach! **AWM’s robustness to varied offline task selections** > We agree that offline task selection is crucial for AWM to achieve better performance, where higher distribution overlap between offline/training tasks and online/testing task...
MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training
Accept (poster)
Summary: This paper addresses the issue of designing an effective optimizer for large-batch training of LLMs. Normally, as batch size, B, increases, we also increase the LR (often linearly with B, sometimes with the square root of B). But at some point, optimizers struggle with such high LRs, and become unstable or f...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer ybUy for the valuable questions and comments. All suggestions on claims and layout revisions will be revised in the updated paper. For the concerns and questions, here are our responses: **Q1: Need for the figure plotting how the max attention logit changes as we i...
Summary: This paper proposes a new optimization algorithm MERIT for tackling the large-batch training of language models. The method normalizes the update scale based on the maximum norm of both column and row statistics of the update. Empirical study shows that MERIT with large batch size achieves similar performance ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer QXND for the valuable questions and comments. For the concerns and questions, here are our responses: **Q1: Why MERIT is beneficial to large-batch training?** **A1:** Thanks for the point. We have provided a theoretical analysis of why the max-norm is efficient in...
Summary: The authors propose an extension of LAMB, which is an optimizer for large batch training. They propose to use an element-wise trust ratio using max norm and an element-wise clipping, rather than just an l2-norm based trust ratio as is the case for LAMB. The intuition is that the l2 norm is a poor surrogate for...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer ajhY for the valuable questions and comments. For the concerns and questions, here are our responses: **Q1: There may be other large batch training methods > 2019.** **A1:** Thank you for the comment. Most existing large-batch training studies focus on computer vi...
Summary: This paper proposes a new optimizer MERIT specialized for neural networks with self-attention, especially Transformers, in the setting of large-batch-size training that typically leads to low generalization performance. Based on observations that the training instability of Transformers comes from a large amount of max...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer 1Wwm for the valuable questions and comments. For the concerns and questions, here are our responses: **Q1: Tendency of long-run training.** **A1:** Thank you for the comment. Due to computational resource constraints, we are currently unable to apply our proposed...
A New Concentration Inequality for Sampling Without Replacement and Its Application for Transductive Learning
Accept (poster)
Summary: The paper studies transductive learning, where the training examples and testing examples are drawn from a given dataset without replacement. The paper builds generalization guarantees for transductive learning based on local Rademacher complexities. To this aim, the paper first develops a new concentration i...
Rebuttal 1: Rebuttal: We appreciate the review and the suggestions in this review. The raised issues are addressed below. **(1) “…many notations which may be hard to digest for readers. There are also many typos.”** Thank you for your suggestions. We have proofread this paper and fixed all the typos. In particular, ...
Summary: This paper studies the generalization of transductive learning. To do so, this paper proves a new concentration inequality for the test-train process, which is used to derive a sharp concentration inequality for the general supremum of empirical process involving random variables in the setting of sampling uni...
Rebuttal 1: Rebuttal: We appreciate the review and the suggestions in this review. The raised issues are addressed below. **(1) Minimax Optimal Lower Bound for Transductive Kernel Learning** We consider the following setup for Transductive Kernel Learning (TKL) wherein we will derive the minimax optimal lower bound f...
Summary: The paper presents an improvement of a concentration inequality for sampling without replacement. In particular, the improvement is compared to the work of Tolstikhin et al., 2014, in which the upper bound of the excess risk does not have the ratio between test and train data size over the whole given dat...
Rebuttal 1: Rebuttal: We appreciate the review and the suggestions in this review. The raised issues are addressed below. **(1) Proof Roadmap** Following the suggestion, we will give a detailed proof roadmap in the final main paper. In particular, we first obtain a novel concentration inequality of the test-train p...
Summary: The main contribution of this paper is presenting a new tool, Transductive Local Complexity (TLC), to analyze the generalization of transductive learning algorithms. In the way, the authors proved a new concentration inequality with certain advantages that may be of independent interest. They also showed a few...
Rebuttal 1: Rebuttal: We appreciate the positive review and the suggestions in this review. We will add a "Related Work" section to the final version of this paper following your suggestion. We will also discuss the application of our transductive learning bounds to linear Graph Neural Networks (GNNs) for a transducti...
HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration
Accept (poster)
Summary: This study introduces a novel caching framework to enhance inference efficiency in diffusion transformers. The core idea is to employ a learnable router that determines optimal caching dynamics during the generation process. The primary contribution lies in developing a new learning paradigm that alleviate the...
Rebuttal 1: Rebuttal: Thanks to the reviewer for the constructive comments. **Literature Review and Citations** - **Missing work.** We appreciate the suggestions and will include ToMe, TokenFusion, and attention-driven efficiency methods in the revision. - **Citation updates.** We will replace arXiv references with c...
Summary: This paper introduces HarmoniCa, a learning-based caching framework that addresses two key discrepancies between diffusion training and inference: ***prior timestep disregard*** and ***objective mismatch***. First, it presents Step-wise Denoising Training (SDT), which adopts a student-teacher setup to consider...
Rebuttal 1: Rebuttal: Thanks for the reviewer’s constructive comments. > **Claims And Evidence** > - **Comparison with rectified flow models.** We clarify that our approach is not based on pretraining, unlike flow-based models. A direct comparison is not entirely appropriate, as our method merely trains a small tenso...
Summary: This paper introduces a method for improving the cache mechanism in diffusion transformers. The proposed method including two parts: * A step-wise denoising training. It changes the objective in Learning-to-cache from single step to a all step objective. * An image error proxy-guided objective. The authors pr...
Rebuttal 1: Rebuttal: Thanks for the reviewer’s constructive comments. > **Methods And Evaluation Criteria** > **Contribution insufficiently novel.** While our work builds upon the foundational concept (i.e., learn a caching strategy) introduced by Learning-to-Cache (L2C), our contributions and improvements are both...
Summary: This paper proposes HarmoniCa, a learning-based caching framework designed to accelerate Diffusion Transformersby reusing transformer block features during inference. HarmoniCa builds on the Learning-to-Cache framework by improving the training mechanism for the caching router. Extensive experiments demonstrat...
Rebuttal 1: Rebuttal: Thanks for the reviewer’s constructive comments. > **Claims And Evidence** > - **Concerns regarding acceleration rates and resolution scaling.** Thanks for the insightful question. We agree that the reason why the acceleration rates of HarmoniCa decline ($1.67\times\rightarrow1.56\times$) with t...
Towards Universal Offline Black-Box Optimization via Learning Language Model Embeddings
Accept (poster)
Summary: This paper explores universal offline black-box optimization (BBO) by leveraging language model-based string embeddings. Prior LLM work has two flaws: it doesn't detail paradigms to clarify their strengths and compatibility, and neglects offline BBO's unique need to prevent overfitting to limited historical da...
Rebuttal 1: Rebuttal: Thanks for your positive and valuable comments! Below please find our response. Corresponding experimental results can be found at https://anonymous.4open.science/api/repo/UniSO/file/F6Mo.pdf. ## R1 Computational cost Thanks for your suggestions. We have compared the computational cost in Table ...
Summary: This paper proposes to use recent works in LLMs as regressors for offline blackbox optimization. The key contributions are: * Organizing the same LLM bodies with different regression heads as UniSO-T (token output) and UniSO-N (numeric output). * Proposing two regularization methods, i.e. (1) a contrastive...
Rebuttal 1: Rebuttal: Thanks for your encouraging and valuable comments! Below please find our response. Corresponding experimental results can be found at https://anonymous.4open.science/api/repo/UniSO/file/GL3R.pdf. ## R1 Is "expert" in Table 1 a basic MLP regressor? Yes. It refers to a numeric-input MLP regressor....
Summary: This paper introduces UniSO, a novel framework for universal offline BBO that leverages string embedding spaces to address limitations of traditional methods in heterogeneous search spaces and cross-task generalization. Using LM embeddings, it converts numerical parameters into strings and proposes two variant...
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below please find our response. Corresponding experimental results and references can be found at https://anonymous.4open.science/api/repo/UniSO/file/4Ajw.pdf. ## R1 EA-based BBO method doesn’t warrant ICML community’s much attention; why don’t you adopt more ef...
Summary: The paper explored using LLM embeddings as the universal representation space for offline black-box optimization across different domains. The paper discussed two approaches, next-token prediction and numeric prediction, and proposed several ways to constrain the learned latent space for effective offline BBO....
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below please find our response. Corresponding experimental results can be found at https://anonymous.4open.science/api/repo/UniSO/file/TeSq.pdf. ## R1 Some typos Thanks for pointing these out. We have revised them and checked them thoroughly. Thank you very mu...
Learning State-Based Node Representations from a Class Hierarchy for Fine-Grained Open-Set Detection
Accept (poster)
Summary: This paper introduces a structured state representation for fine-grained open-set detection (FGOD). Prior approaches fall into two categories: local and global detectors. Local detectors iteratively multiply probabilities along a tree path but suffer from error accumulation. Global detectors flatten probabilit...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your constructive comments. Our main responses are summarized below. **Q1: Computational Cost** Thank you for the comment. We conducted a computational cost analysis. We define the input dimension of the classifier head as $D$, ...
Summary: This paper handles fine-grained open-set detection, which features a hierarchical structure of semantic classes. The authors discussed the limitations of existing local and global detectors, with theoretical analyses unveiling the suboptimality of these detectors. Then a state-based approach is proposed by the...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your constructive comments. Our main responses are summarized below. **Q1: Difference between open-set detection, near-OOD detection, and OOD detection?** The goal of open set detection is to identify test samples that are semant...
Summary: This paper addresses the challenge of Fine-Grained Open-Set Detection (FGOD) by leveraging hierarchical structures among known classes. It introduces a state-based node representation that systematically captures fine-grained dependencies between different hierarchical levels. Unlike traditional local or globa...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your constructive comments. Our main responses are summarized below. **Q1: Sibling relationship for theorem 3.1** Path probability based optimization (Theorem 3.1) indeed considers sibling relationships. As we mentioned in the p...
Summary: This paper considers fine-grained open-set detection. The proposed approach utilizes hierarchical relationships to determine if a sample does not belong to any of the known classes, but still might be related to them, by e.g. sharing a superclass with some of them. The paper proposes to leverage the hierarchic...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate your constructive comments. Our main responses are summarized below. **Q1: Reference not discussed** Thank you for suggesting this relevant work. We would like to clarify that the primary focus of this work is to perform hierarc...
IL-SOAR : Imitation Learning with Soft Optimistic Actor cRitic
Accept (poster)
Summary: The paper presents an imitation learning method for the tabular setting and establishes a probabilistic bound on its regret, considering both, regret due to the policy updates and the cost updates. The method is based on the principle of optimism in the face of uncertainty by learning a transition matrix the t...
Rebuttal 1: Rebuttal: Dear reviewer, Thanks a lot for appreciating our work ! Please find our responses to your comments below. ***The experimental design is sound, however, it uses too few seeds (5). It would also be useful to compare the performance for different data set sizes (the appendix does contain experimen...
Summary: This paper presents SOAR: an algorithmic template that learns a policy from expert demonstrations with a primal dual style algorithm that alternates cost and policy updates. The method boosts consistently the performance of imitation learning algorithms based on Soft Actor Critic across various Mujoco tasks. ...
Rebuttal 1: Rebuttal: Dear reviewer, Thanks a lot for appreciating our work ! We will make sure to capitalize the algorithm names in the figures. Thanks again for the time spent reviewing our paper, we remain available for further clarification if needed. Best, Authors --- Rebuttal Comment 1.1: Comment: Thank you...
Summary: The authors propose an optimistic $Q$-function update for SAC-based inverse RL algorithms. They prove sample complexity benefits in the tabular setting and show an approximation of their idealized algorithm can improve the performance of a suite of prior approaches with minimal increase in complexity. ## Upda...
Rebuttal 1: Rebuttal: Dear reviewer, thanks a lot for appreciating our work and for all the suggestions ! Please find below the responses to your questions. ***Have you thought ... overhead.*** We agree with the reviewer! We think that interesting next steps are applying the exploration technique based on multiple ...
Summary: This paper introduces IL-SOAR, an imitation learning (IL) framework designed to bridge the gap between theoretical sample efficiency and practical scalability. The IL-SOAR framework alternates between updating a cost function (critic) and updating the policy (actor). A key contribution of IL-SOAR is the intro...
Rebuttal 1: Rebuttal: Dear reviewer, Thanks a lot for appreciating our work and your careful review! Please find the answers to your questions/remarks in the following. ***..experiments using explicitly exploration-critical tasks would strengthen the central claims of the paper.*** In order to assess the exploratio...
Policy Guided Tree Search for Enhanced LLM Reasoning
Accept (poster)
Summary: The paper introduces Policy-Guided Tree Search (PGTS), a framework that enhances large language models (LLMs) for complex reasoning tasks by integrating reinforcement learning with structured tree search. Unlike traditional approaches such as Chain-of-Thought (CoT) or Monte Carlo Tree Search (MCTS), PGTS emplo...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful assessment of our work and the recommendation for acceptance. We appreciate the positive feedback on our method's performance, efficiency, and adaptive policy design. Below we address the questions and suggestions raised: ## On Quantifying "Overthinking"...
Summary: This paper presents a novel algorithm that integrates **tree search** and **reinforcement learning** which can be used alongside an **off-the-shelf LLM** to improve reasoning capabilities. Unlike conventional approaches that focus on **LLM fine-tuning**, the authors propose training a **Graph Neural Network (G...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment of our work and the constructive feedback. We address the key points below: ## Missing References We appreciate the suggestion to discuss the works by Zelikman et al. (STAR) and Zhang et al. (REST-MCTS*). These papers indeed share conceptual co...
Summary: The paper introduces Policy-Guided Tree Search (PGTS), a novel framework designed to enhance the reasoning capabilities of LLMs by combining reinforcement learning with structured tree exploration. The key innovation of PGTS is a learned policy that dynamically decides between four actions: expanding, branchin...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. We address the key points raised below: ## Fair Comparison of Computational Budget The reviewer raises an excellent point about fair comparisons between methods. However, on most datasets, the token usage ratio is much more f...
Summary: This paper introduces Policy-Guided Tree Search (PGTS), a novel framework that combines reinforcement learning with structured tree search to enhance reasoning capabilities in Large Language Models (LLMs). The key innovation is a learned policy that dynamically decides between expanding the current reasoning p...
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. We address the key points raised below: ## PGTS Performance on Complex Tasks like GPQA The reviewer correctly notes that PGTS underperforms MCTS on GPQA (27.78% vs. 34.85%). This performance gap can be attributed to three prim...
null
null
null
null
null
null
Guided Search Strategies in Non-Serializable Environments with Applications to Software Engineering Agents
Accept (poster)
Summary: This paper investigates guided search strategies for Large Language Models (LLMs) in non-serializable environments, focusing on SWE agents. The authors identify a significant gap between the average-case and best-case performance of LLMs in complex multi-step tasks. While effective search techniques like Monte...
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and valuable feedback. We agree that Pass@N is the correct term for trajectory selection with oracle and will change the text and plots accordingly. We respectfully disagree that “run until submitted” is not a fair setting for evaluation: all evaluat...
Summary: The paper introduces and formalizes the notion of a non-serializable environment. Traditional guided search techniques cannot be applied to non-serializable environments. A particular instance of a non-serializable environment is mutable software engineering environments, where it can be difficult to revisit p...
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and valuable feedback. Regarding the question about the reasons for choosing LLaMa3 70B as the base model for training critic, they are purely historical: this was the best 70B model available at the time when this project started, which gave a reason...
Summary: This paper proposes training a critic for action-value estimates for guided search in non-serializable environments (SWEbench). The high variance of LLM agents performance on this benchmark is well known, and the proposed trained critic can address this issue while enabling inference-time scaling through guid...
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and valuable feedback. We thank the reviewer for pointing out the missing reference to “SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement, ICLR 2025”. We will add the reference to the related work section and ...
Summary: This paper introduces two algorithms for guiding LLM policies in non-serializable RL environments using a learned action-value estimator (LLaMA-3-70B) as a critic. The proposed methods are: **1-step lookahead**, which acts as a process-level reward model by selecting actions at each step, and **Trajectory Sel...
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and valuable feedback. Regarding the comment about the trade-off between the lookahead and the trajectory selection methods (assuming that is what the reviewer means by “best-of-N sampling strategy”), we’d like to emphasize that trajectory selection s...
null
null
null
null
null
null
BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning
Accept (poster)
Summary: The authors propose a new approach to incorporate latent thinking processes and evaluation signals in the model training. The authors propose a two-step approach: 1. generate high-quality rationales by approximating the desired posterior of thought given the question-answer pair; this approach relies on reinforcement learni...
Rebuttal 1: Rebuttal: **Comments on "Experimental Designs Or Analyses":** Authors test their approach on math and code reasoning tasks. It would be interesting to see how this approach will work on other types of reasoning tasks (for ex, some popular benchmarks include MMLU, GPQA, BIG-Bench Hard, etc). **Response:** ...
Summary: The authors propose a unified graphical model to incorporate latent thinking processes and evaluation signals. Drawing inspiration from the EM approach, they unify current post-training methods into the same framework, such as SFT, RS and DPO. Claims And Evidence: I believe the authors may have exaggerated th...
Rebuttal 1: Rebuttal: **Comments on "Claims and Evidence":** In reality, datasets like MATH have very careless annotations, so the performance of SFT may not be as effective as LLMs generating their own data. This is a point that has been widely discussed in many papers and is not necessarily an interesting finding. ...
Summary: This paper works on the reasoning process generation problem. The authors propose formalizing the reasoning problems as a probabilistic graphical model involving input, latent rationals, answer, and evaluation signal. Then they presented the BRiTE algorithm with a theoretical guarantee on convergence. The prop...
Rebuttal 1: Rebuttal: **Weakness:** (1) Reasoning task diversity is limited. (2) The performance of BRiTE is not significantly better than other efficient methods. **Response:** We perform BRiTE on the model *Qwen/Qwen2.5-7B* with a mixed dataset (the size is around 40K) of *RUC-AIBOX/STILL-3-Preview-RL-Data*, MATH, a...
null
null
null
null
null
null
null
null
Adaptive kernel predictors from feature-learning infinite limits of neural networks
Accept (poster)
Summary: The manuscript develops an approach for predicting deep non-linear neural networks outputs in mean-field and muP scaling. Central to their approach is the notion that kernels within each layer (Gram matrices) which, strictly speaking, are stochastic objects, concentrate leading to an effective description of t...
Rebuttal 1: Rebuttal: ### Summary Clarification Regarding the theoretical results, to avoid possible confusion, we decided to change the name *adaptive Neural Network Gaussian Process Kernel* (aNNGPK) into *adaptive Neural Bayesian Kernel* (aNBK) in the newer draft of the paper. Our feature learning theory for deep no...
Summary: This paper expands kernel theory to study networks in the feature learning limit. Coupled systems of equations are found for two different "adaptive" kernels, the aNNGP and aNTK using physics-based methods and based on different scaling assumptions on weight variances and learning dynamics. Helpful interpretat...
Rebuttal 1: Rebuttal: ### Analysis on the Cost of Solving Mean Field Eqns We thank the reviewer for this important question, and in the new draft (Appendix F) we gladly added an extensive discussion on numerical methods and computational costs. Regarding the cost of solving for Algorithm (1) compared to the network ...
Summary: The paper shows that a GP describes neural networks trained in the rich infinite-width regime with a data-dependent kernel. Two different settings for the training dynamics are considered: one with a noise term and the other with no noise, where in both settings, there is a weight decay term. These correspond,...
Rebuttal 1: Rebuttal: ### Relationship to Fischer et al We thank the reviewer for pointing us toward Fischer et al 2024 which we have added a detailed comparison to in the main text, as well as expanded the discussion of Seroussi et al and Rubin et al. Our work differs from Fischer et al in a number of important way...
Summary: This paper proposes two ways of taking the infinite-width limit of fully connected and convolutional neural networks trained by noisy gradient flow to arrive at kernel solutions in what the authors call the "rich regime", in which feature learning takes place, which contrasts with the "lazy regime", in which t...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and suggestions, which we address below. ### How is the infinite limit analyzed? Here we provided exact expressions for the adaptive kernel predictors that one obtains in various feature learning limits through rigorous analyses of the...
null
null
null
null
null
null
MGD$^3$ : Mode-Guided Dataset Distillation using Diffusion Models
Accept (oral)
Summary: The paper proposes a mode-guided diffusion model for dataset distillation. It aims to address the limitations of existing dataset distillation methods, such as insufficient sample diversity and high computational costs. The key idea is to leverage a pre-trained diffusion model without fine-tuning using distill...
Rebuttal 1: Rebuttal: We appreciate the reviewer's positive assessment of our work, particularly our novel three-stage mode-guided approach, clear presentation, and contribution as a training-free dataset distillation method. ## Addressing Concerns About Our Novelty In dataset distillation, modes have been used to enh...
Summary: This paper focuses on the dataset distillation task based on generative models. Specifically, the authors argue that the current methods cannot guarantee the sample diversity, which is essential for model training. Based on this observation, the authors propose a mode-guided diffusion model. Distinct data mode...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive assessment of our work, particularly the recognition of our method’s state-of-the-art performance, its effectiveness in improving sample diversity, and its relevance to generative model-based dataset distillation. We also sincerely appreciate the reviewer’s po...
Summary: This paper presents Mode-Guided Dataset Distillation using Diffusion Models (MGD³), a novel dataset distillation method that improves diversity and representativeness in distilled datasets without requiring additional fine-tuning of diffusion models. The approach comprises three key components: Mode Discovery ...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful feedback and recognition of our contributions. Specifically, we are pleased that the reviewer highlights the novelty of our Mode-Guided Dataset Distillation using Diffusion Models (MGD³), its computational efficiency, the well-structured methodology, and the...
Summary: This paper proposes a mode-guided diffusion model that leverages a pre-trained diffusion model for dataset distillation, eliminating the need for fine-tuning with distillation losses. The proposed approach enhances dataset diversity through three key stages: - Mode Discovery – Identifies distinct data modes t...
Rebuttal 1: Rebuttal: We appreciate the reviewers' recognition of our work as novel and acknowledging our work to address the key limitations of existing techniques without requiring any fine-tuning. We are also grateful that the reviewer highlighted the significance and impact of our approach, with mention of our work...
null
null
null
null
null
null
MissScore: High-Order Score Estimation in the Presence of Missing Data
Accept (poster)
Summary: The paper introduces MissScore, a novel framework for high-order score estimation in the presence of missing data. Existing high-order score-based generative models assume data completeness, necessitating imputation when data is missing. MissScore extends Denoising Score Matching (DSM) to estimate high-order s...
Rebuttal 1: Rebuttal: # To Reviewer bpYG We sincerely thank you for your valuable feedback and the time spent evaluating our work! Below, we summarize the main concerns and provide detailed responses to each point. --- ### **Related work on VAE- and diffusion-based methods.** Thank you for the suggestions. We agree ...
Summary: They introduce a method for estimating high-order scores in the presence of missing data with either MAR or MCAR. For MAR, they need to make use of logistic regression to estimate a needed quantity. They propose stability improvements with variance reductions. They show low error in the estimation of the firs...
Rebuttal 1: Rebuttal: # To Reviewer Bf4B We appreciate the reviewer’s thoughtful comments and the time spent on reviewing our paper! Below, we summarize the concerns and suggestions and provide detailed responses to each. --- ### **Faster Convergence Rate, Not Faster Wall-Clock Time.** Ozaki sampling achieves **fas...
Summary: The authors propose a higher-order score estimation method for missing data. They first introduce the correct learning objectives in Theorems 3.3 to 3.5 and then formulate a multi-task objective to learn both the first and second-order scores with variance reduction to improve stability. The authors validate t...
Rebuttal 1: Rebuttal: # To Reviewer ugL7 We appreciate your thoughtful comments and the time spent on reviewing our paper. Below, we summarize the concerns and suggestions and provide detailed responses to each. --- ### **Experiment on Real Dataset for Causal Discovery.** The Sachs dataset [1] models a network of cel...
null
null
null
null
null
null
null
null
Self-Organizing Visual Prototypes for Non-Parametric Representation Learning
Accept (poster)
Summary: This paper introduces ​Self-Organizing Prototypes (SOP), a non-parametric self-supervised learning (SSL) framework designed to address limitations in traditional prototypical SSL methods. Conventional approaches rely on static, learnable prototypes to represent clusters in unlabeled data, often leading to over...
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer’s careful assessment and constructive feedback on our work. Your insights have helped us improving the manuscript. Below, we address each point in turn and provide additional visualizations and tables through anonymous links as requested. > [...] paper does not ...
Summary: 1-1. The authors proposed a non-parametric design called Self-organizing visual embeddings (SOLVE) in the field of self-supervised learning. Specifically, SOLVE assumes the presence of multiple anchors that together represent a single concept, combining them to form a single soft label for self-supervised learn...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for his/her careful reading and constructive feedback. The insights have significantly contributed to improving our manuscript. Below, we address each point in turn. >2-1 underrepresentation and overclustering [...] no evidence [...] We argue that over-clustering ...
Summary: This paper proposes a new self-supervised learning approach that leverages multiple anchors and support embeddings to infer soft label for self-supervised learning. Extensive experiments demonstrate that the proposed method achieves favorable performance across various downstream tasks. ** Update after rebutt...
Rebuttal 1: Rebuttal: We thank the reviewer for his/her careful assessment and constructive feedback, which helped us clarify key concepts about our work, improve the readability of the paper, and strengthen our arguments. Below, we address each of the reviewer's points in turn. > Figure 2 lacks some illustration deta...
Summary: This paper introduces a novel technique for self-supervised learning, Self-Organizing Visual Prototypes (SOP), to address limitations in existing prototypical self-supervised learning (SSL) methods. SOP enhances the feature set of prototypes by using multiple semantically similar support embeddings (SEs) to...
Rebuttal 1: Rebuttal: We appreciate the reviewer's fair assessment of our work. We would like to highlight key points from the review, including our "extensive experimental verification" and the accurate summary that captures the core innovation of our method. The review emphasizes our method's ability to mitigate over...
null
null
null
null
null
null
Scalable Private Partition Selection via Adaptive Weighting
Accept (poster)
Summary: This work focuses on the differentially private partition selection problem, which is crucial for many underlying applications like NLP. In this problem, the system would like to extract as much data as possible from users while preserving users’ privacy. It proposes an adaptive and non-uniform ...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! We respond to your comments and questions below. ### Organization with the appendix and conclusion section Thanks for the comment and suggestions! We agree that it would be a preferable reading experience to bring more content from the appendix ...
Summary: This paper studies private partition selection (or set union) under the constraint of differential privacy (DP). Here, they consider user-level DP (UDP), where each user is allowed to contribute multiple points to the dataset. It provides both theoretical and experimental results to show that their work imp...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! ### Absolute utility (Question 1) Your intuition that the raw number of items output by our algorithm (and any private algorithm) is affected by the long tails of the dataset is absolutely correct! We report some interesting new results on how t...
Summary: The paper deals with the problem of differentially private set union. Each user holds a subset of items from an unbounded universe, and the goal is to output as many items from the union of all the users' sets while maintaining user-level privacy. Useful intuition for the problem is that items held by a single...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! We respond to your comments and questions below. ### Number of rounds for MAD2R and DP SIPS (Question 3) For this problem, using more rounds does not necessarily translate to better performance. There is a fixed privacy budget and more rounds me...
Summary: This work studies the problem of private partition selection, giving two new, highly parallelizable algorithms for privately outputting as large a subset of the union of user datasets as possible under the constraints of user-level privacy (MAD and MAD2R). They prove that their algorithms satisfy approximate...
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! ### When to use MAD2R vs. DP SIPS (from Questions) and theoretical justification compared to DP SIPS **MAD2R should always be preferred to DP SIPS** by practitioners as it is strictly better (both for fundamental theoretical reasons and confirme...
null
null
null
null
null
null
DS-VLM: Diffusion Supervision Vision Language Model
Accept (poster)
Summary: This paper proposes the Diffusion Supervision Vision-Language Model (DS-VLM), which uses a diffusion module to provide additional supervision on the vision encoder and connector in conventional MLLM frameworks. Specifically, a frozen Stable Diffusion module takes the multi-level vision encoder features and the o...
Rebuttal 1: Rebuttal: > However, the default input space of the Text Adapter (CLIP features) and the LLM backbone (word embeddings) are different. Why are the newly learned information assumed to be able to leveraged by the LLM backbone? > A1: In response to the reviewer’s question, our DS-VLM framework leverages a d...
Summary: This paper proposes DS-VLM, which introduces diffusion-based supervision to improve VLMs. To shorten the gradient propagation path, this paper reconstructs images using diffusion models, thereby improving supervision for the vision encoder and connector. Experimental results confirm the effectiveness of propo...
Rebuttal 1: Rebuttal: Thanks for your feedback. We’ll address each point in detail. > The paper reports good results across various test sets. Section 4.3 shows accuracy gains from different supervision features and Section 4.5 show typical reconstruction process for diffusion models. However, the connection between p...
Summary: VLMs integrate visual and textual information to perform tasks such as image captioning, visual question answering, and image-text retrieval. However, current VLMs face two critical limitations: degraded supervision due to information loss during gradient propagation through LLMs, and the inherent semantic spa...
Rebuttal 1: Rebuttal: > The suggestions for additional analyses (e.g., statistical significance, computational overhead) would further strengthen the paper and provide a better evaluation of the proposed method. > > The training process involving diffusion models is computationally intensive. The paper does not provi...
Summary: The authors design a framework to train the visual encoders in VLMs, with diffusion supervision. The authors design a mechanism to incorporate visual features from multiple levels of the encoder to reconstruct the image using a frozen diffusion model during training. The authors evaluate their method, built o...
Rebuttal 1: Rebuttal: > The improvements are marginal ~1-2% and not "substantial" as claimed by the authors. Also for many benchmarks there is no improvement at all in Table 1. > > The authors need to analyse this inconsistency in performance a cross the different benchmarks. > A1: Thank you for pointing out the mo...
null
null
null
null
null
null
Finite-Sample Convergence Bounds for Trust Region Policy Optimization in Mean Field Games
Accept (poster)
Summary: This paper extends the known concept of trust region policy optimization (TRPO) from the general reinforcement learning literature to the case of ergodic and entropy regularized mean field games. Besides the exact TRPO, the authors also state a stochastic, sample based version of TRPO and present theoretical r...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback, which helped us refine the clarity and contextualization of our work. We have revised the manuscript accordingly, including a more detailed comparison with related literature, especially in the ergodic and sample-based MFG setting, to better highl...
Summary: The authors formulate algorithms for learning equilibria in MFGs. They provide high-probability guarantees and finite sample complexity in the ergodic setting, with relaxed assumptions. ## update after rebuttal I thank the authors for their response and additional experiments. The additional experiments and c...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and insightful questions. We address them in order below. > **A strength is the ergodic setting and sample-based guarantees, as they make contributions more directly applicable in real systems, though assumptions such as $\nu$-restarting and bound...
Summary: This paper investigates the finite-sample complexity of a TRPO-inspired method for Mean Field Games (MFGs). It is a game with anonymous agents whose number goes to infinity. A mean-field game is formulated through a mean-field MDP that consists of a population distribution over the state space and the reward...
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their precise and thoughtful comments, which have greatly contributed to the refinement of our work. While our initial submission was positioned as primarily theoretical, we appreciate the suggestion and have decided to include **numerical experiments** to b...
Summary: The paper studies the problem of computing Nash equilibria for discounted tabular Mean Field Games, and proposes a trust region policy optimization framework to learn stable and optimal policies. It provides a theoretical analysis of the framework, showing theoretical guarantees of its convergence. The paper...
Rebuttal 1: Rebuttal: We would like to thank Reviewer SqLi for the valuable feedback. Below, we address the issues raised in the review. > **The practical significance of the Lipschitz continuity of the reward functions (with respect to μ and a) seems a bit strange in this tabular setting [...] I would think that th...
null
null
null
null
null
null
Stability and Generalization Capability of Subgraph Reasoning Models for Inductive Knowledge Graph Completion
Accept (poster)
Summary: The authors generalize the Tree Mover's Distance to relational graphs to introduce the Relational Tree Mover's Distance (RTMD), which is a distance between subgraphs in a relational graph. They prove that current subgraph message passing neural networks (SMPNNs) are Lipschitz continuous under the distance, so define the...
Rebuttal 1: Rebuttal: **Claims And Evidence**\ #1. The experiments in Section 6.1 are designed to demonstrate that RTMD is a valid metric for quantifying differences between subgraphs extracted from real-world KGs. If RTMD is an appropriate metric, subgraphs with positive labels should be located closer to other subgra...
Summary: This paper proposes a theoretical analysis of the relationship between stability and generalization capability for subgraph reasoning models in inductive knowledge graph completion. They define stability as the degree of consistency in a subgraph reasoning model’s outputs in response to differences in input su...
Rebuttal 1: Rebuttal: **Experimental Designs/Analyses**\ The datasets used in our experiments are the standard benchmarks for inductive KGC [1], where we choose to use three distinct KGs. For additional experimental results, please read our response to **“Claims/Evidence” #2, and “Experimental Designs/Analyses”** of **...
Summary: This paper presents the theoretical analysis of the relationship between stability and generalization capability in subgraph reasoning models for inductive KG completion. The authors define stability as the consistency of a model’s outputs across variations in input subgraphs and introduce the Relational Tree ...
Rebuttal 1: Rebuttal: **Claims And Evidence**\ We theoretically establish a connection between stability and generalization capability, where generalization capability refers to the gap between performance on training and test data. This indicates that more stable models tend to exhibit a smaller gap between performanc...
Summary: This paper attempts to theoretically analyze the stability and generalization capability of subgraph reasoning models in inductive knowledge graph completion (KGC). It introduces the Relational Tree Mover’s Distance (RTMD) as a metric to quantify input subgraph variations and proposes that model stability dire...
Rebuttal 1: Rebuttal: **Claim/Evidence #1, Theoretical Claims #1, and Other Strength/Weakness #1**\ Assumptions 1 and 2 in Section 5.1 and Assumption A.1 in Appendix A are the only assumptions we made in deriving our theorems: Assumptions 1 and 2 serve as fundamental premises for subgraph reasoning models[1]. Assumptio...
null
null
null
null
null
null
Training Diffusion-based Generative Models with Limited Data
Accept (poster)
Summary: The paper identifies that many existing diffusion models depend on a large corpus of training images, and attempts to offer solutions to enable high-quality training with only a fraction of the dataset size. The authors reveal the different cases for how a diffusion model can be trained with limited data, and cons...
Rebuttal 1: Rebuttal: **Q1. The paper's method section could be expanded on expense of the rest of the paper.** Due to the 8-page limitation of ICML submission, we have included additional details of our proposed LD-Diffusion in the Appendix. We will extend the Sections. 4.1, 4.2, and 4.3 based on the Appendix to pro...
Summary: This paper proposes a method to train diffusion models under settings with limited training data. The suggested method includes several modifications to the standard diffusion model, including: 1. Learning diffusion in an embedding space (that comes from a pre-trained VAE model), 2. Applying penalty for produ...
Rebuttal 1: Rebuttal: **Q1. One issue of the FFHQ-100 dataset.** The purpose of designing OOD regularization is to mitigate the potential leaking issue in diffusion models under limited data conditions, thus OOD regularization is deliberately designed and can only be applied to the scenarios where aiming to prevent ...
Summary: This paper focuses on the challenges of training diffusion-based models with minimal data. The authors introduce a novel model called Limited Data Diffusion (LD-Diffusion), which includes a compressing model to reduce hypothesis space complexity and a new data augmentation strategy called Mixed Augmentation wi...
Rebuttal 1: Rebuttal: **Q1. How sensitive of hyperparameters related to the compressing model and MAFP?** For the hyperparameters in the compressing model, we follow the default settings from the pretrained VAE models in Stable Diffusion. Regarding MAFP, we conduct experiments using fixed probabilities $p_1$ and $p_2...
Summary: The authors propose a method of training diffusion models with limited data. The proposed method uses a compressing model to constrain the complexity of the denoiser function hypothesis space and a mixed augmentation with a fixed probability (MAFP) strategy to provide more informative guidance. Claims And Evi...
Rebuttal 1: Rebuttal: **Q1. Figure 4 shows how the leaking issues are different for case 1 and case 2. I think it might be helpful in understanding the proposed LD-Diffusion to explain how these differences in leaking issues lead to different strategies for cases 1 & 2, while I also think Figure 4 is critical for the m...
null
null
null
null
null
null
Reasoning Limitations of Multimodal Large Language Models. A case study of Bongard Problems
Accept (poster)
Summary: - The paper proposes an evaluation benchmark for assessing the visual abstraction capabilities of large multimodal models by using real-world images to mimic Bongard problems (originally formulated with synthetic images), to close the real-synth gap that can hinder evaluation. - The analysis uses multiple eva...
Rebuttal 1: Rebuttal: Dear Reviewer, we are grateful for your insightful comments and suggestions, which helped us to improve the paper. > The first claimed contribution is confusing: … Thank you for this comment. We will rephrase this statement. Prior work on BP benchmarks has primarily focused on pre-trained image...
Summary: This paper proposes a novel benchmark for multi-modal large language models based on Bongard problems. In addition to standard Bongard problems, the paper proposes "real-world" variants of such problems, where abstract images are replaced with real-world images. Furthermore, the paper proposes simpler versions...
Rebuttal 1: Rebuttal: Dear Reviewer, we are grateful for your insightful comments and suggestions, which helped us to improve the paper. > … the novelty … is not clear. In preliminary experiments, we observed that while MLLMs perform well on HOI and OpenWorld, they struggle with Synthetic BPs. We identified two key ...
Summary: This work evaluates VLMs’ reasoning capabilities using Bongard problems. In addition to existing datasets, the authors constructed a new dataset that represents synthetic BP concepts using real-world images. Results show that some VLMs can perform well if the task is simplified to binary classification, but th...
Rebuttal 1: Rebuttal: Dear Reviewer, we are grateful for your insightful comments and suggestions, which helped us to improve the paper. > … feed multiple (sub-)images from group A or B all at once … Thank you for this valuable suggestion. We explored this approach in preliminary experiments but the results were not...
Summary: The submitted work discusses the Abstract Visual Reasoning (AVR) capabilities of Multimodal Large Language Models (MLLM) with a focus on Bongard Problems (BP). To explore the gap between AVR capabilities on synthetic and real-world images, they introduce Bongard-RWR. This dataset aims to be more comparable to ...
Rebuttal 1: Rebuttal: Dear Reviewer, we are grateful for your insightful comments and suggestions, which helped us to improve the paper. > … some easy adjustments to the dataset … Thank you for these excellent suggestions regarding the dataset improvement. One of the key motivations behind Bongard-RWR is to test how...
Summary: The paper investigates the abstract visual reasoning capabilities of contemporary multimodal large language models (MLLMs), a problem that is aptly motivated. Firstly, the classic Bongard problems are posed to MLLMs in various forms. The machine learning models show performance that is significantly inferior ...
Rebuttal 1: Rebuttal: Dear Reviewer, we are grateful for your insightful comments and suggestions, which helped us to improve the paper. > … concurrent work, … (Wüst et al., 2024) := [1] Thank you for bringing this highly relevant work to our attention. At the time of our submission, we were unaware of [1], as it w...
Summary: In this work, the authors aim to test the abstract visual reasoning performance of MLLMs. To achieve this, they create a testing set that resembles abstract concepts but uses real images, a component lacking in standard Bongard datasets. They develop various prompt strategies and assess the puzzle-solving perf...
Rebuttal 1: Rebuttal: Dear Reviewer, we are grateful for your insightful comments and suggestions, which helped us to improve the paper. > The proposed test set is limited to 60 examples. Please, see our response to Reviewer 2kYM (*“The sample sizes are relatively small …”*). > Previous research has explored a simi...
DA-KD: Difficulty-Aware Knowledge Distillation for Efficient Large Language Models
Accept (poster)
Summary: This paper investigates knowledge distillation for large language models (LLMs) and introduces two main contributions: difficulty-aware data updating and bidirectional discrepancy distillation. The difficulty-aware data updating approach estimates sample difficulty using the ratio of cross-entropy losses betwe...
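The summary above is truncated, but it describes a difficulty score based on a ratio of cross-entropy losses between the student and teacher. A minimal sketch of that idea follows; the function names (`difficulty_score`, `filter_hard`), the dictionary-based toy distributions, and the threshold value are all illustrative assumptions, not the paper's actual implementation.

```python
import math

def cross_entropy(probs, target):
    """Negative log-likelihood of the target token under a distribution."""
    return -math.log(max(probs[target], 1e-12))

def difficulty_score(student_probs, teacher_probs, target):
    """Hypothetical distillation difficulty score: ratio of student to
    teacher cross-entropy. A ratio near 1 means the student already
    matches the teacher on this sample ("easy"); a large ratio marks a
    sample worth keeping for further distillation."""
    return (cross_entropy(student_probs, target)
            / cross_entropy(teacher_probs, target))

def filter_hard(samples, threshold=1.5):
    """Keep only samples whose difficulty exceeds the threshold."""
    return [s for s in samples if difficulty_score(*s) > threshold]
```

Under this toy scoring, a sample the student already predicts as confidently as the teacher scores near 1 and is dropped, concentrating training on harder examples.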
Rebuttal 1: Rebuttal: > Q1: The ablation studies suggest most performance improvements stem from BDL. A1: We need to clarify that DiffUp aims to reduce data and achieve efficiency. The BDL aims to improve performance. It is unfair to only compare accuracy improvement. To verify the effectiveness of DiffUp, we randomly...
Summary: This paper proposes Difficulty-Aware Knowledge Distillation to improve the efficiency of LLM distillation. It proposes a dynamic data updating strategy that leverages a Distillation Difficulty Score to filter out easy samples, and a Bidirectional Discrepancy Loss that stabilizes gradient updates during trainin...
Rebuttal 1: Rebuttal: We sincerely thank you for recognizing our contributions, by stating "well-motivated and appropriate", "experimental design is comprehensive" and "demonstrating reduced computational cost while maintaining/improving performance". Your thoughtful suggestions and questions help a lot to improve our ...
Summary: This paper presents an efficient knowledge distillation method. Specifically, it introduces a Difficulty-aware Data Updating strategy to identify challenging samples and proposes a novel loss function, Bidirectional Discrepancy Distillation, to reduce size of training samples required, thereby improving traini...
Rebuttal 1: Rebuttal: We sincerely thank you for recognizing the strengths of our work by stating “simple yet effective”, "well-written", and "comprehensive experiments". Your thoughtful suggestions help a lot to improve our work. We have carefully addressed your concerns below: > Q1: DDS Computation & Dataset Partiti...
Summary: The paper introduces DA-KD for knowledge distillation tailored to LLMs. It proposes two main innovations. First, a Difficulty-Aware Data Updating strategy computes Distillation Difficulty Score for each sample, to filter out easy examples and focus training on challenging ones, and employs a Stratified Data Up...
Rebuttal 1: Rebuttal: We sincerely thank you for recognizing the strengths of our work, including “addresses an important bottleneck in LLM distillation” and “experimental evidence supports the claims". Your suggestions are very helpful for improving our paper. We have carefully addressed your concerns below: > Q1: It...
Floating-Point Neural Networks Can Represent Almost All Floating-Point Functions
Accept (poster)
Summary: This paper explores the classes of functions over various floating-point arithmetics that can be represented using feedforward neural networks (FNNs) employing different activation functions, constrained by the corresponding floating-point arithmetic. In particular, the authors present a necessary condition, ...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation and thoughtful feedback. We address all comments of the reviewer below. > The paper might benefit from offering more intuitions or informal explanations for its results. Overall, it is somewhat challenging to read for those not deeply engaged wi...
Summary: This paper investigates the expressive power of floating-point neural networks, focusing on their ability to represent floating-point functions under practical constraints such as finite precision and inexact arithmetic operations. The authors establish necessary and sufficient conditions for activation functi...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation and thoughtful feedback. We address all comments of the reviewer below. > Lack of empirical validation. While the theoretical results are robust, the absence of experimental results or case studies makes it difficult to assess the practical impl...
Summary: The paper describes a condition under which discrete neural networks (all NNs running on digital computers with finite-precision numbers) are universal approximators. I think this is an important line of work, because the proofs from the 90s assume real numbers, which computers cannot store. ...
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation and thoughtful feedback. We address all comments of the reviewer below. > It is very hard to read. Sometimes, I have a feeling symbols are not well defined and explained. For example in Definition 3.5, I do not know what $\eta^+$ and $\eta^-$ is...
Summary: This article explores the representation of arbitrary floating-point functions using neural networks under floating-point arithmetic. It proves that, given certain conditions on the activation function, such representation is feasible for many floating-point functions. Throughout the proof, the article emphasi...
Rebuttal 1: Rebuttal: We appreciate the reviewer for their valuable comments. Our response is below. > I don't see how the generalizations of this paper are helpful for understanding neural networks in real-world applications. Classical universal approximation theorems establish that sufficiently wide neural networks...
Identifying biological perturbation targets through causal differential networks
Accept (poster)
Summary: The authors considered the problem of identifying the target of interventions from merely observational/interventional data. In some biological applications, there might be thousands of variables in the system and few samples are available per intervention. The authors utilized an amortized approach (which has...
Rebuttal 1: Rebuttal: Thank you for your comments! Please let us know if this helps answer your questions, and if you have any further concerns. > Theoretical guarantees We show in Appendix C that the differential network is *well-specified* as a *model class*. Due to the complexity of the architecture, it is difficu...
Summary: The authors focus on the problem of identifying direct targets of intervention in single-cell perturbation data. The proposed method, Causal Differential Network (CDN), first extracts causal networks behind unperturbed and perturbed data. These networks are then compared using an axial attention-based classifi...
Rebuttal 1: Rebuttal: Thank you for your comments! Please let us know if this response helps clarify your questions, and if you have any further concerns. > "direct application of existing CDN algorithms to the target identification task" CDN is the name of the method we propose; are you referring to causal discovery...
Summary: Identifying targets that induce cell state transitions has numerous implications in advancing targeted drug discovery and biological research. Authors propose causal differential networks (CDN), an attention-based framework independent of biological priors to identify targets of an intervention given a pair of...
Rebuttal 1: Rebuttal: Thank you for your comments! Please let us know if this response helps clarify your questions, and if you have any further concerns. > Architecture similarities and differences / novelty Our key contribution is the idea of supervised causal discovery (predicting graphs from simulated data) as a ...
Summary: This paper introduces Causal Differential Networks (CDN), a method to identify which variables in a biological system are directly perturbed between observational and interventional conditions. The authors frame the problem in terms of causal discovery: they first learn approximate causal graphs from each cond...
Rebuttal 1: Rebuttal: Thank you for your comments! Please let us know if this response helps clarify your questions, and if you have any further concerns. > Intended application We will make this more explicit in the paper. This work focuses on the application of predicting perturbation targets, for efficient experim...
Summary: The paper proposes a new method called causal differential networks (CDN) for identifying variables responsible for biological perturbations (e.g. drug targets) from pairs of observational and interventional datasets. The main idea involves first inferring noisy causal graphs from data, and then learning a map...
Rebuttal 1: Rebuttal: Thank you for your comments! Please let us know if these answers are helpful, and if you have any further concerns. > Top 1000 DEGs It's quite common to restrict analysis to subsets of genes, though the threshold and criteria may vary. Here, we believe that selecting the top 1000 genes (based on...
Summary: This paper proposes two modules for predicting true perturbation targets of cell perturbation datasets. Firstly, a causal structure learner is pre-trained on synthetic data in an attempt to recover causal graph representation from both observational and interventional dataset. Then, a differential network is t...
Rebuttal 1: Rebuttal: Thank you for your comments! Please let us know if this response helps clarify your questions, and if you have any further concerns. > Technical novelty Our key contribution is the idea of supervised causal discovery (predicting graphs from simulated data) as a pretraining objective, to obtain d...
Summary: The paper introduces causal differential networks (CDN), a supervised method designed to identify which variables (eg. genes) were directly targeted by biological perturbations (genetic knockouts or chemical treatments) using gene expression data from single-cell transcriptomics experiments. CDN integrates a c...
Rebuttal 1: Rebuttal: Thank you for your questions and recommendations! We hope this response addresses your questions, but please let us know if you have any additional concerns. > Additional historical context Thank you for these references and suggestions. We will incorporate them into the paper alongside addition...
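The CDN reviews above describe comparing causal graphs inferred from observational and interventional data to flag perturbation targets. A naive, purely illustrative baseline for that comparison step is sketched below (not the paper's attention-based classifier); the function name and the adjacency-set representation are assumptions.

```python
def differential_targets(g_obs, g_int):
    """Naive differential-network baseline: flag nodes whose outgoing
    edge sets differ between the observational graph and the
    interventional graph. Graphs are dicts mapping node -> set of
    child nodes."""
    nodes = set(g_obs) | set(g_int)
    return sorted(n for n in nodes
                  if g_obs.get(n, set()) != g_int.get(n, set()))
```

In the real method, the inferred graphs are noisy, which is why CDN learns the comparison with a trained classifier rather than an exact set difference like this.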
Distributional Diffusion Models with Scoring Rules
Accept (poster)
Summary: This paper introduces an approach to accelerate diffusion models by learning the full conditional posterior distribution of clean data given noisy samples, rather than just the conditional mean, as done in standard diffusion models. The authors propose replacing the traditional regression loss (used to estimate...
Rebuttal 1: Rebuttal: We thank the reviewer FCZ5 for their feedback on our work. Please see our response below |$p_{0|t}(x_0|x_t)$,Dirac…not true Using $p_{0|t}(x_0|x_t)\approx\delta_{\hat{x}_\theta(t,x_t)}$ in $p(x_s|x_t)$ (see eq.5) leads to $p(x_s|x_t)\approx p(x_s|x_0=\hat{x}_\theta(t,x_t),x_t),$ where $p(x_s|x...
Summary: The paper proposes to learn the conditional distribution $p(x_0 | x_t)$ instead of learning the mean of this conditional distribution. To do this, the authors propose the use of scoring rules. In practice, this amounts to balancing two losses: the standard denoising loss, and a diversity loss on the learned co...
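The balance between an accuracy term and a diversity term that this review describes is characteristic of the energy score, a standard strictly proper scoring rule. A one-dimensional Monte Carlo sketch follows; it illustrates the general scoring-rule structure, not the paper's exact loss or weighting.

```python
def energy_score(samples, target):
    """Monte Carlo energy score for 1-D samples:
    E|X - y|  -  0.5 * E|X - X'|.
    The first term rewards accuracy to the target; the second rewards
    spread among samples, so collapsing to a point is penalized."""
    m = len(samples)
    accuracy = sum(abs(x - target) for x in samples) / m
    diversity = sum(abs(x - y) for x in samples for y in samples) / (m * m)
    return accuracy - 0.5 * diversity
```

Lower is better: a degenerate (Dirac-like) set of samples exactly at the target scores 0, while spread-out samples centered on the target can score as well or better in expectation over random targets, which is what makes the rule strictly proper.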
Rebuttal 1: Rebuttal: We thank reviewer r8Za for their overall positive assessment of our paper. Below, we address your comments. | Weakness 1 As suggested, we have now compared our method to the DPM-solver++ [1]. We have also compared it to a SOTA (multistep) distillation method [2]. Please see our response to the r...
Summary: This paper proposes Distributional Diffusion Models (DDM), an acceleration method of diffusion model. Specifically, the authors follow the Denoising Diffusion Implicit Models (DDIM) and replace the regression loss with a loss based on scoring rules to accomplish sample generation. This method learns the poste...
Rebuttal 1: Rebuttal: We would like to thank reviewer waLv for their comments and positive feedback. As requested, we re-implemented and have added comparisons to a popular numerical solver DPM-solver++ [1] (as suggested by reviewer r8Za) and to the recommended distillation method [2]. We did implement the parallel sa...
Learning Strategic Language Agents in the Werewolf Game with Iterative Latent Space Policy Optimization
Accept (poster)
Summary: This paper proposes Latent Space Policy Optimization (LSPO) to solve the werewolf game. The approach is to map text to latent space so that CFR and RL can learn a strategic policy, and then translate the policy back into natural language for DPO alignment. Claims And Evidence: Yes Methods And Evaluation Crit...
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work and constructive suggestions. We hope the following response can address your concerns. > The paper would benefit from a more detailed mathematical formulation of both the problem and its solution. Also, adding some algorithm boxes might be better for u...
Summary: The proposed framework is designed to create strategic language agents for the Werewolf game, overcoming limitations of traditional AI methods and large language models. It maps free-form text to a discrete latent space for strategy optimization using Counterfactual Regret Minimization and then refines languag...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions! We hope the following response can address your concerns. > Generalization to games other than Werewolf. We perform experiments on two additional games to show the general applicability and effectiveness of our method. Due to the word limit...
Summary: The paper proposes an iterative framework called Latent Space Policy Optimization (LSPO) to develop strategic language agents for free-form social deduction games, using Werewolf as the testbed. The approach begins by mapping free-form discussion actions into a finite latent strategy space via embedding and k-...
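The mapping from free-form text to a finite latent strategy space described in these reviews can be illustrated with a nearest-centroid assignment, assuming a k-means-style clustering of text embeddings. The function name and the toy 2-D embeddings are assumptions for illustration only.

```python
def assign_latent(embedding, centroids):
    """Map a text embedding to a discrete latent action: the index of
    the nearest centroid under squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)),
               key=lambda i: sq_dist(embedding, centroids[i]))
```

Once every utterance is reduced to a centroid index like this, game-theoretic solvers such as CFR can operate over the finite action set, as the summary describes.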
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions! We hope the following response can address your concerns. > Q1: Convergence behavior of the iterative LSPO process. (Also mentioned in Weakness 3) Theoretically, suppose the free-form language action has a finite vocabulary size $N_v$ and a...
Summary: This paper introduces Latent Space Policy Optimization (LSPO), an iterative framework designed to improve LLM-based agents in social deduction games like Werewolf. LSPO maps free-form text into a discrete latent space, enabling strategic policy learning using game-theoretic methods. It then translates the lear...
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions! We hope the following response can address your concerns. > Claims And Evidence: Evaluation on the Werewolf game alone is not sufficient. To show the general applicability and effectiveness of our method, we perform additional experiments o...
Rethinking Latent Redundancy in Behavior Cloning: An Information Bottleneck Approach for Robot Manipulation
Accept (poster)
Summary: This paper presents a method for incorporating the principle of the information bottleneck (IB) into behavior cloning. They accomplish this by adding a mutual information objective to the loss function alongside the standard action regression term. This extra objective works to reduce the overfitting in low-data r...
Rebuttal 1: Rebuttal: ## **Please open [link to *anonymous material*](https://anonymous.4open.science/r/rebuttal_4879-FD3A/README.md) for easy reference.** ### **Comment 1 on Novelty: Limited novelty beyond adding a loss term to BC.** Our contributions go beyond simply adding a loss term to BC. Specifically, we highli...
Summary: This paper proposes an information bottleneck approach to learn policies via BC for robot manipulation. To achieve this the paper uses the mutual information via neural estimation (MINE) objective and uses that to add an additional mutual information based regularization loss during training. Theoretical resul...
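The review above mentions MINE-based mutual information estimation used as a regularizer on the behavior-cloning loss. A minimal numeric sketch of the two ingredients follows: the Donsker-Varadhan lower bound that MINE maximizes (here with precomputed critic scores instead of a neural critic), and a combined loss. The function names and the weight `beta` are illustrative assumptions.

```python
import math

def dv_mi_lower_bound(joint_scores, marginal_scores):
    """Donsker-Varadhan lower bound on mutual information, the quantity
    MINE estimates:  E_joint[T] - log E_marginal[exp(T)],
    where T is a learned statistics (critic) network; here its outputs
    are passed in as plain numbers."""
    e_joint = sum(joint_scores) / len(joint_scores)
    e_marg = sum(math.exp(s) for s in marginal_scores) / len(marginal_scores)
    return e_joint - math.log(e_marg)

def ib_bc_loss(action_mse, mi_estimate, beta=0.1):
    """Hypothetical IB-regularized behavior-cloning loss: the standard
    action regression term plus a penalty on I(observation; latent)."""
    return action_mse + beta * mi_estimate
```

Minimizing the second term pushes the latent representation to discard observation information that is not needed to predict actions, which is the redundancy-reduction effect the paper studies.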
Rebuttal 1: Rebuttal: ## Please open [link to anonymous material](https://anonymous.4open.science/r/rebuttal_4879-FD3A/README.md) for easy reference. ### **Comment 1 on Novelty of Method: simply apply the IB in BC and other works had done it [1-3].** 1. Differences with [1-3] (1) Papers [1] and [2] differ significant...
Summary: This manuscript explores latent representations in behavior cloning for robot manipulation by applying the Information Bottleneck principle to address redundancy in learned representations. It introduces an innovative perspective by applying mutual information to quantify and mitigate redundancy, supported by ...
Rebuttal 1: Rebuttal: Thanks for your detailed review of this paper! We hereby address your concerns as follows. ### **Comment 1 on Writing: The claimed improvements from incorporating IB are overstated.** Thanks for the feedback. We will emphasize the consistency of improvements rather than their magnitude in the rev...
Summary: The manuscript investigates the redundancy in representations for Behavior Cloning in robot manipulation and introduces the Information Bottleneck principle to mitigate this issue. By incorporating IB, we aimed to filter out redundant information in latent representations while preserving task-relevant feature...
Rebuttal 1: Rebuttal: Thanks for your affirmative review of this paper! We address your questions as follows. ### **Comment 1 on Writing and Organization: Some of the less important findings (e.g. finding 9) are unnecessarily highlighted.** We sincerely appreciate the reviewer’s feedback on refining the presentation ...
Trust-Region Twisted Policy Improvement
Accept (poster)
Summary: The authors propose several improvements to SMC: 1. Addressing path degeneracy by performing an online estimate of $Q_{SMC}$. 2. Using the $V$ values obtained by the search algorithm. 3. Reviving trapped particles. 4. Changing the proposal policy to be in the trust region. The authors provide experiments to ...
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and accurate summary of our paper, we appreciate that you recognize our work as a solid advancement to SMC-planners for RL and the positive assessment. We address your points below. --- **Minor comments:** - We included the derivation for Corollary 2.3 i...
Summary: The authors propose Trust-Region Twisted Sequential Monte Carlo (TRT-SMC) to improve the sample efficiency of SMC methods for reinforcement learning. One of the main motivations is that SMC methods scale better than Monte Carlo Tree Search (MCTS) because they enable higher GPU parallelization. The authors prop...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort. We divided our rebuttal in two parts, firstly on the experiment design and the second part on answering your comments. --- **Choice of environments:** For the choice of environments, we wanted a mix of discrete and continuous control environments...
Summary: Trust-Region Twisted Policy Improvement develops a new algorithm for performing Sequential Monte-Carlo (SMC) inspired by Monte-Carlo Tree Search (MCTS) for performing online planning in RL algorithms, which scales in performance with respect to the number of particles sampled during each planning step. The alg...
Rebuttal 1: Rebuttal: We thank the reviewer for their time, effort and, accurate summary of our paper. Regarding limited novelty, we understand the concern but would also like to point to a slightly different perspective. With the benefit of hindsight it is easy to dismiss contributions by being reductive. For instan...
Summary: - The paper creates a framework to apply sequential Monte Carlo planners to reinforcement learning settings by improving data generation through constrained action sampling and explicit terminal state handling, - Specifically, SMC methods traditionally estimate the trajectory distribution under an unknown poli...
Rebuttal 1: Rebuttal: We thank the reviewer for their effort and accurate summary of our paper. We address your points below. --- **Theoretical results:** Our paper does not introduce new theoretical results, **nor do we claim so**. It is correct that theorem 2.2 is known, it also goes quite farther back than Levin...
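A core building block of the SMC planners discussed in these reviews is the particle resampling step. Below is a sketch of standard systematic resampling, which is a common SMC component, not the paper's TRT-SMC contributions (twisting, trust-region proposals, particle revival); the fixed offset `u0` replaces the usual uniform draw to keep the example deterministic.

```python
def systematic_resample(weights, u0=0.5):
    """Systematic resampling: pick N particle indices proportionally to
    the normalized weights, using N evenly spaced points with a single
    shared offset. Degenerate (zero-weight) particles are dropped."""
    n = len(weights)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    picks, j = [], 0
    for i in range(n):
        u = (i + u0) / n          # evenly spaced points in (0, 1)
        while cdf[j] < u:
            j += 1
        picks.append(j)
    return picks
```

When one particle carries all the weight, every index maps to it; this collapse of diversity is exactly the path-degeneracy problem the paper's online $Q$ estimate and particle revival are designed to mitigate.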
ReinboT: Amplifying Robot Visual-Language Manipulation with Reinforcement Learning
Accept (poster)
Summary: The paper introduces ReinboT, a VLA model integrated with offline RL for robotic manipulation. It improves decision-making by predicting dense rewards, which guide the robot to maximize long-term returns. ReinboT outperforms baseline models in simulated and real-world tasks, demonstrating superior few-shot lea...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your positive and insightful response to our work. In response to the reviewer's concerns about more baseline comparisons and reward ablation, we have added more experiments. We respond to all concerns point by point. **Q1**: While the proposed dense reward fun...
Summary: The paper "ReinboT: Amplifying Robot Visual-Language Manipulation with Reinforcement Learning" introduces an end-to-end Vision-Language-Action (VLA) model, ReinboT, which integrates reinforcement learning (RL) principles to improve robotic manipulation. The core idea is to incorporate a dense reward structure ...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful comments, which can further enhance the completeness and thoroughness of our work. In response to the reviewer's concerns about more baseline comparisons, we have added more experiments. We respond to all concerns point by point. **Q1**: A direc...
Summary: This paper proposes an RL framework for finetuning the Vision-Language-Action model, called ReinBoT, including automatic sub-goal division and reward densification, which is critical to applying RL in VLA training. The authors also use ReturnToGo to replace the challenging vanilla value estimation. ReinBoT achieves...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your positive response and affirmation of our work. We respond to the reviewer's concerns point by point. **Q1**: Experiments: ReinBoT reported a SoTA performance in the CALVIN benchmark. I am not an experienced researcher in VLA area but wanna know why not inc...
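The ReturnToGo quantity mentioned in the review above is the standard return-to-go from offline RL: the (discounted) sum of future rewards from each timestep onward. A minimal sketch, independent of the paper's specific dense-reward design:

```python
def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of discounted rewards: rtg[t] = r_t + gamma * rtg[t+1].
    Computed in a single backward pass over the trajectory."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg
```

Conditioning the policy on this scalar sidesteps training a separate value network, which is what the review means by replacing the challenging vanilla value estimation.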
Summary: This paper introduces ReinboT, a model that integrates reinforcement learning (RL) concepts, specifically the idea of maximizing cumulative dense rewards, into a vision-language-action (VLA) framework for robotic manipulation. The authors propose a reward design that breaks down long-horizon tasks into several...
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your high affirmation and recognition of our work. We respond to the reviewer’s concerns point by point. **Q1**: I think the criteria for keypoints in demonstration can be sensitive to the quality of demonstrations; for instance, when demonstrations have mis gr...
Modalities Contribute Unequally: Enhancing Medical Multi-modal Learning through Adaptive Modality Token Re-balancing
Accept (poster)
Summary: This paper proposes AMC, a top-down dynamic multi-modal fusion approach. AMC first quantifies the importance of each modality and then fuses the information from different modalities based on that importance. Experimental results show that AMC is effective on several public datasets. This paper addresses a sig...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and constructive comments. We are especially grateful for the recognition of our contributions, including **well-supported claims** and **experiments that demonstrate a comprehensive understanding of the capabilities and limitations of the AMC method.** We ...
Summary: The paper proposes Adaptive Modality Token Re-Balancing (AMC) to address the challenge of unequal modality contributions in medical multi-modal learning. AMC dynamically quantifies modality importance and rebalances token contributions by replacing less informative tokens with inter-modal ones. It integrates d...
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and constructive comments. We are grateful for the recognition of our contributions, including **the interesting approach of token replacement, solid experiments, and innovative ideas.** We address the concerns and questions raised as follows. ### [Cons1] ...
Summary: The paper proposes Adaptive Modality Token Re-Balancing (AMC), a dynamic fusion method tailored specifically to handle varying data quality across modalities in medical multi-modal learning. AMC explicitly quantifies the importance of each modality by computing statistical information from attention distributi...
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We address the key concerns and suggestions below. ### [Cons1] Interpretability Analysis in the Medical Setting For the interpretable aspect, **we are not primarily aiming to propose a method that focuses on giving interpretablea...
Summary: This paper proposes Adaptive Modality Token Re-balancing (AMC), a transformer-based fusion method that dynamically adjusts token contributions from different modalities based on estimated modality importance scores derived from attention distributions. AMC selectively replaces less informative tokens from weak...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and thoughtful feedback. We are encouraged that the reviewer finds our method **novel in how it integrates token-level rebalancing** with **reasonable reproducibility and evaluation**. Below, we address each concern in turn. ### [Cons1] Overstated About the Mo...
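The reviews above describe AMC as deriving per-modality importance from attention statistics. A minimal sketch of one such statistic, the average attention mass received by each modality's tokens, is given below; the function name and the per-token scalar-attention representation are illustrative assumptions, not the paper's exact computation.

```python
def modality_importance(attention, modality_of):
    """Average attention mass received by each modality's tokens.

    attention:    one attention weight per token.
    modality_of:  modality label for each token, aligned with attention.
    Returns a dict mapping modality -> mean attention weight."""
    totals, counts = {}, {}
    for w, m in zip(attention, modality_of):
        totals[m] = totals.get(m, 0.0) + w
        counts[m] = counts.get(m, 0) + 1
    return {m: totals[m] / counts[m] for m in totals}
```

A score like this could then drive the token re-balancing the reviews describe, e.g. replacing the lowest-scoring tokens of a weak modality with tokens from a stronger one.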
Implicit Riemannian Optimism with Applications to Min-Max Problems
Accept (poster)
Summary: The paper addresses online optimization and min-max optimization on Hadamard manifolds. The authors propose a Riemannian optimistic online learning algorithm called RIOD, designed to match state-of-the-art Euclidean regret bounds while handling in-manifold constraints. Leveraging RIOD, they develop a min-max s...
Rebuttal 1: Rebuttal: We want to thank the reviewer for their time taken revising our manuscript, as well as the appendix and the experimental section.
Summary: This paper studies Riemannian online optimization as well as min-max optimization problems. The main contribution is to propose algorithms for these problems and provide theoretical analyses. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did check the proofs in any de...
Rebuttal 1: Rebuttal: > For online Riemannian optimization, the authors apparently remove a spurious curvature-dependent term from the upper bound in previous analyses. However, curvature-dependent terms re-appear in the time-complexity in the inner steps of their algorithm. Note that in the online setting, the comput...
Summary: This paper studies Riemannian online optimization and min-max optimization. The authors propose a novel online optimistic Riemannian optimization algorithm with inexact updates, which achieves optimal dynamic regret, and also enforces in-mainfold constraints. The authors then applied their online optimization ...
Rebuttal 1: Rebuttal: > this paper does not have experiments. We note that as we stated in the paper, we did provide experiments in Appendix E to validate our theory, with an implementation of our method in a constrained problem setting, for an application that only one other method (RAMMA, an impractical method) coul...
Summary: In this paper the authors consider Riemannian optimism on Hadamard manifolds. Claims And Evidence: Theoretical paper, no evidence presented. Methods And Evaluation Criteria: None present. Theoretical Claims: Yes, they were briefly checked. Experimental Designs Or Analyses: None. Supplementary Material: Di...
Rebuttal 1: Rebuttal: > Experimental results We note that as stated in the paper, we did provide experiments in Appendix E with an implementation of our method, in a constrained problem setting, an application that only one other method (RAMMA, an impractical method) could tackle before, showing that now it is possibl...
Summary: This paper proposes implicit Riemannian optimistic online methods for problems with g-convex and g-concave objectives, accommodating in-manifold constraints and matching state-of-the-art Euclidean rates. The analysis removes the dependence on geometric constants which closes the gap between the Riemannian prob...
Rebuttal 1: Rebuttal: > Experiments in online learning settings would strengthen the paper We note that the online learning setting assumes an adversarial agent generating losses against our algorithm. This means that in order to empirically validate the bound, we would need to craft a worst-case adversary to play ag...
VinePPO: Refining Credit Assignment in RL Training of LLMs
Accept (poster)
Summary: This paper proposes using Monte Carlo (MC) estimation (a well-established method in classical RL) to improve credit assignment in RL training of large language models (LLMs). The authors highlight the limitations of PPO’s value network, particularly in reasoning-intensive tasks, and propose VinePPO, which replaces tradition...
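The MC value estimation described in this review replaces a learned value network with rollouts: the value of an intermediate state is the average return of several independent completions sampled from the current policy. A minimal sketch, with the rollout function and sample count `k` as stand-ins for the paper's actual sampler:

```python
def mc_value_estimate(rollout, state, k=9):
    """Estimate V(state) without a value network: average the returns
    of k independent completions sampled from the current policy.

    rollout(state) should sample one completion from `state` and
    return its scalar return (e.g. 1.0 for a correct final answer)."""
    return sum(rollout(state) for _ in range(k)) / k
```

In language environments this is cheap to do because any prefix of a generation can be re-entered and continued, which is the property VinePPO exploits.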
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing the paper. We now address the concerns raised in the review. **Novelty** > MC estimation is one of the classic methods for value function estimation in RL. This paper did not introduce additional novel contributions. In fact, in PPO, GA...
Summary: This paper first analyzes why recent RL methods for LLM often ignore the critic component by showing the poor quality of the learned critic. To improve the critic component, the paper proposes a particularly simple but effective approach by running additional Monte Carlo samples to refine the value estimation,...
Rebuttal 1: Rebuttal: We are thrilled that the reviewer liked our work and happy to hear that our results were successfully replicated. We now address the question. **Questions** > How would vinePPO perform when training a reasoning model (e.g., o1/R1) with extremely long COT? How can we decide when to perform MC samp...
Summary: The paper highlights a problem with PPO applied to post-training LLMs: an inaccurate value function leads to poor credit assignment. To fix this, the paper proposes to replace the learned value function with an MC estimation. The experiments show that his approach can outperform classical PPO and other relevan...
Rebuttal 1: Rebuttal: We thank the reviewer for their time and their focus on details in their feedback. We now address the raised concerns: > The proposed method is not new, albeit it's not commonly seen for solving CA in RL training LLMs While estimating intermediate-states values via auxiliary MC was known to be t...
Summary: 1. Problem addressed: Credit assignment (CA) in RL training of LLMs, specifically for reasoning-heavy tasks. 2. Method proposed: VinePPO, which replaces value networks with Monte Carlo (MC)-based value estimation for improved CA. 3. Main results: 1. VinePPO outperforms PPO and other baselines (GRPO, RLOO, R...
Rebuttal 1: Rebuttal: We appreciate the reviewer’s attention to detail and positive feedback. Below, we address the questions asked: **Questions** > What about evaluation on more OOD benchmark datasets like AIME We considered benchmarks like AIME too difficult for the base LLMs used in our experiments. However, this ...
Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
Accept (poster)
Summary: The paper introduces an attack and defense which target the confidence of model predictions. The attack, called Mirage, aims to reduce the confidence of a confidence-calibrated model for a targeted subregion of the data distribution, while maintaining high classification accuracy. The defense uses Zero Knowled...
Rebuttal 1: Rebuttal: **We thank the reviewer for their positive assessment of our paper and discuss their questions/concerns below:** > Reference [a] Thanks for this reference! We have cited it and included a comparison in Section 2: *Both Mirage and the attack in [a] exploit abstention to reduce model availability...
Summary: This paper introduces "Confidential Guardian," a novel framework that addresses the potential abuse of model abstention mechanisms in machine learning systems. The authors identify a critical threat: dishonest institutions can exploit model abstention to covertly discriminate against specific individuals by ar...
Rebuttal 1: Rebuttal: **We thank the reviewer for their positive assessment of our paper and discuss their questions and concerns below:** > The experimental evaluation could include more diverse application domains beyond image/tabular classification. We chose to use image (high-dimensional, unstructured data) and t...
Summary: This paper puts forth the possibility of discrimination by denial or delay of services using model uncertainty or confidence. The paper proposes an attack dubbed Mirage which systematically reduces confidence in a predefined uncertainty region while maintaining accuracy. Then the paper proposes a reference da...
Rebuttal 1: Rebuttal: **We thank the reviewer for their strongly positive assessment of our paper and discuss their questions below:** > What are some other ways of denial/delay/abstention? Indeed, our work considers abstention using model confidence, i.e. thresholding the maximum softmax score (as we describe in the...
null
null
null
null
null
null
null
null
Learnings from Scaling Visual Tokenizers for Reconstruction and Generation
Accept (poster)
Summary: Most current-day generative models operate on abstract representations rather than pixels, which are often produced by a VAE-like "stage 1" model. These models are commonly trained separately from the subsequent generative modeling step ("stage 2"), such as diffusion modeling. The central question the authors ...
Summary: This paper conducts empirical studies on training Variational Autoencoders (VAEs) for latent diffusion models using Vision Transformers (ViT) as backbones instead of commonly used CNN-based models. It reports two main findings: (1) The reconstruction quality of the VAE does not necessarily determine the genera...
Summary: This paper conducts an empirical study on scaling up the autoencoder for generative models, with a ViT-style autoencoder. The authors investigate bottleneck size, encoder size, and decoder size, and find that generation performance is not significantly better with scaling up the autoencoder. As a result, the au...
Summary: This paper introduces ViTok, a Vision Transformer-based autoencoder, and systematically investigates the impact of scaling bottleneck size, encoder, and decoder architectures on image and video reconstruction and generation performance. Key findings reveal that the total latent floating points E is the dominan...
null
null
null
null
null
null